From 0d4fa34aeeda72d335e5cabf8f8d0a3ddca31381 Mon Sep 17 00:00:00 2001
From: Matt Williams
Date: Fri, 8 Dec 2023 17:37:12 -0800
Subject: [PATCH] Added mention of the NOPRUNE env var

Signed-off-by: Matt Williams
---
 docs/faq.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/docs/faq.md b/docs/faq.md
index 28631a79..14b91b81 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -95,6 +95,17 @@ The manifest lists all the layers used in this model. You will see a `media type
 
 To modify where models are stored, you can use the `OLLAMA_MODELS` environment variable. Note that on Linux this means defining `OLLAMA_MODELS` in a drop-in `/etc/systemd/system/ollama.service.d` service file, reloading systemd, and restarting the ollama service.
 
+### I downloaded most of a model yesterday, but it's gone today. What happened?
+
+When the Ollama server starts, it scans for partially downloaded model files left on the system and deletes them. This can be frustrating if your Internet connection can't complete a model download in a single session. Setting the `OLLAMA_NOPRUNE` environment variable prevents the server from pruning these incomplete files, so an interrupted download can be resumed later.
+
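+For example, if you run the server manually rather than as a service, one way to set it is when starting the server (with a systemd install, you would instead define the variable in a drop-in service file, as described for `OLLAMA_MODELS` above):
+
+```shell
+# Start the Ollama server without pruning partially downloaded model files
+OLLAMA_NOPRUNE=1 ollama serve
+```
+
 ## Does Ollama send my prompts and answers back to Ollama.ai to use in any way?
 
 No. Anything you do with Ollama, such as generate a response from the model, stays with you. We don't collect any data about how you use the model. You are always in control of your own data.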