From 4fc10acce9500e5b039e080926a9d100018fe8df Mon Sep 17 00:00:00 2001
From: Jiayu Liu
Date: Mon, 2 Oct 2023 02:51:01 +0800
Subject: [PATCH] add some missing code directives in docs (#664)

---
 docs/development.md |  8 ++++----
 docs/faq.md         |  5 ++---
 docs/linux.md       | 16 ++++++++--------
 docs/modelfile.md   | 20 ++++++++++----------
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/docs/development.md b/docs/development.md
index 85cf34c6..bb18be16 100644
--- a/docs/development.md
+++ b/docs/development.md
@@ -10,25 +10,25 @@ Install required tools:
 - go version 1.20 or higher
 - gcc version 11.4.0 or higher

-```
+```bash
 brew install go cmake gcc
 ```

 Get the required libraries:

-```
+```bash
 go generate ./...
 ```

 Then build ollama:

-```
+```bash
 go build .
 ```

 Now you can run `ollama`:

-```
+```bash
 ./ollama
 ```

diff --git a/docs/faq.md b/docs/faq.md
index e71b7484..9d369f1d 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -2,13 +2,13 @@

 ## How can I expose the Ollama server?

-```
+```bash
 OLLAMA_HOST=0.0.0.0:11435 ollama serve
 ```

 By default, Ollama allows cross origin requests from `127.0.0.1` and `0.0.0.0`. To support more origins, you can use the `OLLAMA_ORIGINS` environment variable:

-```
+```bash
 OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
 ```

@@ -16,4 +16,3 @@ OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve

 * macOS: Raw model data is stored under `~/.ollama/models`.
 * Linux: Raw model data is stored under `/usr/share/ollama/.ollama/models`
-
diff --git a/docs/linux.md b/docs/linux.md
index e473907f..8ba8bc45 100644
--- a/docs/linux.md
+++ b/docs/linux.md
@@ -2,7 +2,7 @@

 > Note: A one line installer for Ollama is available by running:
 >
-> ```
+> ```bash
 > curl https://ollama.ai/install.sh | sh
 > ```

@@ -10,7 +10,7 @@

 Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:

-```
+```bash
 sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
 sudo chmod +x /usr/bin/ollama
 ```
@@ -19,13 +19,13 @@ sudo chmod +x /usr/bin/ollama

 Start Ollama by running `ollama serve`:

-```
+```bash
 ollama serve
 ```

 Once Ollama is running, run a model in another terminal session:

-```
+```bash
 ollama run llama2
 ```

@@ -35,7 +35,7 @@ ollama run llama2

 Verify that the drivers are installed by running the following command, which should print details about your GPU:

-```
+```bash
 nvidia-smi
 ```

@@ -43,7 +43,7 @@ nvidia-smi

 Create a user for Ollama:

-```
+```bash
 sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
 ```

@@ -68,7 +68,7 @@ WantedBy=default.target

 Then start the service:

-```
+```bash
 sudo systemctl daemon-reload
 sudo systemctl enable ollama
 ```
@@ -77,7 +77,7 @@ sudo systemctl enable ollama

 To view logs of Ollama running as a startup service, run:

-```
+```bash
 journalctl -u ollama
 ```

diff --git a/docs/modelfile.md b/docs/modelfile.md
index ade9c7c0..f6c20b8a 100644
--- a/docs/modelfile.md
+++ b/docs/modelfile.md
@@ -44,7 +44,7 @@ INSTRUCTION arguments

 An example of a model file creating a mario blueprint:

-```
+```modelfile
 FROM llama2
 # sets the temperature to 1 [higher is more creative, lower is more coherent]
 PARAMETER temperature 1
@@ -70,13 +70,13 @@ More examples are available in the [examples directory](../examples).

 The FROM instruction defines the base model to use when creating a model.

-```
+```modelfile
 FROM <model name>:<tag>
 ```

 #### Build from llama2

-```
+```modelfile
 FROM llama2
 ```

@@ -85,7 +85,7 @@ A list of available base models:

 #### Build from a bin file

-```
+```modelfile
 FROM ./ollama-model.bin
 ```

@@ -95,7 +95,7 @@ This bin file location should be specified as an absolute path or relative to th

 The EMBED instruction is used to add embeddings of files to a model. This is useful for adding custom data that the model can reference when generating an answer. Note that currently only text files are supported, formatted with each line as one embedding.

-```
+```modelfile
 FROM <model name>:<tag>
 EMBED <file path>.txt
 EMBED <different file path>.txt
 EMBED <path to directory>/*.txt
 ```
@@ -106,7 +106,7 @@ EMBED <path to directory>/*.txt

 The `PARAMETER` instruction defines a parameter that can be set when the model is run.

-```
+```modelfile
 PARAMETER <parameter> <parametervalue>
 ```

@@ -142,7 +142,7 @@ PARAMETER <parameter> <parametervalue>
 | `{{ .Prompt }}` | The incoming prompt, this is not specified in the model file and will be set based on input. |
 | `{{ .First }}` | A boolean value used to render specific template information for the first generation of a session. |

-```
+```modelfile
 TEMPLATE """
 {{- if .First }}
 ### System:
 {{ .System }}
 {{- end }}

 ### User:
 {{ .Prompt }}

 ### Response:
 """

 SYSTEM """<system message>"""
 ```
@@ -162,7 +162,7 @@ SYSTEM """<system message>"""

 The `SYSTEM` instruction specifies the system prompt to be used in the template, if applicable.

-```
+```modelfile
 SYSTEM """<system message>"""
 ```

@@ -170,7 +170,7 @@ SYSTEM """<system message>"""

 The `ADAPTER` instruction specifies the LoRA adapter to apply to the base model. The value of this instruction should be an absolute path or a path relative to the Modelfile and the file must be in a GGML file format. The adapter should be tuned from the base model otherwise the behaviour is undefined.

-```
+```modelfile
 ADAPTER ./ollama-lora.bin
 ```

@@ -178,7 +178,7 @@ ADAPTER ./ollama-lora.bin

 The `LICENSE` instruction allows you to specify the legal license under which the model used with this Modelfile is shared or distributed.

-```
+```modelfile
 LICENSE """
 <license text>
 """
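
To try these documentation changes locally, the patch can be applied with standard git tooling. A minimal sketch, assuming the mail-format patch above is saved verbatim as `0001-add-some-missing-code-directives-in-docs.patch` (hypothetical filename) inside a checkout of the ollama repository:

```bash
# Preview which files the patch touches and how many lines change
git apply --stat 0001-add-some-missing-code-directives-in-docs.patch

# Dry run: verify the hunks still apply cleanly against the current tree
git apply --check 0001-add-some-missing-code-directives-in-docs.patch

# Apply the patch as a commit, preserving the original author and subject
git am 0001-add-some-missing-code-directives-in-docs.patch
```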
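
The modelfile.md hunks only relabel the fences around the Modelfile examples; the workflow for using such a file is unchanged. As a usage sketch, assuming the standard `ollama create` and `ollama run` commands and that the mario example above is saved as `./Modelfile`:

```bash
# Build a model named "mario" from the Modelfile in the current directory
ollama create mario -f ./Modelfile

# Start an interactive session with the newly created model
ollama run mario
```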