Compare commits

...

3 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Bruce MacDonald | 71f0b6c478 | remove dividers | 2024-02-21 10:37:38 -05:00 |
| Bruce MacDonald | e9a381c559 | remove links | 2024-02-20 17:25:58 -05:00 |
| Bruce MacDonald | ae7c89eb87 | API doc formatting updates (in preparation for rendering on ollama.com) | 2024-02-20 16:40:08 -05:00 |


@@ -52,15 +52,15 @@ Advanced parameters (optional):
 - `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
 - `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
-#### JSON mode
-Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#generate-request-json-mode) below.
-> Note: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts whitespace.
+> **JSON mode**
+>
+> Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#generate-request-json-mode) below.
+>
+> **Note**: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts whitespace.
 ### Examples
-#### Generate request (Streaming)
+#### Streaming
 ##### Request
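As a supplementary client-side sketch of the JSON-mode behavior described in the hunk above (the model name, prompt, and sample response string are illustrative assumptions, not taken from this diff):

```python
import json

# Illustrative JSON-mode request body. "llama2" and the prompt text are
# placeholders; note the prompt itself asks for JSON, per the note above.
request_body = {
    "model": "llama2",
    "prompt": "What color is the sky at different times of the day? Respond using JSON.",
    "format": "json",
    "stream": False,
}

# In JSON mode the "response" field holds a JSON-encoded string, so a
# client parses it a second time. Sample value for illustration only:
raw_response = '{"morning": "orange", "noon": "blue"}'
parsed = json.loads(raw_response)
print(parsed["noon"])  # blue
```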
@@ -113,7 +113,7 @@ To calculate how fast the response is generated in tokens per second (token/s),
 }
 ```
-#### Request (No streaming)
+#### No Streaming
 ##### Request
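The hunk header above references the tokens-per-second calculation; as a client-side sketch, the final response's counters can be divided directly (the figures below mirror a sample response; Ollama reports durations in nanoseconds):

```python
# Counters from a sample final response: eval_count tokens were
# generated over eval_duration nanoseconds.
eval_count = 110
eval_duration = 1779061000

# tokens/s = eval_count / eval_duration * 1e9 (durations are nanoseconds)
tokens_per_second = eval_count / eval_duration * 1e9
print(round(tokens_per_second, 2))  # ≈ 61.83
```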
@@ -147,7 +147,7 @@ If `stream` is set to `false`, the response will be a single JSON object:
 }
 ```
-#### Request (JSON mode)
+#### JSON Mode
 > When `format` is set to `json`, the output will always be a well-formed JSON object. It's important to also instruct the model to respond in JSON.
@@ -199,11 +199,11 @@ The value of `response` will be a string containing JSON similar to:
 }
 ```
-#### Request (with images)
+#### Images (Multimodal)
 To submit images to multimodal models such as `llava` or `bakllava`, provide a list of base64-encoded `images`:
-#### Request
+##### Request
 ```shell
 curl http://localhost:11434/api/generate -d '{
@@ -214,7 +214,7 @@ curl http://localhost:11434/api/generate -d '{
 }'
 ```
-#### Response
+##### Response
 ```
 {
@@ -232,7 +232,7 @@ curl http://localhost:11434/api/generate -d '{
 }
 ```
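Producing the base64-encoded `images` entries is a client-side step; a minimal sketch (the image bytes here are illustrative; in practice they would be read from a file):

```python
import base64
import json

# Illustrative image bytes (the PNG file signature); in practice use
# something like open("image.png", "rb").read().
image_bytes = b"\x89PNG\r\n\x1a\n"

# Each entry in "images" is the base64 text of the raw image bytes.
encoded = base64.b64encode(image_bytes).decode("ascii")

# Illustrative request body for a multimodal model such as llava.
request_body = json.dumps({
    "model": "llava",
    "prompt": "What is in this picture?",
    "images": [encoded],
})
print(encoded)  # iVBORw0KGgo=
```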
-#### Request (Raw Mode)
+#### Raw Mode
 In some cases, you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable templating. Also note that raw mode will not return a context.
@@ -247,7 +247,24 @@ curl http://localhost:11434/api/generate -d '{
 }'
 ```
-#### Request (Reproducible outputs)
+##### Response
+```json
+{
+  "model": "mistral",
+  "created_at": "2023-11-03T15:36:02.583064Z",
+  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
+  "done": true,
+  "total_duration": 8493852375,
+  "load_duration": 6589624375,
+  "prompt_eval_count": 14,
+  "prompt_eval_duration": 119039000,
+  "eval_count": 110,
+  "eval_duration": 1779061000
+}
+```
+#### Reproducible Outputs
 For reproducible outputs, set `temperature` to 0 and `seed` to a number:
@@ -281,7 +298,7 @@ curl http://localhost:11434/api/generate -d '{
 }
 ```
-#### Generate request (With options)
+#### Options
 If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.
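A request that overrides only a few runtime options might be built like this (a sketch; the model name and the particular option values are illustrative assumptions, and anything omitted keeps its Modelfile or default value):

```python
import json

# Override a small subset of runtime options; omitted options keep
# their Modelfile or default values.
request_body = json.dumps({
    "model": "llama2",  # placeholder model name
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {
        "temperature": 0.8,
        "seed": 42,
        "num_ctx": 4096,
    },
})
print("options" in json.loads(request_body))  # True
```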
@@ -347,7 +364,7 @@ curl http://localhost:11434/api/generate -d '{
 }
 ```
-#### Load a model
+#### Load a Model
 If an empty prompt is provided, the model will be loaded into memory.
@@ -401,7 +418,7 @@ Advanced parameters (optional):
 ### Examples
-#### Chat Request (Streaming)
+#### Streaming
 ##### Request
@@ -452,7 +469,7 @@ Final response:
 }
 ```
-#### Chat request (No streaming)
+#### No Streaming
 ##### Request
@@ -489,7 +506,7 @@ curl http://localhost:11434/api/chat -d '{
 }
 ```
-#### Chat request (With History)
+#### With Chat History
 Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
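The conversation history is simply the accumulated `messages` array resent with each request; a minimal sketch of how a client might maintain it (the model name and message contents are illustrative):

```python
# Messages alternate between "user" and "assistant" roles; the full
# history is sent with each request so the model retains context.
messages = [
    {"role": "user", "content": "why is the sky blue?"},
    {"role": "assistant", "content": "due to rayleigh scattering."},
    {"role": "user", "content": "how is that different than mie scattering?"},
]

request_body = {"model": "llama2", "messages": messages}  # placeholder model
print(len(messages))  # 3
```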
@@ -547,7 +564,7 @@ Final response:
 }
 ```
-#### Chat request (with images)
+#### Images (Multimodal)
 ##### Request
@@ -587,7 +604,7 @@ curl http://localhost:11434/api/chat -d '{
 }
 ```
-#### Chat request (Reproducible outputs)
+#### Reproducible Outputs
 ##### Request
@@ -644,7 +661,7 @@ Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `m
 ### Examples
-#### Create a new model
+#### Create a New Model
 Create a new model from a `Modelfile`.
@@ -675,7 +692,7 @@ A stream of JSON objects. Notice that the final JSON object shows a `"status": "
 {"status":"success"}
 ```
-### Check if a Blob Exists
+#### Check if a Blob Exists
 ```shell
 HEAD /api/blobs/:digest
@@ -683,12 +700,10 @@ HEAD /api/blobs/:digest
 Ensures that the file blob used for a FROM or ADAPTER field exists on the server. This is checking your Ollama server and not Ollama.ai.
-#### Query Parameters
+##### Query Parameters
 - `digest`: the SHA256 digest of the blob
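The `digest` value is the SHA256 of the blob's bytes, so a client can compute it locally before calling the endpoint; a sketch (the blob contents are illustrative):

```python
import hashlib

# Illustrative blob contents; in practice this would be a model or
# adapter file read from disk.
blob_bytes = b"example model weights"

# The digest path segment is "sha256:" followed by the hex digest.
digest = "sha256:" + hashlib.sha256(blob_bytes).hexdigest()
print(digest)
```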
-#### Examples
 ##### Request
 ```shell
@@ -699,7 +714,7 @@ curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931
 Return 200 OK if the blob exists, 404 Not Found if it does not.
-### Create a Blob
+#### Create a Blob
 ```shell
 POST /api/blobs/:digest
@@ -707,12 +722,10 @@ POST /api/blobs/:digest
 Create a blob from a file on the server. Returns the server file path.
-#### Query Parameters
+##### Query Parameters
 - `digest`: the expected SHA256 digest of the file
-#### Examples
 ##### Request
 ```shell
@@ -733,13 +746,15 @@ List models that are available locally.
 ### Examples
-#### Request
+#### List All Local Models
+##### Request
 ```shell
 curl http://localhost:11434/api/tags
 ```
-#### Response
+##### Response
 A single JSON object will be returned.
@@ -790,7 +805,8 @@ Show information about a model including details, modelfile, template, parameter
 ### Examples
-#### Request
+#### Show Information About a Model
+##### Request
 ```shell
 curl http://localhost:11434/api/show -d '{
@@ -798,7 +814,7 @@ curl http://localhost:11434/api/show -d '{
 }'
 ```
-#### Response
+##### Response
 ```json
 {
@@ -825,7 +841,8 @@ Copy a model. Creates a model with another name from an existing model.
 ### Examples
-#### Request
+#### Create a New Model with a Different Name
+##### Request
 ```shell
 curl http://localhost:11434/api/copy -d '{
@@ -834,7 +851,7 @@ curl http://localhost:11434/api/copy -d '{
 }'
 ```
-#### Response
+##### Response
 Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't exist.
@@ -852,7 +869,8 @@ Delete a model and its data.
 ### Examples
-#### Request
+#### Delete a Model by Name
+##### Request
 ```shell
 curl -X DELETE http://localhost:11434/api/delete -d '{
@@ -860,7 +878,7 @@ curl -X DELETE http://localhost:11434/api/delete -d '{
 }'
 ```
-#### Response
+##### Response
 Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't exist.
@@ -880,7 +898,8 @@ Download a model from the ollama library. Cancelled pulls are resumed from where
 ### Examples
-#### Request
+#### Pull a Model by Name
+##### Request
 ```shell
 curl http://localhost:11434/api/pull -d '{
@@ -888,7 +907,7 @@ curl http://localhost:11434/api/pull -d '{
 }'
 ```
-#### Response
+##### Response
 If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:
@@ -952,7 +971,8 @@ Upload a model to a model library. Requires registering for ollama.ai and adding
 ### Examples
-#### Request
+#### Push a Local Model
+##### Request
 ```shell
 curl http://localhost:11434/api/push -d '{
@@ -960,7 +980,7 @@ curl http://localhost:11434/api/push -d '{
 }'
 ```
-#### Response
+##### Response
 If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:
@@ -1021,7 +1041,8 @@ Advanced parameters:
 ### Examples
-#### Request
+#### Generate an Embedding from a Prompt
+##### Request
 ```shell
 curl http://localhost:11434/api/embeddings -d '{
@@ -1030,7 +1051,7 @@ curl http://localhost:11434/api/embeddings -d '{
 }'
 ```
-#### Response
+##### Response
 ```json
 {