Merge ef826e57900d6c2b3eae9a7df7e29d490178591c into 67691e410db7a50b07a64858820b14de9aa91314

Lennart J. Kurzweg 2024-11-14 15:56:21 +08:00 committed by GitHub
commit 489464fe6d


@@ -46,7 +46,7 @@ Generate a response for a given prompt with a provided model. This is a streaming
Advanced parameters (optional):
- `format`: the format to return a response in. Currently the only accepted value is `json`
-- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
+- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values), such as `temperature`. Other options are passed through to [llama.cpp](https://github.com/ggerganov/llama.cpp); for details on those, see the [generation flags documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#generation-flags). A request sketch follows this list.
- `system`: system message to use (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
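
For illustration, here is a minimal sketch of a request that sets advanced `options`. The host, port, model name, and parameter values are assumptions for the example, not documented defaults:

```shell
# A minimal sketch: assumes a local server on the default port and that
# a model named "llama2" has been pulled; adjust both to your setup.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": {
    "temperature": 0.7,
    "num_predict": 128
  }
}'
```

Both `temperature` and `num_predict` are documented Modelfile parameters; per the note above, option names outside that list would be passed through to llama.cpp.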