# runner

Note: this is a work in progress

A minimal runner for loading a model and running inference via an HTTP web server.

Currently the entire KV cache is shared by all parallel requests. This maximizes resource utilization, but it risks overflow and unfairness when several requests each need significant context. A better design is to hard-partition the KV cache so each parallel sequence owns a fixed, non-overlapping slice of it.
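As a rough illustration of that idea, here is a minimal Go sketch of a hard partition. The names (`kvPartition`, `partitionKV`) are hypothetical and not part of the runner's actual code; the point is only that each sequence gets a fixed cell range up front, so one long-context request cannot spill into another's space.

```go
package main

import "fmt"

// kvPartition is a hypothetical record of the KV cache cells reserved
// for one parallel sequence.
type kvPartition struct {
	seq   int // parallel sequence ID
	start int // first cell owned by this sequence
	size  int // number of cells reserved
}

// partitionKV splits totalCells evenly across numParallel sequences,
// giving each a fixed, non-overlapping range.
func partitionKV(totalCells, numParallel int) []kvPartition {
	per := totalCells / numParallel
	parts := make([]kvPartition, numParallel)
	for i := range parts {
		parts[i] = kvPartition{seq: i, start: i * per, size: per}
	}
	return parts
}

func main() {
	// Example: an 8192-cell cache shared by 4 parallel requests means
	// each request gets its own 2048-cell region.
	for _, p := range partitionKV(8192, 4) {
		fmt.Printf("seq %d: cells [%d, %d)\n", p.seq, p.start, p.start+p.size)
	}
}
```

The trade-off is the one stated above: a static split wastes cells when a sequence is idle, but it bounds every request's footprint and removes the overflow case entirely.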
```shell
./runner -model <model binary>
```
### Completion

```shell
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "hi"}' http://localhost:8080/completion
```
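The same request can be made programmatically. Below is a minimal Go client sketch that assumes only the `{"prompt": ...}` body and port shown above; the response is printed raw, since its schema isn't documented in this README.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same JSON body as the curl example above.
	body := bytes.NewBufferString(`{"prompt": "hi"}`)
	resp, err := http.Post("http://localhost:8080/completion", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw response; the exact response format isn't
	// specified here, so no decoding is attempted.
	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```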
### Embeddings

```shell
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "turn me into an embedding"}' http://localhost:8080/embeddings
```
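A corresponding Go sketch for the embeddings endpoint; because the response fields aren't specified in this README, it decodes the JSON generically rather than assuming a particular field name.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{"prompt": "turn me into an embedding"}`)
	resp, err := http.Post("http://localhost:8080/embeddings", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode into a generic map since the response schema is not
	// documented here; inspect the keys to find the embedding vector.
	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println(result)
}
```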
### TODO
- Parallelization
- More tests