ollama/.dockerignore
Daniel Hiltgen · 051a3d271c · 2023-12-12 17:26:43 -08:00
Add cgo implementation for llama.cpp
Run the server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions.
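
The commit message describes the pattern this change introduces: llama.cpp's server.cpp is compiled into the ollama binary and driven through cgo, while callers keep working against the Go-side LLM abstractions. What follows is a minimal, hypothetical sketch of that call shape; the C shim functions (llama_server_start, llama_server_stop) and the llamaCgo type are placeholders invented for illustration, not ollama's actual binding, and the snippet will not link without a real C/C++ implementation of the shim.

    package llm

    /*
    #include <stdlib.h>

    // Hypothetical C shim around llama.cpp's server.cpp; declarations only.
    // A real build must provide and link the C/C++ definitions.
    extern int  llama_server_start(const char *model_path);
    extern void llama_server_stop(void);
    */
    import "C"

    import (
    	"fmt"
    	"unsafe"
    )

    // llamaCgo sketches a backend that runs the C++ server in-process via cgo
    // while presenting an ordinary Go API to the rest of the program.
    type llamaCgo struct{}

    func (l *llamaCgo) Start(modelPath string) error {
    	cPath := C.CString(modelPath)       // copy the Go string into C memory
    	defer C.free(unsafe.Pointer(cPath)) // strings passed to C must be freed manually
    	if rc := C.llama_server_start(cPath); rc != 0 {
    		return fmt.Errorf("llama server failed to start: code %d", rc)
    	}
    	return nil
    }

    func (l *llamaCgo) Stop() {
    	C.llama_server_stop()
    }

The .dockerignore contents follow; each entry is a path excluded from the Docker build context, which keeps editor settings, previously built binaries, and other local state out of the image build.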


.vscode
ollama
app
dist
llm/llama.cpp/gguf
.env
.cache
test_data