ollama / llm / llama.cpp

Latest commit: 6deebf2489 by Michael Yang, "update for qwen", 2023-12-04 11:38:05 -08:00
Name                      | Last commit message                                                           | Last commit date
ggml @ 9e232f0234         | subprocess llama.cpp server (#401)                                            | 2023-08-30 16:35:03 -04:00
gguf @ 23b5e12eb5         | update for qwen                                                               | 2023-12-04 11:38:05 -08:00
patches                   | update llama.cpp                                                              | 2023-11-21 09:50:02 -08:00
generate_darwin_amd64.go  | add back f16c instructions on intel mac                                       | 2023-11-26 15:59:49 -05:00
generate_darwin_arm64.go  | update llama.cpp                                                              | 2023-11-21 09:50:02 -08:00
generate_linux.go         | Disable CUDA peer access as a workaround for multi-gpu inference bug (#1261)  | 2023-11-24 14:05:57 -05:00
generate_windows.go       | windows CUDA support (#1262)                                                  | 2023-11-24 17:16:36 -05:00
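The per-platform generate_*.go files listed above follow Go's go:generate convention to build the vendored llama.cpp sources before the main build. The sketch below only illustrates that general pattern; the submodule target, cmake flags, and build directory shown here are assumptions for illustration, and the real directives in the repository differ per platform and change between the commits listed.

    // Package llm wraps the vendored llama.cpp build.
    // Hypothetical sketch of the go:generate + cmake pattern used by the
    // generate_*.go files; actual targets and flags live in the repository.
    package llm

    //go:generate git submodule update --init --force gguf
    //go:generate cmake -S gguf -B gguf/build/cpu -DCMAKE_BUILD_TYPE=Release
    //go:generate cmake --build gguf/build/cpu --target server

Running "go generate ./..." executes these directives in order, so the compiled llama.cpp server binary is in place before "go build" runs.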