norohind / ollama (forked from third-party-mirrors/ollama)

ollama / llm

Latest commit: 948114e3e3 by Mark Ward, 2024-05-01 18:51:10 +00:00
"fix sched to wait for the runner to terminate to ensure following vram check will be more accurate"
ext_server               llm: add back check for empty token cache  2024-04-30 17:38:44 -04:00
generate                 Do not build AVX runners on ARM64  2024-04-26 23:55:32 -06:00
llama.cpp @ 952d03dbea   update llama.cpp commit to 952d03d  2024-04-30 17:31:20 -04:00
patches                  Fix clip log import  2024-04-26 09:43:46 -07:00
ggla.go                  …
ggml.go                  …
gguf.go                  …
llm_darwin_amd64.go      …
llm_darwin_arm64.go      …
llm_linux.go             …
llm_windows.go           …
llm.go                   Add import declaration for windows,arm64 to llm.go  2024-04-26 23:23:53 -06:00
memory.go                gpu: add 512MiB to darwin minimum, metal doesn't have partial offloading overhead (#4068)  2024-05-01 11:46:03 -04:00
payload.go               …
server.go                fix sched to wait for the runner to terminate to ensure following vram check will be more accurate  2024-05-01 18:51:10 +00:00
status.go                …
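
The latest commit (948114e3e3, touching server.go) says the scheduler now waits for the runner to terminate so that the following VRAM check is more accurate. As a rough illustration of that general pattern only, the Go sketch below waits for a subprocess to exit before sampling free GPU memory; waitForExit, freeVRAM, and the sleep command are hypothetical placeholders, not ollama's actual scheduler code.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForExit blocks until the subprocess has fully terminated, so that any
// GPU memory it held has actually been released before the caller re-checks
// free VRAM. Names here are illustrative placeholders.
func waitForExit(cmd *exec.Cmd, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // process exited; its VRAM should now be reclaimable
	case <-time.After(timeout):
		return fmt.Errorf("runner did not terminate within %s", timeout)
	}
}

// freeVRAM is a stand-in for whatever GPU query a scheduler would use
// (driver, Metal, CUDA, etc.).
func freeVRAM() uint64 {
	return 0
}

func main() {
	cmd := exec.Command("sleep", "1") // placeholder for a model runner process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Wait for termination first, then sample VRAM: checking too early would
	// still count memory owned by the exiting runner as "in use".
	if err := waitForExit(cmd, 10*time.Second); err != nil {
		fmt.Println("warning:", err)
	}
	fmt.Println("free VRAM after runner exit:", freeVRAM())
}
```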