norohind/ollama (forked from third-party-mirrors/ollama)
llm/
Latest commit: 440b7190ed "Update gen_linux.sh" by Jeremy, 2024-04-18 19:18:10 -04:00
    Added OLLAMA_CUSTOM_CUDA_DEFS and OLLAMA_CUSTOM_ROCM_DEFS instead of OLLAMA_CUSTOM_GPU_DEFS
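The latest commit splits the former OLLAMA_CUSTOM_GPU_DEFS variable into separate CUDA and ROCm variables consumed by gen_linux.sh. A minimal sketch of how they would likely be set when generating the Linux build, by analogy with the documented OLLAMA_CUSTOM_CPU_DEFS; the specific cmake defines shown are illustrative assumptions, not taken from this repository:

    # Assumed usage: pass extra cmake defines to the CUDA and ROCm builds
    # driven by llm/generate/gen_linux.sh; the define values are examples only.
    OLLAMA_CUSTOM_CUDA_DEFS="-DLLAMA_CUDA_F16=on" \
    OLLAMA_CUSTOM_ROCM_DEFS="-DAMDGPU_TARGETS=gfx1030" \
    go generate ./...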
File                    Last commit                                        Date
ext_server              Support unicode characters in model path (#3681)   2024-04-16 17:00:12 -04:00
generate                Update gen_linux.sh                                 2024-04-18 19:18:10 -04:00
llama.cpp @ 7593639ce3  update llama.cpp submodule to 7593639 (#3665)       2024-04-15 23:04:43 -04:00
patches                 Bump to b2581                                       2024-04-02 11:53:07 -07:00
ggla.go                 refactor tensor query                               2024-04-10 11:37:20 -07:00
ggml.go                 add stablelm graph calculation                      2024-04-17 13:57:19 -07:00
gguf.go                 fix padding to only return padding                  2024-04-16 15:43:26 -07:00
llm_darwin_amd64.go     Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go     Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00
llm_linux.go            Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00
llm_windows.go          Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00
llm.go                  cgo quantize                                        2024-04-08 15:31:08 -07:00
payload.go              Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00
server.go               add stablelm graph calculation                      2024-04-17 13:57:19 -07:00
status.go               Switch back to subprocessing for llama.cpp          2024-04-01 16:48:18 -07:00