third-party-mirrors / ollama
ollama / llm

Latest commit: fcf4d60eee by jmorganca, "llm: add back check for empty token cache" (2024-04-30 17:38:44 -04:00)
ext_server             | llm: add back check for empty token cache                             | 2024-04-30 17:38:44 -04:00
generate               | Do not build AVX runners on ARM64                                     | 2024-04-26 23:55:32 -06:00
llama.cpp @ 952d03dbea | update llama.cpp commit to 952d03d                                    | 2024-04-30 17:31:20 -04:00
patches                | Fix clip log import                                                   | 2024-04-26 09:43:46 -07:00
ggla.go                | …                                                                     |
ggml.go                | …                                                                     |
gguf.go                | …                                                                     |
llm_darwin_amd64.go    | …                                                                     |
llm_darwin_arm64.go    | …                                                                     |
llm_linux.go           | …                                                                     |
llm_windows.go         | …                                                                     |
llm.go                 | Add import declaration for windows,arm64 to llm.go                    | 2024-04-26 23:23:53 -06:00
memory.go              | fix gemma, command-r layer weights                                    | 2024-04-26 15:00:55 -07:00
payload.go             | …                                                                     |
server.go              | llm: dont cap context window limit to training context window (#3988) | 2024-04-29 10:07:30 -04:00
status.go              | …                                                                     |