ollama/llm
Latest commit: 171eb040fc "simplify safetensors reading" by Michael Yang, 2024-05-21 11:28:22 -07:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ext_server | feat: add support for flash_attn (#4120) | 2024-05-20 13:36:03 -07:00 |
| generate | Port cuda/rocm skip build vars to linux | 2024-05-15 15:56:43 -07:00 |
| llama.cpp @ 614d3b914e | set llama.cpp submodule commit to 614d3b9 | 2024-05-20 15:28:17 -07:00 |
| patches | update llama.cpp submodule to 614d3b9 (#4414) | 2024-05-16 13:53:09 -07:00 |
| filetype.go | comments | 2024-05-06 15:24:01 -07:00 |
| ggla.go | simplify safetensors reading | 2024-05-21 11:28:22 -07:00 |
| ggml.go | simplify safetensors reading | 2024-05-21 11:28:22 -07:00 |
| gguf.go | simplify safetensors reading | 2024-05-21 11:28:22 -07:00 |
| llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_windows.go | Move nested payloads to installer and zip file on windows | 2024-04-23 16:14:47 -07:00 |
| llm.go | comments | 2024-05-06 15:24:01 -07:00 |
| memory.go | typo | 2024-05-13 14:18:34 -07:00 |
| payload.go | Move nested payloads to installer and zip file on windows | 2024-04-23 16:14:47 -07:00 |
| server.go | feat: add support for flash_attn (#4120) | 2024-05-20 13:36:03 -07:00 |
| status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |