ollama/llm
Latest commit: ee49844d09 by Daniel Hiltgen (2024-05-08 16:39:11 -07:00)
Merge pull request #4153 from dhiltgen/gpu_verbose_response: Add GPU usage
Name                    Last commit                                                   Date
ext_server              omit prompt and generate settings from final response         2024-05-03 17:00:02 -07:00
generate                Do not build AVX runners on ARM64                             2024-04-26 23:55:32 -06:00
llama.cpp @ 952d03dbea  update llama.cpp commit to 952d03d                            2024-04-30 17:31:20 -04:00
patches                 Fix llava models not working after first request (#4164)      2024-05-05 20:50:31 -07:00
filetype.go             comments                                                      2024-05-06 15:24:01 -07:00
ggla.go                 refactor tensor query                                         2024-04-10 11:37:20 -07:00
ggml.go                 skip if same quantization                                     2024-05-07 17:44:19 -07:00
gguf.go                 fixes for gguf (#3863)                                        2024-04-23 20:57:20 -07:00
llm_darwin_amd64.go     Switch back to subprocessing for llama.cpp                    2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go     Switch back to subprocessing for llama.cpp                    2024-04-01 16:48:18 -07:00
llm_linux.go            Switch back to subprocessing for llama.cpp                    2024-04-01 16:48:18 -07:00
llm_windows.go          Move nested payloads to installer and zip file on windows     2024-04-23 16:14:47 -07:00
llm.go                  comments                                                      2024-05-06 15:24:01 -07:00
memory.go               Record GPU usage information                                  2024-05-08 14:45:39 -07:00
payload.go              Move nested payloads to installer and zip file on windows     2024-04-23 16:14:47 -07:00
server.go               Merge pull request #4153 from dhiltgen/gpu_verbose_response   2024-05-08 16:39:11 -07:00
status.go               Switch back to subprocessing for llama.cpp                    2024-04-01 16:48:18 -07:00