Daniel Hiltgen c3d321d405
llm: Remove GGML_CUDA_NO_PEER_COPY for ROCm (#7174)
This workaround logic in llama.cpp is causing crashes for users with less system memory than VRAM.
2024-10-12 09:56:49 -07:00
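For context, GGML_CUDA_NO_PEER_COPY is a compile-time flag in llama.cpp's CUDA/ROCm backend that disables direct peer-to-peer copies between GPUs; this commit stops setting it for ROCm builds. The sketch below is only an illustration of the kind of branch such a flag typically selects -- the function name, the host-staging fallback, and the error macro are assumptions for this example, not the actual ggml code.

```cuda
// Illustrative sketch only -- not the actual ggml/llama.cpp source.
// It shows the kind of code path a "no peer copy" compile flag usually
// toggles: a direct GPU-to-GPU copy versus a fallback staged through
// host memory. All names below are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CHECK(call)                                                        \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error: %s (%s:%d)\n",                    \
                    cudaGetErrorString(err_), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                            \
        }                                                                  \
    } while (0)

// Copy n bytes from a buffer on src_dev to a buffer on dst_dev.
static void copy_between_devices(void *dst, int dst_dev,
                                 const void *src, int src_dev,
                                 size_t n, cudaStream_t stream) {
#ifndef GGML_CUDA_NO_PEER_COPY
    // Normal path: direct device-to-device (peer) copy.
    CHECK(cudaMemcpyPeerAsync(dst, dst_dev, src, src_dev, n, stream));
#else
    // Workaround path: bounce the data through a host buffer. Host-side
    // buffers sized to GPU tensors are plausibly where a machine with
    // less system RAM than VRAM gets into trouble.
    std::vector<unsigned char> staging(n);
    CHECK(cudaSetDevice(src_dev));
    CHECK(cudaMemcpy(staging.data(), src, n, cudaMemcpyDeviceToHost));
    CHECK(cudaSetDevice(dst_dev));
    CHECK(cudaMemcpy(dst, staging.data(), n, cudaMemcpyHostToDevice));
    (void) stream; // the fallback here is synchronous for simplicity
#endif
}
```

The same pattern applies on ROCm through HIP's equivalent calls; the flag had been set for ROCm builds as a workaround, and this commit removes it because that fallback behavior was causing the crashes described above.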