This introduces a small array type for holding GGUF arrays that
prevents an array from growing too large. It preserves the total size
of the array but limits the number of elements that are actually
allocated.
Extremely large GGUF arrays, such as token lists, are generally
uninteresting to users and are not worth the memory overhead or the
time spent allocating and freeing them. They are necessary for
inference, but not for inspection.
The size of these arrays is, however, important in Ollama, so it is
preserved in a separate field on the array type.
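A minimal sketch of the idea in Go; the cap value and the type and
field names here are illustrative, not the actual implementation:

    // Hypothetical cap on how many elements are retained.
    const maxArraySize = 1024

    // array retains at most maxArraySize elements but remembers how
    // many elements the GGUF file actually contained.
    type array struct {
        size   int   // total element count from the file
        values []any // at most maxArraySize decoded elements
    }

    func newArray(size int) *array {
        n := size
        if n > maxArraySize {
            n = maxArraySize
        }
        return &array{size: size, values: make([]any, 0, n)}
    }

    // append stores a value only while there is room; the recorded
    // total size is unaffected by the cap.
    func (a *array) append(v any) {
        if len(a.values) < cap(a.values) {
            a.values = append(a.values, v)
        }
    }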
The recent refactoring of the memory prediction assumed all layers
were the same size, but for some models (like deepseek-coder-v2) this
is not the case, so our predictions were significantly off.
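As a hedged illustration, per-layer sizes can be derived from the
tensor metadata itself rather than assumed uniform; the Tensor type
and the "blk.<n>." naming convention are assumptions of this sketch:

    package sketch

    import "fmt"

    // Tensor is a stand-in for the real tensor metadata.
    type Tensor struct {
        Name string
        Size uint64
    }

    // layerSizes sums tensor sizes per layer so the offload estimate
    // can use real per-layer sizes instead of a uniform average.
    func layerSizes(tensors []Tensor, blockCount int) []uint64 {
        sizes := make([]uint64, blockCount)
        for _, t := range tensors {
            var blk int
            // tensor names are assumed to follow the "blk.<n>.<suffix>" convention
            if _, err := fmt.Sscanf(t.Name, "blk.%d.", &blk); err == nil && blk >= 0 && blk < blockCount {
                sizes[blk] += t.Size
            }
        }
        return sizes
    }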
Prior to this change, we logged the memory prediction multiple times
as the scheduler iterated to find a suitable configuration, which could
be confusing since only the last log before the server started was
actually valid. We now log once, just before starting the server, on
the final configuration. The log also reports which library is in use
instead of always saying "offloading to gpu", even when running on the
CPU.
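For example (the function and field names here are illustrative, not
the actual log schema), the single final log line might look like:

    package sketch

    import "log/slog"

    // logFinalConfig emits one log line for the chosen configuration,
    // naming the selected library ("cpu", "cuda", "metal", ...) rather
    // than unconditionally claiming a GPU offload.
    func logFinalConfig(library string, offloaded, total int) {
        slog.Info("final model load configuration",
            "library", library,
            "layers.offloaded", offloaded,
            "layers.total", total)
    }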
On Windows, recent llama.cpp changes make mmap slower in most
cases, so it now defaults to off. This also implements a tri-state for
use_mmap so we can distinguish a user-provided value of true or false
from an unspecified one.
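One way to model the tri-state in Go is a *bool, where nil means
unspecified; this is a sketch of the idea, not necessarily the exact
type used:

    package sketch

    // UseMMap as a pointer gives three states: nil (unspecified),
    // true, and false.
    type Options struct {
        UseMMap *bool `json:"use_mmap,omitempty"`
    }

    func useMMap(o Options, platformDefault bool) bool {
        if o.UseMMap != nil {
            return *o.UseMMap // an explicit user value always wins
        }
        return platformDefault // e.g. false on Windows, true elsewhere
    }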
We update the PATH on Windows to make the CLI discoverable, but this
has an unintended side effect: other apps that use our bundled DLLs
may be terminated when we upgrade.
Still not complete; our prediction needs refinement to understand each
discrete GPU's available space so we can determine how many layers fit
on each one. Since we can't split a single layer across multiple GPUs,
we can't treat free space as one logical block.
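A sketch of the per-GPU fitting this implies: each GPU's free memory
is tracked separately, and a layer is placed only where it fits whole.
The greedy first-fit strategy here is illustrative:

    package sketch

    // fitLayers assigns whole layers to discrete GPUs. A layer cannot
    // be split across devices, so free space is never pooled.
    func fitLayers(layerSizes, gpuFree []uint64) []int {
        perGPU := make([]int, len(gpuFree))
        g := 0
        for _, size := range layerSizes {
            // move on once this GPU can't hold a whole layer
            for g < len(gpuFree) && gpuFree[g] < size {
                g++
            }
            if g == len(gpuFree) {
                break // remaining layers stay on the CPU
            }
            gpuFree[g] -= size
            perGPU[g]++
        }
        return perGPU
    }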
This reverts commit f5f245cc154580fa7b4052c001d2a7e3d771cfb8, reversing
changes made to 94d37fdcae30ddeb6c9f65c8707004f5ec9eaf33.
The reverted change broke GGUF v2, which was incorrectly detected as
big endian.
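For context, endianness has to be inferred from the file itself; one
plausible heuristic (an assumption of this sketch, not the reverted
code) is to read the 4-byte version field and prefer the
interpretation that yields a known version:

    package sketch

    import "encoding/binary"

    // detectByteOrder guesses the byte order of a GGUF file from its
    // version field. Versions 1-3 were the known GGUF versions at the
    // time of this sketch.
    func detectByteOrder(version [4]byte) binary.ByteOrder {
        if v := binary.LittleEndian.Uint32(version[:]); v >= 1 && v <= 3 {
            return binary.LittleEndian
        }
        return binary.BigEndian
    }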
On some systems, 1 minute isn't sufficient to finish the load after it
hits 100%. This creates two distinct timers; both are set to the same
value for now so we can refine the timeouts further.
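A sketch of the split (the names and values are illustrative):

    package sketch

    import "time"

    // Two independent timeouts so the two phases can be tuned
    // separately later; both use the same value for now.
    const (
        loadProgressTimeout = 5 * time.Minute // waiting for load progress to advance
        finalLoadTimeout    = 5 * time.Minute // after progress reaches 100%
    )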