update llama.cpp submodule to `ceca1ae` (ollama#3064)
update llama.cpp submodule to `c29af7e` (ollama#2868)
fix `build_windows.ps1` script to run `go build` with the correct flags
update llama.cpp submodule to `c14f72d`
handle race condition while setting raw mode in windows (ollama#2509)
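The commit above only names the fix; as a rough illustration of one way to avoid such a race (not the code actually merged in ollama#2509), raw-mode changes can be funneled through a mutex-guarded wrapper so a signal handler and the interactive prompt loop never toggle the terminal state concurrently. The `termGuard` type below is a hypothetical name built on `golang.org/x/term`:

```go
// Hedged sketch: serialize terminal raw-mode transitions so concurrent
// callers (e.g. a Ctrl+C handler and the prompt loop) cannot race.
package main

import (
	"os"
	"sync"

	"golang.org/x/term"
)

type termGuard struct {
	mu    sync.Mutex
	state *term.State // nil while the terminal is in its normal (cooked) mode
}

// enterRaw puts stdin into raw mode at most once; concurrent callers block
// until the first transition has completed.
func (g *termGuard) enterRaw() error {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.state != nil {
		return nil // already raw; nothing to do
	}
	st, err := term.MakeRaw(int(os.Stdin.Fd()))
	if err != nil {
		return err
	}
	g.state = st
	return nil
}

// restore undoes raw mode if it was entered; safe to call from any goroutine
// and idempotent, so both a defer and a signal handler may invoke it.
func (g *termGuard) restore() error {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.state == nil {
		return nil // never entered raw mode, or already restored
	}
	err := term.Restore(int(os.Stdin.Fd()), g.state)
	g.state = nil
	return err
}

func main() {
	g := &termGuard{}
	if err := g.enterRaw(); err != nil {
		return
	}
	defer g.restore()
	// interactive read loop would run here
}
```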
Merge pull request ollama#2403 from dhiltgen/handle_tmp_cleanup: Ensure the libraries are present
fix error on `ollama run` with a non-existent model
Merge pull request ollama#2197 from dhiltgen/remove_rocm_image: Add back ROCm container support
update submodule to `cd4fddb29f81d6a1f6d51a0c016bc6b486d68def`
revisit memory allocation to account for full kv cache on main gpu
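The commit message above only summarizes the accounting change; a minimal sketch of the idea, assuming an f16 cache and illustrative model dimensions (none of the names or numbers below come from ollama's code), is to size the whole KV cache up front and subtract it from the main GPU's free VRAM before deciding how many layers to offload there:

```go
// Hedged sketch: reserve room for the *entire* KV cache on the main GPU
// before distributing layers, instead of splitting the cache across devices.
package main

import "fmt"

// kvCacheBytes estimates the size of a full key/value cache.
// The leading 2 covers the separate K and V tensors; the trailing 2 assumes
// two bytes per element (f16).
func kvCacheBytes(layers, contextLen, kvHeads, headDim int) uint64 {
	return uint64(2) * uint64(layers) * uint64(contextLen) *
		uint64(kvHeads) * uint64(headDim) * 2
}

// layersThatFit returns how many layers fit on the main GPU after the full
// KV cache and a fixed graph/scratch overhead have been set aside.
func layersThatFit(freeVRAM, perLayerBytes, kvBytes, overhead uint64) int {
	if freeVRAM <= kvBytes+overhead {
		return 0
	}
	return int((freeVRAM - kvBytes - overhead) / perLayerBytes)
}

func main() {
	// Example: a 7B-class model at a 4096-token context (illustrative numbers).
	kv := kvCacheBytes(32, 4096, 32, 128)
	fmt.Printf("full KV cache: %d MiB\n", kv/(1<<20))
	fmt.Println("layers on main GPU:", layersThatFit(8<<30, 200<<20, kv, 512<<20))
}
```

Charging the full cache to the main device in this way trades fewer offloaded layers on that GPU for not having to split attention state across devices.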