So I enabled the ollama service today, pulled Deepseek-r1:1.7B, and tried to use it. However, whenever I sent a prompt, a new chat would open instead of showing the LLM's output.
Checking the journal, I see this:
nov 02 16:38:21 woody ollama[1631]: /build/source/ml/backend/ggml/ggml/src/ggml-cuda/common.cuh:106: ggml was not compiled with any CUDA arch <= 610
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-base.so(+0x15156) [0x7f0475daa156]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-base.so(ggml_print_backtrace+0x21a) [0x7f0475daa5fa]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-base.so(ggml_abort+0x157) [0x7f0475daa7d7]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cuda.so(_Z31ggml_cuda_highest_compiled_archi+0x28) [0x7f04254bcd58]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cuda.so(_Z24ggml_cuda_should_use_mmq9ggml_typeil+0x87) [0x7f0425569e87]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cuda.so(+0xd6eee) [0x7f04254d6eee]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cuda.so(+0xdb616) [0x7f04254db616]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x11c0e87]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x1235011]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x1236366]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x123984f]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x123a76b]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x117fb91]
nov 02 16:38:21 woody ollama[1631]: /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped() [0x4a79a4]
nov 02 16:38:21 woody ollama[1631]: SIGABRT: abort
nov 02 16:38:21 woody ollama[1631]: PC=0x7f04cca9984c m=0 sigcode=18446744073709551610
nov 02 16:38:21 woody ollama[1631]: signal arrived during cgo execution
nov 02 16:38:21 woody ollama[1631]: goroutine 13 gp=0xc000504c40 m=0 mp=0x217eaa0 [syscall]:
nov 02 16:38:21 woody ollama[1631]: runtime.cgocall(0x117fb50, 0xc00017fbd8)
nov 02 16:38:21 woody ollama[1631]: runtime/cgocall.go:167 +0x4b fp=0xc00017fbb0 sp=0xc00017fb78 pc=0x49ca0b
nov 02 16:38:21 woody ollama[1631]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f04645b0fd0, {0x6, 0x7f046460e4c0, 0x0, 0x7f0464664a40, 0x7f04645b7650, 0x7f0465156660, 0x7f0465160730})
A panic!
But I don't understand the "ggml was not compiled with any CUDA arch <= 610" part. I have two GPUs in the same computer, a 1050 Ti and a 6500 XT. For LLMs I want to use the 1050 Ti, and ollama did detect it correctly:
nov 02 16:38:20 woody ollama[1631]: llama_model_load: vocab only - skipping tensors
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.080-06:00 level=INFO source=server.go:398 msg="starting runner" cmd="/nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/bin/.ollama-wrapped runner --model /var/lib/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --port 36455"
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.090-06:00 level=INFO source=runner.go:864 msg="starting go runner"
nov 02 16:38:21 woody ollama[1631]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
nov 02 16:38:21 woody ollama[1631]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
nov 02 16:38:21 woody ollama[1631]: ggml_cuda_init: found 1 CUDA devices:
nov 02 16:38:21 woody ollama[1631]: Device 0: NVIDIA GeForce GTX 1050 Ti, compute capability 6.1, VMM: yes, ID: GPU-e179f674-b924-3f93-1122-128082dafddf
nov 02 16:38:21 woody ollama[1631]: load_backend: loaded CUDA backend from /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cuda.so
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.169-06:00 level=INFO source=server.go:503 msg="system memory" total="31.3 GiB" free="22.8 GiB" free_swap="50.0 GiB"
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.169-06:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/var/lib/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc library=cuda parallel=1 required="1.8 GiB" gpus=1
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.169-06:00 level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[3.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.8 GiB" memory.required.partial="1.8 GiB" memory.required.kv="112.0 MiB" memory.required.allocations="[1.8 GiB]" memory.weights.total="934.7 MiB" memory.weights.repeating="752.1 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
nov 02 16:38:21 woody ollama[1631]: load_backend: loaded CPU backend from /nix/store/zyxx6r23fw4dy00vn4i1s5g6s9hd7mg2-ollama-0.11.10/lib/ollama/libggml-cpu-haswell.so
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.170-06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,890,900,1000,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.170-06:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:36455"
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.181-06:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:6 GPULayers:29[ID:GPU-e179f674-b924-3f93-1122-128082dafddf Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.181-06:00 level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
nov 02 16:38:21 woody ollama[1631]: time=2025-11-02T16:38:21.181-06:00 level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
nov 02 16:38:21 woody ollama[1631]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce GTX 1050 Ti) - 3982 MiB free
nov 02 16:38:21 woody ollama[1631]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /var/lib/ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
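What I do notice in that log is the line CUDA.0.ARCHS=750,800,860,890,900,1000,1200: nothing at or below 610, which seems to be exactly what the abort message complains about, even though the very same log reports the 1050 Ti as compute capability 6.1 (arch 610). To double-check the card outside of ollama, I assume nvidia-smi --query-gpu=name,compute_cap --format=csv would show the same 6.1 (compute_cap should be a supported query field on recent drivers, but I'm not certain).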
This is how I configured the service:
services = {
  ollama = {
    enable = true;
    acceleration = "cuda";
  };
  nextjs-ollama-llm-ui = {
    enable = true;
    port = 3999;
  };
};
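If the problem is just that the Nix-built ollama wasn't compiled for compute capability 6.1, would something along these lines be the right direction? This is an untested sketch: I'm assuming the ollama derivation actually honors nixpkgs.config.cudaCapabilities, and I guess it would force a local rebuild instead of pulling from the binary cache.

# Untested sketch: ask nixpkgs to build CUDA packages for the 1050 Ti's arch (6.1 / sm_61).
# Assumes the ollama package picks these flags up; services.ollama stays as above.
nixpkgs.config = {
  cudaSupport = true;
  cudaCapabilities = [ "6.1" ];
};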
Output of nvidia-smi:
Sun Nov 2 16:50:57 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.195.03 Driver Version: 570.195.03 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1050 Ti Off | 00000000:04:00.0 Off | N/A |
| 46% 33C P8 N/A / 75W | 6MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 3369 G ...urrent-system/sw/bin/Hyprland 1MiB |
+-----------------------------------------------------------------------------------------+
(I don't know why Hyprland is using 1 MiB of VRAM on my 1050 Ti.)