When running various ollama run MODEL commands, the following error prevents models like QwQ from running. It occurs on both the ollama and ollama-rocm packages, and the error is identical across the models I tried on each package:
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: ROCm error: invalid device function
current device: 0, in function ggml_cuda_compute_forward at /build/source/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2672
err
/build/source/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:88: ROCm error
It doesn’t appear to be related to model size: when I tried other models I got a different error, one claiming the available RAM was insufficient.
Is this a known nixpkgs misconfiguration, or a problem with my AMD CPU / AMD GPU NixOS system?
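For context on what I've tried: "invalid device function" from ROCm usually means the HIP kernels weren't compiled for the GPU's gfx ISA (you can check yours with rocminfo | grep gfx). One workaround I've seen suggested is overriding the reported gfx version via the standard ROCm environment variable HSA_OVERRIDE_GFX_VERSION. A sketch of what I'd try in configuration.nix, assuming the services.ollama module exposes an environmentVariables option in my nixpkgs revision, and with "10.3.0" (gfx1030) as a placeholder value that would need to match the nearest officially supported ISA for my card:

```nix
services.ollama = {
  enable = true;
  acceleration = "rocm";
  environmentVariables = {
    # Assumption: make ROCm treat the GPU as gfx1030; replace "10.3.0"
    # with the version closest to the gfx target rocminfo reports.
    HSA_OVERRIDE_GFX_VERSION = "10.3.0";
  };
};
```

I haven't confirmed this is the right fix for my card, so pointers on whether the override is appropriate here would be appreciated.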