So I'm running InvokeAI in a Docker container, but it's very slow. My AMD GPU is supposed to have 20GB of VRAM, so I figure something is wrong; it seems like the GPU isn't being used at all.
The `docker run` command I was told to use passes the GPU through via its device nodes, but I assume those paths aren't where NixOS keeps them. They may work for Debian- or Arch-based distros:

```
docker run --device /dev/kfd --device /dev/dri \
  --publish 9090:9090 --name invokeai -d \
  --volume ~/invokeai:/invokeai ghcr.io/invoke-ai/invokeai:v4.2.9-rocm
```

I'm talking about the `--device` part.
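For what it's worth, this is just how I'd sanity-check whether those paths even exist on the host (not something from the InvokeAI docs):

```
# check that the ROCm compute node and the DRI render nodes exist on the host
ls -l /dev/kfd /dev/dri
```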
The InvokeAI docs state that the `rocm-smi` command is supposed to work. It does on my base system, without any errors (or so I believe):
```
========================================= ROCm System Management Interface =========================================
=================================================== Concise Info ===================================================
Device  [Model : Revision]    Temp    Power    Partitions      SCLK   MCLK     Fan  Perf  PwrCap       VRAM%  GPU%
        Name (20 chars)       (Edge)  (Avg)    (Mem, Compute)
====================================================================================================================
0       [0x1002 : 0xcc]       35.0°C  11.0W    N/A, N/A        26Mhz  96Mhz    0%   auto  265.0W       7%     0%
        0x1002
1       [0x1002 : 0xc5]       41.0°C  21.096W  N/A, N/A        None   2400Mhz  0%   auto  Unsupported  5%     0%
        0x1002
====================================================================================================================
=============================================== End of ROCm SMI Log ================================================
```
But it doesn't work inside my Docker container.
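In case I'm doing that part wrong, this is how I'm checking inside the container (`invokeai` is the container name from the run command above):

```
# run rocm-smi inside the already-running container
docker exec -it invokeai rocm-smi
```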
How should I approach this? It's pretty difficult to find information about this combination.
My assumption is that I need to find out where NixOS keeps the GPU device/driver and then somehow pass that into Docker.
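Something like this is what I have in mind, based on what I've read about ROCm containers elsewhere; the `--group-add` lines are a guess that the container user needs the host's `render`/`video` group IDs to be allowed to open the device nodes:

```
# guess: same device nodes as before, plus the host group IDs that own them,
# so the container user can open /dev/kfd and /dev/dri/renderD*
docker run --device /dev/kfd --device /dev/dri \
  --group-add "$(getent group render | cut -d: -f3)" \
  --group-add "$(getent group video | cut -d: -f3)" \
  --publish 9090:9090 --name invokeai -d \
  --volume ~/invokeai:/invokeai ghcr.io/invoke-ai/invokeai:v4.2.9-rocm
```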
Alternatively, maybe I need to build my own image based on the already-working one, with something ROCm-related in its build steps?
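For example, a hypothetical Dockerfile like this; `HSA_OVERRIDE_GFX_VERSION` is a real ROCm environment variable, but the right value depends on the exact GPU, so `11.0.0` here is just a placeholder:

```
# hypothetical Dockerfile extending the official ROCm image
FROM ghcr.io/invoke-ai/invokeai:v4.2.9-rocm
# force ROCm to treat the card as a known gfx target (value is GPU-specific)
ENV HSA_OVERRIDE_GFX_VERSION=11.0.0
```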
I'm sorry if this is obvious, and thank you in advance for any sort of help.