So basically I wanted to run the Stable Diffusion web UI on my machine to see if it works. Currently I'm trying to do this with InvokeAI, because I've read that it's the more Nix-friendly way of doing it.
I'm stuck with the following error:
Traceback (most recent call last):
File "/home/morswin/invokeai/.venv/bin/invokeai-web", line 8, in <module>
sys.exit(run_app())
^^^^^^^^^
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/run_app.py", line 10, in run_app
from invokeai.app.api_app import invoke_api
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api_app.py", line 23, in <module>
from invokeai.app.api.dependencies import ApiDependencies
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api/dependencies.py", line 14, in <module>
from invokeai.app.services.download.download_default import DownloadQueueService
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/services/download/__init__.py", line 3, in <module>
from invokeai.app.services.download.download_base import (
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/services/download/download_base.py", line 13, in <module>
from invokeai.backend.model_manager.metadata import RemoteModelFile
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/__init__.py", line 16, in <module>
from invokeai.backend.model_manager.load import LoadedModel
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/__init__.py", line 9, in <module>
from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/load_base.py", line 21, in <module>
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase, ModelLockerBase
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/__init__.py", line 4, in <module>
from .model_cache_default import ModelCache # noqa F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 39, in <module>
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_util.py", line 14, in <module>
from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import DepthAnythingPipeline
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/image_util/__init__.py", line 12, in <module>
from invokeai.backend.image_util.util import InitImageResizer, make_grid # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/image_util/util.py", line 4, in <module>
import cv2
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/cv2/__init__.py", line 181, in <module>
bootstrap()
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/cv2/__init__.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/current-system/sw/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
From what I understand, I'm missing the libGL.so.1 shared library (the OpenGL client library, normally provided by Mesa).
I am already using nix-shell and python venv to run all of this.
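To double-check, a quick snippet like this (run inside the venv) shows whether the dynamic loader can locate libGL at all; I'd expect it to print None in my broken environment:

```python
import ctypes.util

# Ask the loader machinery where libGL lives; this returns a soname
# like "libGL.so.1" when the library is found, or None when it cannot
# be located on the search path.
print(ctypes.util.find_library("GL"))
```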
The shell.nix file I’m using looks like this:
I commented out some parts because nothing I found on the internet actually helped me.
This line:
LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib/:/run/opengl-driver/lib/";
is here because I was getting a different error, the following one, which this line solved:
Traceback (most recent call last):
File "/home/morswin/invokeai/.venv/bin/invokeai-web", line 8, in <module>
sys.exit(run_app())
^^^^^^^^^
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/run_app.py", line 10, in run_app
from invokeai.app.api_app import invoke_api
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api_app.py", line 8, in <module>
import torch
File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/torch/__init__.py", line 237, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: libstdc++.so.6: cannot open shared object file: No such file or directory
I’m adding this in case it may help.
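For context, a stripped-down shell.nix along the lines of what I've been experimenting with looks like this (just a sketch; that pkgs.libGL is the right attribute for libGL.so.1 is my assumption):

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [ pkgs.python311 ];

  # libstdc++.so.6 comes from the compiler runtime, libGL.so.1 from
  # libGL; /run/opengl-driver/lib holds the vendor GL drivers on NixOS.
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
    pkgs.stdenv.cc.cc.lib
    pkgs.libGL
  ] + ":/run/opengl-driver/lib";
}
```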
I also tried installing the following packages system-wide, the regular nixos-rebuild way:
mesa
libGL
gcc14
python311Packages.opencv4
And none of them solved my issue.
I also have OpenGL enabled in my hardware configuration, with the snippet that was required for me to run DaVinci Resolve, which works for me:
And finally, yes, I have the AMD drivers installed.
Is there anything I can do to solve this issue? I'd like to be able to run this web UI. NixOS solved so many issues I had with Arch-based distros (no crashes so far and such), but I'm stuck on the Nix way of running things. I'm convinced I need to add something to my shell.nix file, but what?
Is there any information I've forgotten to provide, like some kind of logs?
Also, sorry for my English and any mistakes or typos; it's not my first language.
And thank you all in advance for any help you're willing to provide.
The project already has a flake.nix file, so you don't have to write your own shell. All you need to do is run nix develop in the repo, and it should install and set up everything you need.
To run the WebUI from there, all you need to do is run one of the following commands in a locally cloned repo, depending on your GPU:
nix run .#invokeai-amd
nix run .#invokeai-nvidia
If you want to try it out without cloning, run:
nix run github:nixified-ai/flake#invokeai-amd
nix run github:nixified-ai/flake#invokeai-nvidia
Notes:
If you're gonna use nixified-ai, it might be worth setting up its binary cache to save on compilation time. After you add the cache, rebuild and switch to your system configuration before running the flake, so the cache actually takes effect.
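For reference, enabling the cache looks roughly like this in configuration.nix (a sketch; the public-key placeholder must be replaced with the actual key from the nixified-ai README, which I'm not reproducing from memory):

```nix
nix.settings = {
  # Binary cache used by the nixified-ai flake (copy the real key
  # from the project's README before rebuilding).
  extra-substituters = [ "https://ai.cachix.org" ];
  extra-trusted-public-keys = [ "ai.cachix.org-1:<key-from-README>" ];
};
```

After a nixos-rebuild switch, subsequent nix run invocations can pull prebuilt artifacts instead of compiling them locally.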
It’s been a while since nixified-ai has been updated, so invokeai might be out of date.
For some reason it didn’t execute correctly without it.
Additionally, I encounter terminal visual glitches, and I have no idea why. The text UI at the initial configuration stage overwrites its own prompts. Is this InvokeAI's fault? It may be mine, but switching between bash and fish didn't change the outcome. I can still get through it, but it's often incomprehensible what input I'm actually giving there.
Maybe the issue is that I'm using the kitty terminal emulator?
Anyway, I can now start the program but it first fails with the following error:
[2024-09-04 20:59:21,542]::[InvokeAI]::INFO --> Installing runwayml/stable-diffusion-v1-5 [3/11]
Traceback (most recent call last):
File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 269, in hf_raise_for_status
response.raise_for_status()
File "/nix/store/kjacwxgfwl9mj4r2472y49rxgdgxm699-python3.11-requests-2.31.0/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/bin/.invokeai-configure-wrapped", line 9, in <module>
sys.exit(invokeai_configure())
^^^^^^^^^^^^^^^^^^^^
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/invokeai_configure.py", line 946, in main
process_and_execute(opt, models_to_download)
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/frontend/install/model_install.py", line 697, in process_and_execute
installer.install(selections)
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 217, in install
self.heuristic_import(path)
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 262, in heuristic_import
models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 332, in _install_repo
hinfo = HfApi().model_info(repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1867, in model_info
hf_raise_for_status(r)
File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 310, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66d8ae09-749431e763269cfa608710dc;abc5bb59-2e29-40fe-89f6-e2576c6adc8e)
Repository Not Found for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
Then when I actually try to use its website interface, I get the following log/error:
OK, sorry, but at least for me, all of those options have failed.
Either something only supports NVIDIA while I have an AMD graphics card,
or, even more often, it fails at managing models:
it either fails at the stage where it downloads a model, or it doesn't see the model afterwards.
But the flake.nix on the InvokeAI website contains interesting stuff, which may help me put together my own flake.nix or shell.nix file as I make further attempts.
Sorry for no logs this time, but there were too many different attempts.
Next, I will try doing this with InvokeAI's Docker container, because I assume everything would be correctly set up there.
If that fails, I will again try making my own shell or flake, as I mentioned earlier.
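If I end up writing my own flake, my starting point would probably be something like this (an untested sketch; the package list and library paths are guesswork on my part):

```nix
{
  description = "Dev shell for running InvokeAI (sketch)";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.python311 ];
        # Same library-path trick I tried in shell.nix: compiler
        # runtime for libstdc++, libGL for OpenGL, plus the vendor
        # driver directory NixOS populates at runtime.
        LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
          pkgs.stdenv.cc.cc.lib
          pkgs.libGL
        ] + ":/run/opengl-driver/lib";
      };
    };
}
```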
And of course, I will mention here if I succeed in any way.
I don’t want anybody else to go through this frustration I feel.
Anyway, thank you @eljamm for the help.
I may need to study more how to use Nix, both as a language and as a tool.