NixOS doesn't see the libGL.so.1 library

So basically, I wanted to run a Stable Diffusion web UI on my machine to see if it works. Currently I'm trying to do this with InvokeAI, because I've read that this is the more Nix-like way of doing it.

I’m stuck with the following error:

Traceback (most recent call last):
  File "/home/morswin/invokeai/.venv/bin/invokeai-web", line 8, in <module>
    sys.exit(run_app())
             ^^^^^^^^^
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/run_app.py", line 10, in run_app
    from invokeai.app.api_app import invoke_api
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api_app.py", line 23, in <module>
    from invokeai.app.api.dependencies import ApiDependencies
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api/dependencies.py", line 14, in <module>
    from invokeai.app.services.download.download_default import DownloadQueueService
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/services/download/__init__.py", line 3, in <module>
    from invokeai.app.services.download.download_base import (
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/services/download/download_base.py", line 13, in <module>
    from invokeai.backend.model_manager.metadata import RemoteModelFile
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/__init__.py", line 16, in <module>
    from invokeai.backend.model_manager.load import LoadedModel
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/__init__.py", line 9, in <module>
    from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig, ModelLoaderBase
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/load_base.py", line 21, in <module>
    from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase, ModelLockerBase
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/__init__.py", line 4, in <module>
    from .model_cache_default import ModelCache  # noqa F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 39, in <module>
    from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_util.py", line 14, in <module>
    from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import DepthAnythingPipeline
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/image_util/__init__.py", line 12, in <module>
    from invokeai.backend.image_util.util import InitImageResizer, make_grid  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/image_util/util.py", line 4, in <module>
    import cv2
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/run/current-system/sw/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

From what I understand, I'm missing the libGL.so.1 C library.
I'm already running all of this inside nix-shell and a Python venv.
The shell.nix file I'm using looks like this:

{ pkgs ? import <nixpkgs> {} }:
  pkgs.mkShell {
    LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib/:/run/opengl-driver/lib/";
    # nativeBuildInputs = with pkgs; [
    # ];
    # buildInputs = with pkgs; [
    # ];
}

I commented out those parts because nothing I found on the internet actually helped me.
This line:

LD_LIBRARY_PATH = "${pkgs.stdenv.cc.cc.lib}/lib/:/run/opengl-driver/lib/";

is here because I was previously getting a different error, shown below, which this line solved.

Traceback (most recent call last):
  File "/home/morswin/invokeai/.venv/bin/invokeai-web", line 8, in <module>
    sys.exit(run_app())
             ^^^^^^^^^
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/run_app.py", line 10, in run_app
    from invokeai.app.api_app import invoke_api
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api_app.py", line 8, in <module>
    import torch
  File "/home/morswin/invokeai/.venv/lib/python3.11/site-packages/torch/__init__.py", line 237, in <module>
    from torch._C import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: libstdc++.so.6: cannot open shared object file: No such file or directory

I'm adding this in case it helps.

I also tried installing the following packages system-wide, via the regular nixos-rebuild route:

  • mesa
  • libGL
  • gcc14
  • python311Packages.opencv4

And none of them solved my issue.
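For reference, the usual way to make libGL.so.1 visible inside a nix-shell is to put pkgs.libGL (plus the vendor driver path) on LD_LIBRARY_PATH in the shell itself, rather than installing mesa system-wide. A sketch extending the shell.nix above — untested on this exact setup, and the glib entry is an assumption (cv2 sometimes also wants libgthread):

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  # Make libstdc++.so.6, libGL.so.1, and the vendor driver visible
  # to pip-installed wheels such as opencv-python and torch.
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
    pkgs.stdenv.cc.cc.lib   # provides libstdc++.so.6
    pkgs.libGL              # provides libGL.so.1 (libglvnd)
    pkgs.glib               # assumption: sometimes needed by cv2
  ] + ":/run/opengl-driver/lib";
}
```

makeLibraryPath just joins the packages' lib/ directories with colons, so this is the same idea as the hand-written string above, only with libGL added.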

I also have OpenGL enabled in my hardware configuration, with this part that was required to run DaVinci Resolve (which works for me):

hardware.opengl = {
  enable = true;
  extraPackages = with pkgs; [
    rocmPackages.clr.icd
  ];
};

And finally, yes, I do have the AMD drivers installed.

Is there anything I can do to solve this issue? I'd like to be able to run this web UI. NixOS has solved so many issues I had with Arch-based distros (no crashes so far, and such), but I'm stuck on this Nix way of running things. I'm convinced I need to add something to my shell.nix file, but what?
Is there any piece of information I've forgotten to provide, like some kind of logs?

Also, sorry for my English and any mistakes and typos I've made; it's not my first language.
And thank you all in advance for any help you're willing to provide.

The project already has a flake.nix file, so you don't have to write your own shell. All you need to do is run nix develop and it should install and set up everything you need.

That said, you could also try GitHub - nixified-ai/flake: A Nix flake for many AI projects, which automates the install even further.

To run the WebUI from there, all you need to do is run one of the following commands in a locally cloned repo, depending on your GPU:

  • nix run .#invokeai-amd
  • nix run .#invokeai-nvidia

If you want to try it out without cloning, run:

  • nix run github:nixified-ai/flake#invokeai-amd
  • nix run github:nixified-ai/flake#invokeai-nvidia

Notes:

  • If you're going to use nixified-ai, it's worth setting up the binary cache to save on compilation time. After you add the cache, rebuild and switch to your system configuration before you run the flake, so the cache is actually used.
  • It’s been a while since nixified-ai has been updated, so invokeai might be out of date.
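Setting up the cache amounts to adding the substituter and its public key to your NixOS configuration. A sketch — the URL and key below are placeholders; copy the real values from the nixified-ai README:

```nix
# configuration.nix (placeholder values; take the real ones
# from the nixified-ai README)
nix.settings = {
  substituters = [ "https://ai.cachix.org" ];
  trusted-public-keys = [ "ai.cachix.org-1:<public-key-from-README>" ];
};
```

After editing, run nixos-rebuild switch once so the cache is trusted before you invoke the flake.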

This is the thing I was missing.

For some reason it didn’t execute correctly without it.

Additionally, I'm encountering what I'd call visual glitches in the terminal, and I have no idea why. The text UI at the initial configuration stage overwrites its own prompts. Is this InvokeAI's fault? It may be mine, but switching between bash and fish didn't change the outcome. I can still get through it, but it's often hard to tell what input I'm actually giving.

Maybe the issue is that I'm using the kitty terminal emulator?

Anyway, I can now start the program, but it first fails with the following error:

[2024-09-04 20:59:21,542]::[InvokeAI]::INFO --> Installing runwayml/stable-diffusion-v1-5 [3/11]
Traceback (most recent call last):
  File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 269, in hf_raise_for_status
    response.raise_for_status()
  File "/nix/store/kjacwxgfwl9mj4r2472y49rxgdgxm699-python3.11-requests-2.31.0/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/bin/.invokeai-configure-wrapped", line 9, in <module>
    sys.exit(invokeai_configure())
             ^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/invokeai_configure.py", line 946, in main
    process_and_execute(opt, models_to_download)
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/frontend/install/model_install.py", line 697, in process_and_execute
    installer.install(selections)
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 217, in install
    self.heuristic_import(path)
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 262, in heuristic_import
    models_installed.update({str(model_path_id_or_url): self._install_repo(str(model_path_id_or_url))})
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/backend/install/model_install_backend.py", line 332, in _install_repo
    hinfo = HfApi().model_info(repo_id)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1867, in model_info
    hf_raise_for_status(r)
  File "/nix/store/lfgqglkfzcxsh1sc6ppdgsdp82x6r21g-python3.11-huggingface-hub-0.18.0/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 310, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66d8ae09-749431e763269cfa608710dc;abc5bb59-2e29-40fe-89f6-e2576c6adc8e)

Repository Not Found for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

Then when I actually try to use its website interface, I get the following log/error:

2024-09-04 20:59:50.372231682 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:1827 CreateInferencePybindStateModule] Init provider bridge failed.
[2024-09-04 20:59:52,525]::[InvokeAI]::INFO --> Patchmatch initialized
/nix/store/knqd0zgkmj3pajqcmh785qc6m8hjf0hc-python3.11-torchvision-0.15.2/lib/python3.11/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
[2024-09-04 20:59:53,140]::[uvicorn.error]::INFO --> Started server process [45839]
[2024-09-04 20:59:53,140]::[uvicorn.error]::INFO --> Waiting for application startup.
[2024-09-04 20:59:53,140]::[InvokeAI]::INFO --> InvokeAI version 3.3.0post3
[2024-09-04 20:59:53,140]::[InvokeAI]::INFO --> Root directory = /home/morswin/invokeai
[2024-09-04 20:59:53,140]::[InvokeAI]::INFO --> Using database at /home/morswin/invokeai/databases/invokeai.db
[2024-09-04 20:59:53,194]::[InvokeAI]::INFO --> GPU device = cuda AMD Radeon RX 7900 XT
[2024-09-04 20:59:53,195]::[InvokeAI]::INFO --> Scanning /home/morswin/invokeai/models for new models
[2024-09-04 20:59:53,330]::[InvokeAI]::INFO --> Scanned 9 files and directories, imported 0 models
[2024-09-04 20:59:53,330]::[InvokeAI]::INFO --> Model manager service initialized
[2024-09-04 20:59:53,393]::[InvokeAI]::INFO --> Pruned 0 finished queue items
[2024-09-04 20:59:53,396]::[InvokeAI]::INFO --> Cleaned database
[2024-09-04 20:59:53,396]::[uvicorn.error]::INFO --> Application startup complete.
[2024-09-04 20:59:53,397]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2024-09-04 21:00:03,244]::[uvicorn.access]::INFO --> 127.0.0.1:37396 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q7Wg HTTP/1.1" 200
[2024-09-04 21:00:03,287]::[uvicorn.access]::INFO --> 127.0.0.1:37396 - "GET /api/v1/app/version HTTP/1.1" 200
[2024-09-04 21:00:03,288]::[uvicorn.access]::INFO --> 127.0.0.1:37402 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-09-04 21:00:04,779]::[InvokeAI]::INFO --> NSFW checker initialized
[2024-09-04 21:00:04,779]::[uvicorn.access]::INFO --> 127.0.0.1:37406 - "GET /api/v1/app/config HTTP/1.1" 200
[2024-09-04 21:00:04,780]::[uvicorn.access]::INFO --> 127.0.0.1:37418 - "GET /api/v1/models/?model_type=embedding HTTP/1.1" 200
[2024-09-04 21:00:04,780]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=main HTTP/1.1" 200
[2024-09-04 21:00:04,781]::[uvicorn.access]::INFO --> 127.0.0.1:37436 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=onnx HTTP/1.1" 200
[2024-09-04 21:00:04,781]::[uvicorn.access]::INFO --> 127.0.0.1:37396 - "GET /api/v1/models/?model_type=vae HTTP/1.1" 200
[2024-09-04 21:00:04,782]::[uvicorn.access]::INFO --> 127.0.0.1:37402 - "GET /api/v1/models/?model_type=controlnet HTTP/1.1" 200
[2024-09-04 21:00:04,782]::[uvicorn.access]::INFO --> 127.0.0.1:37406 - "GET /api/v1/models/?model_type=t2i_adapter HTTP/1.1" 200
[2024-09-04 21:00:04,783]::[uvicorn.error]::INFO --> ('127.0.0.1', 37448) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=O7dN_QkqDBIRJ5c9AAAA" [accepted]
[2024-09-04 21:00:04,783]::[uvicorn.error]::INFO --> connection open
[2024-09-04 21:00:04,784]::[uvicorn.access]::INFO --> 127.0.0.1:37418 - "GET /api/v1/models/?model_type=ip_adapter HTTP/1.1" 200
[2024-09-04 21:00:04,785]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /api/v1/models/?model_type=lora HTTP/1.1" 200
[2024-09-04 21:00:04,785]::[uvicorn.access]::INFO --> 127.0.0.1:37436 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-09-04 21:00:04,786]::[uvicorn.access]::INFO --> 127.0.0.1:37396 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:04,786]::[uvicorn.access]::INFO --> 127.0.0.1:37402 - "GET /api/v1/images/?board_id=none&categories=control&categories=mask&categories=user&categories=other&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:04,787]::[uvicorn.access]::INFO --> 127.0.0.1:37406 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=100&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:04,787]::[uvicorn.access]::INFO --> 127.0.0.1:37418 - "POST /socket.io/?EIO=4&transport=polling&t=P6-Q7XU&sid=O7dN_QkqDBIRJ5c9AAAA HTTP/1.1" 200
[2024-09-04 21:00:04,787]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q7XW&sid=O7dN_QkqDBIRJ5c9AAAA HTTP/1.1" 200
[2024-09-04 21:00:04,853]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /api/v1/models/?base_models=sdxl-refiner&model_type=main HTTP/1.1" 200
[2024-09-04 21:00:04,855]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q7vr&sid=O7dN_QkqDBIRJ5c9AAAA HTTP/1.1" 200
[2024-09-04 21:00:05,790]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "GET /openapi.json HTTP/1.1" 200
Task exception was never retrieved
future: <Task finished name='Task-67' coro=<AsyncServer._handle_event_internal() done, defined at /nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py:513> exception=TypeError("object NoneType can't be used in 'await' expression")>
Traceback (most recent call last):
  File "/nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py", line 515, in _handle_event_internal
    r = await server._trigger_event(data[0], namespace, sid, *data[1:])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py", line 549, in _trigger_event
    ret = await handler(*args)
          ^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/app/api/sockets.py", line 33, in _handle_sub_queue
    await self.__sio.enter_room(sid, data["queue_id"])
TypeError: object NoneType can't be used in 'await' expression
[2024-09-04 21:00:05,913]::[uvicorn.access]::INFO --> 127.0.0.1:37428 - "POST /api/v1/utilities/dynamicprompts HTTP/1.1" 200
[2024-09-04 21:00:11,905]::[uvicorn.access]::INFO --> 127.0.0.1:37462 - "GET / HTTP/1.1" 304
[2024-09-04 21:00:11,908]::[uvicorn.error]::INFO --> connection closed
[2024-09-04 21:00:12,085]::[uvicorn.access]::INFO --> 127.0.0.1:37462 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q9gp HTTP/1.1" 200
[2024-09-04 21:00:12,134]::[uvicorn.access]::INFO --> 127.0.0.1:54608 - "GET /api/v1/app/version HTTP/1.1" 200
[2024-09-04 21:00:12,136]::[uvicorn.access]::INFO --> 127.0.0.1:54620 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-09-04 21:00:12,137]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "GET /api/v1/app/config HTTP/1.1" 200
[2024-09-04 21:00:12,137]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "GET /api/v1/models/?model_type=embedding HTTP/1.1" 200
[2024-09-04 21:00:12,137]::[uvicorn.access]::INFO --> 127.0.0.1:54656 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=main HTTP/1.1" 200
[2024-09-04 21:00:12,157]::[uvicorn.access]::INFO --> 127.0.0.1:37462 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=onnx HTTP/1.1" 200
[2024-09-04 21:00:12,158]::[uvicorn.access]::INFO --> 127.0.0.1:54608 - "GET /api/v1/models/?model_type=vae HTTP/1.1" 200
[2024-09-04 21:00:12,158]::[uvicorn.access]::INFO --> 127.0.0.1:54620 - "GET /api/v1/models/?base_models=sdxl-refiner&model_type=main HTTP/1.1" 200
[2024-09-04 21:00:12,159]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "GET /api/v1/models/?model_type=controlnet HTTP/1.1" 200
[2024-09-04 21:00:12,159]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "GET /api/v1/models/?model_type=t2i_adapter HTTP/1.1" 200
[2024-09-04 21:00:12,159]::[uvicorn.access]::INFO --> 127.0.0.1:54656 - "GET /api/v1/models/?model_type=ip_adapter HTTP/1.1" 200
[2024-09-04 21:00:12,159]::[uvicorn.access]::INFO --> 127.0.0.1:37462 - "GET /api/v1/models/?model_type=lora HTTP/1.1" 200
[2024-09-04 21:00:12,160]::[uvicorn.access]::INFO --> 127.0.0.1:54608 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-09-04 21:00:12,161]::[uvicorn.access]::INFO --> 127.0.0.1:54620 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:12,163]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "GET /api/v1/images/?board_id=none&categories=control&categories=mask&categories=user&categories=other&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:12,164]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=100&offset=0 HTTP/1.1" 200
[2024-09-04 21:00:12,200]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "POST /socket.io/?EIO=4&transport=polling&t=P6-Q9ib&sid=AZKcbmaBAFO6XUdZAAAC HTTP/1.1" 200
[2024-09-04 21:00:12,200]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q9ic&sid=AZKcbmaBAFO6XUdZAAAC HTTP/1.1" 200
[2024-09-04 21:00:12,201]::[uvicorn.error]::INFO --> ('127.0.0.1', 54660) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=AZKcbmaBAFO6XUdZAAAC" [accepted]
[2024-09-04 21:00:12,201]::[uvicorn.error]::INFO --> connection open
[2024-09-04 21:00:12,212]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q9in&sid=AZKcbmaBAFO6XUdZAAAC HTTP/1.1" 200
[2024-09-04 21:00:12,217]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "GET /openapi.json HTTP/1.1" 200
[2024-09-04 21:00:12,218]::[uvicorn.access]::INFO --> 127.0.0.1:54628 - "POST /socket.io/?EIO=4&transport=polling&t=P6-Q9ir&sid=AZKcbmaBAFO6XUdZAAAC HTTP/1.1" 200
Task exception was never retrieved
future: <Task finished name='Task-132' coro=<AsyncServer._handle_event_internal() done, defined at /nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py:513> exception=TypeError("object NoneType can't be used in 'await' expression")>
Traceback (most recent call last):
  File "/nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py", line 515, in _handle_event_internal
    r = await server._trigger_event(data[0], namespace, sid, *data[1:])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/mv7p2fncq5qnlc1xsmzl60ppicmc1a7d-python3.11-python-socketio-5.9.0/lib/python3.11/site-packages/socketio/asyncio_server.py", line 549, in _trigger_event
    ret = await handler(*args)
          ^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/10b4kyq33crhfcpw3iiqvc4025bm6129-python3.11-InvokeAI-3.3.0post3/lib/python3.11/site-packages/invokeai/app/api/sockets.py", line 33, in _handle_sub_queue
    await self.__sio.enter_room(sid, data["queue_id"])
TypeError: object NoneType can't be used in 'await' expression
[2024-09-04 21:00:12,287]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "GET /socket.io/?EIO=4&transport=polling&t=P6-Q9j-&sid=AZKcbmaBAFO6XUdZAAAC HTTP/1.1" 200
[2024-09-04 21:00:13,218]::[uvicorn.access]::INFO --> 127.0.0.1:54642 - "POST /api/v1/utilities/dynamicprompts HTTP/1.1" 200
[2024-09-04 21:00:25,541]::[uvicorn.access]::INFO --> 127.0.0.1:58432 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-09-04 21:00:25,601]::[uvicorn.access]::INFO --> 127.0.0.1:58432 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-09-04 21:00:25,604]::[uvicorn.access]::INFO --> 127.0.0.1:58442 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-09-04 21:00:25,637]::[InvokeAI]::INFO --> Loading model /home/morswin/invokeai/models/sdxl/main/stable-diffusion-xl-base-1-0, type sdxl:main:tokenizer
[2024-09-04 21:00:26,231]::[InvokeAI]::INFO --> Loading model /home/morswin/invokeai/models/sdxl/main/stable-diffusion-xl-base-1-0, type sdxl:main:text_encoder
[2024-09-04 21:00:27,728]::[InvokeAI]::INFO --> Loading model /home/morswin/invokeai/models/sdxl/main/stable-diffusion-xl-base-1-0, type sdxl:main:tokenizer_2
[2024-09-04 21:00:28,193]::[InvokeAI]::INFO --> Loading model /home/morswin/invokeai/models/sdxl/main/stable-diffusion-xl-base-1-0, type sdxl:main:text_encoder_2
fish: Job 1, 'nix run github:nixified-ai/flak…' terminated by signal SIGSEGV (Address boundary error)

I understand this as the program not being able to download the models from the Hugging Face website.
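If the 401 really is an authentication problem (the repo could also simply have been removed or gated on Hugging Face), one thing to try is exporting a Hugging Face access token before starting the installer; huggingface_hub reads it from the HUGGING_FACE_HUB_TOKEN (or, in newer versions, HF_TOKEN) environment variable. A sketch via a shellHook — the token value is a placeholder you fill in yourself:

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  shellHook = ''
    # Create a token at https://huggingface.co/settings/tokens
    export HUGGING_FACE_HUB_TOKEN="hf_..."  # placeholder
  '';
}
```

You can also log in once with huggingface-cli, which stores the token on disk instead of in the environment.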

I'd like to try playing with the webUI or ComfyUI instead, but then I'll be back to square one with that libGL.so.1 issue.

Is there anything else I could try?
Either webUI, ComfyUI, or InvokeAI would be fine with me.

Perhaps nixified-ai didn’t work because it’s outdated. Did you also try using the official InvokeAI flake?

You can try similar projects:

You might also find this video NixOS: Installing AUTOMATIC1111, ComfyUI, and Kohya web UI quite useful.


Ok, sorry, but at least for me, all of those options have failed.

It's either that something only supports Nvidia while I have an AMD graphics card, or, even more often, it fails at managing models.

It either fails at the stage where it downloads them, or it doesn't see them afterwards.

But the flake.nix on the InvokeAI website contains some interesting stuff, which may help me put together my own flake.nix or shell.nix file as I make further attempts.

Sorry for no logs this time, but there were too many different attempts.
Next I will try InvokeAI's Docker container, because I assume it would come with everything correctly set up.

If that fails, I will again try making my own shell or flake, as I mentioned earlier.

And of course, I will report back here if I succeed in any way.
I don't want anybody else to go through the frustration I feel.

Anyway, thank you @eljamm for the help.
I may need to study Nix more, both as a language and as a tool.


InvokeAI has a good, working Docker container solution. It works for me.
Generally, I've used this:

And here is a nice README.md that explains how to actually run it correctly:

This worked for me.
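As a declarative alternative to running docker commands by hand, NixOS can also manage the container through its oci-containers module. A sketch only — the image name/tag and the ROCm device flags (/dev/kfd and /dev/dri are what AMD compute in a container usually needs) are assumptions to double-check against InvokeAI's own Docker docs:

```nix
# configuration.nix — sketch, not a tested setup
virtualisation.oci-containers = {
  backend = "docker";
  containers.invokeai = {
    image = "ghcr.io/invoke-ai/invokeai";  # assumption: check the real tag
    ports = [ "9090:9090" ];
    volumes = [ "/home/morswin/invokeai:/invokeai" ];
    extraOptions = [
      "--device=/dev/kfd"   # ROCm compute device (AMD)
      "--device=/dev/dri"   # GPU render nodes
    ];
  };
};
```

The module generates a systemd service per container, so the web UI comes up on boot instead of needing a manual docker run.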
