I wanted to try running a GitHub Copilot-like setup locally on my NixOS machines, but ran into some packaging issues.
Right now the way to do it seems to be the https://continue.dev/ extensions for VS Code and JetBrains IDEs.
You install the extension, which runs the continue.dev server; that server then connects to a language model to generate the completions.
It connects to a locally running language model through https://ollama.ai/ (Ollama | Continue).
So far I have the following pieces in place:
- ollama is packaged in nixpkgs
- a module for running ollama as a service is available (e.g. havaker/ollama-nix on GitHub, a CUDA-enabled ollama Nix flake)
- you can install the continue.dev extension into vscode
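Putting those pieces together, a NixOS configuration could look roughly like the sketch below. Note the assumptions: the option name `services.ollama` comes from the ollama-nix module (check that module for the actual options it exposes), and the extension is installed declaratively via `vscode-with-extensions` with a marketplace fetch, since continue.dev is not in the nixpkgs extension set.

```nix
# Sketch of a NixOS configuration wiring the pieces together.
# Assumptions: the ollama-nix flake's module is imported and exposes
# `services.ollama.enable`; VS Code is managed declaratively.
{ pkgs, ... }:
{
  # Run ollama as a system service (option name assumed from the ollama-nix module).
  services.ollama.enable = true;

  environment.systemPackages = [
    (pkgs.vscode-with-extensions.override {
      vscodeExtensions = [
        # Fetch the continue.dev extension from the VS Code marketplace;
        # the hash is a placeholder to be replaced after the first build failure.
        (pkgs.vscode-utils.extensionFromVscodeMarketplace {
          name = "continue";
          publisher = "Continue";
          version = "0.7.29";
          sha256 = pkgs.lib.fakeSha256;
        })
      ];
    })
  ];
}
```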
However, running the continue.dev server that ships with the extension fails on NixOS:
```
[error] Error: spawn /home/xxx/.vscode-oss/extensions/continue.continue-0.7.29-linux-x64/exe/continue_server ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
```
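For context: a `spawn … ENOENT` on NixOS, even when the file exists, usually means the prebuilt binary's ELF interpreter (`/lib64/ld-linux-x86-64.so.2`) is missing, since NixOS doesn't follow the FHS. One possible workaround (untested here, and it may not be the only issue with this binary) is enabling nix-ld, which provides a shim at the standard dynamic-linker path:

```nix
# Workaround sketch: nix-ld lets prebuilt dynamically linked binaries like
# continue_server find a dynamic linker at the conventional path.
{ pkgs, ... }:
{
  programs.nix-ld.enable = true;
  # The binary may also need extra shared libraries; which ones is an
  # assumption here and would need to be discovered from its error output.
  programs.nix-ld.libraries = with pkgs; [
    stdenv.cc.cc.lib # libstdc++, commonly needed by prebuilt binaries
    zlib
  ];
}
```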
To work around issues like this, you can run the continue.dev server yourself and point the extension at it; docs here: https://continue.dev/docs/walkthroughs/manually-run-continue
So I started a quest to first package the server and then run it as a service.
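For anyone attempting the same, the usual starting point with poetry2nix (before any of the overrides that the heavy ML dependencies tend to require) is a flake along these lines. This is a sketch, not my working result: `projectDir` is an assumed path to wherever the server's `pyproject.toml` and `poetry.lock` live, and the poetry2nix entry-point API has shifted between versions, so check the poetry2nix README for the variant matching your pin.

```nix
# Minimal poetry2nix flake skeleton for packaging a Poetry-based server.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    poetry2nix.url = "github:nix-community/poetry2nix";
  };

  outputs = { self, nixpkgs, poetry2nix }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      p2n = poetry2nix.lib.mkPoetry2Nix { inherit pkgs; };
    in {
      packages.${system}.continue-server = p2n.mkPoetryApplication {
        # Assumption: path to the server sources with pyproject.toml/poetry.lock.
        projectDir = ./server;
        # ML dependencies usually need overrides layered on top, e.g.:
        # overrides = p2n.defaultPoetryOverrides.extend (self: super: { ... });
      };
    };
}
```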
However, I’m not very experienced in packaging (complex AI) Python packages with poetry2nix, so I got stuck after a few hours of trying.
Here is what I got so far, probably including plenty of mistakes. I feel like I’m missing something fundamental, given how many package overrides it needs.
I’d appreciate it if someone could give me some pointers, as I think it would be a great capability for everyone to have this running on NixOS.