Looking for someone to help set up and configure Open Interpreter with the 01

I am new to NixOS but not new to Linux. I currently run Bluefin (on my workstation) and VanillaOS (on my streaming setup), and I am looking to jump into the deep end of the pool with NixOS. I have an Asus laptop with NixOS installed and would like to get Open Interpreter set up and configured with voice control; links are here https://www.openinterpreter.com/ and here GitHub - OpenInterpreter/open-interpreter: A natural language interface for computers. This is either work for hire or another arrangement, which can be discussed. Please reply with your hourly rate, an estimate of the number of hours, etc. after looking over the info on Open Interpreter. I have the 01 device ordered but not yet in my hands, but I will put one together to get this project complete. Eric

Hi Eric,
If you are still on the lookout to hire someone, I would be glad to help you out, as what you want done is within my skill set.
You can reach out to me by email here.


I have to say that setup looks like an interesting project… @onehumankindness

What is the voice control part going to be?

There’s a device called the 01 that basically lets you “walkie-talkie” to the computer over the internet to control the machine: GitHub - OpenInterpreter/01: The open-source language model computer. “What does it do?

The 01 exposes a speech-to-speech websocket at localhost:10001.

If you stream raw audio bytes to / in Streaming LMC format, you will receive its response in the same format.

Inspired in part by Andrej Karpathy’s LLM OS, we run a code-interpreting language model, and call it when certain events occur at your computer’s kernel.

The 01 wraps this in a voice interface:"
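As I read the quoted docs, that handshake is just JSON control messages wrapped around raw audio bytes. Here is a minimal Python sketch of a client; the exact message fields are my assumption of the Streaming LMC layout (check the 01 repo for the real schema), and `websockets` is a third-party library you would need to install:

```python
import asyncio
import json

# Assumed Streaming LMC framing: a JSON "start" marker, then raw audio
# bytes, then a JSON "end" marker. Field names are my guess at the format.
def lmc_start(fmt: str = "bytes.wav") -> str:
    return json.dumps({"role": "user", "type": "audio", "format": fmt, "start": True})

def lmc_end(fmt: str = "bytes.wav") -> str:
    return json.dumps({"role": "user", "type": "audio", "format": fmt, "end": True})

async def stream_audio(audio: bytes, url: str = "ws://localhost:10001"):
    # Third-party dependency, imported lazily: pip install websockets
    import websockets
    async with websockets.connect(url) as ws:
        await ws.send(lmc_start())
        await ws.send(audio)       # the raw audio bytes themselves
        await ws.send(lmc_end())
        return await ws.recv()     # response comes back in the same format

# Usage (with a real 01 server listening on localhost:10001):
#   with open("question.wav", "rb") as f:
#       reply = asyncio.run(stream_audio(f.read()))
```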

I primarily want this for business stuff as I’m interested in local only AI productivity automation for everything that doesn’t require me, basically training, coaching, speaking, etc.

Okay, that stuff is getting quite interesting, but also more ambitious… so as I see it there are three components here, am I right?

  • Some kind of LLM backend; if run locally, this could be ollama or some proxy for multiple backends such as LiteLLM. I am fairly familiar with this part of the thing.
  • Then there is open interpreter, which allows you to chat in your terminal and basically interact with this thing from there. Hence it can also run stuff from your terminal, if you trust it to run stuff properly and not do sudo rm -rf / when you tell it to recursively delete a folder.
  • And finally there is the audio component, connected via websocket, presumably some kind of speech-to-text and text-to-speech translation that would allow you to interact with this thing by just talking to it.
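To make that three-part split concrete, here is a toy Python sketch of how the pieces chain together. All the function bodies below are placeholders I made up purely for illustration; the real projects implement each step for you:

```python
from typing import Callable

def voice_pipeline(audio: bytes,
                   speech_to_text: Callable[[bytes], str],
                   llm: Callable[[str], str],
                   text_to_speech: Callable[[str], bytes]) -> bytes:
    """Chain the three components: transcribe, ask the model, speak the reply."""
    prompt = speech_to_text(audio)   # audio component (STT side)
    reply = llm(prompt)              # LLM backend + open interpreter in the middle
    return text_to_speech(reply)     # audio component (TTS side)

# Stub implementations, just to show the flow end to end:
spoken = voice_pipeline(
    b"fake-audio",
    speech_to_text=lambda a: "list my home directory",
    llm=lambda p: f"running `ls ~` for: {p}",  # open interpreter would run code here
    text_to_speech=lambda t: t.encode(),
)
print(spoken.decode())
```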

I primarily want this for business stuff as I’m interested in local only AI productivity automation for everything that doesn’t require me, basically training, coaching, speaking, etc.

Keeping all this in mind, that is yet another step. An interesting step for sure, but a more ambitious one. Because once you have set up all the stuff I just mentioned (I personally only run a local ollama service so far, which I use for coding), it also has to do something meaningful, and do it well enough to be interesting for business.

Most interesting idea I could come up with on the spot is integration with just about any API you can think of. Let’s say you want to talk to the Discord API, you could control a Discord bot writing stuff. Or you want to talk to something like ActivePieces for automation for certain communication flows, you could talk to the ActivePieces API. Or whatever…

And to relate all that to NixOS. Ollama + open-interpreter should be fairly easy to get running. There is an ollama service for NixOS which you can just enable. And open-interpreter is already packaged in nixpkgs. So you can just add it to your system environment. Provided you have CUDA or ROCm set up, ollama should just work (I hope).
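For instance, something along these lines in your configuration.nix should get both pieces in place. Option names are as I remember them from nixpkgs, so double-check them in the NixOS options search; the GPU acceleration option in particular may differ between releases:

```nix
{ pkgs, ... }:
{
  # Ollama as a system service, listening on its default port
  services.ollama.enable = true;
  # services.ollama.acceleration = "cuda";  # or "rocm", depending on your GPU

  # Open Interpreter for the chat-in-terminal part
  environment.systemPackages = [ pkgs.open-interpreter ];
}
```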

And once ollama is up and running, you could just do something like ollama pull dolphin-mixtral:8x7b-v2.6 and then interpreter --model ollama/dolphin-mixtral:8x7b-v2.6, as they say in the documentation. Provided you have enough VRAM for that one: dolphin-mixtral is sadly not going to fit on your RTX 4090, if you happen to have one of those, unless you choose a more compact quantized version below 24 GB in size.
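Put together, the terminal session would look roughly like this (model tag copied from their docs; VRAM permitting):

```shell
# Fetch the model into ollama's local store (this is a large download)
ollama pull dolphin-mixtral:8x7b-v2.6

# Point open interpreter at the local ollama backend
interpreter --model ollama/dolphin-mixtral:8x7b-v2.6
```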