Karen is available through pip, but some of the built-in devices depend on a few extra system libraries.

# Install the required system packages
sudo apt-get -y install \
  libfann2 \
  python3-fann2 \
  python3-pyaudio \
  python3-pyqt5 \
  python3-dev \
  festival \
  festvox-us-slt-hts  \
  libportaudio2 \
  libasound2-dev \
  libatlas-base-dev

# Optionally create your local environment and then activate it
python3 -m venv /path/to/virtual/env --system-site-packages
source /path/to/virtual/env/bin/activate

# Install the required build libraries
python3 -m pip install scikit-build 

# Install required runtime libraries
python3 -m pip install urllib3 \
  requests \
  netifaces \
  numpy \
  deepspeech \
  pyaudio \
  webrtcvad \
  opencv-contrib-python \
  Pillow \
  padatious

# Install the karen module
python3 -m pip install karen

NOTE: OpenCV is required when using the watcher device. Installing it on Raspberry Pi OS may take a while as some libraries have to be recompiled. Patience is required here; in our tests the spinner icon appeared to get stuck several times, so just let it run until it completes. If it encounters a problem, it will print out the error for additional troubleshooting.

If you prefer not to wait, you can install the opencv package that ships with most distributions; however, this version does not support facial recognition. To use it instead, run apt-get install python3-opencv and remove opencv-contrib-python from the pip package list above. (This will speed up the installation significantly on the Raspberry Pi at the cost of functionality.)

Troubleshooting: "Cannot find FANN libs"

If you encounter an error while installing the karen module on the Raspberry Pi, you may need to add a symlink to the FANN library. This is due to a bug/miss in the "find_fann" function within the Python FANN2 library, as it doesn't look for the ARM architecture out-of-the-box. To fix it, run the following:

sudo ln -s /usr/lib/arm-linux-gnueabihf/ /usr/local/bin/
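Before (or after) applying the symlink workaround, you can check whether the FANN shared library is already visible on the loader's search path. The snippet below is a diagnostic sketch, not part of Karen itself:

```python
# Diagnostic sketch: check whether libfann can be located on this system.
# find_library returns the resolved library name (e.g. "libfann.so.2")
# if it can be found, or None if the symlink workaround above may be needed.
from ctypes.util import find_library

def fann_visible():
    """Return the resolved libfann name, or None if it cannot be found."""
    return find_library("fann")

if __name__ == "__main__":
    result = fann_visible()
    if result:
        print("FANN found:", result)
    else:
        print("FANN not found -- try the symlink workaround above.")
```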

Download the Speech Recognition Models

python3 -m karen --download-models --model-type pbmm  

NOTE: Use --model-type tflite if running on the Raspberry Pi.

Via Python

from karen.extras import download_models
model_type = "pbmm"           # use "tflite" for Raspberry Pi
download_models(model_type)   # Downloads models for deepspeech

NOTE: Use model_type = "tflite" if running on the Raspberry Pi.
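If you are scripting the download across mixed hardware, the model type can be chosen from the machine architecture instead of hard-coding it. `pick_model_type` below is a hypothetical helper (not part of karen) that applies the rule from the notes above:

```python
import platform

def pick_model_type(machine=None):
    """Return "tflite" for ARM boards (e.g. Raspberry Pi), else "pbmm".

    `machine` defaults to the current platform's value; pass one
    explicitly for testing. Illustrative helper, not part of karen.
    """
    machine = machine or platform.machine()
    return "tflite" if machine.lower().startswith(("arm", "aarch64")) else "pbmm"

# Usage (only where karen is installed):
# from karen.extras import download_models
# download_models(pick_model_type())
```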

Starting Up

You can execute Karen directly as a module. To do so try the following:

python3 -m karen

You can disable any of the built-in devices with --disable-builtin-[speaker, watcher, listener, panels]. Use the --help option for a full listing of command line options, including specifying a custom configuration file.

NOTE: The program will create/save a version of the configuration to ~/.karen/config.json along with any other data elements it requires for operation. The configuration file is fairly powerful and will allow you to add/remove devices and containers for custom configurations including 3rd party devices or custom skills.
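To inspect the saved configuration, the file can be read back with the standard json module. This loader is a sketch based on the path in the note above; the file's exact schema is not documented here:

```python
import json
from pathlib import Path

def load_karen_config(path="~/.karen/config.json"):
    """Load Karen's saved configuration, or None if it doesn't exist yet."""
    config_file = Path(path).expanduser()
    if not config_file.exists():
        return None
    with config_file.open() as f:
        return json.load(f)

if __name__ == "__main__":
    cfg = load_karen_config()
    if cfg is None:
        print("No config yet -- run `python3 -m karen` once to create it.")
    else:
        print(cfg)
```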

Web Control Panel

If everything is working properly, you should be able to point your browser at the web control panel running on the Brain engine to test it out. The default URL is:


Help & Support

Help and additional details are available at