This project is dedicated to building a "Synthetic Human", currently called Karen, to whom we have assigned the pronoun "she". She has visual face recognition (opencv/opencv), speech transcription (mozilla/deepspeech), and speech synthesis (festival). Karen is written in Python and targets primarily single-board computer (SBC) platforms such as the Raspberry Pi.
Visit our main site: https://projectkaren.ai/
Karen's architecture is divided into two main components: containers and devices. Containers handle communication with other containers, while devices control input and output operations. The most important container is the Brain, a special type of container that collects data and provides the skill engine for reacting to inputs. While a Brain supports all the methods of a normal container, it is recommended to create a separate container to hold your devices.
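The container/device split above can be sketched in plain Python. The classes below are hypothetical stand-ins to illustrate the pattern, not the actual karen API:

```python
# Illustrative sketch of the container/device architecture described above.
# These class names and methods are hypothetical, NOT the karen package's API.

class Device:
    """A device handles a single input or output operation."""

    def __init__(self, name):
        self.name = name

    def handle(self, data):
        # A real device would speak, listen, or watch here.
        return f"{self.name} processed {data!r}"


class Container:
    """A container manages communication for a set of devices."""

    def __init__(self):
        self.devices = []

    def add_device(self, device):
        self.devices.append(device)

    def dispatch(self, data):
        # Forward incoming data to every registered device.
        return [d.handle(data) for d in self.devices]


class Brain(Container):
    """The Brain is a container that also reacts to collected input."""

    def react(self, data):
        # A real skill engine would match the input against skills here.
        return self.dispatch(data)


brain = Brain()
brain.add_device(Device("Speaker"))
results = brain.react("hello")
```

Because the Brain is itself a container, it can host devices directly, but as noted above a separate device container keeps the roles distinct.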
Python Module Overview
| Module | Description | Default Port |
|--------|-------------|--------------|
| karen.containers.Brain | Main service for processing I/O. | 8080 |
| karen.containers.DeviceContainer | Secondary service for devices. | 8081 |
Python Device Module Overview
| Module | Description |
|--------|-------------|
| karen.devices.Speaker | Audio output device for text-to-speech conversion |
| karen.devices.Listener | Microphone device for speech-to-text conversion |
| karen.devices.Watcher | Video/camera device for object recognition |
| karen.panels.RaspiPanel | Panel device designed for the Raspberry Pi 7" screen @ 1024x600 |
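As the introduction notes, speech synthesis is handled by festival. A minimal standalone sketch of that idea follows; it is not the actual karen.devices.Speaker implementation, and it assumes the festival binary is installed and on the PATH:

```python
import subprocess


def festival_command():
    """Command line used to invoke festival's text-to-speech mode."""
    # With --tts, festival reads the text to synthesize from stdin.
    return ["festival", "--tts"]


def speak(text):
    """Pipe text to the festival TTS engine (requires festival to be installed)."""
    subprocess.run(festival_command(), input=text.encode("utf-8"), check=True)
```

Calling `speak("Hello, I am Karen")` would synthesize the phrase through your default audio output, assuming festival is available.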
In version 0.8.0 and later, you are no longer required to install the brain, devices, and built-in plugins separately.
Karen is available through pip, but to use the built-in devices you may need a few extra libraries. Please visit the Basic Install page for more details.
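A typical install might look like the following, assuming the package is published on PyPI under the name `karen` (check the Basic Install page for the authoritative command and the extra system libraries the built-in devices need):

```shell
# Install the core package from PyPI (package name assumed; see Basic Install)
pip install karen
```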
Web Control Panel
If everything is working properly, you should be able to point your browser at the web control panel served by the Brain engine (port 8080 by default) to test it out.
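A quick way to confirm the panel is up is to probe it over HTTP. The sketch below assumes the Brain is running on the same machine and listening on its default port 8080 (from the module table above):

```python
import urllib.request


def panel_ok(url="http://localhost:8080/"):
    """Return True if the web control panel answers with HTTP 200 at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failures, and timeouts (URLError
        # subclasses OSError), i.e. the Brain is not reachable.
        return False
```

Adjust the host and port if your Brain container runs elsewhere or on a non-default port.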
Demo running on Raspberry Pi
Help & Support
Help and additional details are available at https://projectkaren.ai