Training your Watcher
Karen has built-in support for face detection, implemented with Haar cascade classifiers in conjunction with OpenCV.
All three Haar cascade classifiers are included in the installation package and available under "karen_watcher/data/models/watcher". If no classifier is explicitly specified, haarcascade_frontalface_default.xml is used.
To train your model, the train() method needs to be called. The simplest way is from the command line:
python3 -m karen --training-source-folder /path/to/faces-directory
You can force the model to be retrained by adding the corresponding retrain flag to the command.
Your faces directory should be structured as follows:
/faces-directory
    /Jane
        image1.jpg
        image2.jpg
        image3.jpg
    /John
        image1.jpg
        image2.jpg
        image3.jpg
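Since the trainer pairs each subdirectory name with the face images inside it, the expected layout can be validated before training. This is a sketch based only on the layout above; collect_faces is a hypothetical helper, not part of Karen:

```python
import os

def collect_faces(faces_dir):
    """Map each person (subdirectory name) to their image paths.

    Mirrors the layout the trainer expects: one subdirectory per
    person, containing that person's face images.
    """
    people = {}
    for name in sorted(os.listdir(faces_dir)):
        person_dir = os.path.join(faces_dir, name)
        if not os.path.isdir(person_dir):
            continue  # skip stray files at the top level
        images = [
            os.path.join(person_dir, f)
            for f in sorted(os.listdir(person_dir))
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        ]
        if images:
            people[name] = images
    return people
```

Run against the example layout above, this returns a dictionary with the keys "Jane" and "John", each mapped to a list of three image paths.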
Training will create a recognizer.yml file and a names.json file. Both files are used to determine who Karen sees when capturing video. If you already have a recognizer file and a names file built, you can specify them with the corresponding parameters (such as namesFile) when creating a new Watcher device. View the file ~/.karen/config.json to configure specific runtime options.
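The exact schema of names.json is not documented here; a plausible assumption is that it maps the numeric labels the recognizer predicts back to person names. Under that assumption, a minimal sketch of reading the file (load_names is a hypothetical helper, not part of Karen):

```python
import json

def load_names(path):
    """Load the label -> name mapping produced by training.

    Assumes names.json maps string label ids to names, e.g.
    {"0": "Jane", "1": "John"}; the real schema may differ.
    """
    with open(path) as f:
        raw = json.load(f)
    # Normalize keys to ints so they match numeric recognizer labels.
    return {int(label): name for label, name in raw.items()}
```

With the assumed schema, load_names("names.json")[0] would return the name associated with label 0.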