This project uses the eigenfaces algorithm in OpenCV to perform face recognition. To use the algorithm, you first create a set of training data: pictures of faces that are and are not allowed to open the box.

Included in the project is a large set of face images that are tuned for training with the face recognition algorithm. This is the database of faces published by AT&T Laboratories Cambridge research in the mid-1990s. These faces make up the set of negative images, which represent faces that are not allowed to open the box. You can see these images in the training/negative subdirectory of the project.

To generate images of the person who will be allowed to open the box, or positive training images, you can use the included script. This script will take pictures with the box hardware and write them to the training/positive sub-directory (which will be created by the script if it does not exist).

With the box hardware assembled and powered up (servo power is not necessary for this step), connect to the Raspberry Pi in a terminal session and navigate to the directory with the project software. Execute the following command to run the capture_positive script:
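The command itself is missing from this capture of the page. Assuming the script is named after its description (a hypothetical filename — check your project directory for the actual name), the invocation would look like:

```shell
# Run the positive-image capture script. The filename is assumed here;
# adjust it to match the actual script in your project directory.
# sudo is typically required for GPIO access to the box's button.
sudo python capture_positive.py
```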


After a moment for the script to load, you should see the following instructions:

Capturing positive training images.
Press button or type c (and press enter) to capture an image.
Press CTRL-C to quit.

The script is now ready to capture images for training. If you see an error message, make sure OpenCV and the software dependencies from the previous step are installed and try again.
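If you want to confirm OpenCV is importable before digging further, a quick generic check (not specific to this project) is:

```shell
# Print the installed OpenCV version; an ImportError here means
# OpenCV is missing or not on the Python path.
python -c "import cv2; print(cv2.__version__)"
```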

Point the box's camera at your face and press the button on the box (or send the letter 'c' in the terminal session) to take a picture. If the script detects a single face, it will crop and save the image in the positive training images sub-directory. If the script can't detect a face or detects multiple faces, an error message will be displayed.
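The single-face rule can be sketched in a few lines. This is an illustrative helper, not the project's actual code; it assumes detections arrive as (x, y, w, h) rectangles, the format returned by OpenCV's cv2.CascadeClassifier.detectMultiScale:

```python
def crop_single_face(faces, image_width, image_height):
    """Return the crop box for the one detected face, or None when zero
    or multiple faces were found (the cases the script reports as errors)."""
    if len(faces) != 1:
        return None
    x, y, w, h = faces[0]
    # Clamp the crop box so it stays inside the image bounds.
    return (max(x, 0), max(y, 0),
            min(w, image_width - x), min(h, image_height - y))

print(crop_single_face([(10, 20, 50, 50)], 320, 240))  # one face: crop box
print(crop_single_face([], 320, 240))                  # no face: None
```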

If you see error messages about not detecting a single face, you can examine the full picture that was captured by the camera to see whether it's picking up your face (or whether other faces might be in view--be careful, even photos or posters with faces behind you can be detected!). Sometimes it's just that the camera is upside-down. Open the file capture.pgm (located in the directory with the scripts) in an image editor to see the last image that was captured. It's easiest to use a tool like Cyberduck to view the files on your Raspberry Pi over SFTP (SSH file transfer protocol).

NOTE: As each image is captured, you might get an error message like the following:

imwrite_('capture.pgm'): can't write data: OpenCV(4.6.0) […]

error: (-5:Bad argument) Portable bitmap(.pgm) expects gray image in function 'write'

This is non-fatal and can be ignored. The image is in fact grayscale; something in an underlying library is just reporting it incorrectly. Sorry!

Using the script, capture some pictures of the face that is allowed to open the box. Try to take pictures of the face with different expressions, under different lighting conditions, and at different angles. You can see the images I captured for my positive training data below:

Training images I captured for my face. Try to get pictures with different expressions, in different light, etc. to build a better face recognition model.

Exit the script by pressing CTRL-C in the terminal session.

After capturing a number of positive training images, you can perform the training. Run the following Python script to train the face recognition model:
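The training command is missing from this capture of the page. Assuming the training script follows a conventional name (a hypothetical filename — check your project directory for the actual one), it would look like:

```shell
# Run the training script to build the face recognition model.
# The filename is assumed; adjust to match your project directory.
python train.py
```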


You should see a message that the training images were loaded, and the text 'Training model...' to indicate the training calculations are being performed. Training the face recognition model on the Pi can take several minutes.

Once the training is complete you should see the message 'Training data saved to training.xml'. The training data is now stored in the file training.xml, which will be loaded by the box software to configure the face recognition model.

If you're curious, you can examine the images output by the training step to visualize the eigenfaces of the model. Open the mean.png, positive_eigenface.png, and negative_eigenface.png files in an image viewer. You don't need to do anything with these images, but you can learn more about their meaning from the OpenCV website, Wikipedia, or the original research paper on eigenfaces.

Eigenfaces generated from my training data.

The positive eigenface is a summary of the features that differentiate my face from the average or mean face. This face will be allowed to unlock the box.

The negative eigenface represents all the other faces in the training set. This face will not be allowed to unlock the box.
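Under the hood, eigenfaces are the principal components of the flattened training images. A minimal NumPy sketch of the idea, using random data in place of real images (illustrative only, not the project's training code):

```python
import numpy as np

# Hypothetical training set: 10 flattened 4x4 grayscale "images".
rng = np.random.default_rng(0)
faces = rng.random((10, 16))

mean_face = faces.mean(axis=0)   # this is what mean.png visualizes
centered = faces - mean_face     # subtract the mean from every image

# The right singular vectors of the centered data are the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt                  # one eigenface per row

# Project an image into eigenface space and reconstruct it; with all
# components kept, the reconstruction is exact.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces
print(np.allclose(reconstruction, faces[0]))  # True
```

Recognition then works by comparing a new face's projection weights against those of the training images.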

With the training.xml file generated, you're ready to move on to configure the servo latch and software for the project.

This guide was first published on Jan 24, 2014. It was last updated on Jan 24, 2014.

This page (Training) was last updated on Jan 21, 2014.
