sudo apt-get install -y libatlas-base-dev libhdf5-dev libc-ares-dev libeigen3-dev build-essential libsdl-ttf2.0-0 python3-pygame festival python3-h5py
Virtual Environment
There are a few Python packages to install before setting up TensorFlow inside the virtual environment:
pip3 install virtualenv Pillow numpy pygame
Install rpi-vision
Next, install an Adafruit fork of a program originally written by Leigh Johnson that uses the MobileNet V2 model to detect objects. This step will take a few minutes to complete.
cd ~
git clone --depth 1 https://github.com/adafruit/rpi-vision.git
cd rpi-vision
python3 -m virtualenv -p $(which python3) .venv
source .venv/bin/activate
pip3 install -e .
Install TensorFlow 2.x
You should now be inside a virtual environment; you can tell by the (.venv) prefix on the left side of the command prompt. While in the virtual environment, download and install TensorFlow 2.4.0:
pip3 install https://github.com/bitsy-ai/tensorflow-arm-bin/releases/download/v2.4.0/tensorflow-2.4.0-cp37-none-linux_armv7l.whl
If for some reason this wheel fails to install on your Pi, you may want to try an older version from https://github.com/bitsy-ai/tensorflow-arm-bin/releases.
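One common reason the wheel fails to install is a tag mismatch: the filename's "cp37-none-linux_armv7l" means the package was built for CPython 3.7 on 32-bit ARM, and pip rejects wheels whose tags don't match the running interpreter. A quick stdlib-only check (a sketch for troubleshooting, not part of the guide's tooling):

```python
import sys
import platform

# Build this interpreter's CPython tag, e.g. "cp37" for Python 3.7.
py_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)

print("Interpreter tag:", py_tag)       # must be cp37 for this wheel
print("Machine:", platform.machine())   # must be armv7l for this wheel
```

If either value differs, look for a release on the bitsy-ai page whose filename matches your interpreter tag and architecture instead.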
After this, go ahead and reboot the Pi.
sudo reboot
Running the Graphic Labeling Demo
Finally, you are ready to run the detection software. First, run as root so that Python can access the framebuffer of the display:
sudo bash
Then activate the virtual environment again:
cd rpi-vision && . .venv/bin/activate
To run a program that displays the object it sees on the screen, type the following:
python3 tests/pitft_labeled_output.py --tflite
You should see a bunch of text scrolling in your SSH window.
Now hold up various items in front of the camera, and it will display what it thinks it sees, which may not actually be what the item is. Some items it is pretty good at identifying are coffee mugs and animals.
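For context, MobileNet V2 outputs a probability for each of the 1,000 ImageNet classes it was trained on, and the on-screen label is simply the top-scoring class. A minimal sketch of that last step, using made-up class names and scores for illustration:

```python
# Hypothetical classifier output for one camera frame: class name -> probability.
scores = {"coffee mug": 0.71, "cup": 0.18, "pitcher": 0.04, "vase": 0.02}

# Pick the highest-probability class, as the demo does for its on-screen label.
label, confidence = max(scores.items(), key=lambda kv: kv[1])
print(f"{label} ({confidence:.0%})")  # → coffee mug (71%)
```

This is also why the label can be wrong: the model always reports its best guess among the 1,000 classes it knows, even when nothing in view matches any of them well.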