For TensorFlow, there are a few dependencies that need to be installed in the Python environment:
pip3 install virtualenv Pillow numpy pygame
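Once pip finishes, a quick sanity check can confirm the packages import cleanly. This snippet only reports status for each module, so it is safe to run even if something failed to install (note that Pillow imports as the `PIL` module):

```shell
# Report which of the dependencies above imported successfully.
python3 - <<'EOF'
import importlib
for mod in ("PIL", "numpy", "pygame"):  # Pillow installs as the PIL module
    try:
        importlib.import_module(mod)
        print(mod, "ok")
    except ImportError:
        print(mod, "missing")
EOF
```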
Next, install the Adafruit fork of a program originally written by Leigh Johnson that uses the MobileNet V2 model to detect objects. This part will take a few minutes to complete.
git clone --depth 1 https://github.com/adafruit/rpi-vision.git
cd rpi-vision
python3 -m virtualenv -p $(which python3) .venv
source .venv/bin/activate
You should now be inside a virtual environment. You can tell by the (.venv) on the left side of the command prompt. While in the virtual environment, you may download and install TensorFlow 2.3.1.
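If you're not sure whether the activation took, Python itself can tell you: inside a virtualenv, `sys.prefix` differs from `sys.base_prefix`. A quick check:

```shell
# Prints whether the current python3 is running inside a virtualenv.
python3 -c 'import sys; print("virtualenv active" if sys.prefix != getattr(sys, "base_prefix", sys.prefix) else "no virtualenv active")'
```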
chmod a+x ./tensorflow-2.3.1-cp37-none-linux_armv7l_download.sh
./tensorflow-2.3.1-cp37-none-linux_armv7l_download.sh
pip3 install --upgrade setuptools
pip3 install tensorflow-*-linux_armv7l.whl
pip3 install -e .
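Before rebooting, it's worth confirming the wheel actually installed. This check just prints the version on success and falls back to a message on failure, rather than assuming the import works:

```shell
# Import TensorFlow and print its version; print a fallback message on failure.
python3 -c 'import tensorflow as tf; print("TensorFlow", tf.__version__)' \
  2>/dev/null || echo "TensorFlow import failed"
```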
After this, go ahead and reboot the Pi:
sudo reboot
Finally, you are ready to run the detection software. First, you want to run as root so that Python can access the framebuffer of the display:
sudo bash
Then activate the virtual environment again:
cd rpi-vision && . .venv/bin/activate
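Since the script writes straight to the framebuffer, it's easy to forget the root requirement. A quick check before launching:

```shell
# Verify we're root before running the framebuffer demo.
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "not root - rerun with sudo"
fi
```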
To run a program that will display the objects it sees on screen, type in the following:
python3 tests/pitft_labeled_output.py --tflite
You should see a bunch of text scrolling in your SSH window.
On your display, if you notice everything is sideways, you can add a rotation flag to the command. For instance, if you want to rotate everything by 90 degrees, you can type:
python3 tests/pitft_labeled_output.py --tflite --rotation 90
Now start holding up various items in front of the camera, and it should display what it thinks it sees, though its guesses won't always be correct. Some items that it's pretty good at identifying are coffee mugs and animals.