First, install the system packages that TensorFlow requires:
sudo apt-get install -y libatlas-base-dev libhdf5-dev libc-ares-dev libeigen3-dev build-essential libsdl-ttf2.0-0 python-pygame festival python3-h5py
Next, install the Python dependencies needed to set up TensorFlow inside a virtual environment:
pip3 install virtualenv Pillow numpy pygame
Now install an Adafruit fork of a program originally written by Leigh Johnson that uses the MobileNet V2 model to detect objects. This part will take a few minutes to complete.
git clone --depth 1 https://github.com/adafruit/rpi-vision.git
cd rpi-vision
python3 -m virtualenv -p $(which python3) .venv
source .venv/bin/activate
You should now be inside the virtual environment; you can tell by the (.venv) prefix on the left side of the command prompt. While in the virtual environment, you can download and install TensorFlow 2.3.1.
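Besides the (.venv) prompt prefix, you can also check from Python itself: inside a virtual environment, sys.prefix points at the environment directory rather than the base installation. A minimal sketch using only the standard library (the helper name in_virtualenv is just for illustration):

```python
import sys

def in_virtualenv() -> bool:
    """Return True when the running interpreter belongs to a venv/virtualenv.

    In a virtual environment, sys.prefix points at the environment
    directory while sys.base_prefix still points at the base install;
    older virtualenv versions instead set sys.real_prefix.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix) or hasattr(sys, "real_prefix")

print("inside virtualenv:", in_virtualenv())
```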
chmod a+x ./download_tensorflow-2.3.1-cp37-none-linux_armv7l.sh
./download_tensorflow-2.3.1-cp37-none-linux_armv7l.sh
pip3 install --upgrade setuptools
pip3 install ./tensorflow-2.3.1-cp37-none-linux_armv7l.whl
pip3 install -e .
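The wheel filename encodes its compatibility tags: cp37 means it was built for CPython 3.7 (the Python shipped with Raspberry Pi OS Buster), and linux_armv7l is the target architecture, so pip will refuse to install it on a mismatched interpreter. A small sketch of how those tags break apart, following the standard wheel naming scheme (standard library only):

```python
def parse_wheel_name(filename: str) -> dict:
    """Split a wheel filename into its components:
    {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
    """
    stem = filename[:-4] if filename.endswith(".whl") else filename
    distribution, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "distribution": distribution,
        "version": version,
        "python_tag": python_tag,    # cp37 -> CPython 3.7
        "abi_tag": abi_tag,
        "platform_tag": platform_tag, # linux_armv7l -> 32-bit ARM
    }

tags = parse_wheel_name("tensorflow-2.3.1-cp37-none-linux_armv7l.whl")
print(tags["python_tag"], tags["platform_tag"])  # cp37 linux_armv7l
```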
After this, go ahead and reboot the Pi.
Finally, you are ready to run the detection software. First, switch to the root user (for example with sudo bash) so that Python can access the display's framebuffer.
Then activate the virtual environment again:
cd rpi-vision && . .venv/bin/activate
To run a program that will display the object it sees on screen type in the following:
python3 tests/pitft_labeled_output.py --tflite
You should see a bunch of text scrolling in your SSH window.
Now hold up various items in front of the camera; the display will show what the model thinks it sees, which may not always match what the item actually is. Some items it's pretty good at identifying are coffee mugs and animals.
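Under the hood, a classifier like MobileNet V2 produces one score per class, and the label shown on screen is simply the class with the highest score after a softmax. A minimal sketch of that final step, using made-up scores and labels (the real model outputs 1000 ImageNet classes):

```python
import math

def softmax(scores):
    """Convert raw class scores (logits) into probabilities that sum to 1."""
    # Subtract the max score first for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three classes (the real model has 1000).
labels = ["coffee mug", "tabby cat", "banana"]
logits = [4.2, 1.1, 0.3]

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"{labels[best]} ({probs[best]:.0%})")  # the highest-probability label
```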