Want to up your robotics game and give your robot the ability to detect objects? Maybe implement a security camera that can see and identify certain items? Now that the Raspberry Pi is fast enough to do machine learning, adding these features is fairly straightforward.

This guide will show you the steps to get TensorFlow 2 installed on your Raspberry Pi 4 or 5 and perform some object detection using the TensorFlow Lite Python Interpreter, which is faster than the full TensorFlow interpreter.

There are two main setup paths to choose from. The first option uses a PiTFT if you want a larger display. The second option uses the BrainCraft HAT, which has a built-in display and audio along with several other components such as DotStar LEDs, a joystick, and ports.

Raspberry Pi 4 or 5 Computer & Camera

To start with, you will need a Raspberry Pi 4 or 5. Since TensorFlow object detection is processing intensive, you should use at least the 4GB model.

You really need a Pi 4 or better; TensorFlow vision recognition will not run on anything slower!

You will need a camera for the Raspberry Pi to see with.

Options include the Raspberry Pi Camera Module 3, a compact camera with autofocus and a 12-megapixel sensor supported by the Picamera2 Python library, or the Raspberry Pi Camera Board v2 with an 8-megapixel sensor.

For the Raspberry Pi 5, you will also need a camera cable, since the Pi 5's camera connector is a different size than the cable that comes with the camera.

A 200mm camera cable designed for the Raspberry Pi 5 will do the job.

All-in One BrainCraft HAT

If you want to get a HAT that has everything you need besides the camera including display, sound, and cooling, you'll want to pick up the BrainCraft HAT.


Display Output

You will also need a display so you can see what it's detecting. You can use any of our displays with the Raspberry Pi, but the 3.5" display is Adafruit's biggest.

The PiTFT - Assembled 480x320 3.5" TFT+Touchscreen for Raspberry Pi features a 3.5" display with 480x320 16-bit color pixels and a resistive touch overlay.

But our other PiTFTs will also work just fine:

PiTFT Plus 320x240 3.2" TFT + Resistive Touchscreen
PiTFT Plus 320x240 2.8" TFT + Resistive Touchscreen
Adafruit PiTFT 2.4" HAT Mini Kit (320x240 TFT + resistive touchscreen)
Adafruit PiTFT 2.2" HAT Mini Kit (320x240 TFT)

Cooling It Down

The Raspberry Pi 4 can run a little hot, especially when TensorFlow is doing a lot of data crunching. If you don't have the BrainCraft HAT with its built-in fan, we recommend the Pimoroni Fan SHIM.

Or this mini 5V fan

Miniature 5V Cooling Fan for Raspberry Pi and Other Computers

Or if you have the Pi 5, you can use an official cooler.

Raspberry Pi 5 Active Cooler

Or tall heatsink

Tall 15x15x15mm aluminum heat sink

In order to fit the fan/heatsink along with the display, you will need a GPIO stacking header.

GPIO Stacking Header for Pi A+/B+/Pi 2/Pi 3
This normal-height female header with extra-long pins lets you connect a PCB to the Raspberry Pi and still stack a board on top; the female portion is about 8.5mm tall.

Longer Camera Cable

The flex cable that comes with the camera is a bit on the short side, so you may want a longer cable as well.

500mm camera cable designed for the Raspberry Pi 5
Flex Cable for Raspberry Pi Camera or Display - 24" / 610mm
Flex Cable for Raspberry Pi Camera or Display - 18" / 457mm

This guide was originally written for Raspberry Pi OS Buster, with the original Picamera library. When Raspberry Pi released Bullseye, they did so with a promise that eventually there would be a new Picamera2 library that was built on top of the libcamera subsystem. While it was possible to install Picamera2, it was a very involved process that was not friendly to the average user and still highly experimental.

Nearly a year later, they started including Picamera2 with Raspberry Pi OS by default, so it was time for a guide update. The installation path that this updated guide uses is designed to make it as easy as possible for you to get a Raspberry Pi up and running with TensorFlow.

The overall installation path will be to use the 64-bit version of Raspberry Pi OS Desktop, because recent versions of TensorFlow are no longer compiled for the 32-bit OS, and the Qt OpenGL graphics drivers are installed by default on the Desktop version.

Image Installation

You're going to get started using the latest Raspberry Pi OS Desktop. Start by downloading and installing the latest Raspberry Pi Imager, which makes it super easy to customize all your settings before you even write the image to the SD card.

Use the imager to burn the operating system image to a Micro SD Card. Raspberry Pi has provided a quick video overview available to watch.

Select the 64-bit Raspberry Pi OS Bullseye Desktop image by going to Raspberry Pi OS (Other) → Raspberry Pi OS (64-bit).

You can customize settings for your Pi by clicking the gear icon in the lower righthand corner after choosing your OS.

Make sure to enable SSH, set a good username and password, set your WiFi credentials, and update the locale settings to your specific needs.

Log In

After properly unmounting ("ejecting") your card from your computer, go ahead and place the microSD card in the Pi and supply power to boot it up. It may take a few minutes before it's available.

On your computer, assuming you left the username as pi and the hostname as raspberrypi.local, SSH into the Pi using the following:

ssh pi@raspberrypi.local

If you changed your username or hostname settings in the imager, use those instead.

Update the Raspberry Pi

Update the Pi

sudo apt update
sudo apt upgrade -y
sudo apt install -y python3-pip
sudo apt install --upgrade -y python3-setuptools

Setup Virtual Environment

If you are installing on the Bookworm version of Raspberry Pi OS, you will need to install your Python modules in a virtual environment. You can find more information in the Python Virtual Environment Usage on Raspberry Pi guide. To install and activate the virtual environment, use the following commands:

sudo apt install python3.11-venv
python -m venv env --system-site-packages

You will need to activate the virtual environment every time the Pi is rebooted. To activate it:

source env/bin/activate

If needed, you can use deactivate, but leave it active for now.
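If you're ever unsure whether the environment is active, the interpreter itself can tell you. Here's a small stdlib-only check (the function name is my own, not part of any library):

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the base installation.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

After `source env/bin/activate`, running this with `python` should print True.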

Upgrade Script

We put together a script to easily make sure your Pi is correctly configured and install Blinka. Although Blinka isn't required for this to work, it's nice to have and this script also enables several other interfaces that are required for this project to work. It requires just a few commands to run. Most of it is installing the dependencies.

cd ~
sudo pip3 install --upgrade adafruit-python-shell
wget https://raw.githubusercontent.com/adafruit/Raspberry-Pi-Installer-Scripts/master/raspi-blinka.py
sudo python3 raspi-blinka.py

If your system default Python is Python 2 (which usually is no longer the case with the recent versions of Raspberry Pi OS), it will ask to confirm that you want to proceed. Choose yes.

It may take a few minutes to run. When it finishes, it will ask you if you would like to reboot. Choose yes.

Driver Installation

The first step to setup the display is to install the necessary drivers. Depending on your exact hardware, you have a couple of options. After the driver setup, there are a couple of additional steps below.

Option 1. BrainCraft HAT Setup

If you have an Adafruit BrainCraft HAT, you'll first want to head over to the Adafruit BrainCraft HAT - Easy Machine Learning for Raspberry Pi guide and go through the setup process there. Specifically, you'll want to follow the Audio Setup, Fan Service Setup, and Display Module Install pages. This will guide you through all the pieces needed to prepare the Pi.

Option 2. PiTFT Setup

If you have just a bare PiTFT, you'll want to install the PiTFT Drivers. There is a new installer script now, so it can be installed with just a few commands. First, start by installing a few dependencies and downloading the repo:

cd ~
sudo pip3 install --upgrade adafruit-python-shell click
sudo apt-get install -y git
git clone https://github.com/adafruit/Raspberry-Pi-Installer-Scripts.git
cd Raspberry-Pi-Installer-Scripts

Next, choose the install command based on your display. For the 2.4", 2.8", or 3.2" Resistive touchscreens, use the following command:

sudo -E env PATH=$PATH python3 adafruit-pitft.py --display=28r --rotation=90 --install-type=mirror

For the 3.5" Resistive touchscreen, use the following command:

sudo -E env PATH=$PATH python3 adafruit-pitft.py --display=35r --rotation=90 --install-type=mirror

For the 2.8" Capacitive touchscreen, use the following command:

sudo -E env PATH=$PATH python3 adafruit-pitft.py --display=28c --rotation=90 --install-type=mirror

When you get asked to reboot, reboot!

After it reboots, you should now see your desktop on the display.

Make the TFT the Default Display

With the Desktop Environment, you can set the default display by setting an environment variable. To set the TFT as the display where the window is drawn, there are a few ways to do this depending on the level of permanence that you would like to give it. Each way involves setting the DISPLAY environment variable to :0.

The first way is to prefix every command with DISPLAY=:0. This sets the environment variable until the command is done executing. So if you wanted to run libcamera-hello, you would type out:

DISPLAY=:0 libcamera-hello

The second way is to set the variable for the remainder of the SSH terminal session. To do this, just run the command every time you SSH into the Pi:

export DISPLAY=:0

The third way is to set it in a file that automatically runs each time you connect with SSH such as .bashrc. Of course this also assumes you continue using bash as your default shell. To make a backup of your current .bashrc file and add it, just run these commands:

cp ~/.bashrc ~/.bashrc.backup
echo "export DISPLAY=:0" >> ~/.bashrc
source ~/.bashrc

For the sake of simplicity, this guide will assume you are using one of the latter two methods. If you want to use the first method, just prefix DISPLAY=:0 to any command that uses the camera, such as the libcamera apps or any of the rpi-vision demos.
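The DISPLAY variable is also visible to Python programs, so a script can supply its own fallback before opening any windows. This is just an illustrative stdlib sketch, not something the rpi-vision demos require if you've already exported DISPLAY:

```python
import os

# Mirror `export DISPLAY=:0` from inside a script: only set a
# default when the variable isn't already in the environment.
os.environ.setdefault("DISPLAY", ":0")
print(os.environ["DISPLAY"])
```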

Now do an initial test with the camera to make sure the hardware is working before diving into TensorFlow. This should display what the camera sees on the display.

libcamera-hello -t 0

Press Ctrl+C to exit the test.

Here's what it looks like on the BrainCraft HAT:

And on the PiTFT, it should look something like this:

If you don't see anything on your camera or get an error message, be sure you have the camera interface enabled in raspi-config. Also, be sure the camera cable is not inserted backwards. If it is still having issues, you will need to get that working before continuing. Unfortunately troubleshooting a camera setup can be complex and is beyond the scope of this guide.

Install requirements

There are a few Python packages that TensorFlow requires that need to be installed:

sudo apt install -y python3-numpy python3-pillow python3-pygame

Install the Speech Output package

sudo apt install -y festival

Install rpi-vision

Now to install an Adafruit fork of a program originally written by Leigh Johnson that uses the MobileNet V2 model to detect objects. While there is a MobileNet V3, it isn't as accurate as V2. This part will take a few minutes to complete.

cd ~
source env/bin/activate
git clone --depth 1 https://github.com/adafruit/rpi-vision.git
cd rpi-vision
pip3 install -e .

Install TensorFlow 2.x

Next you can download and install TensorFlow 2. There is a nice set of releases over on PINTO0309's GitHub Releases Page. To install TensorFlow 2.15.0, run the following commands. This cryptic bit of shell code figures out the current version of CPython running on your Pi, substitutes it into the release filename, and then downloads and installs the appropriate wheel.

CPVER=$(python --version | grep -Eo '3\.[0-9]{1,2}' | tr -d '.')
RELEASE=<TensorFlow 2.15.0 wheel URL from PINTO0309's releases page>
pip install $(echo "$RELEASE" | sed -e "s/cp[0-9]\{3\}/cp$CPVER/g")
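If you're curious what that one-liner computes, the CPython tag is just the major and minor version with the dot removed; the equivalent in Python is a single f-string:

```python
import sys

# Equivalent of the shell CPVER computation: e.g. Python 3.11 -> "311",
# producing the cpXYZ tag embedded in wheel filenames ("cp311").
cp_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(cp_tag)
```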

After this, go ahead and reboot the Pi.

sudo reboot

If you get a warning with "No module named 'rpi_vision'", this means Python couldn't load the module. Either the module or the Python binding likely wasn't installed correctly.
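If you run into that warning, you can check whether Python can even locate a module without importing it, using the stdlib importlib machinery (the helper name here is my own):

```python
import importlib.util

def module_installed(name: str) -> bool:
    # find_spec() searches sys.path and returns None when the
    # module cannot be located (i.e. it isn't installed).
    return importlib.util.find_spec(name) is not None

print(module_installed("sys"))          # stdlib module: True
print(module_installed("rpi_vision"))   # True only after `pip3 install -e .`
```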

Running the Graphic Labeling Demo

Finally you are ready to run the detection software. To run a program that will display the object it sees on screen, type in the following:

source env/bin/activate
cd rpi-vision
python3 tests/pitft_labeled_output.py --tflite

You should see a bunch of text scrolling in your SSH window.

Now start holding up various items in front of the camera and it should display what it thinks it sees, which may not always match what the item actually is. Some items that it's pretty good about identifying are coffee mugs and animals.
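Under the hood, MobileNet V2 classifiers take a fixed 224x224 RGB input with pixel values rescaled from [0, 255] to [-1, 1]. rpi-vision handles this for you; the following is just a rough numpy-only sketch of that preprocessing step (the function name and the nearest-neighbor resize are my own, for illustration):

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Center-crop a camera frame to a square, resize to 224x224,
    rescale to [-1, 1], and add a batch dimension."""
    h, w, _ = frame.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]
    # Nearest-neighbor resize via index sampling (no PIL/OpenCV needed)
    idx = (np.arange(224) * side) // 224
    resized = crop[idx][:, idx]
    # MobileNet V2 scaling: x / 127.5 - 1 maps [0, 255] onto [-1, 1]
    scaled = resized.astype(np.float32) / 127.5 - 1.0
    return scaled[np.newaxis, ...]   # shape (1, 224, 224, 3)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)   # (1, 224, 224, 3)
```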

Speech Output

As an added bonus, you can hook up a pair of headphones or a speaker to the Raspberry Pi and it will actually tell you what it is detecting. Make sure you don't have any HDMI cables plugged in, though, or the audio will go through the monitor.

If you don't hear anything, make sure the sound source you are expecting is selected. With the Raspberry Pi Desktop, you have a couple of ways to do this. If you have a mouse connected, you can right-click the speaker icon in the upper right-hand corner and choose your source.

If you prefer the command line, you can also run raspi-config and go to System Options → Audio to select it.

If you still don't hear anything, make sure your volume is turned up by using alsamixer. For the BrainCraft HAT, this is described in more detail on the Audio Setup page.

Finally, you can test the text to speech from the command line by running the following command:

echo "This is a test" | festival --tts
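The same thing can be scripted. Here's a small Python sketch that shells out to festival (the speak helper is my own invention, and it quietly reports failure if festival is missing or can't reach an audio device):

```python
import shutil
import subprocess

def speak(text: str) -> bool:
    """Pipe text to festival's text-to-speech; return False when
    festival is not installed or exits with an error."""
    if shutil.which("festival") is None:
        return False
    try:
        subprocess.run(["festival", "--tts"], input=text.encode(), check=True)
        return True
    except subprocess.CalledProcessError:
        return False

speak("This is a test")
```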

This guide was first published on Sep 04, 2019. It was last updated on Jan 31, 2024.