Welcome to the wonderful world of Lobe, where you can train machine learning models without writing a single line of code! Lobe is free and easy to use: all you have to do is take pictures and label them, and Lobe does the rest.

In this guide we'll walk through training a Lobe model, exporting it, and running it on a Raspberry Pi 4.

This tutorial is part of a series of Lobe machine learning tutorials.

Background Knowledge

To be successful with this project, you'll need some experience with the following:

  1. Setting up and using the Raspberry Pi
  2. Some familiarity with using the terminal window
  3. Installing the Pi Camera

First, we'll train a custom machine learning model using objects on your desk. For my model I used a pen, Lobe sticker, and a succulent plant. Feel free to use any objects you'd like!

To get started, download and install Lobe on your computer.

1. Open Lobe and create a new project.

2. In the Label tab, select Import (top right corner), and Camera from the drop down menu.

If this is your first time using Lobe you'll need to give it permission to use your camera.

If your computer doesn't have a camera you can take pictures using a cellphone or digital camera and import them using the Images option.

3. In the bottom left corner, type a label for the first image.

My first label is "Pen" because it's a picture of, well, a pen!

4. Take between 10 and 20 pictures of your objects using your computer's camera.

Take pictures from different angles, in different lighting conditions, and with different hand placements to improve model accuracy. 

5. Repeat steps 3 and 4 for the rest of your objects.

Remember to add a new label for each object (e.g. "Sticker" and "Succulent").

6. Add a "Nothing" category.

This improves accuracy of the ML model. See below for more info.

Why is there always a prediction even when nothing is in the image?

Lobe will always predict one of your labels even if your image does not contain any related content. If you expect your model to see these types of images, create a ‘None’ label and add variations of these images as examples. You can use this ‘None’ label as a placeholder when waiting for relevant predictions.

In Lobe, training happens automatically as soon as you add enough images (a minimum of 5 for each label). 

When you have taken between 5 and 10 pictures of each object and training is complete, test out the model by following the steps below.

1. In Lobe, select the Use tab and choose Camera.

1a. This is the ML model's prediction. The fuller this bar is, the more confident the model is in its prediction.

2. Try different placements of the objects used for training and see how the model performs.

 

Try other objects of the same type to check model accuracy. Also try out multiple objects at once. You will notice Lobe only makes one prediction at a time, even if there are multiple objects. See if you can trick your model into predicting the wrong thing and find ways to improve it.

3. Improve your model using the buttons in the bottom right.

My model did a great job at identifying sticky notes, but it mixed up the pen and highlighter.

1b. Use these buttons to give Lobe feedback to improve your model.

Click the green button for images that the model predicts correctly. Select the red button for images the model predicts incorrectly.

Next, export your Lobe model to use on the Raspberry Pi. We'll use TensorFlow Lite which is a format that is optimized for mobile and edge devices, like the Pi. 

In Lobe, navigate to the Use tab and click Export.

Select TensorFlow Lite and choose a location to save the model. We'll transfer the model to our Raspberry Pi later in the tutorial.
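Later in the guide, the lobe-python package will load this file for us, but for context, here is a rough sketch of what opening a .tflite model looks like with the TensorFlow Lite interpreter. The file name matches the Lobe export; the zero-filled input is only there to show the call pattern, not a real prediction.

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load the exported model (saved_model.tflite comes from the Lobe export).
interpreter = Interpreter(model_path="saved_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy, zero-filled tensor just to show the call pattern.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# One confidence value per label (the label names live in signature.json).
print(interpreter.get_tensor(output_details[0]["index"]))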

Now that the ML model is working on your computer, let's get your Raspberry Pi and BrainCraft HAT ready to make predictions on new images (also called inferencing). Your kit includes all the parts you need for this.

The next few steps walk you through setting up your BrainCraft HAT. We will skip the audio setup since we aren't using it for this project, but you can find the complete setup guide here.

Please note that the following steps require installing a lot of packages on the Pi and it may take a few hours to download everything.

Logging into your Pi in a headless configuration via SSH? Use the hostname "raspberrypi.local" (more info below)

As long as you're on the same WiFi network as the Pi, you can use the hostname raspberrypi.local in place of its IP address.

OK, now that you have all your parts in order, it's time to get your Raspberry Pi set up with the HAT or Bonnet.

Step 1 - Burn SD Card

Use Etcher or the Raspberry Pi Imager to burn the latest Raspbian Lite to an SD card (you can use the full version, but we won't be using the desktop software and it takes up a lot of room).

If you are using the Raspberry Pi Imager, you can press Ctrl+Shift+X to get to the advanced options.
If you enabled SSH and entered your WiFi credentials in the Imager, you can skip steps 2 and 3.

Step 2 - Configure log-in access

You'll need to be able to log into your Pi: either enable SSH access (and use an Ethernet cable), use a USB-to-serial cable, or connect a monitor and keyboard. Basically, get it to where you can log in.

We have a quickstart guide here and here that you can follow, or there are dozens of online guides. By the next step, it is assumed you are able to log in and type commands - ideally from a desktop computer, so you can copy and paste some of the very long commands!

Step 3 - Log in & Enable Internet

Once you've logged in, enable WiFi (if your Pi has built-in WiFi) with sudo raspi-config so you can SSH in.

Enable SSH as well if you haven't yet, also via sudo raspi-config

After you're done, reboot, and verify you can log into your Pi and that it has internet access by running ping -c 3 raspberrypi.org and seeing successful responses.

Step 4 - Update/Upgrade

Now that you are logged in, perform an update/upgrade:

sudo apt update
sudo apt -y upgrade

and

sudo apt install --upgrade python3-setuptools

Step 5 - Setup Virtual Environment

If you are installing on the Bookworm version of Raspberry Pi OS or later, you will need to install your Python modules in a virtual environment. You can find more information in the Python Virtual Environment Usage on Raspberry Pi guide. To install and activate the virtual environment, use the following commands:

sudo apt install python3.11-venv
python -m venv env --system-site-packages

To activate the virtual environment:

source env/bin/activate
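If you want to double-check that the virtual environment is active before installing anything, this optional one-liner from the Python prompt compares the interpreter's prefixes:

import sys

# Inside a venv, sys.prefix points at the env folder rather than the system Python.
print("virtual environment active:", sys.prefix != sys.base_prefix)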

OK you've now got a nice, clean, connected, and up-to-date Pi!

Blinka is our CircuitPython library compatibility layer. It allows many of the libraries that were written for CircuitPython to run on CPython for Linux. To learn more about Blinka, you can check out our CircuitPython Libraries on Linux and Raspberry Pi guide.

We put together a script to easily make sure your Pi is correctly configured and install Blinka. It requires just a few commands to run. Most of it is installing the dependencies.

This page is out of date for Raspberry Pi OS bookworm. Until this page is updated, refer to https://learn.adafruit.com/circuitpython-on-raspberrypi-linux/installing-circuitpython-on-raspberry-pi and that whole guide for the latest info on installing Blinka.
cd ~
sudo pip3 install --upgrade adafruit-python-shell
wget https://raw.githubusercontent.com/adafruit/Raspberry-Pi-Installer-Scripts/master/raspi-blinka.py
sudo python3 raspi-blinka.py

When it asks you if you want to reboot, choose yes.

Finally, once it reboots, there are just a couple CircuitPython libraries to install for the BrainCraft HAT or Voice Bonnet.

The DotStar library is for controlling the 3 on-board DotStar LEDs and the Motor library is for testing out the GPIO pins.

pip3 install --upgrade adafruit-circuitpython-dotstar adafruit-circuitpython-motor adafruit-circuitpython-bmp280

That's it for Blinka and CircuitPython libraries.
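If you'd like a quick sanity check that Blinka and the DotStar library are talking to the HAT, a short sketch like the one below lights the three on-board DotStars. The clock/data pins used here (D6/D5) are assumptions based on similar Adafruit boards, so verify them against the full BrainCraft HAT guide.

import board
import adafruit_dotstar

# Assumed wiring: clock on D6, data on D5 -- confirm in the BrainCraft HAT guide.
dots = adafruit_dotstar.DotStar(board.D6, board.D5, 3, brightness=0.1)

# Light the three LEDs red, green, and blue.
dots[0] = (255, 0, 0)
dots[1] = (0, 255, 0)
dots[2] = (0, 0, 255)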

The Fan option is only available on the Raspberry Pi 4.

We have a really simple fan service that will control the onboard fan. The reason we have it set up as a service instead of keeping the fan on all the time is so that it doesn't drain too much power from the Pi during the initial power on.

The fan service simply turns GPIO 4 on at startup, which is the pin the fan is connected to. Setting it up takes just a few steps in raspi-config.
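Before setting up the service, you can verify the fan wiring directly from Python with the short Blinka sketch below. It simply drives GPIO 4 by hand and is only an illustration; once the fan service owns the pin, manual control like this may no longer work.

import time
import board
import digitalio

# GPIO 4 drives the fan on the BrainCraft HAT.
fan = digitalio.DigitalInOut(board.D4)
fan.direction = digitalio.Direction.OUTPUT

fan.value = True   # fan on
time.sleep(5)
fan.value = False  # fan off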

To install, just type sudo raspi-config

Select Performance Options

Select Fan

Select Yes

And make sure you put down GPIO pin 4 for the fan

You can customize the fan temperature setting

That's it!

You can then 'stress test' by running

  • sudo apt-get install stress
  • while true; do vcgencmd measure_clock arm; vcgencmd measure_temp; sleep 10; done& stress -c 4 -t 900s

When the temperature hits the limit you set earlier, the fan should turn on and cool the Pi back down (in this case I set it to 70 degrees C).

On some newer versions of Raspberry Pi OS, the fan service fails to start. Luckily, these newer versions of the OS have a built-in fan control you can turn on.

In a command prompt or terminal, connect to the Pi using SSH and run the command:

sudo raspi-config

Select Performance Options

Select Fan

The default GPIO pin is 14, but the BrainCraft Hat has the fan connected to pin 4.

You can set the temperature to anything between 60 and 120 degrees Celsius; 80 is a good value to start with.

You'll need to reboot after changing the settings.

Now your fan will come on whenever the board is over the temperature you set. To check the current temperature of your board, use this command:

/opt/vc/bin/vcgencmd measure_temp
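If you'd rather read the temperature from Python (for example, to log it next to your ML predictions), a small sketch like this parses the same command's output. It assumes vcgencmd is on your PATH; otherwise use the full /opt/vc/bin path shown above.

import subprocess

# vcgencmd prints something like: temp=48.3'C
out = subprocess.check_output(["vcgencmd", "measure_temp"]).decode()
temp_c = float(out.split("=")[1].split("'")[0])
print("CPU temperature:", temp_c, "C")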

There are two ways you can use the 1.54" 240x240 display on the BrainCraft HAT. For machine learning purposes, the advanced method is the way to go, so that's what we'll be covering in this guide.

Be aware that you can only use one method at a time. If you choose the advanced method, it will install the kernel driver, which will prevent you from using the easy method without uninstalling the driver first.

The easy way is to use 'pure Python 3' and the Pillow library to draw to the display from within Python. This is great for showing text, stats, images, etc. that you design yourself. If you want to do that, the BrainCraft HAT has a layout very close to the Adafruit 1.3" Color TFT Bonnet, including the same type of display and a joystick, though the pinouts are slightly different. If you choose this option, you can skip this page and view the Python Setup page for instructions for that display.
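For reference, the 'easy way' looks roughly like the sketch below, drawing with Pillow and pushing the image over SPI with the adafruit-rgb-display library. The chip-select, DC pin, and offsets shown are assumptions borrowed from similar Adafruit 240x240 displays, so check the Python Setup page for the exact BrainCraft HAT values.

import board
import digitalio
from PIL import Image, ImageDraw
from adafruit_rgb_display import st7789

# Pin and offset choices are assumptions -- confirm them on the Python Setup page.
cs_pin = digitalio.DigitalInOut(board.CE0)
dc_pin = digitalio.DigitalInOut(board.D25)

spi = board.SPI()
disp = st7789.ST7789(spi, cs=cs_pin, dc=dc_pin, rst=None, baudrate=64000000,
                     width=240, height=240, x_offset=0, y_offset=80)

# Draw a solid background with a line of text and send it to the display.
image = Image.new("RGB", (240, 240), (0, 0, 128))
draw = ImageDraw.Draw(image)
draw.text((20, 110), "Hello BrainCraft!", fill=(255, 255, 255))
disp.image(image)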

The advanced way is to install a kernel module to add support for the TFT display that will make the console appear on the display. This is cute because you can have any program print text or draw to the framebuffer (or, say, with pygame) and Linux will take care of displaying it for you. If you don't need the console or direct framebuffer access, please consider using the 'pure Python' technique instead as it is not as delicate.

If you plan on using the Pi Camera for vision projects, you will need to go with the advanced route!

Installing The 1.54" Kernel Module

We have tried to make this as easy as possible for you by providing a script that takes care of everything. There are only a couple of dependencies needed. To get everything set up, just run the following at the terminal:

cd ~
sudo pip3 install --upgrade adafruit-python-shell click
sudo apt-get install -y git
git clone https://github.com/adafruit/Raspberry-Pi-Installer-Scripts.git
cd Raspberry-Pi-Installer-Scripts
sudo -E env PATH=$PATH python3 adafruit-pitft.py --display=st7789v_bonnet_240x240 --rotation=0 --install-type=mirror
If you want to use the BrainCraft HAT for vision projects, you will need to install the display driver as mirror and not console.

When you get asked to reboot, reboot!

That's it! You will now have the BrainCraft HAT with a console display on it

Install the Pi Camera module

Make sure you have the picamera Python module installed by running the following command:

pip3 install picamera

Now that you have everything set up, it's time to do an initial test with the camera. This should display what the camera sees on the display.

raspistill -t 0

Exit the camera test by pressing CTRL + C
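You can also grab a still from Python using the picamera module installed above. This short sketch saves a photo to the current directory; the test.jpg filename is just an example.

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1280, 720)   # example resolution
camera.start_preview()
sleep(2)                          # give the sensor a moment to adjust exposure
camera.capture("test.jpg")        # example output filename
camera.stop_preview()
camera.close()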

Since we're using the Pi in a headless configuration, we'll use an SFTP connection to transfer files between our computer and the Pi.

Windows Instructions

Download and install WinSCP

 

Open WinSCP and start a New Session

Select an SFTP connection, fill in the IP address of your Pi, set the username to pi, and put in your password.

Your Pi's IP address is on the screen of the BrainCraft. You can also use the hostname, e.g. "raspberrypi.local" (pi@raspberrypi.local).

Mac Instructions

Download and install FileZilla. When it's done installing, open the program.

Type sftp:// followed by the IP address of your Pi. Set the username to pi and put in your password.

Your Pi's IP address is on the screen of the BrainCraft. You can also use the hostname, e.g. "raspberrypi.local" (pi@raspberrypi.local).

1. Connect your Pi to a power source and wait for it to boot up.

You should see a solid red light and an intermittently flashing green light.

2. Open command prompt on a PC or terminal on Mac/Linux and connect to your Pi using SSH.

Type the following command, but replace the example IP address below with the IP address of your Pi:

ssh pi@192.168.0.22

Your Pi's IP address is on the screen of the BrainCraft. You can also use the hostname, e.g. "raspberrypi.local" (pi@raspberrypi.local).

3. Download the GitHub folder.

Run the following commands to download the sample code from GitHub:

cd ~
git clone https://github.com/lobe/lobe-adafruit-kit.git

4. Create a new folder called model in the home directory.

Type the following commands:

cd ~
mkdir model

5. Open the FTP connection from the previous step.

6. Copy saved_model.tflite and signature.json from your exported Lobe model to the model directory on the Pi.

7. In the terminal on the Pi, run the following script to install Lobe and all of its dependencies:

 

cd ~
wget https://raw.githubusercontent.com/lobe/lobe-python/master/scripts/lobe-rpi-install.sh
sudo bash lobe-rpi-install.sh

8. In terminal on the Pi, run the Python program lobe-basic-prediction.py

Type the following commands:

cd ~
cd lobe-adafruit-kit
python3 lobe-basic-prediction.py
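If you'd rather write your own script than run the sample, the core lobe-python calls look roughly like the sketch below (API names as described in the lobe-python README; the image path is just an example).

from lobe import ImageModel

# Load the exported Lobe model from the folder created earlier.
model = ImageModel.load("/home/pi/model")

# Run a prediction on a saved image (example path).
result = model.predict_from_file("/home/pi/test.jpg")
print("Top prediction:", result.prediction)

# Print every label with its confidence.
for label, confidence in result.labels:
    print(label, round(confidence, 3))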

Keep testing the model on the Pi and see how it works. If you find that the prediction is consistently wrong, you can add more images to the model to improve its performance.

You can train an ML model to recognize all sorts of objects and then use the BrainCraft to trigger actions in the physical world!

To learn how to do this, check out the more advanced projects in this series.

This guide was first published on Mar 30, 2021. It was last updated on Mar 08, 2024.