When you first run the program, it will take some time to load the TensorFlow library and the Lobe ML model. When the program is ready to capture an image, the status light (white LED) will pulse.
Once you've taken an image, the program will compare the image to the Lobe ML model and output the resulting prediction (line 83). The output determines which light is turned on: yellow (garbage), blue (recycle), green (compost), or red (hazardous waste).
If none of the indicator LEDs turn on and the status LED returns to pulse mode, the captured image was classified as "not trash". In other words, retake the photo!
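The label-to-LED mapping above can be sketched as a small lookup function. This is a hedged illustration, not the project's actual `led_select()` code: the label strings and BCM pin numbers here are assumptions, so substitute the exact labels from your Lobe model and the pins from your own wiring.

```python
# Hypothetical label-to-LED mapping; labels and pins are assumptions.
LED_PINS = {
    "garbage": 17,    # yellow LED
    "recycle": 27,    # blue LED
    "compost": 22,    # green LED
    "hazardous": 23,  # red LED
}

def led_for_label(label):
    """Return the GPIO pin for a prediction label, or None for "not trash"."""
    return LED_PINS.get(label.lower())
```

A "not trash" prediction falls through to `None`, which is why no indicator LED lights up in that case.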
Capturing an Image
Press the pushbutton to capture an image. Note that you may need to hold the pushbutton for at least 1 second for the program to register the press. It is recommended to take a few test images and open them on the Desktop to get a feel for the camera's field of view and framing.
To give the user time to position the object and the camera time to adjust to light levels, a full image capture takes about 5s. You can change these settings in the code (lines 35 and 41), but keep in mind that the Raspberry Pi Foundation recommends a minimum of 2s for light-level adjustment.
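The capture sequence described above can be sketched as follows. This is a minimal illustration assuming the `picamera` library; the two delay values are assumptions standing in for the settings at lines 35 and 41 of the sample code, and the `PiCamera` calls only work on a Raspberry Pi with the camera enabled.

```python
def capture_image(path, position_delay=3, warmup=2):
    """Wait for the user to position the object, let the camera adjust
    to light levels (minimum 2s recommended), then save a still image."""
    from time import sleep
    from picamera import PiCamera  # hardware-only; runs on the Pi

    sleep(position_delay)          # time to position the object in frame
    with PiCamera() as camera:
        camera.start_preview()
        sleep(warmup)              # light-level adjustment (>= 2s)
        camera.capture(path)       # save the still image to disk
        camera.stop_preview()
```

Shortening `warmup` below 2s tends to produce under- or over-exposed images, which in turn hurts prediction accuracy.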
The biggest challenge is ensuring that the captured image is what we expect, so take some time to review the images and compare expected results with indicator LED output. If necessary, you can pass in images to the Lobe ML model for direct inferencing and faster comparison.
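For that direct-inferencing workflow, a minimal sketch assuming the `lobe` Python library is shown below. The model directory and image path are placeholders for your own exported Lobe model and saved test images; this is an illustration of the approach, not the project's own script.

```python
def predict_image(model_dir, image_path):
    """Run a saved image through an exported Lobe model and return the
    top label plus the full list of (label, confidence) pairs."""
    from lobe import ImageModel  # assumes the lobe package is installed

    model = ImageModel.load(model_dir)           # exported Lobe model folder
    result = model.predict_from_file(image_path)
    return result.prediction, result.labels
```

Comparing the returned label against the LED that actually lit up makes it much faster to spot framing or lighting problems than re-running the full capture loop.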
A few things to note:
- The TensorFlow library will likely throw some warning messages -- this is typical for the version used in this sample code.
- The prediction labels must be exactly as written in the `led_select()` function, including capitalization, punctuation, and spacing. Be sure to change these if you have a different Lobe model.
- The Pi requires a steady power supply. The Pi's power light should be bright, solid red.
- If one or more LEDs are not turning on when expected, check the wiring by forcing them on directly.
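Forcing an LED on can be done from the Python REPL on the Pi; a minimal sketch assuming the `gpiozero` library is below. The pin number is an assumption, so substitute the BCM pin each LED is actually wired to in your build.

```python
def force_led_on(pin):
    """Force a single LED on to verify its wiring."""
    from gpiozero import LED  # hardware library; only works on the Pi

    led = LED(pin)            # pin is a placeholder BCM pin number
    led.on()                  # stays lit until led.off() or the REPL exits
    return led
```

For example, `force_led_on(17)` would light whichever LED is wired to BCM pin 17. If the LED still does not light, check its polarity and resistor before suspecting the code.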