The first part of our Rock, Paper, Scissors game is to train an ML model that can classify the hand gesture you make.

We'll use Lobe to collect a dataset of images and train the model.

Collect your images in Lobe

First, download and install Lobe from the Lobe website at lobe.ai.

Open Lobe and create a new project.

From the top right, select Import and choose Camera in the drop-down menu.

In the bottom left corner, set your first label to "Rock" and take at least 10 pictures of your fist from different angles.

Hint: If you press and hold the capture button, you can take a burst of images in Lobe.

Make sure your model labels are capitalized ("Rock", not "rock") so they work with the sample code later in this guide.
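
Capitalization matters because label matching in code is typically an exact string comparison. As a hypothetical illustration (this is a sketch, not the guide's actual sample code):

# Hypothetical sketch: label matching is an exact string comparison,
# so a label trained as "rock" would never match "Rock".
GESTURES = ["Rock", "Paper", "Scissors"]   # labels exactly as entered in Lobe

def is_valid_gesture(predicted_label):
    return predicted_label in GESTURES

print(is_valid_gesture("Rock"))   # True
print(is_valid_gesture("rock"))   # False - capitalization must match exactly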

Repeat this step for the "Paper" and "Scissors" hand gestures.

Test your model

In Lobe, switch to the Use tab in the left menu and select Camera at the top.

Test all three hand gestures and check that the model recognizes each one accurately. If it doesn't work as expected, you can use the green and red buttons to improve it.

Click the green button when your model predicts the correct label. This will add the image with the correct label to your dataset.

Click the red button when your model predicts the wrong label. You can then provide the correct label, and the image will be added to your dataset.

If you find that one of the gestures is consistently confusing the model, try collecting more images of that gesture.
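
If you'd like to sanity-check your trained model outside of Lobe's Use tab, one option is the open-source lobe-python library, which can load a model exported from Lobe as a TensorFlow model and run predictions from a script. Here is a minimal sketch, assuming you've already exported the model from Lobe; the file paths are placeholders to replace with your own:

# Minimal sketch using the lobe-python library (pip install lobe).
# Assumes the model was exported from Lobe in TensorFlow format;
# both paths below are placeholders.
from lobe import ImageModel

model = ImageModel.load("path/to/exported/model")          # folder from Lobe's export
result = model.predict_from_file("path/to/test_image.jpg") # classify one photo

print(result.prediction)                 # top label, e.g. "Rock"
for label, confidence in result.labels:  # every label with its confidence
    print(f"{label}: {confidence:.2f}")

If the confidences for two gestures are close, that's another sign you should collect more images of the gesture that's being confused.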
