The first part of our Rock, Paper, Scissors game is to train an ML model that can classify the hand gesture you make.
We'll use Lobe to collect the images for our dataset and to train the model.
Open Lobe and create a new project.
From the top right, select Import and choose Camera in the drop-down menu.
In the bottom left corner, set your first label to "Rock" and take at least 10 pictures of your fist at different angles.
Hint: If you press and hold the capture button, Lobe will take a burst of images.
Repeat this step for the "Paper" and "Scissors" hand gestures.
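If you'd rather collect the gesture images with a script and then bring them into Lobe through its image import option instead of the built-in camera, the sketch below shows one way to do it with OpenCV. The folder layout, label names, and shot count are placeholders matching the steps above; Lobe's camera workflow works perfectly well on its own.

```python
# Rough sketch (optional alternative to Lobe's camera): capture a burst of
# webcam frames for each gesture and save them into one folder per label,
# ready to be imported into Lobe from disk.
import os
import time

import cv2

LABELS = ["Rock", "Paper", "Scissors"]
SHOTS_PER_LABEL = 10               # at least 10 images per gesture, as above

cap = cv2.VideoCapture(0)          # default webcam
for label in LABELS:
    os.makedirs(label, exist_ok=True)
    input(f"Hold the {label} gesture and press Enter to start capturing...")
    for i in range(SHOTS_PER_LABEL):
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(label, f"{label}_{i}.jpg"), frame)
        time.sleep(0.3)            # brief pause so you can vary the angle
cap.release()
```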
In Lobe, switch to the Use tab in the left menu and select Camera at the top.
Test all three hand gestures and check that the model recognizes them accurately. If it is not working as expected, you can use the green and red buttons to improve the model.
Click the green button when your model predicts the correct label. This will add the image with the correct label to your dataset.
Click the red button when your model predicts the wrong label. You can then provide the correct label, and the image will be added to your dataset.
If you find that one of the gestures is consistently confusing the model, try collecting more images of that gesture.
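Once the model looks reliable in the Use tab, you'll eventually export it so the game can call it from code. As a quick sanity check, the snippet below is a minimal sketch assuming you use Lobe's TensorFlow export; the folder and image paths are placeholders, the input name and size are read from the exported signature, and the 0-1 pixel scaling is an assumption you should verify against your own export.

```python
# Minimal sketch: load a Lobe TensorFlow (SavedModel) export and classify one image.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.saved_model.load("rps_saved_model")        # placeholder export folder
infer = model.signatures["serving_default"]

# Read the image input's name and expected size from the exported signature.
input_name, input_spec = next(iter(infer.structured_input_signature[1].items()))
height, width = input_spec.shape[1], input_spec.shape[2]

img = Image.open("test_rock.jpg").convert("RGB").resize((width, height))
x = np.asarray(img, dtype=np.float32)[np.newaxis, ...] / 255.0   # assumed 0-1 scaling

outputs = infer(**{input_name: tf.constant(x)})
print(outputs)    # per-label confidences for Rock, Paper, Scissors
```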