This page dives deeper into how the Rock, Paper, Scissors game code works.
```python
signs = ['Rock', 'Paper', 'Scissors']

game_logic = {'Rock': 'Paper', 'Paper': 'Scissors', 'Scissors': 'Rock'}
```
First, we define the three hand gestures that the player and computer can play.
Next, we define the game logic using a dictionary. The dictionary value on the right beats the dictionary key on the left. For example, the dictionary key 'Rock' is beaten by the dictionary value 'Paper'.
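To see how this lookup decides a round, here is a minimal standalone sketch using the same definitions (the `beats` helper name is just for illustration):

```python
signs = ['Rock', 'Paper', 'Scissors']
game_logic = {'Rock': 'Paper', 'Paper': 'Scissors', 'Scissors': 'Rock'}

def beats(sign):
    # game_logic maps each sign to the sign that defeats it
    return game_logic[sign]

print(beats('Rock'))      # Paper
print(beats('Scissors'))  # Rock
```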
```python
def load_image(path, camera) -> Image:
    img = Image.open(path)
    pad = Image.new('RGB', (
        ((img.size[0] + 31) // 32) * 32,
        ((img.size[1] + 15) // 16) * 16,
    ))
    pad.paste(img, (0, 0))
    layer = camera.add_overlay(pad.tobytes(), size=img.size)
    return layer
```
Next, we include a function to load images as overlay layers. We use these images for the countdown and to show the hand signs that the computer chooses. The function doesn't scale the images; it pads each one so its buffer matches the dimensions the camera overlay expects: the width is rounded up to a multiple of 32 and the height to a multiple of 16.
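The padding arithmetic can be tried on its own. This sketch (the `pad_size` helper name is mine) shows how the dimensions are rounded up to the block sizes the overlay buffer requires:

```python
def pad_size(width, height):
    # round width up to a multiple of 32 and height up to a
    # multiple of 16, as picamera overlay buffers require
    return ((width + 31) // 32) * 32, ((height + 15) // 16) * 16

print(pad_size(224, 224))  # (224, 224): already aligned
print(pad_size(100, 50))   # (128, 64)
```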
```python
def random_sign():
    return random.choice(signs)

def compare_signs(player_sign, computer_sign):
    # game_logic[sign] is the sign that beats it; return the
    # winner's name so the caller can display the right message
    if game_logic[player_sign] == computer_sign:
        return 'computer'
    elif game_logic[computer_sign] == player_sign:
        return 'player'
    else:
        return 'tie'
```
The first function above uses `random.choice` to return a random element from the list of signs.
The second function compares the sign detected by the Lobe model with the sign the Raspberry Pi chose at random. It first checks whether the sign the player chose loses to the sign the computer chose. For example, if the player chooses Rock, then `game_logic[player_sign]` will equal 'Paper', since we're looking up the value stored at the key 'Rock'.
```python
model = ImageModel.load('~/model')
```
`ImageModel` is a class from the Lobe library. We load the Lobe model and create an instance of the class.
```python
with picamera.PiCamera(resolution=(224, 224), framerate=30) as camera:
```
Next, we instantiate the Pi Camera. The rest of the code runs inside this `with` block, so the camera is released automatically when the block exits.
```python
stream = io.BytesIO()
camera.start_preview()
time.sleep(2)
```
We create a stream to continuously show the camera footage, then start the preview on the camera to populate the stream, and wait 2 seconds to let the camera warm up.
```python
rock = load_image('assets/rock.png', camera)
paper = load_image('assets/paper.png', camera)
scissor = load_image('assets/scissor.png', camera)
counter_one = load_image('assets/one.png', camera)
counter_two = load_image('assets/two.png', camera)
counter_three = load_image('assets/three.png', camera)
```
The above code loads all the images we use in the game. Each image is loaded into a separate layer.
Main loop
The section below covers the main program loop.
```python
stream.seek(0)
```
The above code rewinds the stream to its first byte, so each new capture overwrites the previous frame.
```python
inputs = get_inputs()
while Input.BUTTON not in inputs:
    inputs = get_inputs()
    time.sleep(0.1)
```
Next, we wait for the button to be pressed before starting the game.
```python
camera.preview.alpha = 0

counter_one.layer = 3
time.sleep(1)
counter_one.layer = 0

counter_two.layer = 3
time.sleep(1)
counter_two.layer = 0

counter_three.layer = 3
time.sleep(1)
counter_three.layer = 0

camera.preview.alpha = 255
time.sleep(1)
```
Once the button is pressed, we set the opacity of the camera preview to 0 so it's transparent. Then we move each number of the countdown to the front (layer 3) and wait 1 second between each one. Finally, we set the preview opacity back to 255 (fully opaque).
```python
camera.capture(stream, format='jpeg')
img = Image.open(stream)
result = model.predict(img)
label = result.prediction
camera.annotate_text = label
time.sleep(0.5)
```
The next step is to capture an image from the camera stream and open it as a Pillow image. The Lobe model then runs inference on the image and returns the predicted label, which we show on the camera preview as annotation text.
```python
computer_sign = random_sign()

# show the overlay that matches the computer's sign;
# note these are the lowercase layer variables defined earlier
if computer_sign == 'Rock':
    rock.layer = 3
elif computer_sign == 'Paper':
    paper.layer = 3
elif computer_sign == 'Scissors':
    scissor.layer = 3

time.sleep(2)

rock.layer = 0
paper.layer = 0
scissor.layer = 0
```
The above code is how the Pi plays the game! The Pi generates a random sign, checks which sign was generated, and shows the corresponding image for 2 seconds.
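The if/elif chain can also be written as a dictionary lookup from sign name to overlay, which avoids repeating the comparison. A sketch with stand-in objects (the `Overlay` class here is just a placeholder for the layers returned by `load_image`):

```python
import time

class Overlay:
    # stand-in for the overlay objects returned by camera.add_overlay
    def __init__(self):
        self.layer = 0

rock, paper, scissor = Overlay(), Overlay(), Overlay()
overlays = {'Rock': rock, 'Paper': paper, 'Scissors': scissor}

def show_sign(sign, duration=2):
    overlays[sign].layer = 3      # bring the chosen sign to the front
    time.sleep(duration)
    for overlay in overlays.values():
        overlay.layer = 0         # hide all signs again
```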
```python
winner = compare_signs(label, computer_sign)

if winner == 'player':
    camera.annotate_text = 'You Win!'
elif winner == 'computer':
    camera.annotate_text = 'You Lose...'
elif winner == 'tie':
    camera.annotate_text = 'Tie'
```
Finally, we compare the player's gesture (the `label` from the prediction) with the computer's gesture, determine the winner, and add the corresponding text to the camera preview.
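Putting the comparison and the result messages together, a whole round can be simulated off-device. This sketch assumes `compare_signs` returns `'player'`, `'computer'`, or `'tie'`, and the `messages` mapping is illustrative:

```python
import random

signs = ['Rock', 'Paper', 'Scissors']
game_logic = {'Rock': 'Paper', 'Paper': 'Scissors', 'Scissors': 'Rock'}
messages = {'player': 'You Win!', 'computer': 'You Lose...', 'tie': 'Tie'}

def compare_signs(player_sign, computer_sign):
    if game_logic[player_sign] == computer_sign:
        return 'computer'
    elif game_logic[computer_sign] == player_sign:
        return 'player'
    return 'tie'

player = 'Rock'
computer = random.choice(signs)
print(f'{player} vs {computer}: {messages[compare_signs(player, computer)]}')
```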