How it Works

Joy is written as an Arduino sketch. For something that basically does one task (a USB gamepad, albeit one embellished with a lot of graphical flair), it’s an awfully big and hairy Arduino sketch.

It was written this way for performance reasons, to keep everything animated and responsive. But to be honest, between ever-faster microcontrollers and ongoing improvements in CircuitPython speed and features, there will probably be no need to program projects like this in such a tedious manner in the future! But if you’re curious, here’s a link to the code:

Joy_of_Arcada source code on GitHub

There are four files: Joy_of_Arcada.ino is the main Arduino sketch, which is accompanied by three header files (.h) containing tables of graphics, sound and keyboard codes.

Joy_of_Arcada is so named because it uses our Adafruit_Arcada library, which encapsulates a lot of graphics, sound and control-related functions common to several Adafruit boards. The Arcada library, in turn, depends on a whole bunch of other libraries to provide the lower-level functionality. So many libraries, in fact, that rather than list them all here it’s best to link to this other guide explaining all the prerequisites.

You should also have the latest Adafruit SAMD boards package for Arduino (version 1.5 or later; we suggest the latest version offered in the Arduino Boards Manager). If you’ve used other Adafruit SAMD boards in the past (M0, M4, HalloWing, etc.), it’s worth checking for any recent updates (Tools→Board→Boards Manager…). In addition, before compiling this code, make sure to select Tools→USB Stack→TinyUSB (this lets our code access files on the PyGamer/PyBadge flash filesystem).

Animating Joy

To ensure button and joystick input is processed promptly, the code pulls some shenanigans to draw the face very quickly.

First, the entire screen is not drawn for every frame of animation. There’s really only a rectangular section in the middle where all the motion occurs — the bounds of the eyes and mouth. So after clearing the screen and drawing a full-face bitmap just once, all subsequent updates refresh only this middle area.

An offscreen buffer for just this area is maintained in RAM (called a framebuffer in Arcada library parlance, or a GFXcanvas16 object in Adafruit_GFX terms). We periodically modify sections of this buffer in RAM, then copy the whole thing to the screen.

The offscreen buffer is processed in regions, of varying height but all the same width. Even though that wastes a little memory (the mouth is not as wide as the eyes, for example), making everything the same width allows us to use a single memcpy() call to draw each region, because the scanlines are contiguous in memory (moving data between different-width images would require copying each scanline separately). It’s a one-dimensional operation rather than 2-D.
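The width trick can be sketched like so (names and dimensions are invented for illustration, but the memory-layout argument is the same): when source and destination share a width, all the scanlines of a region form one contiguous run of pixels, so one memcpy() suffices; a narrower source would need one memcpy() per scanline.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sizes, not the sketch's real dimensions. Both the source
// image and the destination region span the same width, so their pixels
// sit back-to-back in memory.
constexpr int REGION_W = 96;          // shared width
constexpr int EYES_H = 24, BUF_H = 64;

uint16_t srcImage[EYES_H * REGION_W]; // one pre-rendered animation frame
uint16_t buffer[BUF_H * REGION_W];    // the offscreen buffer

// Same width on both sides: the region is one contiguous run of
// EYES_H * REGION_W pixels, so a single memcpy() moves all of it.
void copyRegionFast(int destRow) {
  memcpy(&buffer[destRow * REGION_W], srcImage,
         sizeof(uint16_t) * EYES_H * REGION_W);
}

// If the source were narrower, its rows would NOT line up with the
// buffer's rows, forcing one memcpy() per scanline instead:
void copyRegionSlow(const uint16_t *narrow, int narrowW, int h,
                    int destX, int destRow) {
  for (int row = 0; row < h; row++)
    memcpy(&buffer[(destRow + row) * REGION_W + destX],
           &narrow[row * narrowW], sizeof(uint16_t) * narrowW);
}
```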

The pupils are a special case. Those are drawn more conventionally (using the drawRGBBitmap() function from the Adafruit_GFX library, because they’re round and we need that code’s masking capability). Then the eyelids are drawn on top of this when needed, using memcpy() as previously described.
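Here’s a simplified, host-runnable take on that kind of masked blit. It mimics the spirit of Adafruit_GFX’s mask-capable drawRGBBitmap() overload (16-bit pixels plus a 1-bit mask), but the function and all names here are invented for this example:

```cpp
#include <cstdint>

// Copy an RGB565 bitmap into a destination buffer, but only where the
// corresponding bit in a 1-bit mask is set. This is how a round pupil
// can be drawn without stamping a square over the eye: pixels outside
// the circle have their mask bit clear and are left untouched.
void maskedBlit(uint16_t *dest, int destW, int x, int y,
                const uint16_t *bitmap, const uint8_t *mask, int w, int h) {
  for (int row = 0; row < h; row++) {
    for (int col = 0; col < w; col++) {
      int i = row * w + col;                  // pixel index in the bitmap
      if (mask[i / 8] & (0x80 >> (i & 7)))    // mask bit set: opaque pixel
        dest[(y + row) * destW + (x + col)] = bitmap[i];
      // mask bit clear: leave the destination pixel alone
    }
  }
}
```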

So drawing the face then is mostly a matter of copying one of several fixed-sized bitmap images (encoded in the graphics.h file — more on that in a moment) to a corresponding area in the offscreen buffer.

Copying each completed frame of face animation to the screen is done using direct memory access (DMA), which lets the data transfer occur “in the background” without consuming CPU instruction cycles, freeing us to handle more joystick and button input while the screen redraws.

Preparing the Graphics

As alluded to above, the graphics are encoded as tables in a header file (part of program memory), not as image files in the flash filesystem. They’re just huge arrays of 16-bit values, one value per pixel.
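For reference, each of those 16-bit values is an RGB565 pixel: 5 bits of red, 6 of green, 5 of blue packed into a uint16_t. The standard packing (generic code, not taken from the project) looks like:

```cpp
#include <cstdint>

// Pack an 8-bit-per-channel color into a 16-bit RGB565 value:
// top 5 bits of red, top 6 bits of green, top 5 bits of blue.
uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b) {
  return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3);
}
```

So pure red comes out as 0xF800, which is why values like that show up all over tables of this kind.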

It’s a frequent misconception that we have some kind of finely-crafted tool for converting images into header files like this, but that’s not true. Typically I’ll use some throwaway Python code and the Python Imaging Library (PIL, or its successor Pillow) for such conversions. This often starts with an existing image conversion script (such as the one from the Uncanny Eyes project, in this repository), which gets tweaked for the task at hand, but it’s extremely rare that anything like this is held onto. Every project’s needs are different, and it’s better for your mental health to think of these little one-off scripts as disposable tissues, not precious gems to be hoarded. It’s very informal stuff, and that’s actually a good thing: Python makes it so quick.

If you really need something ready-made though, this online tool can handle quite a number of situations.

This guide was first published on Jun 15, 2019. It was last updated on Jun 15, 2019. This page (How it Works) was last updated on Oct 15, 2019.