It only takes a few seconds’ observation to recognize that it’s not our eye code…
Spotting the difference has nothing to do with shapes or colors…that’s all configurable in the HalloWing eye code. Rather, it’s the way the iris scales (or doesn’t) as the pupil dilates…on the left, it’s a diffuse circle pasted over the iris…on the right, we’ve put painstaking effort into making the whole iris morph.
But that’s okay! While it would be neat to see our code out in the wild, I’ll explain in a moment why we’re each making smart moves in the long run for our intended audiences.
Listening in on a few signals with an oscilloscope was enough to form a theory of how it works. The probing focused on one of the display connectors and the flash chip.
Also, just looking at the screens as it worked…
For example, it’s easy to spot that the exact same image is being shown on both displays. The signals are simply split and sent to both connectors.
In our two-eyed projects, each eye is controlled separately: there are distinct left and right eyes…they have a particular shape, and they “fixate,” slightly crossed, as eyes normally do in reality when focusing on nearby subjects.
Here they’ve cleverly approximated fixation by making the plastic housings distinct for left and right, creating a physical stencil of the larger eye image.
Didn’t think to take screen shots from the ’scope, but descriptions should suffice…
The TFT displays are receiving data over an 8-bit parallel connection. It’s unclear whether this was for cost (some displays may be cheaper with a parallel interface than a high-speed SPI one), whether a wider-but-slower connection has better immunity to interference from the nearby motors, or whether the microcontroller simply couldn’t drive SPI at sufficient speed.
The clock signal for this connection is running at 2.5 MHz.
2.5 MHz x 8 bits = 20 megabits/second aggregate throughput.
Two signals, most likely chip select and data latch, could be seen blipping around 60 Hz — a likely screen refresh rate.
One screenful of data…160 x 128 pixels, most likely 16 bits/pixel…is 327,680 bits.
327,680 bits x 60 Hz = 19,660,800 bits/second…or about 20 megabits/second again; the math checks out. They can pump full frames to the displays at this rate, no messing with “dirty rectangle” techniques.
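For anyone who wants to double-check the arithmetic, here’s a quick back-of-the-envelope script (plain Python; the clock rate, bus width, resolution, color depth and refresh rate are the figures observed above, not anything read out of the hardware):

    # Sanity-check the display bandwidth using the figures observed
    # on the 'scope: 2.5 MHz clock on an 8-bit parallel bus, feeding
    # 160x128 screens at 16 bits/pixel and ~60 Hz.

    CLOCK_HZ = 2_500_000      # parallel bus clock
    BUS_BITS = 8              # 8-bit parallel connection

    bus_bps = CLOCK_HZ * BUS_BITS
    print(f"Bus throughput: {bus_bps:,} bits/sec")        # 20,000,000

    WIDTH, HEIGHT = 160, 128  # display resolution
    BITS_PER_PIXEL = 16       # most likely RGB565
    REFRESH_HZ = 60           # from the ~60 Hz chip select / latch blips

    frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL
    print(f"One frame: {frame_bits:,} bits")              # 327,680

    needed_bps = frame_bits * REFRESH_HZ
    print(f"Needed for 60 Hz: {needed_bps:,} bits/sec")   # 19,660,800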
Turning attention to the flash chip…
On our MONSTER M4SK eyes, the flash storage is only accessed on startup to load some graphics into RAM, then everything else happens internally on the microcontroller, using lots of math. It was odd to see constant access to the flash on the werewolf eyes…
The clock signal to the flash was running at 10 MHz, with data visible on two channels: dual SPI.
10 MHz x 2 bits = 20 megabits/second. Same as the screens, exactly 1:1.
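The same one-line check works for the flash side (again, just the observed figures):

    # Flash side: 10 MHz clock with two data lines (dual SPI).
    flash_bps = 10_000_000 * 2
    print(f"Flash throughput: {flash_bps:,} bits/sec")    # 20,000,000, matching the displays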
And that’s all the info we need. Case closed.
There is no “eye code” here! It’s a video player. Its whole job is to move frames from flash storage to screen.
It becomes even more apparent as you watch it run. You can’t fit many frames in 2 megabytes, but they make good use of them. The eye only moves between five positions, always returning to the “home position” between moves. Never blinks or dilates when in motion, only at those locations. And the blinks and dilations are always symmetrical — the frames going into a motion match the frames coming out of a motion, just in reverse. The pattern repeats after about 15 seconds.
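A little more arithmetic shows why the motion is so constrained. Assuming the frames are stored raw at 16 bits/pixel (which the exact 1:1 flash-to-screen throughput suggests, though it’s still an inference), a 2 megabyte chip only holds about 51 of them:

    # How many uncompressed frames fit in a 2 MB flash chip?
    # Assumes raw 160x128 frames at 16 bits/pixel -- an inference
    # from the 1:1 flash-to-screen throughput, not a dumped image.
    FLASH_BYTES = 2 * 1024 * 1024        # 2 megabytes
    FRAME_BYTES = 160 * 128 * 2          # one frame, 2 bytes/pixel

    print(FLASH_BYTES // FRAME_BYTES)    # 51 frames

Fifty-odd stored frames stretched over a 15-second loop at 60 Hz means heavy reuse, which is exactly the hold-at-home-position, symmetric-in-and-out behavior described above.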
So, at this point, if we wanted to take it to the next level, we could desolder or blue-wire to the flash chip, extract its contents (datasheet and app note can be found through the link on the prior page), decipher how the data is laid out and replace the graphics. Reject their reality and substitute our own.
In practice though…while that would be a neat hack, and I’m sure someone will do it…very few readers will have the tools and methods to follow along at home, so I ended the exploration there.
It would be significantly easier (and also better looking) to simply swap out the eyes for a MONSTER M4SK board, which can then be customized with your own graphics over USB. We’ve done this exact thing in several other guides, so it’s easy to follow along!
It’s pretty clever though. My only disappointment is that the eyes look so basic. With pre-rendered frames and no realtime computational bottleneck, the eyes could have looked phenomenal, all smooth and antialiased and richly detailed, had someone just put in the work.
The manufacturer, Seasonal Visions International, has a U.S. office in Emeryville, California, a stone’s throw from Pixar. I mean literally…step out front, throw a stone, hit Pixar. You’d think they could bribe someone with a case of craft beer or something.
Conclusion
Our eye projects are over-engineered because that’s our bag. Our friends are cosplayers and coders and puppeteers, an audience that demands these details: the eyes are asymmetrical, they respond to inputs and light, and they never move or blink quite the same way twice…they’re “alive.” Customize them, reprogram them, or set them up for completely unrelated tasks, all through USB. We made something perfect for our needs.
A Halloween yard animatronic has different goals: scare the crap out of kids! This doesn’t require finesse, and few if any will notice it’s running in a loop.
More importantly though, had they programmed a chip to render the eyes procedurally — and the code was there for the taking — that’s all it would do. Eyes. What they made instead — a super economical video looper — could be adapted to many other mass-produced products, most outside the Halloween realm…toys, plush, greeting cards…never touching the microcontroller again, just substituting a different preprogrammed flash chip (and leaving out the second screen in some applications). The engineering costs could be spread for years among a dozen product lines, instead of just one. It’s stuck in a never-changing loop, but that works fine for the intended use. They made something perfect for their needs.
I see some parallels with last year’s Speak & Spell reboot. It could’ve used the original’s exact voice (the code’s out there to emulate it), but instead there’s just a large flash chip of newly recorded spoken audio samples.
It’s disappointing from a technology purist’s perspective…but on the flip side, they had to balance being a nostalgia product with being a modern educational product, where people now expect affordability and higher voice quality. Engineering is expensive. Not much of a technical hack, but a good dollars-and-cents hack. Both are valid.
So if you’ll excuse me, I need to hack this Speak & Spell into this werewolf now…