You can see an early version of the project on the September 7th Adafruit Show and Tell show (~12 minutes into the show)!
To build this project you will need the following:
- Raspberry Pi (either model A or B will work).
- Two servos, like these micro servos.
- Laser diode. You can buy one or scavenge one out of a laser pointer (what I've chosen to do in this project).
- PWM/servo controller based on the PCA9685 chip.
- Network camera that can output an MJPEG video stream. I use this Wansview camera, but check for support from other brands such as Axis, Foscam, etc. See the note on video streaming below to understand why a network camera is used instead of a webcam or other video source.
This project assumes your Raspberry Pi is running the Raspbian operating system, is connected to your network, and is set up to enable I2C communication. If you use a distribution such as Occidentalis, much of this setup is already done for you. However, if you need to set up your Raspberry Pi, follow these guides:
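Once I2C is working, pointing the laser comes down to converting a desired servo angle into the 12-bit tick count the PCA9685 expects for its PWM output. Here is a minimal sketch of that conversion; the 1.0-2.0 ms pulse range is a typical hobby-servo assumption, and the exact limits should be calibrated for your servos:

```python
def angle_to_ticks(angle, freq_hz=50, min_ms=1.0, max_ms=2.0):
    """Convert a servo angle (0-180 degrees) into a PCA9685 tick count.

    The PCA9685 divides each PWM period into 4096 ticks. At 50 Hz the
    period is 20 ms, so one tick is roughly 4.88 microseconds. A typical
    hobby servo maps a 1.0-2.0 ms pulse to its 0-180 degree range.
    """
    period_ms = 1000.0 / freq_hz
    pulse_ms = min_ms + (angle / 180.0) * (max_ms - min_ms)
    return round(pulse_ms / period_ms * 4096)
```

With Adafruit's Python PCA9685 library you would then write the result to a channel with something like `pwm.set_pwm(channel, 0, angle_to_ticks(90))`, after setting the frequency with `pwm.set_pwm_freq(50)`.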
Video Streaming Note

In building this project I found network video cameras, such as those used for security and monitoring, work best for streaming video. I attempted to use a webcam that streamed video to sites such as Ustream or Livestream, but found the latency of those streams was extremely high--on the order of 10-15 seconds. With such high latency it is not possible to control the laser over the web in real time. I even tested setting up my own video streaming server with Amazon EC2 and CloudFront, but still could not get a low enough latency video stream.
In addition to high latency, I also found embedded video streams (such as those from video streaming web services) are not easily adapted to the control needs of this project. The problem is that web browsers enforce a strict cross-domain security model which does not allow a video embedded in an iframe or object (the typical means of embedding web video) to expose click and other events to the parent web page. This means targeting the laser by clicking on the video is not possible (at best you could present a 'track pad' target area below the video, or a joystick/direction pad control for manual movement--neither is as good as targeting directly from the video).
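To give a sense of what click targeting involves, here is a hypothetical mapping from a click position on the video image to pan/tilt servo angles. The linear relationship and the angle ranges are illustrative assumptions; a real build needs calibration for how the camera and servos are mounted:

```python
def click_to_angles(x, y, frame_w, frame_h,
                    pan_range=(0.0, 180.0), tilt_range=(0.0, 180.0)):
    """Map a click at pixel (x, y) on a frame_w x frame_h video frame
    to (pan, tilt) servo angles by simple linear interpolation.

    The pan_range/tilt_range limits are placeholders; in practice you
    would calibrate them so the laser's reachable area matches the
    camera's field of view.
    """
    pan = pan_range[0] + (x / frame_w) * (pan_range[1] - pan_range[0])
    tilt = tilt_range[0] + (y / frame_h) * (tilt_range[1] - tilt_range[0])
    return pan, tilt
```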
Using a network video camera that outputs an MJPEG video stream solves both of these problems: MJPEG encodes with low latency (at the expense of higher bandwidth than more modern video codecs), and the stream can be embedded directly in an image tag, which is not subject to the same strict cross-domain security restrictions.
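Part of what makes MJPEG so easy to work with is that the stream is just a sequence of complete JPEG images, typically separated by multipart boundaries. A rough sketch of pulling individual frames out of a raw MJPEG byte buffer, using a naive scan for the JPEG start/end markers (a real camera's stream also carries multipart headers, which this simply skips over):

```python
def extract_jpeg_frames(buf):
    """Split a raw MJPEG byte buffer into individual JPEG frames.

    Scans for the JPEG start-of-image (FF D8) and end-of-image (FF D9)
    markers. This naive scan is fine for a sketch, but frame payloads
    can contain marker-like byte pairs, so production code should parse
    the multipart boundaries the camera sends instead.
    """
    frames = []
    pos = 0
    while True:
        start = buf.find(b'\xff\xd8', pos)
        if start == -1:
            break
        end = buf.find(b'\xff\xd9', start + 2)
        if end == -1:
            break
        frames.append(buf[start:end + 2])
        pos = end + 2
    return frames
```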