The Adafruit_OV7670 library provides a number of special image effects. Some of these are “in-camera” effects — every frame from the camera continuously comes out this way in real time, with no processing required on the host microcontroller. Others are “postprocessing” effects, requiring that the camera be paused while the host microcontroller does a number on the last-received image in memory.
For reference, here’s a normal unmodified scene captured from the OV7670 and shown on a TFT display.
The camera can mirror (flip) the image on the horizontal and/or vertical axes, set with a single function call.
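As a minimal usage sketch (the flip() method name and argument order are assumptions here, not stated above; verify against the Adafruit_OV7670.h header):

```cpp
// Assumed API: flip(flip_x, flip_y) -- check the library header.
cam.flip(true, false);  // mirror horizontally
cam.flip(false, true);  // flip vertically
cam.flip(true, true);   // both axes (equivalent to a 180 degree rotation)
cam.flip(false, false); // back to normal orientation
```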
Night mode can sometimes capture better images in low-light situations. The tradeoff is a reduced frame rate, as the camera accumulates several frames over time. The maximum number of frames to accumulate can be specified…though, if lighting is sufficient, the camera might ignore this and use a lesser setting. It’s enabled or disabled with a single function call.
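A minimal usage sketch; the night() method and the OV7670_NIGHT_MODE_* enumeration names are assumptions here, so verify them against the library headers:

```cpp
// Assumed API and enum names -- check the library header.
cam.night(OV7670_NIGHT_MODE_4);   // accumulate up to 4 frames in low light
cam.night(OV7670_NIGHT_MODE_OFF); // return to normal operation
```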
The camera can output test patterns, if that’s useful to anybody; a single function call selects a pattern (or none, for a live image).
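A minimal usage sketch; the test_pattern() method and the OV7670_TEST_PATTERN_* enumeration names are assumptions here, so verify them against the library headers:

```cpp
// Assumed API and enum names -- check the library header.
cam.test_pattern(OV7670_TEST_PATTERN_COLOR_BAR); // show color bars
cam.test_pattern(OV7670_TEST_PATTERN_NONE);      // back to live camera image
```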
Postprocessing Effects
The following effects are generated in code, not by the camera. It’s therefore necessary to pause the camera output before performing any of these operations, else the next incoming frame will overwrite the interim results in RAM.
// Pause camera before processing image
cam.suspend();

// Call processing function(s)
cam.image_edges();

// Do something with image data here -- TFT display, SD card, etc.

// Finished with image, return image buffer to camera
cam.resume();
All of the postprocessing function names begin with image_. You can chain multiple processing functions; each modifies the output of the prior one, for example image_median() followed by image_edges(). The order of operations affects the outcome; they are not interchangeable.
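Combining the suspend/resume pattern above with two chained functions looks like this (a sketch; the visual result of each call depends on your scene):

```cpp
cam.suspend();       // pause camera so results aren't overwritten
cam.image_median();  // first: smooth out noise
cam.image_edges();   // then: outline edges in the smoothed image
// ...display or save the image buffer here...
cam.resume();        // finished: return the buffer to the camera
```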
Remember that these only process the last-captured image in memory. They are not continuously applied while the camera is “live.” You must suspend, process, do something with the image data, then resume.
Some of these functions only work in RGB mode; YUV is not always supported.
The following work with RGB images only; YUV is not handled.
The two arguments are the width and height (in pixels) of the mosaic tiles. These do not need to be powers of two; anything >= 1 will suffice. If the image size does not divide evenly into the tile size, the fractional tiles will always fall along the right and/or bottom edge(s).
This is a fairly math-intensive operation and might only manage 1-2 frames per second; that’s normal.
This requires objects in the scene to be in focus and adequately lit. In some cases the result might be nearly empty or totally full of pixel “snow,” so some experimentation may be needed.