The wearing of face masks to help fight the spread of coronavirus disease 2019 (COVID-19) is a topic of much discussion. No one likes wearing these masks, but hopefully everyone wants to help prevent spreading the virus. So do masks actually help with this?

This is a DIY science experiment, designed for folks to try out at home and does not replace guidance from your local health authorities!

A recent paper was published that outlines a simple method for testing mask efficacy, titled "Low-cost measurement of face mask efficacy for filtering expelled droplets during speech". In this guide we will show how you could recreate the test setup from that paper and run the experiment for yourself, along with results from our own recreation.

We think this would make for a great experiment to run in an educational setting. It has all the elements of the scientific method and can be carried out with reasonably cheap and accessible hardware and software.

Experiments are done to gather information to help provide answers to questions. So before starting any experiment, we first need to figure out what the question is. The obvious question that might be asked is:

Does wearing a mask help prevent the spread of COVID-19?

That's obviously the important question. However, it is too broad to be easily answered directly. Let's back up a bit and consider how COVID-19 can be spread. The World Health Organization has published a good scientific brief on this topic. In it, they identify several "modes of transmission", including:

  • Contact and droplet transmission - droplets >5-10 micrometers (um) in diameter via coughing, sneezing, or even talking.
  • Airborne transmission - droplets <5um in diameter.
  • Fomite transmission (contaminated surfaces) - when the above sources land on a surface.
  • Other

The Centers for Disease Control and Prevention (CDC) also has a summary page. Among the most common methods of spreading, they include:

  • When people with COVID-19 cough, sneeze, sing, talk, or breathe they produce respiratory droplets.

So let's focus on those "droplets" - the >5-10um diameter "blobs" that come flying out of our mouths when we cough, sneeze, or talk. Can a mask stop those? Well, that's something we can more easily test directly. So let's state that as our question:

Does a mask reduce the number of droplets?

The general idea will be to create a laser light sheet into which we can speak and spew forth our blobs. The laser sheet will illuminate them and we will capture the results via video, which we can then process. Let's look at the hardware specifics in more detail.

Here we outline the specific hardware items used. Some can be sourced from Adafruit. Other items are from external vendors.

Camera

A Raspberry Pi Zero W with a V2 camera module was used for all video capture. You could use any other model of Pi as well. But since the Pi is only used for video capture, and all Pi models have a GPU well suited to this, the Pi Zero W is plenty.

You can get these items separately:

  • Raspberry Pi Zero W (PRODUCT ID: 3400)
  • Raspberry Pi Camera Board v2 - 8 Megapixels (PRODUCT ID: 3099)
  • Raspberry Pi Zero v1.3 Camera Cable (PRODUCT ID: 3157)

Or you can get them as a kit:

  • Raspberry Pi Zero W Camera Pack - Includes Pi Zero W (PRODUCT ID: 3414)

You'll also want one of these to help focus the camera lens:

  • Lens Adjustment Tool for Raspberry Pi Camera (PRODUCT ID: 3518)

NOTE: One drawback to using a Pi Zero W is the smaller and more delicate camera connector on the Pi. The "regular" sized Pi boards all have a larger, more durable camera connector that may prove more robust in an educational setting. We mainly wanted to show that this experiment is possible on the cheapest Pi available. Any model of Pi will work.

Laser and Optics

Here are the links to the laser, batteries, and line generating optics used:

Yep, the optics were more expensive than the laser itself. But we got a very nice laser sheet as a result.

The Box

We used a leftover cardboard shipping box lined with black poster board. Any other suitable box with similar dimensions should be fine.

Inside of the box showing the cutouts for:

  • Pi Camera
  • Laser Sheet
  • Mask (where tester speaks through)

This is the side where the Pi Zero W and Pi Camera are attached. A small hole is cut for the camera module.

The Pi Zero W and Pi Camera module are put in place. Make sure the camera module is pointing through the hole.

Attach the various cables. At a minimum, you need power. Here we show additional connections for audio and a button.

Tack everything down with blue tape.

And here's the final setup, with the addition of the audio cue and the Go button. The mask tester sits in front of the box and places their mouth into the mask test hole. When ready, they can then press the Go button. An audio cue plays to let them know to start speaking. After the video is done, another audio cue plays to let them know it's over.

Here's an example with the laser on and spraying in some water from above through the open box top. Even with the room lights on, the particles are clearly visible in the light sheet.

Note that the box used here does not have an exit slot for the laser, which the box in the original paper did. This did produce a noticeable amount of back-scattered light inside the closed box, as observed through the mask hole. However, it was minimal enough that it did not seem to affect the simple threshold-based analysis done here (details later).

Example

Here's a short clip showing the resulting video.

The actual video acquisition can be done with a few simple command line statements. Alternatively, a fancier Python script can be used to help trigger and automate the video acquisition.

The video format used for the experiment was 1920x1080 at 30fps. This matches the settings used in the source paper.

Make sure you have enabled camera hardware in raspi-config.

Simple Acquisition

Video can be acquired very simply via the command line with use of the raspivid command. Simply SSH into the Pi and run the following. Replace the ### in the filename with a suitable run number to keep track of your video files.

raspivid -t 10000 -w 1920 -h 1080 -fps 30 -o run_###.h264
The -t parameter sets the video length in milliseconds.

This will leave you with a "raw" H264 video stream. To allow for playback in media player software, an additional step is required to add the suitable "wrapper" data. We did that using the recommended process:

MP4Box -add run_###.h264 run_###.mp4

The MP4Box command can be installed with:

sudo apt install -y gpac

With this approach, the action happens as soon as you press the <Enter> key. You may find coordinating this with the test subject's readiness tricky. If so, see the next section for a way to help automate and coordinate things better.

Python Acquisition

This is a fancier approach that requires more setup. However, it can really help with synchronizing the video capture with the test subject's speaking.

To provide an audio cue, the following USB sound card was used, along with a USB OTG cable to connect it to the Pi Zero's micro USB port:

  • USB Audio Adapter - Works with Raspberry Pi (PRODUCT ID: 1475)
  • USB OTG Host Cable - MicroB OTG male to A female (PRODUCT ID: 1099)

Alternatively, you could use one of these (for headphone output):

  • Adafruit I2S Audio Bonnet for Raspberry Pi (PRODUCT ID: 4037)

Or one of these (for amplified output):

  • Adafruit I2S 3W Stereo Speaker Bonnet for Raspberry Pi (PRODUCT ID: 3346)

To trigger video capture, a basic normally-open button was wired to the Pi's GPIO header. We used one from a junk drawer, but really, any normally open button will do:

Our setup connected the button between GPIO4 and GND, but any available GPIO pin will work.

Here is the source code for a Python script you can use to automate testing.

import math
import os
import RPi.GPIO as GPIO
import simpleaudio as sa
import picamera

# set up the camera (default framerate is 30 fps, matching the experiment settings)
camera = picamera.PiCamera()
camera.resolution = (1920, 1080)
VIDEO_LENGTH = 10   # seconds of video per run

# Go button wired between GPIO4 and GND (uses the internal pull-up)
BUTTON = 4
GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# build one cycle of a sine wave to use as the audio cue
SIN_LENGTH = 500
SIN_AMPLITUDE = 127
SIN_OFFSET = 128
DELTA_PI = 2 * math.pi / SIN_LENGTH
sine_wave = bytes([
    int(SIN_OFFSET + SIN_AMPLITUDE * math.sin(DELTA_PI * i)) for i in range(SIN_LENGTH)
])

def play_tone(length):
    # play the sine wave buffer repeated 'length' times and wait until it finishes
    play_back = sa.play_buffer(sine_wave * length, 2, 2, 44100)
    play_back.wait_done()

run_number = int(input("Enter run number:"))

# wait for the Go button press (input reads low when pressed)
print("Press button when ready.")
while GPIO.input(BUTTON):
    pass

# audio cue, record the video, audio cue
play_tone(100)
camera.start_recording("run_{:03d}.h264".format(run_number))
camera.wait_recording(VIDEO_LENGTH)
camera.stop_recording()
play_tone(100)

# wrap the raw H264 stream in an MP4 container for easy playback
err = os.system("MP4Box -add run_{0:03d}.h264 run_{0:03d}.mp4".format(run_number))
Be sure to wear proper eye protection when operating the laser.

OK, you've got everything set up and can take video with the press of a button. But don't just start taking a bunch of video. Take some time to get organized and think about the test conditions (different masks, etc.) you want to try. Make a test plan and keep a run log!

A "run" is simply the acquisition of video data for a certain set of conditions. The run can be identified with an integer number that gets incremented for each new run. The test conditions include things like the mask being tested, what speech was used, the video settings, etc. It is important to keep track of the specifics for each of these for each run. That is where a "run log" is super useful.

Run Log

A run log is used to keep track of all the important test conditions for each run. You will use this run log constantly when processing the data post-test. For example, let's say you wanted to compare Mask A to Mask D for test conditions X. You would use the run log to find the specific runs where that occurred. Then you would know you need to compare Run 23 to Run 42 (for example). And since you named your video files with run numbers, it's super easy to find the source data.

Configuration Flags

To help keep the information in the run log concise, you can use configuration flags. These are simply codes you define somewhere and then use the codes in the run log. Letter+Number combinations work well for this. For example, "masks" can be given the letter M. And then each mask is given a number.

  • M0 = no mask
  • M1 = blue surgical mask
  • M2 = fitted N95 mask
  • M3 = cotton bandana
  • etc.

You can then do something similar for the "speech" (S) that is used for a particular run. Something like:

  • S0 = silent
  • S1 = "stay healthy, people" (repeated)
  • S2 = "pppffffffttttttttt" (blow a raspberry)

And similarly for "video" (V) settings:

  • V0 = 1920x1080 @ 30fps
  • V1 = ?

and duration (D):

  • D0 = still image
  • D1 = 10 seconds

etc.

But be careful!

Don't lose the decoder ring for these configuration flags.

It's best to keep these with the run log somewhere.
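One simple way to keep the decoder ring from getting lost is to store it in a small Python (or plain text) file right next to the run log and video files. Here's a minimal sketch using the flags defined above - the file name is just an example:

# decoder_ring.py - configuration flags used in the run log
MASKS = {
    "M0": "no mask",
    "M1": "blue surgical mask",
    "M2": "fitted N95 mask",
    "M3": "cotton bandana",
}
SPEECH = {
    "S0": "silent",
    "S1": '"stay healthy, people" (repeated)',
    "S2": "blow a raspberry",
}
VIDEO = {
    "V0": "1920x1080 @ 30fps",
}
DURATION = {
    "D0": "still image",
    "D1": "10 seconds",
}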

Example

Here's an example run log sheet. Keeping things to about 10 runs per sheet tends to work well. This leaves plenty of room for notes and other documentation.

The RUN column is just the run number: 1, 2, 3, etc. The other columns contain the configuration flags used for that particular run. If you need to track more parameters, add more columns. Free-text comments about anything important can go in the NOTES column, and longer notes can go in the blank area at the bottom.
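If you also keep the run log electronically, say as a simple CSV file with one row per run, then finding the runs to compare can be done in a few lines of Python. This is just an illustrative sketch - the file name and column names are hypothetical and should match however you lay out your own run log:

import csv

# hypothetical run log CSV with columns: RUN, M, S, V, D, NOTES
with open("run_log.csv") as fp:
    runs = list(csv.DictReader(fp))

# find all runs that used speech setting S1 at video setting V0
matches = [row["RUN"] for row in runs if row["S"] == "S1" and row["V"] == "V0"]
print("Runs to compare:", matches)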

The Pi is only used to acquire the video. For processing, we download the video file to a PC and use full Python. This provides plenty of processing power and opens up the entire Python library ecosystem for use.

General Approach

The original research paper processed the video in a reasonably sophisticated way. For each frame, blobs were identified along with their general size using a feature detection algorithm. This provided information about both blob count and size, which allowed for comparing histograms.

For our processing, we are taking a much simpler approach. Consider a case of zero blobs. That should result in a video with all blank frames - nothing ever entered the laser light sheet. Total darkness. At the other extreme, tons of blobs would result in frames full of green laser light. So we can think in terms of "the more lit pixels in a frame, the more blobs there are". We can compute a value for this for each frame as follows.

A single video frame showing blobs lit green in the laser sheet.

The same frame converted to gray scale.

The frame after applying a threshold. Only pixels with a value above the threshold remain.

Now the remaining pixels are counted and a percentage of "lit" pixels is determined. For example:

frame_count = np.count_nonzero(frame == True)
frame_percent = 100 * frame_count / (1920*1080)

These values are computed and saved for each frame. A final overall average can then be computed as well.

Required Python Libraries

We used these Python libraries to process the video data. They are all hosted on PyPI and can be installed using pip (see the example after the list).

  • imageio - to load MP4 video files and extract frames
  • scikit-image - for image frame conversion and thresholding
  • numpy - for counting pixels
  • matplotlib - for producing plots
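For example, something like the following should install everything on a typical setup (exact package names may vary slightly with your Python environment; depending on your imageio version you may also need the imageio-ffmpeg plugin to read MP4 files):

pip3 install imageio imageio-ffmpeg scikit-image numpy matplotlib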

Processing Video

The following Python script was used to process MP4 videos taken for this experiment.

import time
import imageio
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
import numpy as np

THRESH = 0.3

RUN = int(input('Enter run number: '))

vid = imageio.get_reader('run_{:03d}.mp4'.format(RUN), 'ffmpeg')

#----------------
# MAIN PROCESSING
#----------------
frame_data = []
start = time.monotonic()
# go through video frame by frame
print("Processing", end='')
for frame in vid:
    print('.', end='', flush=True)
    frame_bin = rgb2gray(frame) > THRESH
    frame_count = np.count_nonzero(frame_bin == True)
    frame_percent = 100 * frame_count / (1920*1080)
    frame_data.append((frame_count, frame_percent))
# overall stats
avg_count = sum([x[0] for x in frame_data]) / len(frame_data)
avg_percent = 100 * avg_count / (1920*1080)

end = time.monotonic()
print("\nProcessing done in {} secs.".format(end - start))
print("Average Count = {}".format(avg_count))
print("Average Percent = {}".format(avg_percent))

#-------------
# SAVE TO FILE
#-------------
print("Saving data to file...")
with open('run_{:03d}.csv'.format(RUN), 'w') as fp:
    for frame, data in enumerate(frame_data):
        fp.write('{},{},{}\n'.format(frame, data[0], data[1]))

#---------
# PLOTTING
#---------
print("Generating plots...")
fig, ax = plt.subplots(1, figsize=(10, 5))
ax.set_title("RUN {:03d}\nTHRESH = {}, AVG_CNT = {:4.2}, AVG_PER = {:.3}".format(RUN, THRESH, avg_count, avg_percent))
ax.set_xlabel("FRAME")
ax.set_ylabel("COUNT")
ax.plot([x[0] for x in frame_data])
fig.savefig('run_{:03d}_plot.png'.format(RUN))

print("DONE.")

Masks Tested

Testing was done without a mask on, referred to as "no mask". Then, the following masks were put on and tested.

Blue Surgical - often provided for free at many businesses.

Fitted N95 - this one has a fancy NIOSH rating.

Cotton Bandana - everyone probably has some of these kicking around.

Fashion Mask - capitalism's answer to a pandemic.

Run Log

We put together a simple test plan and went through the runs. Here is the run log from our test for reference.

Run log for runs 1 through 10.

Run log for runs 11 through 15.

The plots below show the COUNT of pixels that showed up in each FRAME of the video. The higher the count, the more "blobs" are present. The video was taken at 1920x1080 resolution, so there are a total of 2,073,600 pixels in each frame. We took 10 seconds of video at 30 frames per second, so there ends up being about 300 frames total.

Silence

Sort of boring, but this test was run to create a general baseline. Nothing really stands out here, which isn't surprising. We are essentially looking at noise.

Speech

This is probably the more typical scenario, just basic talking. Here we used the phrase from the source paper, "Stay healthy, people". This was repeated for the duration of the video capture.

The differences are not drastic. The no mask data has some obvious spikes, but is otherwise generally on par with the masks. The cotton bandana does seem to have some obvious spikes. The data of interest here may be within the noise floor for the current experimental setup.

Sneezing / Coughing

To simulate a sneeze or cough scenario, we went PFFFTTTTT - basically blowing a raspberry. This of course produces a LOT of blobs. The results here are much more drastic. So much so that the "no mask" data swamps the other masks' data. All of the mask curves get squished down when plotted on the same plot.

Here is the same plot with the "no mask" curve removed, so we can zoom in and compare mask-to-mask.

Wearing a mask here made a HUGE difference. The mask-on values are back down to the silence and speech levels. Amazing! The cotton bandana seemed to be the poorest performer of the masks tested, but otherwise, simply wearing any mask helps A LOT!

This is a collection of additional information that may prove useful for anyone trying to recreate this experiment.

The Laser / Camera Hardware

The combination used here seemed more than adequate. The laser proved amazingly bright and readily illuminated particles within the laser sheet. The basic V2 camera module also seemed to perform well. The 1920x1080 resolution is also well within the camera's native sensor resolution.

The Box

The cardboard box used was super cheap and easy to cut up. But it proved a little challenging to keep "clean". The cardboard seemed to shed a lot of small particles whenever it was moved, the top was opened, etc. We dealt with this by letting the particles settle between runs. But something other than cardboard may be a better choice.

Venting

The setup in the original paper included a filtration system that allowed clean air to be brought into the test box. That could probably help with the issue described above. Maybe a shop vacuum with some air conditioner filters could be used?

Laser Exit Slot

Our box also did not have an exit slot for the laser sheet. This was due to the size needed compared to the box (a structural issue). It was mitigated by lining the inside with black poster board. However, there was still a noticeable amount of laser reflection inside the box. This was well below the threshold used in the processing. But to refine the processing further - for example, to better examine the "speech" data - something to help reduce this internal laser reflection might help.

Data Processing

Just a plug here for Python, which has everything needed for data processing. Additionally, Jupyter Notebook proved useful. We didn't include any of that work here, but it was great for quick prototyping and testing of the Python-based processing that was done.
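As an example of the kind of post-processing that is easy to do, since each run's per-frame data is saved to a CSV file, comparison plots like the ones shown earlier can be made by loading the relevant runs (looked up in the run log) and overlaying their COUNT curves. Here's a minimal sketch - the run numbers and labels below are placeholders, not our actual run assignments:

import csv
import matplotlib.pyplot as plt

# hypothetical run numbers - look up the actual ones in your run log
RUNS = {3: "no mask", 7: "surgical", 12: "N95"}

fig, ax = plt.subplots(1, figsize=(10, 5))
for run, label in RUNS.items():
    # each row of the run CSV is: frame, count, percent
    with open("run_{:03d}.csv".format(run)) as fp:
        counts = [int(row[1]) for row in csv.reader(fp)]
    ax.plot(counts, label=label)
ax.set_xlabel("FRAME")
ax.set_ylabel("COUNT")
ax.legend()
fig.savefig("comparison_plot.png")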

This guide was first published on Oct 14, 2020. It was last updated on Oct 14, 2020.