Make sure you are using the Lite version of Raspberry Pi OS. The desktop version has had some issues with Google Voice and the audio driver.

First, set up all of the packages on the Raspberry Pi. If you are using the BrainCraft HAT and haven't done so already, take a look at the Adafruit BrainCraft HAT - Easy Machine Learning for Raspberry Pi guide.

That guide will take you through all the steps needed to get the Raspberry Pi updated and the BrainCraft HAT set up to the point needed to continue. However, skip the Display Module Setup portion, since you will be using Python to draw to the display.

Skip the display driver installation for now so you can control the display through Python. If you have already installed the driver, you can run the installer script again without parameters and choose the Uninstall option to remove it.

Be sure you have some speakers hooked up to the BrainCraft HAT, either through the JST ports on the front or the headphone jack. You will need these later for the speech synthesis.
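If you want to confirm that audio is working before you get to the speech synthesis step, ALSA's speaker-test utility (included in the alsa-utils package on Raspberry Pi OS) should play a test sound through your speakers. Press Ctrl+C to stop it:

speaker-test -t wav -c 2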

Set your Timezone

If you haven't done so already, be sure your timezone is set correctly. A freshly set up Raspberry Pi usually defaults to GMT. You can change it by typing:

sudo raspi-config

Select Localisation Options.

Then select Timezone. This will take you to a section where you can select your timezone. The organization is a bit unusual. For instance, if you were in the US Pacific timezone, you would select US and then Pacific Ocean.

This will ensure that the Pi reports the correct time. You can find more information about using raspi-config in the official documentation.
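If you would rather skip the menus, recent Raspberry Pi OS releases also include systemd's timedatectl, which can set the timezone directly from the command line. For example, for US Pacific time (adjust the zone name for your location):

timedatectl list-timezones | grep America
sudo timedatectl set-timezone America/Los_Angeles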

Install Voice2JSON

Installing Voice2JSON is fairly straightforward. First you need to install some prerequisites by running the following command:

sudo apt-get install libasound2 libasound2-data libasound2-plugins

Next, verify that you are on the armhf architecture by typing:

dpkg-architecture | grep DEB_BUILD_ARCH=
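On a 32-bit Raspberry Pi OS Lite image, the output should include a line like the one below. If it reports a different architecture, such as arm64, you will want the matching build of the package rather than the armhf one used here:

DEB_BUILD_ARCH=armhf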

Next, download the Voice2JSON package with wget and install it with apt:

wget https://github.com/synesthesiam/voice2json/releases/download/v2.0/voice2json_2.0_armhf.deb
sudo apt install ./voice2json_2.0_armhf.deb
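Once the install finishes, a quick sanity check is to print the tool's help text, which should list the available subcommands:

voice2json --help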

Speech Synthesis Library

Voice2JSON is also capable of speech synthesis, so it's helpful to have the eSpeak NG library installed:

sudo apt-get install espeak-ng
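With speakers connected, you can give eSpeak NG a quick test from the command line; you should hear the phrase spoken aloud:

espeak-ng "Hello from the Raspberry Pi"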

Install a Profile

Voice2JSON uses profiles to combine a language with a speech recognition engine. A profile is not included as part of the package installation, so you will need to install one separately. Though there are many other profiles available, this setup installs the US English / PocketSphinx profile using the following commands:

mkdir -p ~/.config/voice2json
curl -SL https://github.com/synesthesiam/en-us_pocketsphinx-cmu/archive/v1.0.tar.gz | tar -C ~/.config/voice2json --skip-old-files --strip-components=1 -xzvf -
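After the profile is downloaded, Voice2JSON needs to train it before it can recognize anything. You can do that now, and you will likely re-run this step any time you change the sentences the profile should recognize:

voice2json train-profile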

Latest Pillow Library

The demo project uses displayio, which relies on Pillow (the Python Imaging Library) underneath. To get the latest version of Pillow, first upgrade pip and then install Pillow with the following commands:

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade Pillow
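To confirm which version of Pillow you ended up with, you can check it directly from Python:

python3 -c "import PIL; print(PIL.__version__)"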

CircuitPython Libraries

A few CircuitPython libraries are needed for this project. These can be easily installed through PIP using the following command:

python3 -m pip install adafruit-circuitpython-st7789 adafruit-circuitpython-dotstar
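To double-check that both libraries installed where Python can find them, you can ask pip for their details; each should print a name and version block:

python3 -m pip show adafruit-circuitpython-st7789 adafruit-circuitpython-dotstar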

After that finishes, you should be ready to configure your setup.
