
RaspVan (codename: Fiona)

Domotics using a Raspberry Pi 3B for our self-built campervan.

At the moment it is a simple prototype, aiming to become a complete voice-controlled domotics system.

Commands can be executed either by voice or by sending HTTP requests to a server.
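
For instance, a command could be issued with a plain HTTP request. Below is a minimal sketch in Python; the host, route and payload are hypothetical, so check the raspvan server code for the actual API:

import requests

# Hypothetical endpoint and payload; the real routes may differ
resp = requests.post(
    "http://raspberrypi.local:8080/lights",
    json={"action": "on", "relay": 1},
)
print(resp.status_code, resp.text)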


Requirements

Apart from any other requirements defined in the root or in any of the sub-components, we need the following:

Structure

This repo is organized into a series of sub-components, plus the main solution code under raspvan.

To understand how to train, configure, test and run each sub-component, please refer to the individual README files.

.
├── asr                     # ASR component (uses vosk-kaldi)
├── assets
├── common
├── config
├── data
├── docker-compose.yml
├── external
├── hotword                 # HotWord detection (uses Mycroft/Precise)
├── Makefile
├── README.md
├── requirements-dev.txt
├── requirements.txt
├── respeaker
├── scripts
├── setup.cfg
└── raspvan                 # client and server systems

Most of the following components communicate through AMQP using RabbitMQ.

To run the broker backbone gluing everything together:

docker-compose up -d rabbit
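
As a rough sketch of how components can exchange messages through the broker with pika (the exchange and routing-key names below are made up for illustration):

import json
import pika

# Connect to the local RabbitMQ broker started above
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="raspvan", exchange_type="topic")

# e.g. a worker could publish a hotword-detection event like this
channel.basic_publish(
    exchange="raspvan",
    routing_key="hotword.detected",
    body=json.dumps({"device": 3}),
)
connection.close()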

Hotword

⚠️ TBD

To run:

source .env
source .venv/bin/activate
python -m raspvan.workers.hotword
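
For reference, a bare-bones detection loop with the precise_runner package looks roughly like this; the engine binary and model paths are placeholders, and the actual worker presumably also publishes detection events over RabbitMQ:

import time

from precise_runner import PreciseEngine, PreciseRunner

# Placeholder paths: point these at your precise-engine binary and model
engine = PreciseEngine("precise-engine", "hotword/fiona.pb")
runner = PreciseRunner(engine, on_activation=lambda: print("hotword!"))
runner.start()

# Keep the main thread alive while detection runs in the background
while True:
    time.sleep(1)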

ASR

We use the dockerized vosk-server from the jmrf/pyvosk-rpi repo.

This server listens via websocket to a sounddevice stream and performs STT on the fly.

💡 For a complete list of compatible models check: vosk/models

# Run the dockerized server
docker-compose up asr-server

Then, run one of the clients:

source .env
source .venv/bin/activate
# ASR from an audio WAV file
python -m asr.client -v 2 -f <name-of-the-16kHz-wav-file>
# Or ASR listening from the microphone
python -m asr.client -v 2 -d <microphone-ID>

Or run the RabbitMQ-triggered raspvan ASR worker:

python -m raspvan.workers.asr
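
For reference, a minimal websocket client for a vosk-server can be sketched as follows; port 2700 is the vosk-server default, so adjust it to whatever asr-server exposes in docker-compose.yml:

import asyncio
import json
import wave

import websockets

async def transcribe(path):
    async with websockets.connect("ws://localhost:2700") as ws:
        with wave.open(path, "rb") as wf:
            while True:
                chunk = wf.readframes(4000)
                if not chunk:
                    break
                await ws.send(chunk)                # raw 16 kHz PCM frames
                print(json.loads(await ws.recv()))  # partial results
        await ws.send('{"eof" : 1}')                # signal end of stream
        print(json.loads(await ws.recv()))          # final result

asyncio.run(transcribe("sample.wav"))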

NLU

⚠️ While the rest of the components use numpy~=1.16, the NLU component requires a newer version in order to work with scikit-learn.

If running locally, the best approach is to create a separate virtual environment.

See nlu/README.md

The NLU engine has two parts:

  • A Spacy vectorizer + SVM classifier for intent classification
  • A Conditional Random Field (CRF) for entity extraction

💡 Check the details in this Colab notebook: simple-NLU.ipynb
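
As an illustrative sketch of that two-part pipeline (the training data below is made up; see nlu/README.md and the notebook for the real setup):

import spacy
import sklearn_crfsuite
from sklearn.svm import SVC

nlp = spacy.load("en_core_web_md")  # any spaCy model with word vectors

# Intent classification: spaCy document vectors + SVM
texts = ["turn on the lights", "switch off the water pump"]
intents = ["lights.on", "pump.off"]
clf = SVC(kernel="linear").fit([nlp(t).vector for t in texts], intents)
print(clf.predict([nlp("lights on please").vector]))

# Entity extraction: token-level features + CRF (BIO tags)
def features(tokens, i):
    return {"word": tokens[i].lower(), "is_first": i == 0}

X = [[features(t.split(), i) for i in range(len(t.split()))] for t in texts]
y = [["O", "O", "O", "B-device"], ["O", "O", "O", "B-device", "I-device"]]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs").fit(X, y)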

💡 It is advised to collect some voice samples and run them through the ASR, so the NLU component can be trained on real data.

To collect voice samples and apply ASR for the NLU, run:

# Discover the audio input device to use and how many input channels are available
python -m scripts.mic_vad_record -l
# Run voice recording
python -m scripts.mic_vad_record sample.wav -d 5 -c 4

Respeaker

We use the ReSpeaker 4-Mic Array hat as the microphone and for visual feedback via its LED array.

To run the LED pixel demo:

python -m respeaker.pixels
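
The demo drives the hat's 12 APA102 LEDs over SPI. A rough illustration with spidev is shown below; the bus/device numbers are the usual ones for this hat but are assumptions, and the hat's LED power-enable GPIO may need to be raised first:

import spidev

N_LEDS = 12
spi = spidev.SpiDev()
spi.open(0, 1)  # SPI bus 0, device 1 (assumed)
spi.max_speed_hz = 8_000_000

def show(r, g, b, brightness=4):
    frame = [0x00] * 4                              # start frame
    frame += [0xE0 | brightness, b, g, r] * N_LEDS  # one BGR frame per LED
    frame += [0xFF] * 4                             # end frame
    spi.xfer2(frame)

show(0, 0, 255)  # all pixels blue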

Raspvan

This is the main module which coordinates all the different components.

Relays

To run the I2C relay demo: python -m raspvan.workers.relay
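
A hedged sketch of what such an I2C toggle looks like with smbus2; the bus number, device address and register below are placeholders, the real values live in raspvan.workers.relay and the config files:

from smbus2 import SMBus

RELAY_ADDR = 0x10  # assumed I2C address of the relay board
CHANNEL_1 = 0x01   # assumed register for relay channel 1

with SMBus(1) as bus:  # I2C bus 1 on a Raspberry Pi
    bus.write_byte_data(RELAY_ADDR, CHANNEL_1, 0xFF)  # energize
    bus.write_byte_data(RELAY_ADDR, CHANNEL_1, 0x00)  # release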

Bluetooth

To run the Bluetooth server: make run-ble-server

Setting BLE server as a service

Create /etc/systemd/system/ble_server.service with the following content:

[Unit]
Description=RaspVan BLE Server + Redis container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/bin/bash /home/pi/start_ble.sh
ExecStop=

[Install]
WantedBy=default.target

Enable on startup: sudo systemctl enable ble_server.service

Start with: sudo systemctl start ble_server

Check its status with: sudo systemctl status ble_server

How to

Installation

Create a virtual environment

python3.7 -m venv .venv
source .venv/bin/activate

And install all the Python dependencies:

pip install -r requirements.txt

Finding the sound input device ID

First list all audio devices:

python -m respeaker.get_audio_device_index

You should get a table similar to this:

┏━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Index ┃ Name     ┃ Max Input Channels ┃ Max Output Channels ┃
┡━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│     0 │ upmix    │                  0 │                   8 │
│     1 │ vdownmix │                  0 │                   6 │
│     2 │ dmix     │                  0 │                   2 │
│     3 │ default  │                128 │                 128 │
└───────┴──────────┴────────────────────┴─────────────────────┘

Device with index 3, which can handle several input and output channels, is the one to pass to the hotword and ASR workers.
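
The same information can also be queried directly with the sounddevice package, which the script above presumably wraps:

import sounddevice as sd

print(sd.query_devices())   # full device table
print(sd.query_devices(3))  # details for a single index, e.g. device 3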

⚠️ ALSA does not allow audio devices to be shared, i.e. accessed simultaneously by more than one application, when using the sound card directly. ⚠️

Solution: use the PCM plugin devices, specifically dsnoop (to share one input between several processes) and dmix (to mix several audio outputs on one card).

Copy config/.asoundrc to ~/.asoundrc

⚠️ The following section is probably deprecated.

WiFi and automatic hotspot

In order to communicate with the Raspberry Pi, we will configure it to connect to a series of known WiFi networks when available and to create a hotspot otherwise.

Refer to auto-wifi-hotspot from raspberryconnect/network.

By default, the Raspberry Pi will be accessible at 192.168.50.5 when the hotspot is active.

Wiring and Connections

TBD

Misc