
Fun JavaScript app to control the IBM TJBot robot arm (servo motor)


TJ Wave

Control the arm of your TJ Bot (using the embedded servo)

Video demo here.

This module provides Node.js code to control the arm on your TJBot via the servo motor. It uses Watson Speech to Text to parse audio from the microphone, processes your commands (e.g., commanding your bot to wave its arm or dance to a song), and uses Watson Text to Speech to "read" out a text response!

This is designed to run on a Pi with a servo motor attached. See Wiring Your Servo Motor below for how to connect your servo motor. Before you start, it is recommended you become familiar with setting up your TJBot/Raspberry Pi by looking at the instructions here.

How It Works

  • Listens for voice commands. See Running for a list of voice commands supported in this sample.
  • Sends audio from the microphone to the Watson Speech to Text (STT) service to transcribe audio to text.
  • Parses the text, looking for commands.
  • Once a command is recognized, an appropriate action (e.g., wave arm) is taken, and TJ also verbalizes the action using Watson Text to Speech to generate an audio file.
  • The robot plays back the response using the ALSA tools.

## Hardware

Follow the full set of instructions on Instructables to prepare your TJBot to run the code.

Note: You must have a servo motor connected to your Pi.

## Wiring Your Servo Motor

Your servo motor has three wires: power, ground, and data in. In this recipe I use the Tower Pro servo motor, whose wires are colored Red (power), Brown (ground), and Yellow (data in). A software PWM library is used to control the servo motor, and my setup is wired as follows.

  • Red (+5v, Pin 2)
  • Brown (Ground, Pin 14)
  • Yellow (Data in, Pin 26, GPIO7 )

Note: In the code, you can always change the pins used.
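A servo is driven by the pulse width of the PWM signal rather than a raw angle. As a hypothetical illustration (the pulse range and helper below are assumptions, not taken from wave.js; check your servo's datasheet), a 0-180 degree angle can be mapped to the 500-2500 microsecond pulse range typical of hobby servos:

```javascript
// Hypothetical helper: convert a servo angle (0-180 degrees) into the pulse
// width in microseconds expected by pigpio's servoWrite(). The 500-2500 us
// range is typical for hobby servos; verify against your servo's datasheet.
const MIN_PULSE = 500;   // microseconds at 0 degrees
const MAX_PULSE = 2500;  // microseconds at 180 degrees

function angleToPulse(angle) {
  const clamped = Math.min(180, Math.max(0, angle));
  return Math.round(MIN_PULSE + (clamped / 180) * (MAX_PULSE - MIN_PULSE));
}

// On the Pi you would then drive the data pin (GPIO7 above) with the
// pigpio npm package, e.g.:
//   const Gpio = require('pigpio').Gpio;
//   const servo = new Gpio(7, { mode: Gpio.OUTPUT });
//   servo.servoWrite(angleToPulse(90));
console.log(angleToPulse(90)); // 1500
```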

## Build

Get the sample code (download or clone) and go to the application folder.

git clone git@github.com:victordibia/tjwave.git
cd tjwave

Update your Raspberry Pi. Please see the guide [here](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/step2/Set-up-your-Pi/) to set up your network and update your Node.js installation.

sudo apt-get update
sudo apt-get upgrade
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs

Note: The Raspberry Pi ships with a very old version of Node.js and npm (0.10), hence the need to upgrade to a current version.

Install the ALSA tools (required for recording audio on the Raspberry Pi; some of the sample code integrates voice commands).

sudo apt-get install alsa-base alsa-utils
sudo apt-get install libasound2-dev

Install pigpio. This is the software PWM library used to control the servo motor.

sudo apt-get install pigpio 

Install Dependencies

npm install

If you run into errors installing dependencies, try:

sudo rm -rf node_modules
sudo npm install --unsafe-perm

Set the audio output to your audio jack. For more audio channels, check the config guide.

amixer cset numid=3 1    
This sets the audio output to option 1, your Pi's audio jack (option 0 = auto, option 2 = HDMI). An alternative is to run sudo raspi-config and change the audio output to the 3.5mm audio jack.

Create config.js

On your local machine, rename the config.default.js file to config.js:
cp config.default.js config.js

Open config.js in your favorite text editor (e.g., nano) and update it with your Bluemix credentials for the Watson services you use.

nano config.js

Note: do not add your credentials to the config.default.js file.

Test Your Servo

Before running the main code (voice + wave + dance, etc.), you can test your LED setup and your servo motor to make sure the connections are correct and the library is properly installed. When you run the test module, it should cycle your LED through different colors and wave your robot arm at intervals.

sudo node wavetest.js

If the LED does not light up, try moving the power wire from the 3.3v pin to the 5v pin. If neither the 3.3v nor the 5v pin works, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi.

If your robot arm does not respond, confirm that you have connected it correctly. See the pin diagram here for more information on Raspberry Pi pins.

## Running

Start the application. (Note: you need sudo access)

sudo node wave.js     

Then you should be able to speak to the microphone. Sample utterances are:

TJ can you raise your arm ?
TJ can you introduce yourself ?
TJ What is your name ?
TJ can you dance ?

For the dance command, your robot processes wav files in the sounds folder. Please ensure you have a .wav file there and set that as your sound file.

Known Issue: LED/Audio Conflict

There are known conflicts between using the hardware PWM pin on a Pi and audio, so you cannot use both at the same time. For example, our LED library (ws281x) uses hardware PWM and will not work correctly when audio is enabled. To disable audio, blacklist the Broadcom audio kernel module by creating a file /etc/modprobe.d/snd-blacklist.conf containing:

blacklist snd_bcm2835

If audio is needed, you can use a USB audio device instead.

Known Issue: App Ends after saying "TJBot is listening"

This usually occurs when microphone setup fails. A solution is to explicitly specify your microphone device id when instantiating mic. To find your device id, run:

arecord -l    # lists capture devices: card 0, card 1, etc.

Now edit the code where you instantiate mic to use that device id. For example, if your microphone is listed as card 0 in the output above, the device id is plughw:0,0; if card 1, it is plughw:1,0.

var micInstance = mic({ 'rate': '44100', 'channels': '2', 'debug': false, 'exitOnSilence': 6, 'device': 'plughw:0,0' });  // card 0
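If you want to avoid hand-editing the card number, the lookup can be scripted. This helper is a hypothetical illustration, not part of the repo; the "card N:" line format it assumes matches typical arecord -l output, but verify against your own listing.

```javascript
// Hypothetical helper: pull the card number of a named device (e.g. a USB
// microphone) out of `arecord -l` output. Not part of the tjwave repo.
function findCardNumber(arecordOutput, keyword) {
  for (const line of arecordOutput.split('\n')) {
    const match = line.match(/^card (\d+):/);
    if (match && line.toLowerCase().includes(keyword.toLowerCase())) {
      return Number(match[1]);
    }
  }
  return null; // no matching capture device found
}

const sample = 'card 0: ALSA [bcm2835 ALSA]\ncard 1: Device [USB Audio Device]';
const card = findCardNumber(sample, 'usb');
console.log(`plughw:${card},0`); // plughw:1,0
```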

What's Next

There are a few things you can do, and ways to take your robot forward!

  • Use Watson Conversation to improve intent detection. Leverage the machine learning capabilities within Watson Conversation to better match intents even when the recognized text is not accurate.
  • Animate robot interactions using arm movements + lights (e.g., wave when your robot speaks or laughs).
  • Connect additional inputs to robot arm movements, e.g., control your robot arm using an app, a wearable/smartwatch, etc.

## Update

  • I implemented a Watson Conversation-based version where the Conversation API is used to detect intent from a spoken command.
    sudo node wave_conversation.js
    
    • You will need to set up your Watson Conversation flow and a workspace. More on that here.
    • You can import the sample conversation flow in the folder (workspace.json) to get started. This creates intents for actions like "hello", "see", "wave", and "introduce".
    • Finally, this sample (wave_conversation.js) uses both audio and the LED. These two hardware devices are known to conflict; a workaround is to disable onboard audio and use USB audio on your Pi.

Contributing and Issues

To contribute, feel free to fork the repo and send in a pull request. Also, if you find any issues (bugs, etc.) or have questions, feel free to open a GitHub issue.

Dependencies List

License

MIT License