Code for a remotely controlled Arduino robot with color sensing
The ROVAR consists entirely of two modules: the robot itself and the camera/visual module. The robot module is what physically drives the vehicle, while the visual module helps the end user navigate. Regardless of their roles, both are complicated in their own right, and each comes with a whole repertoire of design choices to discuss.
The rover component, essentially the drive mechanism of the ROVAR, consists of a number of parts, all centered around an Arduino MEGA 2560. The Arduino receives 5V power over USB from a battery bank attached to the bottom of the unit, though it could alternatively draw power from the 5V regulator on the primary motor controller when that controller is fed 9V (which will be explained shortly). The Arduino directly controls two L298N motor controllers, each of which drives two 6-12V DC motors, for a total of four motors (4WD if needed). Also controlled by the Arduino are a TCS3200 color sensor and an nRF24L01+ wireless transceiver that links to the joystick module (explained later as well).

Both motor controllers are powered from a 9V battery pack on top of the vehicle, formed by connecting six 1.5V AA batteries in series (1.5 × 6 = 9V), which is well within the controllers' 5-35V input range. The L298N drops roughly 2-3V across its outputs under load, so 9V into the motor controllers means about 6-7V at the motors, which is just above their 5V minimum.

Each motor controller has four 5V logic inputs (labeled IN1 through IN4) that can be driven from the code (once again, explained later) with the "HIGH" keyword to output 5V from an Arduino I/O pin. Digital I/O pins on the MEGA are numbered 0-53 and are referenced in code by that number. Pulling an input HIGH activates the corresponding channel of the motor controller, allowing current to flow from the 9V source to the motor that needs it. Each motor controller also has two enable pins that accept a PWM signal from the Arduino to set the speed of the motors; this, too, is specified in the code (yet again, explained later).

The color sensor has a few LEDs to illuminate the surface for more accurate readings. It outputs a square wave whose frequency corresponds to the intensity of whichever color filter is selected through its control pins, and the code translates those readings into colors. Once again, the code does all the work of stopping the robot when it hits a black line (a system that will be explained in full later). The wireless receiver does just what it needs to: it receives data from the joystick module, and the code processes it into usable signals.
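To make the IN/EN control concrete, here is a minimal sketch of driving one L298N channel; the pin numbers are placeholders, not the ROVAR's actual wiring.

```cpp
// Minimal sketch: driving one L298N channel from an Arduino.
// Pin assignments are hypothetical examples.
const int IN1 = 7;  // direction input 1
const int IN2 = 8;  // direction input 2
const int ENA = 9;  // enable pin A; PWM-capable, sets speed

void setup() {
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(ENA, OUTPUT);
}

void loop() {
  // Forward: one direction pin HIGH (5V), the other LOW (0V).
  digitalWrite(IN1, HIGH);
  digitalWrite(IN2, LOW);
  analogWrite(ENA, 180);  // PWM duty cycle 0-255 controls speed
  delay(2000);

  // Reverse: swap the direction pins.
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, HIGH);
  analogWrite(ENA, 180);
  delay(2000);
}
```

Likewise, a rough sketch of reading the TCS3200, assuming the common wiring where the S2/S3 pins select the color filter (S0/S1, which set frequency scaling, are assumed tied in hardware); the pins here are again hypothetical.

```cpp
// Minimal TCS3200 read: select the red filter and measure the output period.
// The OUT pin emits a square wave whose period shortens as more of the
// selected color is detected.
const int S2 = 4, S3 = 5, OUT = 6;

void setup() {
  pinMode(S2, OUTPUT);
  pinMode(S3, OUTPUT);
  pinMode(OUT, INPUT);
  Serial.begin(9600);
}

void loop() {
  digitalWrite(S2, LOW);  // S2 LOW + S3 LOW selects the red filter
  digitalWrite(S3, LOW);
  unsigned long period = pulseIn(OUT, LOW);  // microseconds; lower = more red
  Serial.println(period);
  delay(200);
}
```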
The joystick module is comparatively simple, consisting of just a joystick, an Arduino UNO, a power supply (5V over USB again), and a wireless transmitter module. The only code functionality on this module is to send the joystick's values to the robot module for processing.
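As a sketch of what that transmit-only code might look like, assuming the widely used RF24 Arduino library; the CE/CSN pins, pipe address, and packet layout here are illustrative, not the ROVAR's actual code.

```cpp
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

RF24 radio(9, 10);                // CE, CSN pins (hypothetical wiring)
const byte address[6] = "00001";  // shared pipe address; any value works if
                                  // the receiver uses the same one

struct Packet { int x; int y; };  // raw joystick axis readings

void setup() {
  radio.begin();
  radio.openWritingPipe(address);
  radio.stopListening();          // this module only transmits
}

void loop() {
  Packet p;
  p.x = analogRead(A0);           // joystick X axis, 0-1023
  p.y = analogRead(A1);           // joystick Y axis, 0-1023
  radio.write(&p, sizeof(p));     // send the readings to the robot
  delay(20);
}
```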
Code for an Arduino is written in a C/C++-based language that differs from and simplifies standard C in many ways. For example, a function header in Arduino C is simply a return type, a name, and a parameter list, without the access modifiers (such as "public static") that many other C-family languages require. In this respect, it is much easier to code in Arduino C than in many other languages. Still, the multifaceted approach required for this project was challenging. Since there were two Arduino modules, two different sets of code were required. I also made the executive decision to put all of the processing code on the robot Arduino, and do nothing except transmit data on the other. The robot Arduino controls all the functions of the robot module, most notably the motor controllers (and therefore the motors), the wireless receiver, and the color sensor. The code interfaces with each mechanism through the MEGA's digital pins, numbered 0-53, each of which can send a 5V signal out or detect when a 5V signal is received. The motor controllers require 5V output signals to turn the motors on, which can be produced in code with the "HIGH" keyword; "LOW" is the opposite, and outputs 0V. Before the main loop() runs (the body of the program), a setup() function assigns each pin as an input or an output with pinMode(), so the Arduino knows what to do with it. The commands digitalWrite(pin, HIGH/LOW) and digitalRead(pin) send and receive 5V signals respectively, and form the basis for controlling other devices with the Arduino. Each L298N's control pins are divided into IN1 through IN4 plus ENA and ENB, for both the primary and secondary controllers. The joystick data is received as X- and Y-axis movements, and a drive-calculation section translates those into two usable speeds (0-255) and passes them to each motor controller.
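Here is a sketch of how such a receive-and-drive loop might look, again assuming the RF24 library; the pin numbers and the particular mixing formula are illustrative stand-ins, not the ROVAR's actual code.

```cpp
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

RF24 radio(48, 49);               // CE, CSN (hypothetical MEGA pins)
const byte address[6] = "00001";  // must match the transmitter's pipe address

struct Packet { int x; int y; };  // same layout the joystick module sends

// L298N #1 drives the left side, #2 the right side (pins are examples)
const int IN1 = 2, IN2 = 3, ENA = 5;
const int IN3 = 6, IN4 = 7, ENB = 9;

void setup() {
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT); pinMode(ENA, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT); pinMode(ENB, OUTPUT);
  radio.begin();
  radio.openReadingPipe(0, address);
  radio.startListening();         // this module only receives
}

// Set one L298N channel: sign gives direction, magnitude (0-255) gives speed.
void drive(int in1, int in2, int en, int speed) {
  digitalWrite(in1, speed >= 0 ? HIGH : LOW);
  digitalWrite(in2, speed >= 0 ? LOW : HIGH);
  analogWrite(en, abs(speed));
}

void loop() {
  if (radio.available()) {
    Packet p;
    radio.read(&p, sizeof(p));
    // Re-center the 0-1023 axis readings around zero, scaled to -255..255.
    int throttle = map(p.y, 0, 1023, -255, 255);
    int turn     = map(p.x, 0, 1023, -255, 255);
    // Simple differential ("tank") mix: turning slows one side of the
    // vehicle and speeds up the other.
    int left  = constrain(throttle + turn, -255, 255);
    int right = constrain(throttle - turn, -255, 255);
    drive(IN1, IN2, ENA, left);
    drive(IN3, IN4, ENB, right);
  }
}
```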
The camera module is a completely different beast, and is powered by a very different mechanism: a Raspberry Pi. The Raspberry Pi (RPI) is a small computer that runs a lightweight version of Linux (a free and open-source operating system, and an alternative to Windows or Apple's OS X), whereas an Arduino runs on a simple AVR microcontroller and does not operate as a computer in the same way the RPI does. The RPI is also powered by 5V over USB, and is connected to the same power source as the robot's Arduino. It has no built-in display, so it uses networking over Ethernet as its main mode of communication. Using WiFi is tricky here at school (more on that later), but it is how data is transmitted to the laptop and in turn onto the AirPlay device. To send commands to the RPI, a protocol called SSH (Secure Shell) is used to access the RPI's terminal (or command line) from another device on the same network. This way, you can tell the RPI to do something without connecting it to a monitor, since its whole purpose is to be small and tucked away. For my purposes, the RPI acted as a web server to encode and transmit video from a connected USB webcam.
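For example, logging in from a laptop on the same network looks like this (the username and IP below are hypothetical; "pi" is Raspbian's default user):

```sh
ssh pi@192.168.1.50
```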
The way this was accomplished was through a piece of software called Motion and a lot of networking configuration. Motion is a self-contained program that can be installed from the command line and, as I already said, creates a web server to encode and transmit USB webcam data. It goes about this by first taking the webcam's input from USB and translating it into a moving-image format called MJPEG. You may have heard of JPEG images; MJPEG is just a lot of those images strung together into a video. Motion then starts its own small built-in web server, binds to the Pi's IP address (more on IPs later if you want to know), and sends the MJPEG stream out via the HTTP protocol on a configured port (again, explained later along with IPs). At this point, anyone on the local network can open any web browser (except Internet Explorer, which cannot display MJPEG streams) and type that IP with the port affixed after a colon to get the MJPEG stream. It is important to note that the image is significantly delayed, and this is in fact not the fault of the WiFi speed. Because my RPI is rather old (a 2013 model), it has roughly the processing power of an early iPhone, and as such cannot do anything very fast. Video encoding is rather CPU (processor) intensive, so the CPU does the best it can to produce the lowest-latency image possible without giving out. The WiFi or Ethernet (wired network) link itself adds very little latency (delay).
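As an illustration, a minimal Motion configuration might look something like the following; exact parameter names vary between Motion versions (older releases use webcam_port instead of stream_port), and the values here are only examples.

```
# /etc/motion/motion.conf (excerpt; values are examples only)
videodevice /dev/video0   # the USB webcam
width 640
height 480
framerate 15              # a lower framerate eases the CPU load on an old Pi
stream_port 8081          # MJPEG stream served over HTTP on this port
stream_localhost off      # allow other devices on the network to connect
```

With that running, a browser on the same network would load the stream at http://<pi-ip>:8081, with the IP and port separated by the colon as described above.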
On every Ethernet and WiFi network, there are a few key elements that the network cannot function without. It starts at the router. Many misconceive the router as the gateway to the internet, but that is the modem's job. The router is a wireless access point, network switch, firewall, gateway, and DHCP server all in one. Let's talk about those individually. As a wireless access point, it turns the wired signal into a WiFi signal. As a network switch, it splits one wired Ethernet link into multiple, typically 4-8 on a home network. The router also acts as a firewall, shielding your internal IPs from external IPs and vice versa, and to do that it must act as a gateway separating the two.

Arguably the most important part of the network, the DHCP server, also resides on your router, and assigns an IPv4 address to each device when it connects. Without DHCP, you would have to manually set what is called a static IP on every device. A DHCP server typically hands out IPs in the 192.168.X.XXX or 10.X.X.XXX ranges, which are reserved for private use by the IETF, the body that sets internet standards. For example, a router at 192.168.1.1 (low numbers like .0 or .1 are typical for routers) will assign IPs across the rest of its subnet, from 192.168.1.2 to 192.168.1.254 (.255 is reserved as the broadcast address). With the common 255.255.255.0 mask, the first three octets of the IPv4 address identify the subnet, which determines which devices are connected to each other; when two devices sit on different subnets, they usually cannot send data to each other (some special rules do apply).

On the client machine (in my case the RPI), under a DHCP setup, an IP would be assigned automatically. However, for some reason the school network doesn't want to play nice with outbound connections from devices on it, and generally has issues assigning an IP. For this reason, I have tried assigning a static IP, but that doesn't seem to work either. For context, assigning a static IPv4 address consists of setting the client's own IP, the DNS servers (typically Google's 8.8.8.8, to make sure that named URLs get translated into server addresses), the gateway (typically your router's IP), and the subnet mask (typically 255.255.255.0).

At [REDACTED], because I'm not sure what any of these values should be, and the Linux command "ifconfig" (which usually returns a lot of network information) seems not to be working, it is nearly impossible for me to assign a static IP. This led me to simply configure the server through my Linux (Arch) laptop and AirPlay to the TV manually, by first finding the IP and using an open-source AirPlay alternative for Linux (the Java-based airplay.jar). This removes all the input/output limitations on the RPI, but introduces the problem of a cable trailing behind the robot.
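For reference, on a recent Raspbian a static IP is typically set in /etc/dhcpcd.conf with all four of those values; the addresses below are hypothetical examples, not my school's actual settings.

```
# /etc/dhcpcd.conf (excerpt; all addresses are hypothetical)
interface eth0
static ip_address=192.168.1.50/24   # client IP; /24 = mask 255.255.255.0
static routers=192.168.1.1          # gateway (the router's IP)
static domain_name_servers=8.8.8.8  # DNS server (Google's)
```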