
WOPR-JR-Vision

Our 2017 vision code

Generated by GRIP $LINK, then tweaked so that it actually works (TODO: document which lines were changed)

Setup

On your laptop/development machine

To set up a development machine (your laptop), first clone this repo:

git clone https://github.com/LN-STEMpunks/WOPR-JR-Vision.git && cd WOPR-JR-Vision

Now, install requirements:

Linux (Debian-based)

Run ./install.sh and it will guide you through the process.

macOS/Windows

We don't have a tested method for getting this working on these platforms.

On your coprocessor

Install

On your roboRIO, Raspberry Pi, Jetson, or other coprocessor, you will need to install the same things ./install.sh installs.

We will provide a downloadable Raspberry Pi image with all of this preinstalled (coming later).

Deploying

To deploy to a Raspberry Pi, run:

./deploy.sh

You can also use:

./deploy.sh computername.local:~/path/to/vision/ [-p password] (replace computername.local with the Pi's hostname or IP address, and the path with wherever you want the contents of ./src to go)
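
For a sense of what this deploy step boils down to, here is a rough Python sketch that copies the contents of ./src to a target over SSH. This is an illustration, not the actual deploy.sh; the default target is a placeholder, and the real script may handle the password flag and other details differently.

import subprocess
import sys

# Rough sketch only: copy the contents of ./src to the target with scp.
# The real deploy.sh may differ (e.g. in how it handles the password).
target = sys.argv[1] if len(sys.argv) > 1 else "raspberrypi.local:~/vision/"
subprocess.run(["scp", "-r", "src/.", target], check=True)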

Running

To view help, run python src/grip.py -h

When developing, you will mainly use:

python src/grip.py --show

On the Raspberry Pi or Jetson, use:

python src/grip.py -f thing.conf -ip roboRIO-XXXX-frc.local --publish

where thing.conf is a copy of lab.conf tweaked for your current setting,

roboRIO-XXXX-frc.local is the address of the NetworkTables server,

and --publish tells the script to publish its findings to NetworkTables.
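
For reference, here is a minimal sketch of how a command-line interface like this can be declared with argparse. The flag names come from the examples above, but the defaults and help text are assumptions; the real src/grip.py may differ.

import argparse

# Sketch of the CLI implied by the usage examples above; defaults and
# help strings are assumptions, not copied from the real src/grip.py.
parser = argparse.ArgumentParser(description="WOPR-JR vision pipeline")
parser.add_argument("-f", "--file", default="lab.conf",
                    help="config file holding the threshold values")
parser.add_argument("-ip", dest="ip", default="roboRIO-XXXX-frc.local",
                    help="address of the NetworkTables server")
parser.add_argument("--show", action="store_true",
                    help="display processed frames while developing")
parser.add_argument("--publish", action="store_true",
                    help="publish findings to NetworkTables")
args = parser.parse_args()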

Algorithms

Essentially, the pipeline finds contours that fall within a color threshold defined in lab.conf; contours that pass this threshold correspond to the reflective tape.

Then it takes the two largest contours, finds the midpoint between their centers, plots that point on the image, and publishes its coordinates to NetworkTables.
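
As a rough illustration (assuming OpenCV and pynetworktables, which are common choices for GRIP-generated FRC pipelines), the core of this approach might look like the sketch below. The threshold values, table name, and key names are placeholders, not the project's real ones, which live in lab.conf and src/grip.py.

import cv2
import numpy as np
from networktables import NetworkTables

# Placeholder color threshold; the project's real values live in lab.conf.
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

NetworkTables.initialize(server="roboRIO-XXXX-frc.local")
table = NetworkTables.getTable("vision")  # table name is an assumption

def process(frame):
    # Threshold the frame so only tape-colored pixels remain.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # findContours returns different tuples across OpenCV versions;
    # [-2] picks out the contour list in 2.x, 3.x, and 4.x alike.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    # Keep the two largest contours (the two pieces of reflective tape).
    largest = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    if len(largest) < 2:
        return frame  # need both pieces of tape in view
    centers = []
    for c in largest:
        m = cv2.moments(c)
        if m["m00"] == 0:
            return frame
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    # Midpoint between the two contour centers.
    cx = int((centers[0][0] + centers[1][0]) / 2)
    cy = int((centers[0][1] + centers[1][1]) / 2)
    cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)  # plot the point
    table.putNumber("center_x", cx)  # key names are assumptions
    table.putNumber("center_y", cy)
    return frame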