Pi Wars 2019 Part 6 – Starting to get a Sense for it!

Choosing The Best Camera

Now that I have a stable chassis and the movement code sorted, my next step is to look at the best way to interface with sensors. I want the robot's primary sense to be image recognition, so the first port of call was to find a solid and stable camera platform. From my initial research I found two main candidates: the Pixy2 camera, a board with its own integrated processor that can talk to an Arduino or a Raspberry Pi, and the official Raspberry Pi Camera Module used with OpenCV. To decide which route to go down, I weighed up the pros and cons of the cameras as seen below:

From this, although the price was a little higher, I decided to opt for the Pixy2. One of the main things that drew me towards it in the end was the line-following API: the camera automatically identifies a line and returns its location and coordinates within the frame, which should make my goal of a mostly image-processing-driven robot a lot easier!

Setting it up

Setting up the camera was fairly simple for Python 2, however for Python 3 I had to make a few modifications.

Starting from the Pixy2 Git repository, I initially changed the setup.py file for Python 3. As with most ports, I just had to change a few print statements to use parentheses; the changes are shown below:

pixy2/src/host/libpixyusb2_examples/python_demos/setup.py

#!/usr/bin/env python

from distutils.core import setup, Extension

pixy_module = Extension('_pixy',
  include_dirs = ['/usr/include/libusb-1.0',
  '/usr/local/include/libusb-1.0',
  '../../../common/inc',
  '../../../host/libpixyusb2/include/',
  '../../../host/arduino/libraries/Pixy2'],
  libraries = ['pthread',
  'usb-1.0'],
  sources =   ['pixy_wrap.cxx',
  '../../../common/src/chirp.cpp',
  '../../../host/libpixyusb2_examples/python_demos/pixy_python_interface.cpp',
  '../../../host/libpixyusb2/src/usblink.cpp',
  '../../../host/libpixyusb2/src/util.cpp',
  '../../../host/libpixyusb2/src/libpixyusb2.cpp'])

import os
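# print() calls with parentheses so the script also runs under Python 3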
print("dir = ")
print(os.path.dirname(os.path.realpath(__file__)))

setup (name = 'pixy',
  version = '0.1',
  author = 'Charmed Labs, LLC',
  description = """libpixyusb2 module""",
  ext_modules = [pixy_module],
  py_modules = ["pixy"],
  )

Then, I needed to change the install bash script to run the setup script in a Python 3 environment as opposed to Python 2. To do this, I edited the following file with the changes shown below:

pixy2/scripts/build_python_demos.sh

#!/bin/bash

function WHITE_TEXT {
  printf "\033[1;37m"
}
function NORMAL_TEXT {
  printf "\033[0m"
}
function GREEN_TEXT {
  printf "\033[1;32m"
}
function RED_TEXT {
  printf "\033[1;31m"
}

WHITE_TEXT
echo "########################################################################################"
echo "# Building Python (SWIG) Demos...                                                      #"
echo "########################################################################################"
NORMAL_TEXT

uname -a

TARGET_BUILD_FOLDER=../build

mkdir $TARGET_BUILD_FOLDER
mkdir $TARGET_BUILD_FOLDER/python_demos

cd ../src/host/libpixyusb2_examples/python_demos

swig -c++ -python pixy.i
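# changed from "python" to "python3" so the extension is built against Python 3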
python3 setup.py build_ext --inplace -D__LINUX__

if [ -f ../../../../build/python_demos/_pixy.so ]; then
  rm ../../../../build/python_demos/_pixy.so
fi

cp * ../../../../build/python_demos

if [ -f ../../../../build/python_demos/_pixy.so ]; then
  GREEN_TEXT
  printf "SUCCESS "
else
  RED_TEXT
  printf "FAILURE "
fi
echo ""

The Plan Both Past & Future…

Once this was all done, I was able to interface with the camera. The first program I wanted to master was the line-following application. This code is still very much a work in progress at the moment, however I feel the main basis is there. Once completed, I want it to (see the sketch after the list):

  1. Get the angle of the line
  2. Get the position of the line in the frame
  3. Adjust the steering accordingly using the back wheel
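To make those three steps concrete, here is a minimal sketch of the steering calculation. The frame width, the gain values and the idea of feeding in a single main line vector are my own assumptions rather than the finished code; the real version will sit on top of whatever the Pixy2 line API actually returns.

import math

FRAME_WIDTH = 78  # assumed width of the line-tracking frame in pixels

def steering_from_vector(x0, y0, x1, y1, frame_width=FRAME_WIDTH):
    """Turn a line vector (tail at x0,y0, head at x1,y1) into a back-wheel steering value.

    Returns roughly -1.0 (steer hard left) to 1.0 (steer hard right).
    """
    # Step 1: angle of the line relative to straight ahead
    # (image y grows downwards, so y0 - y1 is positive for a line pointing up the frame)
    angle = math.degrees(math.atan2(x1 - x0, y0 - y1))

    # Step 2: horizontal position of the line's base relative to the frame centre
    offset = (x0 - frame_width / 2) / (frame_width / 2)

    # Step 3: blend the two into a single correction and clamp it
    correction = 0.02 * angle + 0.8 * offset
    return max(-1.0, min(1.0, correction))

The 0.02 and 0.8 weightings are placeholders; they will need tuning on the actual course once the back-wheel steering is hooked up.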

Using the Pixy camera's PixyMon software, I am able to get it to identify and track colours. This part of the API should help me with the Nebula challenge: I will train the camera on a particular colour, and once it can identify that colour I will use a time-of-flight distance sensor to tell the robot how close to the target it is. Similarly, I will use the same logic for the Canyons of Mars maze challenge, using the camera to train on and track the alien targets, and rotating when the sensor senses the bot is close to the wall at the desired angle.
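As a rough illustration of that approach, the loop below pairs a colour detection with a distance reading. The helper functions (largest_block_x, read_tof_mm and the drive/turn/stop commands) and both thresholds are hypothetical stand-ins for the Pixy2 block API, the time-of-flight driver and my movement code, so treat it as a sketch of the logic rather than working code.

STOP_DISTANCE_MM = 150   # assumed distance at which the robot counts as "at" the target
FRAME_CENTRE_X = 158     # assumed horizontal centre of the colour-tracking frame

def approach_target(largest_block_x, read_tof_mm, drive, turn, stop):
    """Drive towards a trained colour signature and stop on the ToF reading.

    largest_block_x() -> x centre of the biggest matching block, or None if lost
    read_tof_mm()     -> distance straight ahead in millimetres
    drive/turn/stop   -> motor commands provided by the movement code
    """
    while True:
        x = largest_block_x()
        if x is None:
            turn(0.3)                       # target lost, rotate slowly to find it again
            continue
        if read_tof_mm() <= STOP_DISTANCE_MM:
            stop()                          # close enough to the target
            break
        # steer towards the block while it drifts away from the frame centre
        drive(steer=(x - FRAME_CENTRE_X) / FRAME_CENTRE_X)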

Once I’ve got all of the above working and coded, this should enable me to complete all of the autonomous courses, primarily using image recognition with the backup of a distance sensor. If I had more time, I could use the apparent size of a target in the frame to estimate distances, however with the time left I don’t think this will be possible.
