The objective is to use readily available, low-cost “off-the-shelf” consumer components where possible, and to design electronic subsystems where such components are either not readily available or too expensive. In terms of affordability, the overall goal is to design a robot that can be built for around the price of a PC: the target is US$1,500 to $2,000, though the actual cost varies greatly depending on how many of the electromechanical components are made in your own workshop rather than purchased pre-fabricated.
The block diagram below identifies the major hardware components that comprise the robot and its accompanying docking station. The cyan-coloured blocks represent custom electronic circuits – all microcontroller-based, with firmware – that have been developed within the scope of this project.
The free and open GNU/Linux operating system has been chosen as the foundation for the software in this project. Kernel and device-driver source code is readily available and well supported by the developer community.
The requirements of the mainboard are:
- at least two FireWire (IEEE 1394) ports
- at least two serial ports
- at least two USB ports
- I2C port (optional)
- parallel port (required if no I2C port available)
- integrated graphics controller
- integrated IDE hard drive controller
- integrated audio
- at least 1GHz CPU
- at least 512MB RAM
- compatible with Linux kernel
- small enough to fit within the confines of the robot’s body
- low power consumption
The hardware subsystems of this project are abstracted using the Player driver framework, originally developed at the University of Southern California Robotics Lab.
The Pyro project, developed at Bryn Mawr College in Pennsylvania, provides a higher-level framework that builds upon the Player driver layer to allow experimentation with behaviours and tasks.
Vision is central to the success of the project. Several sources of pre-existing code are being researched:
- The TINA Computer Vision Algorithm Development Libraries originally developed at the University of Sheffield, and continued by the University of Manchester.
- Intel’s Open Source Computer Vision Library
- Stan Birchfield’s stereo algorithm
- The Gandalf Computer Vision Library
- The Coriander project provides a tool for viewing the webcam images for testing/setup purposes.
- The IEEE 1394 for Linux project provides the FireWire webcam device driver.
- The libdc1394 project implements DMA capture for efficient, low latency video transfer from IIDC 1394 digital camera devices.
The binocular colour vision system is based on a pair of consumer WebCams using the FireWire (IEEE 1394) interface. The Pyro 1394 WebCam from ADS was chosen for development. This device has a 1/4″ colour CCD image sensor with a resolution of 659×494 (although the maximum WebCam interface format supported is 640×480). The WebCam has a glass lens with a focal length of 4mm, an F number of 2.0, and a viewing angle of 52°. Any IIDC-compliant (a.k.a. DCAM) digital camera should work (note that this does not include DV camcorders; these are not IIDC-compliant). Here is a list of compatible digital cameras.
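The quoted 52° viewing angle can be cross-checked against the lens and sensor geometry with the usual pinhole-lens formula. A short sketch, assuming a nominal 1/4″ CCD active area of 3.6 mm × 2.7 mm (4.5 mm diagonal); the exact sensor dimensions are not given in the camera's specification, so these figures are assumptions:

```python
import math

def field_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Angular field of view of a simple lens: 2 * atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Assumed nominal 1/4" CCD active area: 3.6 mm x 2.7 mm, diagonal 4.5 mm
h_fov = field_of_view_deg(3.6, 4.0)  # horizontal, roughly 48 degrees
d_fov = field_of_view_deg(4.5, 4.0)  # diagonal, roughly 59 degrees
```

The manufacturer's 52° figure falls between these horizontal and diagonal estimates, which is consistent with a 4 mm lens on a 1/4″ sensor.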
A sonar sensor array augments the vision system for detecting objects in the robot’s environment. The Devantech SRF04 was chosen for its high performance and low cost compared with sensors based on the ubiquitous Polaroid sonar transducer. The SRF04’s remarkably small minimum range of 3cm also makes it suitable for use as a downward-facing precipice detector that senses floor discontinuities (although in practice this has proved effective only on hard floors; carpet tends to absorb the sonar pulse). The array comprises twelve sensors, arranged as follows:
- Pan-and-tilt head: 1 head-mounted sonar sensor, facing in the same direction as the vision sensors.
- Fixed at front pointing towards ground: 3 precipice-detecting sonar sensors, aimed downward and slightly ahead of the robot.
- Fixed around the perimeter of the robot base at 45° intervals: 8 outward-facing sonar sensors.
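Whatever the mounting position, the SRF04 reports range as an echo pulse width (the round-trip time of flight), so converting to distance is a matter of halving the time and multiplying by the speed of sound. A minimal sketch, assuming dry air at roughly 20°C:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C (assumed)

def echo_to_distance_cm(echo_time_us):
    """Convert an SRF04 echo pulse width (microseconds, round trip)
    into a one-way distance in centimetres."""
    return (echo_time_us * 1e-6 * SPEED_OF_SOUND_M_S / 2.0) * 100.0

# The 3 cm minimum range corresponds to an echo of roughly 175 us,
# and a 1 m target returns an echo of roughly 5.8 ms.
```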
The sonar array module, based on a PIC16F876 microcontroller, supports up to 16 SRF04 sonar sensors. The module fires the sensors sequentially at 25ms intervals, so a full scan of 16 sensors completes in 0.4s. It interfaces with the host mainboard via I2C.
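The 0.4s scan time follows directly from the firing schedule: 16 channels at 25ms each, giving a 2.5Hz full-array update rate. Firing the sensors one at a time also prevents crosstalk between adjacent transducers. A sketch of the arithmetic and the round-robin channel order:

```python
FIRE_INTERVAL_MS = 25  # per-channel firing period used by the module firmware
NUM_CHANNELS = 16      # maximum number of SRF04 sensors supported

scan_time_s = NUM_CHANNELS * FIRE_INTERVAL_MS / 1000.0  # 0.4 s per full scan
scans_per_second = 1.0 / scan_time_s                    # 2.5 Hz update rate

def next_channel(current, num_channels=NUM_CHANNELS):
    """Round-robin firing order: only one sensor pings at a time,
    so echoes cannot be confused between channels."""
    return (current + 1) % num_channels
```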
A command-line utility, oap-sonar, which runs on the robot’s mainboard system, performs the following functions:
- test the Sonar Array Module’s I2C interface,
- read sonar sensor echo times,
- zero the 8-bit timer counter,
- set or query the number of sonar channels, and
- set or query the power state of the module (low power, or active scanning).
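The utility's interface can be pictured as one subcommand per function above. This is only an illustrative sketch: the real oap-sonar option names and syntax are not documented here, so the subcommand names below are assumptions.

```python
import argparse

def build_parser():
    """CLI skeleton mirroring the functions listed above.
    Subcommand names are hypothetical, not the real oap-sonar syntax."""
    p = argparse.ArgumentParser(prog="oap-sonar")
    sub = p.add_subparsers(dest="command", required=True)
    sub.add_parser("test", help="exercise the sonar array module's I2C interface")
    sub.add_parser("read", help="read sonar sensor echo times")
    sub.add_parser("zero", help="zero the 8-bit timer counter")
    ch = sub.add_parser("channels", help="set or query the number of sonar channels")
    ch.add_argument("count", nargs="?", type=int, help="omit to query")
    pw = sub.add_parser("power", help="set or query the power state")
    pw.add_argument("state", nargs="?", choices=["low", "active"], help="omit to query")
    return p
```

With this shape, `oap-sonar channels` queries the channel count while `oap-sonar channels 12` sets it, matching the set-or-query behaviour described above.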
Read more: Open Automaton Project