1. Project mission
Goal
The main goal of the project is to create a small mobile robot with basic functionality, as soon as possible and on the lowest possible budget.
Why
Meeting this goal will allow us to study mobile robotics not only with computer models but on a real robot. Computer models cannot match real robotics in any aspect: no model can simulate all, or even a good fraction, of a real environment. Models allow verifying some basic ideas or algorithms, but real-robot experience is needed to develop and apply those algorithms. After all, it is great to work with a real embodied robot.
Means
- To speed up development, the number of custom components must be minimized. The majority of components should be off-the-shelf solutions. This principle applies to both hardware and software.
- To minimize project cost, the number of sensors and actuators and the complexity of the mechanics and software must be minimized, but not below what is necessary.
- To make mobile robotics tasks tractable with constrained computational power, and to avoid getting bogged down in nonessential details, accommodations of the environment (such as artificial landmarks) must be allowed.
What
The robot's main function can be stated in one sentence: the robot should travel safely from point A to point B, where the points are marked on a given map. This implies obstacle avoidance, odometry integration, localization, path planning, and so on. A half-way checkpoint is to make the robot come to a whistle (like a Heffalump).
2. Architecture
Hardware
The mobile platform is an Ackermann vehicle constructed from Lego Mindstorms. One servomotor drives the car, the second one steers, and the third one turns a rotating turret with a microphone. All motors have built-in tachometers. The robot's brain is the Lego Mindstorms NXT intelligent brick; its main processor is an Atmel 32-bit ARM with 256 KB of flash and 64 KB of RAM, running at 48 MHz.
There are three rangefinders: a forward ultrasonic sensor and two lateral infrared sensors. The IR rangefinders are not default Lego sensors; they are Sharp GP2D12 units, connected to the NXT over the I2C bus through a custom dual-function device (self-manufactured). A microphone sits on the rotating turret.
The robot's vision system is the CMUcam3, an ARM7TDMI-based, fully programmable embedded computer vision sensor. Its main processor is an NXP LPC2106 connected to an Omnivision CMOS camera sensor module. Custom C code can be developed for the CMUcam3 using a port of the GNU toolchain, and executables can be flashed onto the board through the serial port. The camera is connected to the NXT through the dual-function device mentioned above, which is necessary because the camera and the NXT brick have incompatible hardware interfaces (RS232 vs. I2C). The CMUcam3 is needed because the NXT's computational power is not enough for video processing. The camera is placed under a conical mirror (hand-made from a polished beer can); together they form an omnidirectional camera.
The custom dual-function device is based on an AVR ATmega168 microcontroller. It acts as an I2C bus splitter and as an I2C-to-RS232 converter.
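On the NXT side, such a device can be polled like any other I2C sensor. Below is a minimal sketch using the LeJOS I2CSensor API; the register number, the device's register layout, and the port choice are assumptions for illustration, not the actual driver.

```java
import lejos.nxt.I2CSensor;
import lejos.nxt.SensorPort;

// Hypothetical NXT-side driver for the dual-function device. The register
// number and device behavior are invented for illustration; only the
// LeJOS I2CSensor calls are real API.
class RangefinderDriver extends I2CSensor {
    static final int RANGE_REGISTER = 0x42; // invented register number

    RangefinderDriver() {
        super(SensorPort.S1); // the port choice is an assumption
    }

    /** Read one lateral IR range value (cm) through the ATmega168 splitter. */
    int readRange() {
        byte[] buf = new byte[1];
        getData(RANGE_REGISTER, buf, 1); // standard LeJOS I2C register read
        return buf[0] & 0xFF;
    }
}
```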
The power source is six AA batteries inside the Lego intelligent brick.
Software
The Lego NXT runs the LeJOS Java virtual machine, and the entire program is written in Java. The robot's control system is a variation of AuRA, the behavior-based architecture by Ronald Arkin. A number of independent "behaviors" run simultaneously; each has its limited sphere of competence and works at its own rate, with low-level behaviors running an order of magnitude faster than high-level ones. Each command behavior is represented as a potential field. The architecture is cooperative: the outputs of all command behaviors are fused through vector superposition, and the resulting command is passed to the actuators. Some of the command behaviors are "avoid obstacles", "dodge", "go to goal", and "go forward". Path planning is not implemented yet.
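As an illustration of the fusion step, here is a minimal sketch of cooperative vector superposition; the class and method names are hypothetical, not the actual project code. Each command behavior votes with a motion vector, and the arbiter sums the weighted votes into a single command.

```java
// Illustrative sketch of cooperative vector fusion; all names are
// hypothetical, not the actual project code.
interface CommandBehavior {
    double[] vote();   // desired motion as a vector {vx, vy}
    double weight();   // behavior gain
}

class Arbiter {
    private final CommandBehavior[] behaviors;

    Arbiter(CommandBehavior[] behaviors) { this.behaviors = behaviors; }

    /** Fuse all votes by weighted vector superposition. */
    double[] fuse() {
        double x = 0, y = 0;
        for (CommandBehavior b : behaviors) {
            double[] v = b.vote();
            x += b.weight() * v[0];
            y += b.weight() * v[1];
        }
        return new double[] { x, y }; // heading = atan2(y, x), speed ~ |v|
    }
}
```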
There is also a number of perceptual behaviors, such as "odometry integrator", "sound finder", and "localizer". Odometry is integrated through a simple kinematic model of a four-wheeled Ackermann vehicle. The "localizer" uses probabilistic Monte Carlo localization and is under construction now.
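For the odometry part, here is a minimal sketch of the kind of dead-reckoning update such a model implies, using a bicycle approximation of the Ackermann geometry; the names and the wheelbase value are illustrative assumptions.

```java
// Simple Ackermann (bicycle-model) odometry integration; all names
// and the wheelbase value are illustrative assumptions.
class Odometer {
    static final double WHEELBASE = 0.15; // meters, hypothetical

    double x, y, theta; // pose estimate

    /** Update pose from tachometer-derived distance and steering angle. */
    void update(double distance, double steeringAngle) {
        // For small steps, treat the motion as an arc of constant curvature.
        double dTheta = distance * Math.tan(steeringAngle) / WHEELBASE;
        x += distance * Math.cos(theta + dTheta / 2);
        y += distance * Math.sin(theta + dTheta / 2);
        theta += dTheta;
    }
}
```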
The CMUcam3 is the source of landmarks for the localization algorithm. The map consists of a set of artificial colored landmarks with known positions. The vision system unwraps the panoramic conical shot, extracts pixels of the specified landmark colors, groups them into spots (arrays of adjacent pixels), and computes the center of mass of each spot; these centers are the features. It then passes the set of the features' polar angles to the localization algorithm.
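The real feature-extraction code runs in C on the CMUcam3; below is an illustrative Java sketch of the last two steps, computing a spot's center of mass and its polar angle around the mirror axis. The mirror-center coordinates are assumed calibration values.

```java
// Center of mass of a color spot and its polar angle around the mirror
// axis; a Java illustration of what the CMUcam3's C code computes.
class FeatureExtractor {
    // Image coordinates of the conical mirror axis (assumed calibration).
    static final double CX = 88, CY = 72;

    /** Center of mass of a spot given as pixel coordinates {x, y}. */
    static double[] centerOfMass(int[][] pixels) {
        double sx = 0, sy = 0;
        for (int[] p : pixels) { sx += p[0]; sy += p[1]; }
        return new double[] { sx / pixels.length, sy / pixels.length };
    }

    /** Polar angle of the feature as seen by the omnidirectional camera. */
    static double polarAngle(double[] com) {
        return Math.atan2(com[1] - CY, com[0] - CX);
    }
}
```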
3. Progress
| Task | Status |
|---|---|
| Problem statement | Done |
| Robot assembly | Done |
| Control system architecture development | Done |
| Control system implementation (dodge, avoid obstacles, go forward) | Done |
| Odometry integration | Done |
| CMUcam3 mounting | Done |
| I2C-to-RS232 converter development | Done |
| I2C-to-RS232 converter manufacturing | Done |
| Conical mirror manufacturing | Done |
| Lateral IR rangefinders mounting | Done |
| I2C-to-RS232 converter firmware coding | Done |
| CMUcam3 color extraction algorithm implementation | Done |
| CMUcam3 spot center-of-mass algorithm development | Done |
| IR rangefinder Java driver implementation | Done |
| Local map development | Done |
| CMUcam3 conical image unwrapping | In progress |
| Map representation | In progress |
| Probabilistic Monte Carlo localization | |
| Path planning | |
4. Gallery
Heffalump: comes to the whistle
Heffalump v0: Lego default sensors
Heffalump v1: CMUcam3 mounted, bulb mirror
Heffalump v2: I2C-to-RS232 converter mounted, IR rangefinders mounted, conical mirror mounted
Omnidirectional camera shot: can't see anything
Local map tracking: The local map is a small occupancy grid (11 x 11 cells, 15 cm each) implemented to compensate for the missing side and rear rangefinders. It maps sensed obstacles and then tracks them according to odometry information (rotations introduce uncertainty). All local behaviors interact with it instead of polling the sensors directly. The local map introduces an intermediate "model", which is not ideal, but this model is very simple and "vanishes" with time. In the picture: a sequence of Heffalump-centered local map snapshots; the x axis shows the robot's heading, and on the first snapshot the white arrow shows the approximate trajectory of the robot's motion. Heffalump senses two obstacles, passes them with translation and rotation, one obstacle vanishes, and eventually it senses two new obstacles.
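As a sketch of the tracking idea (all names, the cell size, and the decay factor are illustrative assumptions): each odometry step transforms the obstacle cells into the new robot-centered frame, and a decay factor makes stale obstacles fade, so the model "vanishes" unless re-sensed.

```java
// Robot-centered occupancy grid updated from odometry; a minimal
// illustrative sketch, not the actual project code.
class LocalMap {
    static final int SIZE = 11;      // cells per side (from the text)
    static final double CELL = 0.15; // meters per cell, assumed
    double[][] grid = new double[SIZE][SIZE];

    /** Move obstacle cells opposite to the robot's motion (dx, dy, dTheta). */
    void track(double dx, double dy, double dTheta) {
        double[][] next = new double[SIZE][SIZE];
        double c = Math.cos(-dTheta), s = Math.sin(-dTheta);
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++) {
                if (grid[i][j] == 0) continue;
                // Cell center in the old robot frame (robot at grid center).
                double x = (i - SIZE / 2) * CELL, y = (j - SIZE / 2) * CELL;
                // Express the point in the new frame: translate, then rotate.
                double px = x - dx, py = y - dy;
                double nx = c * px - s * py, ny = s * px + c * py;
                int ni = (int) Math.round(nx / CELL) + SIZE / 2;
                int nj = (int) Math.round(ny / CELL) + SIZE / 2;
                if (ni >= 0 && ni < SIZE && nj >= 0 && nj < SIZE)
                    next[ni][nj] = Math.max(next[ni][nj], grid[i][j] * 0.9);
            }
        grid = next; // the 0.9 decay makes stale obstacles fade with time
    }
}
```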
5. Bibliography
- Thrun, Burgard, Fox. Probabilistic Robotics.
- R. Arkin. Behavior-Based Robotics.
- Siegwart, Nourbakhsh. Autonomous Mobile Robots.
- M. S. Schut. Simulation of Collective Intelligence.
- E. Kendall. Multiagent System Design Based on Object-Oriented Patterns.
- Brooks. A Robust Layered Control System for a Mobile Robot.
- Jonathan H. Connell. SSS: A Hybrid Architecture Applied to Robot Navigation.
- R. Arkin. AuRA: Principles and Practice in Review.
- Julio K. Rosenblatt. DAMN: A Distributed Architecture for Mobile Navigation.
- Erann Gat. On Three-Layer Architectures.
- Matarić. Situated Robotics.
- Dieter Fox. Monte Carlo Localization: Efficient Position Estimation for Mobile Robots.
- Matarić. Integration of Representation Into Goal-Driven Behavior-Based Robots.
- Pirjanian. Multiple Objective Behavior-Based Control.
- Coombs, Murphy. Driving Autonomously Offroad up to 35 km/h.
- Guo, Qu, Wang. A new performance-based motion planner for nonholonomic mobile robots.
- Wang, Qu, Guo, Yang. A Reduced-Order Analytical Solution to Mobile Robot Trajectory Generation in the Presence of Moving Obstacles.
- Hentschel, Wulf, Wagner. A hybrid feedback controller for car-like robots.
- Egerstedt, Hu, Stotsky. Control of a car-like robot using a dynamic model.
- Crowley. Mathematical foundation of navigation and perception for an autonomous mobile robot.
- Das, Fierro. Real-Time Vision-Based Control of a Nonholonomic Mobile Robot.
- Lefebvre, Lamiraux, Pradalier. Obstacle Avoidance for Car-like Robots: Integration and Experiments on Two Robots.
- Yang, Gu, Mita, Hu. Nonlinear tracking control of a car-like mobile robot via dynamic feedback linearization.
- Proetzsch, Luksch, Berns. Fault-Tolerant Behavior-Based Motion Control for Offroad Navigation.
- Singh, Simmons, Smith. Recent Progress in Local and Global Traversability for Planetary Rovers.
- Chen, Yasunobu. Soft target based obstacle avoidance for car-like mobile robot in dynamic environment.
- Usher, Ridley, Corke. Visual Servoing of a Car-like Vehicle: An Application of Omnidirectional Vision.
- Iagnemma, Golda, Spenko. Experimental Study of High-Speed Rough Terrain Mobile Robot Models for Reactive Behaviors.
- Chen, Quinn. A crash avoidance system based upon the cockroach escape response circuit.
- Latombe. A Fast Path Planner for a Car-like Indoor Mobile Robot.
- Minguez, Montano. Reactive Navigation for Non-holonomic Robots using the Ego-Kinematic Space.
- Borenstein, Koren. The Vector Field Histogram: Fast Obstacle Avoidance for Mobile Robots.
- Ulrich, Borenstein. VFH+: Reliable Obstacle Avoidance for Fast Mobile Robots.
- Simmons. The Curvature-Velocity Method for Local Obstacle Avoidance.
- Ko, Simmons. The Lane-Curvature Method for Local Obstacle Avoidance.
- Fox, Burgard, Thrun. The dynamic window approach to collision avoidance.
- Brock, Khatib. High-speed navigation using the global dynamic window approach.
- Khatib, Chatila. An Extended Potential Field Approach For Mobile Robot Sensor-Based Motions.
- Jensfelt, Austin. Feature Based Condensation for Mobile Robot Localization.
- Kröse, Vlassis, Bunschoten. Omnidirectional Vision for Appearance-based Robot Localization.
- Lenser, Veloso. Sensor Resetting Localization for Poorly Modeled Mobile Robots.
- Schulz, Fox. Bayesian Color Estimation for Adaptive Vision-based Robot Localization.
6. Authors
- Matvey Stepanov (matthewz@yandex.ru)
- Alexander Piscariov