Heffalump project

1. Project mission

Goal

The main goal of the project is to build a small mobile robot with basic functionality, as quickly and as cheaply as possible.

Why

Meeting this goal makes it possible to study mobile robotics not only on computer models but on a real robot. Computer models cannot match real robotics in any aspect: no model can simulate the whole real environment, or even a good fraction of it. Models are useful for verifying basic ideas or algorithms, but to develop or apply those algorithms you really need experience with a real robot. And after all, it is great to work with a real, embodied robot.

Means

  1. To speed up development, the number of custom components must be minimized. The majority of components should be off-the-shelf solutions. This principle applies to both hardware and software.
  2. To minimize project cost, the number of sensors and actuators and the complexity of mechanics and software must be kept to a minimum, but no lower than necessary.
  3. To keep mobile robotics tasks tractable with constrained computational power, and to avoid getting bogged down in nonessential details, accommodations of the environment are allowed.

What

The robot's main function can be stated in one sentence: the robot should travel safely from point A to point B, where both points are marked on a given map. This implies obstacle avoidance, odometry integration, localization, path planning, and so on. A half-way checkpoint is to make the robot come to a whistle (like a Heffalump).

2. Architecture

Hardware

The robot is an Ackermann mobile platform built from Lego Mindstorms. One servomotor drives the car, the second one steers, and the third one turns a turret carrying a microphone. All motors have built-in tachometers. The robot's brain is the Lego Mindstorms NXT intelligent brick. Its main processor is a 32-bit Atmel ARM processor with 256 KB of flash and 64 KB of RAM, running at 48 MHz.

There are three rangefinders: a forward ultrasonic sensor and two lateral infrared sensors. The IR rangefinders are not stock Lego sensors; they are Sharp GP2D12 units connected to the NXT over the I2C bus through a custom dual-function device (self-manufactured). A microphone sits on the rotating turret.
The robot's vision system is a CMUcam3, a fully programmable embedded computer vision sensor based on an ARM7TDMI core. Its main processor is an NXP LPC2106 connected to an Omnivision CMOS camera sensor module. Custom C code can be developed for the CMUcam3 using a port of the GNU toolchain, and executables can be flashed onto the board over the serial port. The camera is connected to the NXT through the dual-function device mentioned above, which is necessary because the camera and the NXT have incompatible hardware interfaces. The CMUcam3 is needed because the NXT's computational power is not sufficient for video processing. The camera is placed under a conical mirror (hand-made from a polished beer can); together they form an omnidirectional camera.
The custom dual-function device is based on an AVR ATmega168 microcontroller. It acts as an I2C bus splitter and as an I2C-to-RS232 converter.
The power source is six AA batteries inside the Lego intelligent brick housing.
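
The Sharp GP2D12 output is not linear in distance, so the raw readings forwarded by the dual-function device have to be linearized before the behaviors can use them. Below is a minimal sketch of such a conversion in Java; the calibration constants and the 10-bit raw range are assumptions (the commonly cited GP2D12 linearization), not values measured on this robot.

```java
/** Converts a raw Sharp GP2D12 reading into centimetres (sketch).
 *  Assumes a 10-bit ADC value forwarded over I2C by the dual-function board;
 *  the constants follow the commonly cited GP2D12 formula and would need
 *  per-sensor calibration. */
public final class Gp2d12 {
    private static final double MIN_CM = 10.0;   // GP2D12 is unreliable below ~10 cm
    private static final double MAX_CM = 80.0;   // and above ~80 cm

    /** @param raw 10-bit ADC reading (0..1023) from the IR sensor */
    public static double toCentimetres(int raw) {
        if (raw <= 3) {
            return MAX_CM;                        // no reflection: treat as "far"
        }
        double cm = 6787.0 / (raw - 3.0) - 4.0;   // classic 1/x linearization
        return Math.max(MIN_CM, Math.min(MAX_CM, cm));
    }
}
```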

Software

Inside the Lego NXT runs the LeJOS Java virtual machine, and the entire program is written in Java. The architecture of the robot's control system is a variation of AuRA, the behavior-based architecture by Ronald Arkin. There are a number of independent, simultaneously running "behaviors". Each one has a limited sphere of competence and runs at its own rate: low-level behaviors run roughly an order of magnitude faster than high-level ones. Each command behavior is represented as a potential field. The architecture is cooperative: the outputs of all command behaviors are fused (through vector superposition) and the resulting command is passed to the actuators. Some of the command behaviors are "avoid obstacles", "dodge", "go to goal", and "go forward". Path planning is not implemented yet.
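
For illustration, here is a minimal sketch of this cooperative fusion step in Java: each command behavior contributes a weighted potential-field vector, and their superposition becomes the command sent to the actuators. The class and method names (Behavior, Vector2, fuse) are illustrative, not taken from the actual code.

```java
import java.util.List;

/** Cooperative fusion of command behaviors by vector superposition (sketch).
 *  Each behavior returns a 2-D "force" vector from its potential field;
 *  the weighted sum becomes the drive/steer command. Names are illustrative. */
final class CommandFuser {

    interface Behavior {
        Vector2 output();   // behavior's current potential-field vector
        double weight();    // relative importance of this behavior
    }

    static final class Vector2 {
        final double x, y;
        Vector2(double x, double y) { this.x = x; this.y = y; }
        Vector2 plus(Vector2 o)  { return new Vector2(x + o.x, y + o.y); }
        Vector2 scaled(double k) { return new Vector2(k * x, k * y); }
        double magnitude()       { return Math.hypot(x, y); }   // drives speed
        double heading()         { return Math.atan2(y, x); }   // drives steering
    }

    /** Sums the weighted outputs of all command behaviors. */
    static Vector2 fuse(List<Behavior> behaviors) {
        Vector2 sum = new Vector2(0, 0);
        for (Behavior b : behaviors) {
            sum = sum.plus(b.output().scaled(b.weight()));
        }
        return sum;
    }
}
```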

There are also a number of perceptual behaviors such as "odometry integrator", "sound finder", and "localizer". Odometry is integrated through a simple model of a four-wheeled Ackermann vehicle. The "localizer" uses probabilistic Monte-Carlo localization and is currently under construction.
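
As a sketch of what the odometry integrator does, the following bicycle-model approximation of a four-wheeled Ackermann vehicle updates the pose (x, y, theta) from the drive tachometer and the current steering angle. The wheelbase, the tick-to-distance constant, and all names are assumptions for illustration only.

```java
/** Dead-reckoning pose integrator for an Ackermann (car-like) platform,
 *  approximated by the bicycle model.  All constants are illustrative. */
final class Odometer {
    private static final double WHEELBASE_M    = 0.18;                    // assumed axle distance
    private static final double METERS_PER_DEG = 0.056 * Math.PI / 360.0; // assumed wheel circumference / 360

    private double x, y, theta;   // pose in the odometry frame

    /** @param driveDeltaDeg  change of the drive-motor tachometer, degrees
     *  @param steerRad       current front-wheel steering angle, radians */
    void update(int driveDeltaDeg, double steerRad) {
        double ds = driveDeltaDeg * METERS_PER_DEG;             // distance travelled
        double dtheta = ds * Math.tan(steerRad) / WHEELBASE_M;  // heading change
        // integrate along an arc; for small dtheta this degrades gracefully to a line
        x += ds * Math.cos(theta + dtheta / 2);
        y += ds * Math.sin(theta + dtheta / 2);
        theta = normalize(theta + dtheta);
    }

    private static double normalize(double a) {
        while (a >  Math.PI) a -= 2 * Math.PI;
        while (a < -Math.PI) a += 2 * Math.PI;
        return a;
    }
}
```
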
The CMUcam3 is the source of landmarks for the localization algorithm. The map consists of a set of artificial colored landmarks with known positions. The vision system unwraps the panoramic conical shot, extracts pixels of the specified landmark colors, groups them into spots (arrays of adjacent pixels), and computes the spots' centers of mass, which serve as features. It then passes the set of features' polar angles to the localization algorithm.
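
A minimal sketch of the last two steps (center of mass of a spot and its polar angle) might look like the following. It assumes the spot's pixels are already extracted and that the mirror axis projects to a known image center; both are simplifications of the real pipeline, which runs in C on the CMUcam3 board.

```java
import java.util.List;

/** Turns a colored spot (list of pixel coordinates) into the bearing that is
 *  handed to the localizer.  Assumes the conical mirror's axis projects to
 *  the image point (cx, cy).  Names and structure are illustrative. */
final class LandmarkBearing {

    static final class Pixel {
        final int x, y;
        Pixel(int x, int y) { this.x = x; this.y = y; }
    }

    /** @return bearing of the spot's center of mass, in radians, measured
     *          around the image center (cx, cy). */
    static double bearingOf(List<Pixel> spot, double cx, double cy) {
        double sx = 0, sy = 0;
        for (Pixel p : spot) { sx += p.x; sy += p.y; }
        double mx = sx / spot.size();          // center of mass, x
        double my = sy / spot.size();          // center of mass, y
        return Math.atan2(my - cy, mx - cx);   // polar angle = landmark bearing
    }
}
```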

3. Progress

Task                                                                | Status
Problem statement                                                   | Done
Robot assembly                                                      | Done
Control system architecture development                             | Done
Control system implementation (dodge, avoid obstacles, go forward)  | Done
Odometry integration                                                | Done
CMUcam3 mounting                                                    | Done
I2C-to-RS232 converter development                                  | Done
I2C-to-RS232 converter manufacturing                                | Done
Conical mirror manufacturing                                        | Done
Lateral IR rangefinder mounting                                     | Done
I2C-to-RS232 firmware coding                                        | Done
CMUcam3 color extraction algorithm implementation                   | Done
CMUcam3 spot center-of-mass calculation algorithm development       | Done
IR rangefinder Java driver implementation                           | Done
Local map development                                               | Done
CMUcam3 conical image unwrapping                                    | In progress
Map representation                                                  | In progress
Probabilistic Monte-Carlo localization                              |
Path planning                                                       |

4. Gallery

Heffalump: comes to a whistle

Heffalump v0: Lego default sensors

Heffalump v1: CMUcam3 mounted, bulb mirror


Heffalump v2: I2C-to-RS232 converter mounted, IR rangefinders mounted, conical mirror mounted


Omnidirectional camera shot: can't see anything


Local map tracking: the local map is a small occupancy grid (11 x 11 cells, 15 cm each) implemented to compensate for the missing side and back rangefinders. It maps sensed obstacles and then tracks them according to odometry information (rotations introduce uncertainty). All local behaviors interact with it instead of polling the sensors directly. The local map introduces an intermediate "model", which is not ideal, but this model is very simple and "vanishes" with time. In the picture: a sequence of Heffalump-centered local map snapshots; the x axis shows the robot's heading, and on the first snapshot the white arrow shows the approximate trajectory of the robot's motion. Heffalump sees two obstacles, passes them with translation and rotation, one obstacle vanishes, and eventually it senses two new obstacles.
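
As a sketch of the tracking step, the grid below is kept robot-centered: after every odometry update each new cell looks up the old cell it came from under the inverse motion, and a decay factor makes stale evidence "vanish" with time. The grid and cell sizes follow the description above; the decay constant and all names are assumptions.

```java
/** Robot-centered local occupancy grid (sketch).  After each odometry step the
 *  grid is re-sampled under the inverse motion, so obstacles stay fixed in the
 *  world while the robot stays at the grid center.  Sizes follow the text
 *  above; the decay factor is an assumption. */
final class LocalMap {
    private static final int    SIZE   = 11;     // 11 x 11 cells
    private static final double CELL_M = 0.15;   // 15 cm per cell
    private static final double DECAY  = 0.9;    // evidence "vanishes" with time

    private double[][] grid = new double[SIZE][SIZE];   // occupancy in [0, 1]

    /** Marks a sensed obstacle at (dx, dy) metres in the robot frame. */
    void addObstacle(double dx, double dy) {
        int ci = SIZE / 2 + (int) Math.round(dx / CELL_M);
        int cj = SIZE / 2 + (int) Math.round(dy / CELL_M);
        if (ci >= 0 && ci < SIZE && cj >= 0 && cj < SIZE) grid[ci][cj] = 1.0;
    }

    /** Shifts and rotates the whole grid by the robot's motion (ds, dtheta). */
    void track(double dsM, double dthetaRad) {
        double[][] next = new double[SIZE][SIZE];
        double c = Math.cos(dthetaRad), s = Math.sin(dthetaRad);
        for (int i = 0; i < SIZE; i++) {
            for (int j = 0; j < SIZE; j++) {
                // position of this new cell, expressed in the old robot frame
                double x = (i - SIZE / 2) * CELL_M;
                double y = (j - SIZE / 2) * CELL_M;
                double ox = c * x - s * y + dsM;   // robot moved forward by ds
                double oy = s * x + c * y;
                int oi = SIZE / 2 + (int) Math.round(ox / CELL_M);
                int oj = SIZE / 2 + (int) Math.round(oy / CELL_M);
                if (oi >= 0 && oi < SIZE && oj >= 0 && oj < SIZE) {
                    next[i][j] = grid[oi][oj] * DECAY;   // motion adds uncertainty
                }
            }
        }
        grid = next;
    }

    double occupancy(int i, int j) { return grid[i][j]; }
}
```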

5. Bibliography

  1. Thrun, Burgard, Fox. Probabilistic robotics.
  2. R. Arkin. Behavior-Based Robotics.
  3. Siegwart, Nourbakhsh. Autonomous Mobile Robots.
  4. M.S.Schut. Simulation of Collective Intelligence.
  5. E Kendall. Multiagent system design based on object oriented patterns.
  6. Brooks. A Robust Layered Control System for a Mobile Robot.
  7. Jonathan H. Connell. SSS: A Hybrid Architecture Applied to Robot Navigation.
  8. R. Arkin. AuRA: Principles and Practice in Review.
  9. Julio K. Rosenblatt. DAMN: A Distributed architecture for mobile navigation.
  10. Erann Gat. On Three-Layer Architectures.
  11. Matarić. Situated Robotics.
  12. Dieter Fox. Monte Carlo Localization: Efficient Position Estimation for Mobile Robots.
  13. Mataric. Integration of Representation Into Goal-Driven Behavior-Based Robots.
  14. Pirjanian. Multiple Objective Behavior-Based Control.
  15. Coombs, Murphy. Driving autonomously offroad up to 35km/h.
  16. Guo, Qu, Wang. A new performance-based motion planner for nonholonomic mobile robots.
  17. Wang, Qu, Guo, Yang. A Reduced-Order Analytical Solution to Mobile Robot Trajectory Generation in the Presence of Moving Obstacles.
  18. Hentschel, Wulf, Wagner. A hybrid feedback controller for car-like robots.
  19. Egerstedt, Hu, Stotsky. Control of a car-like robot using a dynamic model.
  20. Crowley. Mathematical foundation of navigation and perception for an autonomous mobile robot.
  21. Das, Fierro. Real-Time Vision-Based Control of a Nonholonomic Mobile Robot.
  22. Lefebvre, Lamiraux, Pradalier. Obstacle Avoidance for Car-like Robots, Integration And Experiments on Two Robots.
  23. Yang, Gu, Mita, Hu. Nonlinear tracking control of a car-like mobile robot via dynamic feedback linearization.
  24. Proetzsch, Luksch, Berns. Fault-Tolerant Behavior-Based Motion Control for offroad Navigation.
  25. Singh, Simmons, Smith. Recent Progress in Local and Global Traversability for Planetary Rovers.
  26. Chen, Yasunobu. Soft target based obstacle avoidance for car-like mobile robot in dynamic environment.
  27. Usher, Ridley, Corke. Visual Servoing of a Car-like Vehicle – An Application of Omnidirectional Vision.
  28. Iagnemma, Golda, Spenko. Experimental Study of High-Speed Rough Terrain Mobile Robot Models for Reactive Behaviors.
  29. Chen, Quinn. A crash avoidance system based upon the cockroach escape response circuit.
  30. Latombe. A Fast Path Planner for a Car-like Indoor Mobile Robot.
  31. Minguez, Montano. Reactive Navigation for Non-holonomic Robots using the Ego-Kinematic Space.
  32. Borenstein, Koren. The vector field histogram - fast obstacle avoidance for mobile robots.
  33. Ulrich, Borenstein. VFH+: Reliable Obstacle Avoidance for Fast Mobile Robots.
  34. Simmons. The Curvature-Velocity Method for Local Obstacle Avoidance. CVM.
  35. Ko, Simmons. The Lane-Curvature Method for Local Obstacle Avoidance.
  36. Fox, Burgard, Thrun. The dynamic window approach to collision avoidance.
  37. Brock, Khatib. High-speed navigation using the global dynamic window approach.
  38. Khatib, Chatila. An Extended Potential Field Approach For Mobile Robot Sensor-Based Motions.
  39. Jensfelt, Austin. Feature Based Condensation for Mobile Robot Localization.
  40. Krose, Vlassis, Bunschoten. Omnidirectional Vision for Appearance-based Robot Localization.
  41. Lenser, Veloso. Sensor Resetting Localization for Poorly Modeled Mobile Robots.
  42. Schulz, Fox. Bayesian Color Estimation for Adaptive Vision-based Robot Localization.

6. Authors

  • Matvey Stepanov (matthewz@yandex.ru)
  • Alexander Piscariov

