

Object Recognition Algorithm

Assumptions

  • Chairs are marked with the appropriate color tag (the color the robot associates with a chair)
  • The wander module has been loaded and invoked


Algorithm Description

This module employs two of the sensory devices mounted on the robot. The first is the camera, which is used for color recognition. We marked the chairs with pink paper and trained the ACTS software to respond to this color. Using the Saphira interface to ACTS, we query the trained color channel for recognized objects (blobs). ACTS returns each blob's location as XY coordinates within the 160x120 frame captured by the camera. To translate these coordinates into something meaningful to the robot, we use the following formula:

WIDTH = 160
HEIGHT = 120

xRel = (blob.getXCG() - WIDTH/2.0) / WIDTH
yRel = (blob.getYCG() - HEIGHT/2.0) / HEIGHT

This formula calculates the relative position of the object on an XY plane with respect to the origin (0, 0), which corresponds to the center of the camera frame.
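For example, a blob whose X center of gravity (getXCG()) is 120 gives xRel = (120 - 80) / 160 = 0.25, meaning the chair sits to the right of the image center, so by the rule below the robot turns to the right.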

The resulting relative position provides the information needed to adjust the heading of the robot. If xRel is less than -0.1, we need to turn to the left; likewise, if xRel is greater than 0.1, we need to turn to the right.
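A minimal C++ sketch of this step, assuming a connected ArRobot and an ArACTS_1_2 instance on which openPort() has already been called; the channel number CHAIR_CHANNEL and the 5-degree turn increment are placeholders, not the values used in our module:

#include "Aria.h"
#include "ArACTS.h"

const int CHAIR_CHANNEL = 1;   // hypothetical: the channel trained on pink
const int WIDTH = 160;
const int HEIGHT = 120;

// Adjust the robot's heading toward the largest chair blob, if any.
void adjustHeading(ArRobot &robot, ArACTS_1_2 &acts)
{
    ArACTSBlob blob;

    // No blob on the chair channel means no chair in view.
    if (acts.getNumBlobs(CHAIR_CHANNEL) == 0 ||
        !acts.getBlob(CHAIR_CHANNEL, 1, &blob))
        return;

    // Relative horizontal position of the blob, in [-0.5, 0.5].
    double xRel = (blob.getXCG() - WIDTH / 2.0) / WIDTH;

    if (xRel < -0.1)
        robot.setDeltaHeading(5);    // chair is left of center, turn left
    else if (xRel > 0.1)
        robot.setDeltaHeading(-5);   // chair is right of center, turn right
}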

The second sensory device used is the laser, which determines how close the robot is to a chair as it approaches. We use Saphira and ARIA to extract the laser readings and determine when to halt the robot in front of the chair. The getClosetBox() function allows us to consider only laser readings in a specific area around the robot; in this instance, we are only concerned with the area directly in front of the robot. Consider the following diagram:

The green field is the laser range. In this case, we have drawn an imaginary box to determine whether or not the robot is in front of a chair. When the distance to a chair is less than or equal to MIN_DIST, we halt the robot.
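A short sketch of that distance check, assuming the laser is exposed through ARIA's ArSick class and the box query it inherits from ArRangeDevice; the box dimensions and MIN_DIST value below are illustrative placeholders, not the ones used by our getClosetBox() helper:

#include "Aria.h"

const double MIN_DIST = 500;   // placeholder stopping distance, in mm

// True when the closest laser reading inside a box directly in front
// of the robot is within MIN_DIST.
bool chairInFront(ArSick &laser)
{
    // Box in robot-local coordinates (mm): from the robot out to 1 m
    // ahead, 250 mm to either side.
    double dist = laser.currentReadingBox(0, -250, 1000, 250);
    return dist <= MIN_DIST;
}

When chairInFront() returns true, the robot can be halted, for example with ArRobot::stop().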

In release 2 we added localization and used it to obtain the global coordinates of detected chairs. To do this we use the ArTransform class, which is initialized with the current position of the robot. Once initialized, it can transform local robot coordinates into global map coordinates.
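A brief sketch of that step, assuming the robot's pose has already been corrected by the localization module; the chair's local offsets (forward and left, in mm) are placeholders for the values we derive from the laser and camera:

#include "Aria.h"

// Convert a chair position given in robot-local coordinates into
// global map coordinates using the robot's current pose.
ArPose chairGlobalPose(ArRobot &robot, double localX, double localY)
{
    // The transform is initialized with the robot's localized pose and
    // maps robot-local points into the global map frame.
    ArTransform toGlobal(robot.getPose());
    return toGlobal.doTransform(ArPose(localX, localY));
}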