

RAFS - Object Recognition Details

As defined in our Project Design Document, we will rely heavily on image processing techniques to determine the placement of chairs in EB2029. Using the ActivMedia Color Tracking System (ACTS), the robot will locate chairs as it roams the room. Color tracking, combined with the robot's sonar, will enable the robot to approach each chair and log its position.

Early Reading About Image Processing

We are fortunate to have ACTS provide a suite of tools that handles the bulk of the image processing details. We considered implementing our own code, but ACTS turned out to be exactly what we need: it has been tested, used in the field, and provides a clean, easy-to-use interface. As a result, our research will consist of reading the ACTS manual and completing the tutorials within.

Image Processing Research

Now that we have defined our image-processing environment, we need to fully understand the APIs that interact with it. Both Saphira and ARIA provide C++ class interfaces for collecting and analyzing the data gathered by ACTS. These classes define procedures and data types for extracting meaningful data from the camera.
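
As a first concrete step, here is a minimal sketch of reading blob data through ARIA's ArACTS_1_2 class. The color channel number (1) and the polling interval are placeholders for illustration, and ACTS must already be running and trained on a color.

    #include "Aria.h"

    // Minimal sketch: connect to the robot and a running ACTS server,
    // then poll for the largest blob on color channel 1. The channel
    // number and polling interval are placeholders for illustration.
    int main(int argc, char **argv)
    {
      Aria::init();
      ArSimpleConnector connector(&argc, argv);
      ArRobot robot;

      if (!connector.parseArgs() || !connector.connectRobot(&robot))
      {
        ArLog::log(ArLog::Terse, "Could not connect to the robot.");
        Aria::exit(1);
      }
      robot.runAsync(true);

      ArACTS_1_2 acts;
      acts.openPort(&robot);            // defaults to localhost:5001

      while (robot.isRunning())
      {
        acts.requestPacket();           // ask ACTS for the next frame's blobs
        ArUtil::sleep(100);
        if (!acts.receiveBlobInfo())
          continue;

        ArACTSBlob blob;
        if (acts.getNumBlobs(1) > 0 && acts.getBlob(1, 1, &blob))
          ArLog::log(ArLog::Normal, "Blob: area %d, center (%d, %d)",
                     blob.getArea(), blob.getXCG(), blob.getYCG());
      }

      Aria::exit(0);
      return 0;
    }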


Study Examples (ACTS)

Saphira and ARIA come packaged with several example programs that take advantage of the C++ class interfaces to ACTS. These examples give us an idea of how to use the classes and how to interpret the data returned by ACTS.

Sonar Research

Saphira and ARIA also have C++ class interfaces for collecting data from the robot's sonar. We will study a series of examples that highlight their basic functionality and usage.
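
As a sketch of the sonar side, the following polls ARIA's sonar range device; the 90-degree cone and the polling rate are arbitrary choices for illustration.

    #include "Aria.h"

    // Minimal sketch: attach ARIA's sonar range device and poll the
    // closest reading in a cone in front of the robot. The cone width
    // and polling rate are arbitrary choices for illustration.
    int main(int argc, char **argv)
    {
      Aria::init();
      ArSimpleConnector connector(&argc, argv);
      ArRobot robot;
      ArSonarDevice sonar;

      robot.addRangeDevice(&sonar);
      if (!connector.parseArgs() || !connector.connectRobot(&robot))
      {
        ArLog::log(ArLog::Terse, "Could not connect to the robot.");
        Aria::exit(1);
      }
      robot.runAsync(true);

      while (robot.isRunning())
      {
        double angle;
        double range = sonar.currentReadingPolar(-45, 45, &angle);
        ArLog::log(ArLog::Normal, "Closest reading: %.0f mm at %.0f deg",
                   range, angle);
        ArUtil::sleep(250);
      }

      Aria::exit(0);
      return 0;
    }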


Interpretation of Image Processing and Sonar Data

We must analyze these two sets of data and devise rules for robot action. When certain image-processing values are received, the robot must react appropriately; likewise, certain sonar readings will trigger defined reactions. For example, when an object is recognized by its color, the color recognition operation will stop and the robot will begin moving toward the recognized item.
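
As a sketch of one such rule (the color channel, blob-area threshold, range threshold, and speed below are invented placeholders, not measured values):

    #include "Aria.h"

    // Hypothetical reaction rule: once the blob on the trained color
    // channel is large enough to trust, drive toward it; once sonar
    // says the object is close, stop. All thresholds are invented
    // placeholders to be tuned in the lab.
    const int    kMinBlobArea   = 400;   // pixels
    const double kStopRange     = 500.0; // mm
    const double kApproachSpeed = 150.0; // mm/sec

    void reactToSensors(ArRobot &robot, ArACTS_1_2 &acts,
                        ArSonarDevice &sonar)
    {
      ArACTSBlob blob;
      bool seen = acts.getNumBlobs(1) > 0 &&
                  acts.getBlob(1, 1, &blob) &&
                  blob.getArea() >= kMinBlobArea;
      double range = sonar.currentReadingPolar(-30, 30);

      robot.lock();
      if (seen && range > kStopRange)
        robot.setVel(kApproachSpeed);   // object recognized: approach it
      else
        robot.setVel(0);                // nothing seen, or we have arrived
      robot.unlock();
    }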


Design Finite State Diagram

To better understand how our code modules will interact, we will devise a finite state diagram to model the behaviors of this release.


Fine Tuning of Data

Up until this point we will have been working primarily with test objects. To further enhance the release, we will alter our algorithms to accommodate chairs and other relevant objects.


Alter Heading of Robot

The camera has a limited field of view, so there will be times when the robot is not directly facing the recognized object. In this case, we need to alter the robot's heading so that it turns directly toward the object. Once aligned, the robot can confidently approach the recognized object.

  • To aid this process, we will review some example source code that involves a similar activity.
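
As a starting point, here is a sketch of the correction itself. The 160-pixel image width and the degrees-per-pixel gain are assumptions we will verify against the actual camera and ACTS settings.

    #include "Aria.h"

    // Sketch: rotate the robot so the recognized blob sits at the
    // horizontal center of the image. The 160-pixel image width and
    // the degrees-per-pixel gain are assumptions to verify in the lab.
    const double kImageCenterX    = 80.0;  // half of an assumed 160-pixel frame
    const double kDegreesPerPixel = 0.25;  // rough field-of-view / image width

    void turnTowardBlob(ArRobot &robot, ArACTSBlob &blob)
    {
      // A blob right of center needs a clockwise turn, which is a negative
      // delta heading in ARIA's counterclockwise-positive convention.
      double offset = blob.getXCG() - kImageCenterX;
      robot.lock();
      robot.setDeltaHeading(-offset * kDegreesPerPixel);
      robot.unlock();
    }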


Algorithm for Sonar and Image Processing Data

After successfully interpreting the data returned from ACTS and the sonar device, we will need to devise an algorithm that processes both streams and implements the functionality depicted in the finite state diagram.
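
A skeleton of such an algorithm, written as a straightforward state machine. The state names, color channel, and thresholds are our working assumptions rather than the final design, and we assume the function is called from a context that already holds the robot's lock (for example, a synchronized robot task).

    #include <cmath>
    #include "Aria.h"

    // Skeleton of the control algorithm as a finite state machine.
    // State names, the color channel, and all thresholds are working
    // assumptions; the real transitions come from the state diagram.
    // Assumes the caller already holds the robot's lock.
    enum State { SEARCHING, TURNING, APPROACHING, LOGGING };

    const int    kMinBlobArea = 400;   // pixels; placeholder
    const double kStopRange   = 500.0; // mm; placeholder
    const double kCenterX     = 80.0;  // assumed 160-pixel frame
    const double kCenterSlack = 10.0;  // pixels off-center we tolerate

    State step(State state, ArRobot &robot, ArACTS_1_2 &acts,
               ArSonarDevice &sonar)
    {
      ArACTSBlob blob;
      bool seen = acts.getNumBlobs(1) > 0 && acts.getBlob(1, 1, &blob) &&
                  blob.getArea() >= kMinBlobArea;
      double range = sonar.currentReadingPolar(-30, 30);

      switch (state)
      {
      case SEARCHING:            // rotate slowly until a blob appears
        robot.setRotVel(seen ? 0 : 10);
        return seen ? TURNING : SEARCHING;
      case TURNING:              // center the blob in the image
        if (!seen)
          return SEARCHING;
        if (std::fabs(blob.getXCG() - kCenterX) <= kCenterSlack)
          return APPROACHING;
        robot.setDeltaHeading((kCenterX - blob.getXCG()) * 0.25);
        return TURNING;
      case APPROACHING:          // drive until sonar says we are there
        if (range <= kStopRange)
        {
          robot.setVel(0);
          return LOGGING;
        }
        robot.setVel(150);
        return APPROACHING;
      case LOGGING:              // record the chair's position, then resume
        return SEARCHING;
      }
      return SEARCHING;
    }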


Implementation of Chair Class

This class will extend the SfArtifact class in Saphira. It will contain the coordinates of the chair along with other metadata. During the operation cycle of Release 1, we will assemble a std::list of recognized chair objects.
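
A sketch of what the class might look like. We are assuming Saphira's usual header name and that SfArtifact is default-constructible; the members below are illustrative rather than a settled design.

    #include <list>
    #include "saphira.h"

    // Sketch of the Chair artifact. We assume SfArtifact is
    // default-constructible; the members below are illustrative
    // meta-information, not a settled design.
    class Chair : public SfArtifact
    {
    public:
      Chair(double x, double y) : chairX(x), chairY(y), confidence(0.0) {}

      double chairX, chairY;   // world coordinates where the chair was logged
      double confidence;       // how sure we are this really was a chair
    };

    // Chairs recognized during the Release 1 operation cycle.
    std::list<Chair> recognizedChairs;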


Adjust Brightness and Contrast Levels for EB2029

The majority of our development will be done in EB1022 (The Mobile Robotics Laboratory). To tune our release for the target environment, we will spend some time adjusting the brightness and contrast levels for the camera in EB2029.