Comp150-07: Intelligent Robotics
Lab 6: Localization
Due Tuesday, March 3, in class

The objective of this lab is to use sonar sensors to localize the robot in the workspace as precisely as possible.

We are building up to Project 1 where we build a robot capable of navigating an unknown maze and locating a goal landmark. To achieve that capability, the robot must be able to map its environment, localize itself in it, and plan a path from its location to the goal or a waypoint. In previous labs, we assumed a correct map and initial localization, and we worked on path planning. Now we relax our assumptions to allow realistic, imperfect sensors, and we learn to localize using that noisy sensory information.

The following two lab parts are not equal in scope and difficulty. Moreover, results from part 1 (Sensor characteristics) will be used directly in part 2 (Localization). So you should plan to work on both of them as a team. This lab is time-consuming. Start early!

Part 1: Sensor characteristics

To understand what information the sensors provide to the robot, we need to characterize the sonar sensors: learn what values to expect from them. These bench tests should be performed for each sensor you plan on using in localization.

Test #1: Distance

Fix the sonar at a point similar in height to how you will mount it on a robot. Place obstacles at 10cm intervals (from 10cm to 1m) on a straight line perpendicular to the sonar, facing the sensor. Take a few readings at each point and record them.
  1. What is the variance of your sensor?
  2. Do you have any problems with outliers or noisy readings at any of the points? Document them.
  3. How will you deal with outlier and noisy readings issues when using this sensor for localization?
  4. Extra credit: What happens if you face a wide obstacle at a non-90-degree angle to the sonar? Give a suggestion for interpreting sonar readings when you don't know which way an obstacle may be facing.
Hand in: A graph of your readings, with means and standard error bars; answers to the questions above.
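For the statistics in your hand-in, the readings you record can be analyzed off-robot. A minimal sketch in Python (the readings below are hypothetical, not measured values):

```python
import math

def summarize(readings):
    """Return (mean, sample variance, standard error) for one test point."""
    n = len(readings)
    mean = sum(readings) / n
    # Sample variance: divide by n - 1, since the mean is estimated too.
    var = sum((r - mean) ** 2 for r in readings) / (n - 1)
    stderr = math.sqrt(var / n)  # length of one error bar
    return mean, var, stderr

# Hypothetical readings (cm) with the obstacle placed at 30 cm:
mean, var, stderr = summarize([30, 31, 30, 29, 30])
```

The standard error, not the variance, is the usual choice for the error bars on the graph; report the variance separately in your answer to question 1.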

Test #2: Angle of perception

You are given an angular grid. Pick a couple of distances between 10cm and 1m and position your obstacle orthogonally (facing towards the sensor) at 15-degree intervals over the range of +/- 45 degrees. Take a few readings at each point and record them.
  1. What is the error and variance of your sensor at different angles? What does the sensor's perception cone look like?
  2. Do you have any outliers or particularly noisy readings? Document them.
  3. Extra credit: What happens if you face a wide obstacle at a non-90-degree angle to the sonar? Is there a qualitative difference from the behavior you observed in question 4 of Test #1? Why?
Hand in: A graph of your readings and errors with statistics (mean, variance); answers to the questions above.

Part 2: Localization

To use sensory information for localization, we need to translate sensor readings into relative pose estimates. For this part of the lab, you should mount at least one sonar sensor on your robot. Hint: two sonars at a known distance and orientation from each other will work much better.

During demo on Tuesday, we will place your sonar-equipped robot at an undisclosed position (within sonar range) and orientation somewhere in a 250cm x 125cm workspace, in which you know there are 3 "point-like" obstacles (they will be small-footprint but tall enough to be sensed) at positions (50,50), (100,50), and (50,100). The robot's objective is to determine its initial pose in the workspace.
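One way to turn range readings into a position estimate is to intersect the range circles around two known landmarks. A sketch under that assumption (this is one possible approach, not a prescribed method; the ranges below are hypothetical):

```python
import math

def trilaterate(p1, r1, p2, r2):
    """Candidate robot positions given ranges r1, r2 to landmarks p1, p2.

    Returns the two intersection points of the range circles, or None
    if the ranges are inconsistent with the landmark spacing.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2):
        return None  # circles don't intersect: bad readings
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    # The two candidates lie on either side of the landmark baseline.
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

# Hypothetical ranges to the obstacles at (50,50) and (100,50):
candidates = trilaterate((50, 50), 30.0, (100, 50), 40.0)
```

Two landmarks leave a two-way ambiguity; a range to the third obstacle (or a reading from a second robot position) picks out the true pose, and the bearing to a landmark then fixes the orientation.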

Rules:

Grading for this part of the lab will be a function of pose estimation accuracy. Strive for an error of less than 5cm or 5 degrees.

Hand in: Your report should contain

Hand in your hardcopy report (one document for both parts of the lab) on Tuesday, March 3, in class.

Tips and things to think about

1. Using millimeters as your unit of choice may help with the limitations of NXC's integer-only arithmetic. We are using cm instead of inches because the NXT ultrasound sensors return a distance estimate in cm.
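The benefit of millimeters can be desk-checked: with integer-only arithmetic, truncation costs less when the unit is smaller. A quick Python sketch of the same integer math (using //, which truncates like NXC's integer division):

```python
# NXC has only integer arithmetic, so intermediate results truncate.
# Working in millimeters keeps one extra digit of precision "for free".

cm = 37        # raw sonar reading, in cm
mm = cm * 10   # convert once on input, then stay in mm

# Averaging two readings: in cm the half-centimeter is lost to
# truncation; in mm it survives.
avg_cm = (37 + 38) // 2    # truncates to 37
avg_mm = (370 + 380) // 2  # 375, i.e. 37.5 cm
```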

2. In the first part of the lab, you discover the properties of your sonar sensors' response over a range of distances and angles. This response should be fairly consistent most of the time, but it's likely to contain regions of weirdness and/or occasional outliers.

Depending on the level of noise you measure, you may not need to perform any filtering on your sonar signal (a low-pass filter is already implemented inside the LEGO sensor). If you do encounter noise, you can implement a simple linear filter such as a moving window average: you keep a small buffer of recent values and treat their average as your sensor value instead of the raw value.
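A moving window average can be sketched with a small ring buffer (shown here in Python for clarity; on the NXT you would keep an integer array and an index in NXC):

```python
from collections import deque

class MovingAverage:
    """Moving-window (boxcar) filter over the last n readings."""

    def __init__(self, n=5):
        self.buf = deque(maxlen=n)  # oldest reading falls out automatically

    def update(self, reading):
        """Add a raw reading; return the current window average."""
        self.buf.append(reading)
        return sum(self.buf) / len(self.buf)

f = MovingAverage(3)
for r in (30, 30, 90, 30):  # 90 is a one-off spike
    smoothed = f.update(r)
```

Note the trade-off: a wider window smooths more noise but makes the sensor respond more slowly to real changes, which matters when the robot is moving.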

3. Your localization algorithm will need to disregard the outliers (if any) while making use of good information. You can tell outliers apart by noticing that they are significantly different from the mean of nearby/recent values. Since you know the variance of your sensor from part 1 of this lab, you can choose to discard any value that differs from the mean by more than 2σ or 3σ (σ² = variance). However, be sure to make a distinction between outliers and a legitimate change in sensor readings (e.g., because the obstacle face has ended and there is nothing in front of the sensor anymore).
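One way to make that distinction is to drop a deviating value only if the following reading does not confirm it. A sketch of this idea (one heuristic among several; the readings and σ below are hypothetical):

```python
def filter_outliers(readings, sigma, k=3):
    """Drop isolated spikes more than k*sigma from the previous kept value.

    A deviating value that is confirmed by the next reading is kept:
    that is a legitimate change (e.g. the obstacle face has ended),
    not an outlier.
    """
    out = []
    last = readings[0]
    for i, r in enumerate(readings):
        if abs(r - last) <= k * sigma:
            last = r
            out.append(r)  # consistent with recent values
        elif i + 1 < len(readings) and abs(readings[i + 1] - r) <= k * sigma:
            last = r
            out.append(r)  # confirmed by the next reading: real change
        # otherwise: isolated spike, discard it

    return out

# sigma = 1 cm from the bench tests; 90 is a spike, the jump to 60 is real
filtered = filter_outliers([30, 31, 90, 30, 60, 61, 60], sigma=1.0)
```

The cost of confirmation is one reading of lag whenever the scene genuinely changes, which is usually acceptable at sonar sampling rates.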

4. You will most probably need to use active sensing to localize, since you only have 2 sonars. Active sensing means using robot motion and odometry information to get distance estimates from 2 or more robot positions and orientations, then computing the original pose from this information.
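As one illustration of active sensing: if the robot ranges an obstacle, drives straight ahead a known odometry distance, and ranges it again, the law of cosines recovers the obstacle's bearing. A sketch with hypothetical numbers:

```python
import math

def bearing_from_motion(r1, r2, d):
    """Bearing (radians) from the robot's heading to an obstacle.

    The robot measures range r1, drives straight ahead a distance d
    (known from odometry), and measures range r2.  The law of cosines
    in the triangle (start, end, obstacle) gives the angle at the
    starting position:  r2^2 = r1^2 + d^2 - 2*r1*d*cos(alpha).
    The sign (left/right of the heading) remains ambiguous with a
    single forward-facing sonar.
    """
    cos_a = (r1**2 + d**2 - r2**2) / (2 * r1 * d)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against noise

# Hypothetical: obstacle at 50 cm; after driving 30 cm it reads 40 cm.
alpha = bearing_from_motion(50.0, 40.0, 30.0)
```

With bearings and ranges to two or three of the known obstacles collected this way, the robot's pose in workspace coordinates follows from the landmark positions given above.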


Paulina Varshavskaya, paulina [at] cs.tufts.edu