Comp150-07: Intelligent Robotics
Lab 3: Subsumption Architecture and Odometry
Due Tuesday, February 10, in class

In this lab, we continue programming mobile robots. The two contrasting objectives of this lab are to 1) design and implement a subsumption-style behavior-based controller, and 2) design and implement precise motor control. There are two equally weighted parts to this lab.

Part I: Subsumption architecture

The objective of this part of the lab is to make a vehicle that avoids obstacles while still doing what it was designed to do. You will augment your controller from either Part I or Part II of Lab 2 (whichever you prefer) with an extra layer of control responsible for obstacle avoidance, developed as a subsumption-style program.

The robot will start on its lower-layer (Lab 2 Part I or II) task while the lab assistant places an obstacle at a random location along the robot's path. The robot must take evasive action when close to the obstacle without touching it, and resume its original task when no obstacle is in close range. We may move the obstacle while the robot is running!

Grading points (best of 3 trials, if needed):
Robot takes evasive action - +10
Does not touch the obstacle - +5
Does not "jitter" from too fast behavior switching - +5
Continues (and completes) lower-level behavior correctly after obstacle avoidance - +10
Program has more than 1 useful behavior task - +5
Tasks use shared resources correctly - +5
Failure modes and redesign discussed - +10
Other behavior and design choices, good or bad, may be graded at the instructor's discretion.
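As a concrete starting point, here is a minimal subsumption-style skeleton in NXC. It is a sketch under assumptions, not a solution: motors on OUT_A and OUT_C, an ultrasonic sensor on port 4, and made-up powers, thresholds, and timings that you will need to change for your robot. The mutex is the kind of shared-resource handling the grading points refer to.

mutex motors;   // the drive motors are a shared resource

task cruise()   // lower layer: stands in for your Lab 2 task
{
  while (true) {
    Acquire(motors);
    OnFwd(OUT_AC, 60);
    Release(motors);
    Wait(50);
  }
}

task avoid()    // higher layer: subsumes cruise near an obstacle
{
  while (true) {
    if (SensorUS(IN_4) < 25) {   // obstacle closer than ~25 cm
      Acquire(motors);           // take the motors away from cruise
      OnRev(OUT_AC, 60);         // back off...
      Wait(500);
      OnFwd(OUT_A, 60);          // ...then pivot away from it
      OnRev(OUT_C, 60);
      Wait(400);
      Release(motors);           // hand control back to the lower layer
    }
    Wait(50);
  }
}

task main()
{
  SetSensorLowspeed(IN_4);       // ultrasonic sensor on port 4
  Precedes(cruise, avoid);       // start both behavior tasks
}

Holding the mutex for the entire evasive maneuver is what keeps the layers from interleaving and causing jitter. Note that a bare mutex gives no real priority ordering between tasks, so think about how your higher layers should preempt lower ones.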

Tips and things to think about

You may add a touch sensor to make sure evasive action is taken even if the robot comes into contact with the obstacle. You may add an ultrasound sensor for obstacle avoidance, but be careful where on the robot you place it: obstacles may be shorter than your robot. Light sensors can be used in ACTIVE mode to detect changes in reflected light (and thus the presence of an obstacle). Look in the NXC Tutorial and Programmer's Guide documents for ways to read and interpret light and touch sensors. Light threshold values will depend on ambient light conditions, so experiment with initializing thresholds by "guessing" them from sensor readings at startup.
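For example, here is a sketch of initializing a light threshold from the reading at startup. The port, the +8 offset, and the reaction are all assumptions to calibrate by experiment; in ACTIVE mode the sensor's own LED reflects off nearby surfaces, so the reading rises as an obstacle gets close.

int threshold;

task main()
{
  SetSensorLight(IN_3);          // LIGHT_ACTIVE type, percent mode
  Wait(100);                     // let the first reading settle
  threshold = Sensor(IN_3) + 8;  // "guess" a threshold above the open-floor reading
  while (true) {
    if (Sensor(IN_3) > threshold)
      PlayTone(440, 200);        // placeholder: react to the obstacle here
    Wait(50);
  }
}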

After your robot has taken evasive action, it may be facing away from the light source (or other desired direction). You will probably need to include some code (perhaps another behavior layer) for random wandering to give the robot a chance to find good light sensor readings again.
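One way to structure this, assuming the motor mutex from the skeleton above, is a third task that pivots a random amount and then drives forward briefly; the powers and durations below are guesses to tune, and you will still need to decide how this layer is arbitrated against the others.

task wander()
{
  while (true) {
    Acquire(motors);             // same motor mutex as the other layers
    OnFwd(OUT_A, 50);            // pivot in place...
    OnRev(OUT_C, 50);
    Wait(Random(800));           // ...for a random 0-800 ms
    OnFwd(OUT_AC, 50);           // then drive ahead briefly
    Wait(1000);
    Release(motors);
    Wait(2000);                  // give the other layers a chance to run
  }
}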

Part II: Odometry

The objective of this part of the lab is to design a robot capable of accurate locomotion. You may keep your wheeled robot from Part I, but a better chassis (body) design may result in more accurate locomotion.

The robot will need to drive in a 15"x15" square pattern, first forward, then in reverse, and return to its original position. The robot has 2 minutes to complete the motion.
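For scale, with the standard 56 mm NXT tires (an assumption: measure your own wheels) one 15" side is 381 mm, the wheel circumference is pi x 56 = 176 mm, so each side takes about 381/176 x 360 = 780 degrees of wheel rotation. Here is a sketch using the synchronized RotateMotorEx command; the 112 mm track width behind the 180-degree pivot is also an assumption, and a real robot will need calibrated constants.

// wheel degrees for one side and for a 90-degree in-place pivot
// (pivot = 90 x track / wheel diameter = 90 x 112 / 56 = 180)
#define SIDE_DEG 780
#define TURN_DEG 180

task main()
{
  for (int i = 0; i < 4; i++) {
    // both motors forward, synchronized, brake at the end
    RotateMotorEx(OUT_AC, 50, SIDE_DEG, 0, true, true);
    Wait(200);
    // pivot in place: turn ratio 100 spins the wheels in opposite directions
    RotateMotorEx(OUT_AC, 40, TURN_DEG, 100, true, true);
    Wait(200);
  }
  // reverse square: undo each step in reverse order
  for (int i = 0; i < 4; i++) {
    RotateMotorEx(OUT_AC, 40, TURN_DEG, -100, true, true);
    Wait(200);
    RotateMotorEx(OUT_AC, 50, -SIDE_DEG, 0, true, true);
    Wait(200);
  }
}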

Grading points (best of 2 trials):
Demo: out of 30
Report: out of 20
Your total grade for this part is out of 50. Demo points will be deducted in proportion to the position error your robot accrues after the forward square and after the reverse square (no interfering with the robot between these two tests is allowed).
Report points are given for a thorough discussion of the design process, early failures and redesign, as well as dead reckoning and odometry results.

Tips and things to think about

Dead reckoning means no feedback: do not use PID control or any other form of feedback when collecting the dead-reckoning results you report.
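In NXC terms (assuming motors on OUT_A and OUT_C), that means timed, unregulated commands like OnFwd rather than the regulated OnFwdReg/OnFwdSync or RotateMotor family. You can still read the rotation counters afterwards to report what the wheels actually did. A minimal sketch:

task main()
{
  ResetRotationCount(OUT_AC);  // zero the odometers first
  OnFwd(OUT_AC, 60);           // raw power: no speed regulation
  Wait(2000);                  // distance comes from timing/calibration
  Off(OUT_AC);
  NumOut(0, LCD_LINE1, MotorRotationCount(OUT_A));  // log the actual
  NumOut(0, LCD_LINE2, MotorRotationCount(OUT_C));  // wheel rotations
  Wait(5000);                  // keep the numbers on screen
}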

Chassis (drive) type will influence the precision of your robot; make it part of your design.

Read section VIII "More about motors" of the NXC Tutorial by Daniele Benedettelli and section 3.3 "Output Module" of the NXC Programmer's Guide for information on advanced NXC motor commands.

You will need to provide P, I and D gains to the NXT PID controller. Choose them either by trial and error or by approximately following the Ziegler-Nichols method (you will need a way to measure the period of the system's oscillations).
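For reference, classic Ziegler-Nichols tuning sets Kp = 0.6 Ku, Ki = 2 Kp / Tu, and Kd = Kp Tu / 8, where Ku is the smallest proportional gain that produces steady oscillations and Tu is their period. Below is a minimal sketch of handing gains to the firmware regulator through the output module fields of section 3.3. The values shown are the commonly cited firmware defaults (96, 32, 32), and the port choice is an assumption; treat all of them as starting points to tune.

task main()
{
  SetOutput(OUT_AC,
            RegModeField,     OUT_REGMODE_SPEED,   // regulate speed
            RegPValueField,   96,                  // P gain (firmware default)
            RegIValueField,   32,                  // I gain
            RegDValueField,   32,                  // D gain
            PowerField,       50,
            OutputModeField,  OUT_MODE_MOTORON | OUT_MODE_REGULATED,
            RunStateField,    OUT_RUNSTATE_RUNNING,
            UpdateFlagsField, UF_UPDATE_MODE | UF_UPDATE_SPEED | UF_UPDATE_PID_VALUES);
  Wait(3000);
  Off(OUT_AC);
}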

Hand in


Paulina Varshavskaya, paulina [at] cs.tufts.edu