After finishing the simple battery-powered car, we moved on to more sophisticated machines: small, programmable Lego robot cars (named, for inexplicable reasons, after Disney princesses). We're beginning to explore the exciting world of programming, using PicoBlocks, a simple visual language developed for elementary schoolers by our very own Engineering Evangelist Professor Berg. It's an exciting program filled with color and interlocking parts, with blocks helpfully shaped so that only the appropriate kind of data fits into each command (for example, it is impossible to plug a non-Boolean into a command that requires a Boolean input). The first day, after a brief introduction to feedback-and-control ideas (and awe-inspiring videos of the Google Car), we played with various kinds of sensors (light and touch) and two kinds of emitters (sound and light). We learned the basic commands to change light colors and intensities (and play simple tunes), and quickly discovered that PicoBlocks commands are pleasingly intuitive.
One of the light sensors:
The top dot in the orange face is a photosensor that quantifies the amount of light hitting it; the bottom dot is an LED, which emits a small amount of red light that reflects off nearby surfaces and back into the photosensor above.
At the end of the class, we tried to program the robot to change its motion based on input from the touch sensor. This proved more difficult than expected because of contact bounce in the switch: the mechanical contacts rattle briefly with each press, so the robot often mistook one press of the switch for several, and various "wait until" commands were necessary to ensure the correct motion.
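Our actual program was built from PicoBlocks blocks, but the "wait until" debouncing idea can be sketched in Python. This is only a sketch under assumptions: `touch_pressed` is a hypothetical function standing in for the robot's touch-sensor block, and the timing constants are illustrative, not taken from our program.

```python
import time

def wait_for_clean_press(touch_pressed, settle_s=0.05):
    """Wait for one debounced press-and-release of a touch switch.

    A mechanical switch "bounces": its contacts rattle open and
    closed for a few milliseconds, so a single press can read as
    many presses. Waiting for the press and then for the release
    (the "wait until" trick) counts it only once.
    """
    # Wait until the switch reads pressed...
    while not touch_pressed():
        time.sleep(0.01)
    # ...then wait until it is released again, so the same press
    # is not counted twice on the next call.
    while touch_pressed():
        time.sleep(0.01)
    time.sleep(settle_s)  # let any trailing bounce die out
```

Called in a loop, this turns a noisy stream of switch readings into one event per physical press.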
Our various programming attempts:

We had the most success with the stack in the bottom left corner, which would cause the robot to stop when the touch sensor was pressed.
Last class, we began programming our robot to move based on variable inputs from other sensors. This time we used an ultrasonic sensor to give the robot information about distance (it works similarly to echolocation--the ultrasonic pulses are reflected back from the nearest object to give a distance reading, which can then be fed into a less-than Boolean command to trigger an action). Our assignment: program the robot to follow a piece of Delrin plastic when it was moved away, and back up when it was brought toward the sensor.
We quickly tackled this challenge using the less-than Boolean input: the robot moved backwards if the ultrasonic value was less than 20 (i.e., an object was about a foot away), and forwards otherwise. After this success, we tried to make various robots follow each other in a sort of conga line; the robots with smaller following distances (smaller less-than values in the Boolean input), like ours, enjoyed greater success, though the robots still moved with imperfect straightness and the train quickly fell apart.
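The follow-the-Delrin logic is simple enough to sketch in Python. This is a hypothetical rendering of the PicoBlocks stack, not the program itself: the threshold of 20 is the value from our program, but the function name and the string return values are made up for illustration.

```python
THRESHOLD = 20  # ultrasonic sensor units; roughly a foot in our setup

def follow_step(distance):
    """Choose one motion step from an ultrasonic distance reading.

    If the object is closer than the threshold, back away;
    otherwise drive forward toward it. Run repeatedly in a loop,
    this makes the robot hold a roughly constant distance behind
    whatever is in front of it -- a piece of plastic, or the
    robot ahead of it in a conga line.
    """
    if distance < THRESHOLD:
        return "backward"   # object too close: back up
    return "forward"        # object far enough: follow it
```

A smaller `THRESHOLD` gives a tighter following distance, which is why the close-following robots kept the conga line together longer.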
Next, we were challenged to use the light sensor to program the robot to follow a twisting white line on a brown floor. My partner and I initially thought to change the robot's direction quickly if it started to miss the line (i.e., if the photosensor read less than 630, which meant the robot was partially off the white and onto the brown, the robot would rapidly bear to the left and right in an alternating, wiggly fashion). Although this was effective on a straight line, we soon realized that this design would impede our robot's ability to turn. We eventually decided to throw the robot into a spin until it found the line again, then move straight forward. We enjoyed some limited success with this method, but the robot would usually turn 180 degrees and then start to move backwards along the line until it moved off again, which proved rather ineffective. After some more brainstorming, we decided the ideal movement would be a 90-degree spin left, alternating with a 180-degree spin right, until the robot again found the line. After many annoyances and wishing we had a countdown-timer input, we decided to try something a little different: bearing right while on the line, and spinning left the instant the robot no longer sensed the line. This was particularly effective both on the straightaway parts of the track and on the turns--and at the end of the track, the robot turned around and trundled back along the line again!
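Our final bear-right/spin-left strategy can also be sketched in Python. Again this is an illustrative translation of the PicoBlocks program, not the program itself: the threshold of 630 is the photosensor value from our tests, while the function name and return strings are hypothetical.

```python
LINE_THRESHOLD = 630  # photosensor reading: above = white line, below = brown floor

def line_follow_step(light_reading):
    """One step of our final line-following strategy.

    Bear right while the sensor sees the white line; spin left
    the instant it falls off. Bearing right constantly nudges the
    robot off the right edge of the line, and spinning left brings
    it back on, so the two motions alternate rapidly and the robot
    wiggles its way along the line -- through turns, and even back
    along the line after reaching the end of the track.
    """
    if light_reading >= LINE_THRESHOLD:
        return "bear_right"   # on the white line
    return "spin_left"        # off the line: spin until it is found again
```

The key design choice is that the robot only ever leaves the line on one known side (the right), so a single recovery motion (spinning left) always finds it again.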
Our code:
The top left collection of blocks is the program, with "slowspin" and "bearR" defined with the corresponding yellow-rhombus-green-block strings.
Excited by our success, we look forward to the next class working with robots!