Pi Robot Meets ROS
The primary goal of ROS (pronounced "Ross") is to provide a unified and open source programming framework for controlling robots in a variety of real world and simulated environments. ROS is certainly not the first such effort; in fact, a Wikipedia search for "robot software" turns up over 25 such projects. But Willow Garage is no ordinary group of programmers banging out free software. Propelled by some serious funding, strong technical expertise, and a well planned series of development milestones, Willow Garage has ignited a kind of programming fervor among roboticists, with hundreds of user-contributed ROS packages created in just a few short years. ROS now includes software for tasks ranging from localization and navigation (SLAM) to stereo object recognition, action planning, motion control for multi-jointed arms, machine learning, and even playing billiards.

In the meantime, Willow Garage has also designed and manufactured a $400,000 robot called the PR2 to help showcase its operating system. The PR2 uses the latest in robot hardware, including two stereo cameras, a pair of laser scanners, arms with 7 degrees of freedom, and an omni-directional drive system. Only a lucky few will be able to run ROS directly on a PR2, including 11 research institutions that were awarded free PR2s as part of a beta-test contest. However, you do not need a PR2 to leverage the power of ROS: packages have already been created to support lower-cost platforms and components including the iRobot Create, WowWee Rovio, Lego NXT, Phidgets, ArbotiX and Robotis Dynamixels.

The guiding principle underlying ROS is "don't reinvent the wheel". Many thousands of very smart people have been programming robots for nearly five decades; why not bring all that brain power together in one place? Fortunately, the Web is the perfect medium for sharing code. All it needed was a little boost from a well organized company, an emphasis on open source, and some good media coverage.
Many universities now openly share their ROS code repositories, and with free cloud space available through services such as Google Code, anyone can share their own ROS creations easily and at no cost.

Is ROS for Me?

ROS has made its biggest impact in university robotics labs. For this reason, the project might appear beyond the reach of the typical hobby roboticist. To be sure, the ROS learning curve is a little steep, and a complete beginner might find it somewhat intimidating. For one thing, the full ROS framework currently runs only under Linux, though parts of it can be made to work under Windows or other operating systems. So you'll need a Linux machine (preferably Ubuntu) or a Linux installation alongside your existing OS. (Ubuntu can even be installed under Windows as just another application, without the need for repartitioning.) Once you have Linux up and running, you can turn to the ROS Wiki for installation instructions and a set of excellent beginner tutorials
for both Python and C++. In the end, the time put into learning ROS amounts to a tiny fraction of what it would take to develop all the code from scratch. For example, suppose you want to program your robot to take an object from your location in the dining room to somebody else in the bedroom, all while avoiding obstacles. You can certainly solve this problem yourself using visual landmarks or a laser scanner. But whole books have been written on the subject (called SLAM) by some of the best roboticists in the world, so why not capitalize on their efforts? ROS allows you to do precisely this by plugging your robot directly into the pre-existing navigation stack, a set of routines designed to map laser scan data and odometry information from your robot into motion commands and automatic localization. All you need to provide are the dimensions of your robot plus the sensor and encoder data, and away you go. The hundreds or thousands of hours you just saved by not reinventing the wheel can now be spent on something else, such as having your robot tidy your room or fold the laundry.

ROS Highlights

Over the next series of articles, we will have a lot more to say about the things Pi (or your own robot) can do with ROS. For now, let us take a brief look at some of the highlights.

3D Robot Visualizer and Simulator

As the image above illustrates, ROS makes it possible to display a 3D model of your robot using a visualization tool called RViz. Creating the model involves editing an XML file (written in URDF, the Unified Robot Description Format) specifying the dimensions of your robot and how the joints are offset from one another. You can also specify physical parameters of the various parts, such as mass and inertia, in case you want to do some accurate physical simulations. (The actual physics is simulated in another set of tools called Player/Stage/Gazebo, which pre-date ROS but can be used with it.) Once you have your URDF file, you can bring it up in RViz and move your virtual robot around with the mouse, as shown in the following video:
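To give a taste of what editing a URDF file involves, here is a minimal sketch of one. The link and joint names, shapes, and dimensions are made up for illustration; they are not Pi Robot's actual model.

```xml
<?xml version="1.0"?>
<robot name="example_robot">
  <!-- The robot's base: a simple box 30 cm on a side -->
  <link name="base_link">
    <visual>
      <geometry>
        <box size="0.3 0.3 0.3"/>
      </geometry>
    </visual>
  </link>

  <!-- A small cylinder for the head pan servo -->
  <link name="head_pan_link">
    <visual>
      <geometry>
        <cylinder radius="0.05" length="0.1"/>
      </geometry>
    </visual>
  </link>

  <!-- The joint specifies how the head is offset from the base:
       30 cm up, rotating about the vertical axis -->
  <joint name="head_pan_joint" type="revolute">
    <parent link="base_link"/>
    <child link="head_pan_link"/>
    <origin xyz="0 0 0.3"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="1.0" velocity="1.0"/>
  </joint>
</robot>
```

Each joint names its parent and child links and gives the fixed offset between them; RViz (and the simulators) then only need the joint angles to pose the whole model.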
You
can also create poses for your virtual robot using a tool called
the joint state publisher
which includes a graphical slider control,
one slider for each joint. The pose illustrated in the first
image above was created this way. However, the real power (and
fun)
of RViz comes from being able to view a live representation of your
robot as it moves about in the world. This way your robot might
be
around a corner and out of sight, but you can still view its virtual
doppelganger in RViz (examples below).
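Conceptually, what RViz does with those slider values is simple: each joint angle is combined with the fixed offsets from the URDF to place every link in the chain. The following sketch chains planar rotations only, with made-up offsets, just to show the idea.

```python
import math

# A sketch of how joint angles pose a model: compose each parent pose
# with the joint's rotation and the child link's fixed offset from the
# URDF. Planar (x, y, heading) only; the names and offsets are made up.

def place_link(parent_pose, joint_angle, offset):
    """Compose a parent pose (x, y, heading) with a revolute joint
    turned by joint_angle and a child link offset (dx, dy) expressed
    in the rotated frame."""
    x, y, heading = parent_pose
    heading += joint_angle
    dx, dy = offset
    x += dx * math.cos(heading) - dy * math.sin(heading)
    y += dx * math.sin(heading) + dy * math.cos(heading)
    return (x, y, heading)

# Base at the origin, head pan joint turned 90 degrees to the left,
# camera mounted 0.1 m in front of the pan link:
base = (0.0, 0.0, 0.0)
head = place_link(base, math.pi / 2, (0.0, 0.0))
camera = place_link(head, 0.0, (0.1, 0.0))
print(camera)  # the camera ends up 0.1 m to the robot's left
```

Moving one slider changes one `joint_angle`, and every link downstream of that joint moves with it.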
Navigation and Obstacle Avoidance

As
mentioned in the introductory paragraphs, the ROS navigation system
enables a robot to move from point A to B without running into
things. To do true SLAM (simultaneous localization and mapping),
your robot will generally need a laser range finder or good stereo
vision (visual SLAM or VSLAM). However, basic obstacle avoidance
and
navigation by dead reckoning can be accomplished with an inexpensive
alternative dubbed the "Poor Man's Lidar" or PML. (Thanks to Bob
Mottram and Michael Ferguson--see the references below.) A PML
consists of a low cost IR sensor mounted on a panning servo that
continually sweeps the sensor through an arc in front of the robot. The
servo-plus-sensor can record 30 readings per 180-degree sweep which
takes 1 second in each direction. As a result, there is a bit of
a
lag between the motion of the robot and the updated range readings
indicated by the orange balls in the images above and the videos
below. By comparison, the lowest cost laser range finder (about $1300) takes
over
600 distance readings per 240-degree sweep and covers the entire arc
10 times per second (1/10th of a second per sweep). The
photos below show our PML setup:
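The trade-off between the two sensors can be made concrete with a quick calculation using the figures quoted above (30 readings per 180-degree sweep at 1 sweep per second for the PML, versus 600 readings per 240-degree sweep at 10 sweeps per second for the laser):

```python
# Back-of-the-envelope comparison of the PML and a low-cost laser
# scanner, using the figures from the text above.

def scanner_stats(readings_per_sweep, sweep_degrees, sweeps_per_second):
    """Return (angular resolution in degrees per reading,
    total readings per second)."""
    resolution = sweep_degrees / readings_per_sweep
    rate = readings_per_sweep * sweeps_per_second
    return resolution, rate

# PML: 30 readings per 180-degree sweep, one sweep per second
pml_res, pml_rate = scanner_stats(30, 180, 1)

# Laser: 600 readings per 240-degree sweep, 10 sweeps per second
laser_res, laser_rate = scanner_stats(600, 240, 10)

print(f"PML:   {pml_res:.1f} deg/reading, {pml_rate} readings/s")
print(f"Laser: {laser_res:.1f} deg/reading, {laser_rate} readings/s")
```

So the laser delivers roughly 200 times as many readings per second at 15 times the angular resolution, which is exactly why the PML exhibits the lag described above.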
Our PML has a rather limited range compared to a laser scanner. The Sharp GP2Y0A02YK can measure distances between 20 cm (0.2 meters) and 1.5 meters, whereas a typical laser scanner has a range between 2 cm (0.02 meters) and 5.5 meters. Longer range IR sensors are available, such as the GP2Y0A700K0F, which measures between 1.0 and 5.5 meters, but this means the robot would be blind to objects within 1 meter. We could also use a pair of short and long range sensors, but for this article we'll use just a single sensor. Despite its limitations,
we can still use the PML with ROS to move Pi Robot around a room
while avoiding obstacles. In the video
below, the
grid squares in RViz are 0.25 meters on a side (about 10 inches) and
you can see that the user
clicks on a location with the mouse to tell Pi where to go next.
(The
green arrow indicates the orientation we want the robot to have once it
gets
to the target location.) ROS then figures out the best path to
follow to get there (indicated by the faint green line) and
incorporates the
data points from the PML scanner to avoid obstacles along the
way. When an obstacle is detected, a red square is placed on the
map to
indicate that the cell is occupied. The grey squares add a little
insurance based on the dimensions of Pi's base just to make sure we
don't get caught on an edge. Be sure to view the video in full
screen mode by clicking on the little box with 4 arrows at the bottom
right corner of the video.
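The obstacle-marking step just described can be sketched in a few lines of Python. The sensor limits (0.2 to 1.5 meters) and the 0.25 meter grid cells come from the text above; the function names are made up for illustration and this is not the actual navigation stack code.

```python
import math

# A sketch of turning PML readings into occupied grid cells: readings
# outside the GP2Y0A02YK's usable range (0.2 m to 1.5 m) are discarded,
# the rest become (x, y) points in the robot frame, and each point
# marks a 0.25 m grid cell as occupied (the red squares in RViz).

MIN_RANGE = 0.2    # meters
MAX_RANGE = 1.5    # meters
CELL_SIZE = 0.25   # meters per grid cell, as in the RViz view

def reading_to_point(angle_deg, range_m):
    """Convert one (servo angle, IR range) reading to an (x, y) point
    in the robot frame, or None if the reading is out of range."""
    if not (MIN_RANGE <= range_m <= MAX_RANGE):
        return None
    a = math.radians(angle_deg)   # 0 degrees = straight ahead
    return (range_m * math.cos(a), range_m * math.sin(a))

def occupied_cells(readings):
    """Map one sweep of readings to the set of occupied grid cells."""
    cells = set()
    for angle_deg, range_m in readings:
        point = reading_to_point(angle_deg, range_m)
        if point is not None:
            cells.add((int(math.floor(point[0] / CELL_SIZE)),
                       int(math.floor(point[1] / CELL_SIZE))))
    return cells

# One obstacle dead ahead at 1 meter; the 2 m reading is discarded.
sweep = [(0, 1.0), (6, 2.0)]
print(occupied_cells(sweep))  # {(4, 0)}
```

The grey "insurance" squares are the same idea taken one step further: cells within the robot's own radius of an occupied cell are also treated as off-limits, so the planner never steers an edge of the base over an obstacle.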