!!Machine Vision Localization System
Project Files
!!Overview
This is a machine vision system that tracks multiple targets and sends their positions out via the serial port. It was developed specifically for the Swarm Robotics project, but can be adapted for other uses. It is based upon the Indoor Localization System, with several enhancements and bug fixes. Refer to Indoor Localization System for a basic overview of the setup of the system and the workings of the pattern identification algorithm.

!!Major Enhancements/Changes
!!Major Bug Fixes

!!Target Patterns
!!OpenCV Documentation

!!Operation
!!!Setting up the Cameras
The cameras are set up and operate as described in the Indoor Localization System entry, with one change: targets at the border of the camera frame are now discarded. This change prevents the misidentification of patterns when one or more dots in the pattern fall off the screen, but it also means that there must be enough overlap between cameras that when a target is in the dead-zone of one camera, it is picked up by another. The height of the cameras determines how small your patterns can be before becoming indistinguishable. Changing the height also changes the focus, so be sure to manually adjust the focus of the cameras whenever the height is changed.

!!How to Use The System
!!!Camera Setup
The four cameras used were standard Logitech QuickCam Communicate Deluxes. For future use, the videoInput library is broadly compatible and works with most capture devices. As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees in the horizontal plane and 25 degrees in the vertical plane. Before attaching the cameras, several considerations must be made.

1. Choose a desired ALL WHITE area to cover. ***IMPORTANT: If you want to cover a continuous region, the images as seen by the cameras must overlap enough that a target is always fully visible in at least one camera's frame.*** Keep in mind that there is a trade-off between area and resolution. In addition, the size of the patterns will have to be increased above the threshold of noise.
2. Ensure that the cameras are all facing the same direction. As viewed from above, the "top" of the cameras should all face the same direction (N/S/E/W). For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).

3. Try to mount the cameras as close to normal (perpendicular to the floor) as possible. Although the camera calibration should determine the correct pose information, keeping the camera lenses as close to normal as possible reduces the amount of noise at the edges of the images.
!!!Computer Setup
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library). The system should run in Windows XP or Vista. To set up the computer to develop and run the software, the required libraries must be installed.

1. Download and install the Logitech QuickCam Communicate Deluxe webcam drivers, available at \\mcc.\dfs\me-labs\lims\Swarms\LogitechDrivers.zip or http://www./index.cfm/435/3057&cl=us,en. If you download them from the Logitech website, you need BOTH versions of the software (qc1150 and lws110). These are available under downloads as "Logitech webcam software with Vid" and "Quickcam v. 11.5". Install qc1150 first, and restart as requested. Install lws110 (note that you do not need the Vid software), and restart as requested. Device Manager -> Imaging devices should display a list of 4 cameras. For each camera: right click, choose Update driver software, Browse my computer, Let me pick from a list, and select QuickCam Communicate Deluxe Version: 11.5.0.1145. Hit Next to install the drivers. Do this for all 4 cameras before restarting. You now have the new software (for controlling cameras individually) and the old drivers (for working with the vision system).
2. Install Microsoft Visual Studio Professional (the Express Edition is incompatible with some of the code).
3. Download and install the Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&displaylang=en
4. Download and install the Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&displaylang=en
5. Download and install the Intel OpenCV Library - http:///projects/opencvlibrary/
6. Download and install the videoInput Library - http:///school/spring05/videoInput/
7.
Download the source code for the vision system here: Vision System Localization Project. The project will probably not compile right after you open it; you will have to add the correct directories to the environment. In Visual C++, go to Tools > Options > Projects and Solutions > VC++ Directories and select Include files in the drop-down menu. Add the following (the directory paths on your computer may differ depending on your system and the versions you installed):
Then select Library files from the drop-down menu and add:
!!!Starting the Program
!!Camera Calibration
!!!Theory
The purpose of camera calibration is to map from pixel coordinates in the camera's image to coordinates on the floor. We assume that the camera can be modeled with the pinhole model; all rays of light from the floor pass through a point and are projected onto the camera's image. Essentially we are mapping points from one plane (the image) to another (the floor) through a point (the pinhole). Homography is the mathematical term for this type of mapping when the camera has little distortion.

Let (x,y) be the coordinates of a point on the floor plane, (u,v) be a point on the camera image plane, and λ be a scalar. We use homogeneous coordinates to represent the transformation from the image plane to the world plane as:

λ [x; y; 1] = H [u; v; 1],   where H = [h11 h12 h13; h21 h22 h23; h31 h32 1].

Suppose we have a group of points (x_i, y_i), i = 1, ..., n on the floor, and their corresponding image coordinates are (u_i, v_i). Using (u_i, v_i) and (x_i, y_i), we need to determine the h_ij. We follow the procedures in Section 5.2 in Image Formation and Camera Calibration and obtain, for each point pair, the pair of linear equations:

x_i = h11 u_i + h12 v_i + h13 - h31 u_i x_i - h32 v_i x_i
y_i = h21 u_i + h22 v_i + h23 - h31 u_i y_i - h32 v_i y_i

These can be stacked into the matrix equation M h = C, where h = [h11, h12, h13, h21, h22, h23, h31, h32]^T. Since there are 8 parameters (h_ij) to obtain, we need at least four points on the floor, i.e. n >= 4. Once we obtain M and C from the point coordinates in the image and on the floor, it is easy to compute h (e.g. in matlab, use h = M\C). Putting the obtained h into the matrix form, we obtain the homography H from (u,v) to (x,y). Then given any image coordinate (u,v), we can compute its corresponding floor coordinate (x,y) according to the homography relationship above. Do not forget to divide by the scaling factor λ.

Note that the above 8-parameter homography calibration works well when the distortion of the camera is not noticeable. If the camera distortion is significant, advanced calibration techniques, such as the matlab calibration toolbox, must be used. For our webcams, we tried the matlab calibration toolbox and found that the distortion coefficients are fairly small. Therefore, we chose the homography method to calibrate our cameras.
This makes the calibration matrix and the computation a lot simpler than the full matlab calibration, yet still yields sufficiently good answers.

!!!Calibration Procedures
The calibration routine for the vision localization system uses 9 equally-spaced points per camera to find a mapping from the image coordinates to the real-world coordinates. The fields of view of the four cameras must overlap along the central horizontal and vertical rows of dots, which form a '+' shape; the center dot should be seen by all four cameras. The size of the dots is not significant, since the system uses the center of mass of each dot to determine the dot's "center".
To calibrate:
The program should now be running. Your calibration information will be recorded in Quadrant0.txt, Quadrant1.txt, Quadrant2.txt, Quadrant3.txt, and calibration_dot_spacing.txt. If you choose not to recalibrate your cameras next time, the data in these files will be used to generate the calibration matrices.
!!!Calibration Results
Using the 8-parameter homography calibration, the maximum calibration error is around 5 mm, which is around 2 pixels.
We also attach below the matlab calibration results for the intrinsic parameters of one of the cameras (notice the fairly small distortion coefficients):

% Intrinsic Camera Parameters
%
% This script file can be directly executed under Matlab to recover the camera intrinsic and extrinsic parameters.
% IMPORTANT: This file contains neither the structure of the calibration objects nor the image coordinates of the calibration points.
% All those complementary variables are saved in the complete matlab data file Calib_Results.mat.
% For more information regarding the calibration model visit http://www.vision./bouguetj/calib_doc/

%-- Focal length:
fc = [ 835.070614323296470 ; 776.358109604829560 ];

%-- Principal point:
cc = [ 448.722941671177130 ; 333.157042750110920 ];

%-- Skew coefficient:
alpha_c = 0.000000000000000;

%-- Distortion coefficients:
kc = [ 0.000000000000000 ; -0.030646660052499 ; 0.000000000000000 ; 0.000000000000000 ; 0.000000000000000 ];

!!!Pausing
Hit 'p' to pause the program, and hit 'p' again to resume.

!!!Using the Command Console
To enter the command mode, hit 'c'. While in command mode, the main loop is not running; the command mode must be exited to resume. Type 'exit' to exit the command mode and resume the main loop.

Note: There is no guarantee that all of the robots will receive a command, due to packet loss. It is advisable to send a command multiple times to make sure all the robots receive it. You can use the arrow keys to recall previously sent messages.

Enter commands in the following syntax:

<command name> <target ID> <parameter 1> <parameter 2> ... <parameter N>

To send a command to all robots, use ID 0. To send a command to a specific robot, use that robot's ID. You can also enter multiple commands at a time:

sleep 1 sleep 2 wake 4 goto 0 100 -100 deadb 0 150

!!!Commands
!!!Exiting the Program
Press ESC to quit the program.

!!Additional Considerations
The Machine Vision Localization System is a robust system for tracking the locations of e-pucks with rotationally invariant patterns. However, several problems remain as the transition is made from the paper "dice" patterns to the LED light boards.
!!Updates and Notes on Changes Made
Since the first implementation of the Machine Vision Localization System, several updates have been made to transition the system from the paper "dice" patterns to the LED light boards.