Texturing Point Clouds¶
You can use an additional monocular camera to provide colors for the 3D data of your stereo camera. To do this, you have to calibrate the monocular camera to the view of the stereo camera first.
Calibrating the Monocular Camera with the Calibration Wizard¶
Open NxView.
Select both your stereo camera and the monocular camera that you want to calibrate to it.
Click “Calibrate…” and follow the instructions on the screen. This will calibrate the monocular camera’s internal geometry and set its Link node to the relative position between the two cameras.
Calibrating the Monocular Camera Manually¶
Collect calibration patterns with the CollectPattern command. It is important that you capture relatively large calibration patterns in the field of view of the monocular camera (for calibrating its internal geometry) as well as patterns that are visible in both cameras.
Use the Calibrate command to calibrate the link between the two cameras. Simply specify the serial numbers of the two cameras in the Cameras parameter. The command will automatically detect that you want to calibrate the link between a monocular and a stereo camera.
Use the StoreCalibration command to persistently store the calibration in the camera’s EEPROM.
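The three steps above can be sketched as follows. This is a minimal outline of the command sequence, not a complete program: the serial numbers are placeholders, and a real application would open the cameras first and collect patterns from several different poses before calibrating.

```cpp
// Sketch of the manual calibration workflow ("STEREO SERIAL" and
// "MONO SERIAL" are placeholder serial numbers).

// Collect a calibration pattern that is visible in both cameras.
NxLibCommand collectPattern(cmdCollectPattern);
collectPattern.parameters()[itmCameras][0] = "STEREO SERIAL";
collectPattern.parameters()[itmCameras][1] = "MONO SERIAL";
collectPattern.execute();
// ... repeat for multiple pattern positions ...

// Calibrate the link between the stereo and the mono camera.
NxLibCommand calibrate(cmdCalibrate);
calibrate.parameters()[itmCameras][0] = "STEREO SERIAL";
calibrate.parameters()[itmCameras][1] = "MONO SERIAL";
calibrate.execute();

// Persist the result in the mono camera's EEPROM.
NxLibCommand storeCalibration(cmdStoreCalibration);
storeCalibration.parameters()[itmCameras][0] = "MONO SERIAL";
storeCalibration.execute();
```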
Combining Data from Stereo and Mono Camera¶
After you have successfully calibrated the monocular camera, you can open both cameras in NxView. The point cloud will automatically be textured with images from the monocular camera. This is achieved by executing the RenderView command with both cameras specified in the (Cameras) parameter.
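A minimal sketch of the corresponding call, assuming placeholder serial numbers for the two cameras:

```cpp
// Render a textured view using both cameras ("STEREO SERIAL" and
// "MONO SERIAL" are placeholders for your devices' serial numbers).
NxLibCommand renderView(cmdRenderView);
renderView.parameters()[itmCameras][0] = "STEREO SERIAL";
renderView.parameters()[itmCameras][1] = "MONO SERIAL";
renderView.execute();
```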
To combine the data of both cameras in your program you can:
Render a Mono Camera Image from the Point of View of the Stereo Camera¶
Use the ComputeTexture command and specify the mono camera in the (Texture) parameter and the stereo camera in the (Cameras) parameter. This will update the rectified texture images in the stereo camera.
NxLibCommand computeTexture(cmdComputeTexture);
computeTexture.parameters()[itmCameras] = "STEREO SERIAL";
computeTexture.parameters()[itmTexture] = "MONO SERIAL";
computeTexture.execute();
Render 3D Data from the Point of View of the Mono Camera¶
Use the RenderPointMap command and specify the mono camera in the (Camera) parameter.
NxLibCommand renderPointMap(cmdRenderPointMap);
renderPointMap.parameters()[itmCamera] = "MONO SERIAL";
renderPointMap.execute();
The resulting rendered point map and texture images will have aligned pixel positions which also coincide with the pixels of the mono camera’s rectified image.
To compute normals for the transformed point cloud use the (PointMap) parameter of the ComputeNormals command:
NxLibCommand computeNormals(cmdComputeNormals);
computeNormals.parameters()[itmPointMap] = renderPointMap.result()[itmImages][itmRenderPointMap].path;
computeNormals.execute();
Compute Texture Information for Arbitrary 3D Positions¶
For more advanced use cases, you can project arbitrary 3D positions (e.g. from a stereo camera's point map) into the (undistorted) mono camera image. To do this, take the 3D position (in world coordinates), apply the mono camera's pose transformation to move it into the mono camera's coordinate frame, and project it onto the image plane with the camera matrix.