Deferred 3D Processing¶
It is possible to defer the 3D processing of the images. This allows time-critical applications to capture stereo image pairs quickly and perform the computationally demanding stereo matching later. The procedure is as follows:
Image Acquisition¶
Use the Capture or Trigger and Retrieve commands to acquire an image pair.
Retrieve the binary image data from the Images/Raw nodes and store them in your application's memory or in a file.
Retrieve the calibration data from the Calibration node and store it with the image pair in memory or in a file.
Start over to capture the next image pair until your entire sequence is recorded.
Stereo Processing¶
There are two ways of processing the saved images: either you load the images back from memory, or you create a file camera from the saved images. The latter is preferred.
- From a file camera:
Save the images and calibration using the SaveFileCamera command.
Create a file camera from the saved files with the CreateFileCamera command.
Use the file camera like a hardware camera.
- From memory:
Load a stored image pair from your application’s memory into the Images/Raw nodes of the camera.
Restore the matching calibration data into the Calibration node.
Compute disparity and point maps using ComputeDisparityMap and ComputePointMap.
Start over to continue with the next stored image pair.
Note
Although the camera calibration is fixed, it is necessary to store the calibration data with every image in order to obtain accurate reconstructions. This data is needed to correctly compensate for thermal deformation of the camera. These dynamic calibration effects are currently tracked in the Dynamic node of the camera's calibration parameters.
Note
By storing the entire Calibration subtree you are guaranteed to obtain exactly the same reconstructions offline as online, even in future software revisions. Currently it is sufficient to save the entire Calibration subtree once and to store only the Dynamic parameter subtree for every image pair. Before processing the image pairs you can then restore the Calibration subtree (or leave it as it is, if you are using the same camera as for capturing), and set the Dynamic parameters for each image pair before doing the stereo matching.
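The bookkeeping this note suggests can be sketched with a small record type, one full Calibration snapshot per sequence plus one Dynamic snapshot per image pair. The RecordedSequence type and the JSON strings are hypothetical illustrations; only the node names Calibration and Dynamic come from the tree layout described above.

```cpp
#include <string>
#include <vector>

// One recorded sequence: the full Calibration subtree saved once, plus the
// small Dynamic subtree saved once per image pair.
struct RecordedSequence {
    std::string calibrationJson;           // entire Calibration subtree (stored once)
    std::vector<std::string> dynamicJson;  // Dynamic subtree, one entry per image pair
};

// Record one image pair: only its Dynamic JSON needs to be appended.
inline void recordPair(RecordedSequence& seq, const std::string& dynamic) {
    seq.dynamicJson.push_back(dynamic);
}
```

Before matching pair i offline, you would restore calibrationJson into the camera's Calibration node once (unless you are still using the same camera), then write dynamicJson[i] into Calibration/Dynamic, for example via NxLibItem::setJson.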
Code Examples¶
The C++ sample shows the preferred way: saving the images as a file camera. The HALCON script shows how to set the Images/Raw nodes from images stored in memory.
std::string folderOrZipFile = "save/images/here"; // Where to store the images; can also be a ZIP file.

NxLibItem root; // References the tree's root item at path "/"
// Replace "1234" with your camera's serial number:
NxLibItem camera = root[itmCameras]["1234"]; // References the camera's item at path "/Cameras/1234"

// (1) Capture 10 images and save them together with their calibration data as a file camera.
int numberOfImagesToCapture = 10;
for (int i = 0; i < numberOfImagesToCapture; i++) {
    // Execute the Capture command with default parameters.
    NxLibCommand capture(cmdCapture);
    capture.parameters()[itmSerialNumber] = "1234";
    capture.execute();

    // Save the captured image pair and its calibration data to the file camera path.
    NxLibCommand save(cmdSaveFileCamera);
    save.parameters()[itmPath] = folderOrZipFile;
    save.execute();
}

// (2) Create a file camera from the saved images. You can now work with the file camera like with a
// hardware camera.
NxLibCommand create(cmdCreateFileCamera);
create.parameters()[itmSerialNumber] = "File1234";
create.parameters()[itmPath] = folderOrZipFile;
create.execute();
// ...
* References the tree's root item at path "/"
open_framegrabber ('Ensenso-NxLib', 0, 0, 0, 0, 0, 0, 'default', 0, 'Raw', -1, 'false', 'Item', '/', 0, 0, RootHandle)

* Open the camera and reference the camera's item at path "/Cameras/BySerialNo/1234"
* Replace "1234" with your camera's serial number
Serial := '1234'
open_framegrabber ('Ensenso-NxLib', 0, 0, 0, 0, 0, 0, 'default', 0, 'Raw', 'auto_grab_data=0', 'false', 'Stereo', Serial, 0, 0, CameraHandle)
set_framegrabber_param (CameraHandle, 'grab_data_items', ['Images/Raw/Left', 'Images/Raw/Right'])

* (1) Capture 10 images and store them together with their calibration data into tuples
gen_empty_obj (LeftImages)
gen_empty_obj (RightImages)
Calibrations := []
NumberOfImagesToCapture := 10
for Index := 1 to NumberOfImagesToCapture by 1
    * Execute the Capture command with default parameters
    set_framegrabber_param (RootHandle, 'do_execute', 'Capture')
    * Retrieve the raw images
    grab_data (Images, Regions, Contours, CameraHandle, Data)
    select_obj (Images, Left, 1)
    select_obj (Images, Right, 2)
    * Copy the calibration data in JSON format into a string variable
    get_framegrabber_param (CameraHandle, 'Calibration', Calibration)
    concat_obj (LeftImages, Left, LeftImages)
    concat_obj (RightImages, Right, RightImages)
    Calibrations := [Calibrations, Calibration]
endfor

* (2) Load all images and their calibration data back into the tree items and generate a point cloud for each image pair
set_framegrabber_param (CameraHandle, 'grab_data_items', 'Images/PointMap')
count_obj (LeftImages, NumImages)
for Index := 1 to NumImages by 1
    * Instead of calling Capture we can now simply write the raw image data into the tree nodes for the left and right image
    select_obj (LeftImages, Left, Index)
    select_obj (RightImages, Right, Index)
    nxLibSetBinary (Left, CameraHandle, 'Images/Raw/Left')
    nxLibSetBinary (Right, CameraHandle, 'Images/Raw/Right')
    * Restore the calibration data from the saved JSON representation
    * (iconic objects are 1-indexed, HALCON tuples are 0-indexed)
    set_framegrabber_param (CameraHandle, 'Calibration', ['apply', Calibrations[Index - 1]])
    * Now we can compute the disparity map and point map as if the images had just been captured
    set_framegrabber_param (RootHandle, 'do_execute', 'ComputeDisparityMap')
    set_framegrabber_param (RootHandle, 'do_execute', 'ComputePointMap')
    grab_data (PointMap, Region, Contours, CameraHandle, Data)
    * You can now process the point cloud data
    * ...
endfor