Links, Coordinate Systems and LinkTree

Link Concept

In the NxLib, coordinate systems are implicitly defined by the concept of a Link. A Link is a Transformation between two coordinate systems, like any other transformation in the NxLib, with the difference that the names of the two associated coordinate systems are known to the NxLib.
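
As a minimal C++ sketch, a camera's link can be inspected through its tree node (the serial "123" is hypothetical; depending on your SDK version the camera may need to be opened first):

    #include "nxLib.h"

    #include <iostream>

    int main()
    {
        nxLibInitialize(true);

        // A camera's Link node is a Transformation (Rotation/Translation
        // subnodes) plus the name of the target coordinate system.
        NxLibItem camera = NxLibItem()[itmCameras]["123"]; // hypothetical serial
        NxLibItem link = camera[itmLink];

        // The Target subnode is what distinguishes a Link from a plain
        // Transformation: it names the coordinate system the link points to.
        std::cout << "Link target: " << link[itmTarget].asString() << "\n";

        nxLibFinalize();
    }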

Coordinate Systems

Coordinate systems are defined either by a camera link or by user defined links. Each camera can store one Link into a target coordinate system. The target coordinate system can either be the serial number of another camera or a user defined coordinate system, which you can define in the Links node by providing the names of two coordinate systems and a Transformation between them.
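
For illustration, a user defined link could be created like the following sketch (assuming the Links layout in which each child node is named after the originating coordinate system and holds a Target name plus the link's Transformation; all names here are hypothetical, translations are in millimeters):

    // Define a coordinate system "Conveyor" sitting 500 mm above "Workspace".
    NxLibItem links = NxLibItem()[itmLinks];
    links["Conveyor"][itmTarget] = "Workspace";
    links["Conveyor"][itmTranslation][0] = 0;
    links["Conveyor"][itmTranslation][1] = 0;
    links["Conveyor"][itmTranslation][2] = 500;
    links["Conveyor"][itmRotation][itmAngle] = 0; // identity rotation
    links["Conveyor"][itmRotation][itmAxis][0] = 0;
    links["Conveyor"][itmRotation][itmAxis][1] = 0;
    links["Conveyor"][itmRotation][itmAxis][2] = 1;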

By default, some of the NxLib’s commands use the following names for the world coordinate system (see below). Note that you can always choose the name of these commands’ target coordinate system freely, as sketched after the list.

  • “Workspace” if you performed a workspace calibration and did not change the default Target parameter.

  • “Workspace” or “Hand” if you performed a hand-eye calibration and did not change the default Target parameter.
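
For example, a workspace calibration that writes a custom world name could look like this sketch (assuming an open camera "456" that has already observed and collected a calibration pattern; the Link flag of StoreCalibration is an assumption, check the command reference):

    // Use "MyWorld" instead of the default target name "Workspace".
    NxLibCommand calibrate(cmdCalibrateWorkspace);
    calibrate.parameters()[itmCameras] = "456";    // hypothetical serial
    calibrate.parameters()[itmTarget] = "MyWorld"; // custom world name
    calibrate.execute();

    // Persist the resulting camera link.
    NxLibCommand store(cmdStoreCalibration);
    store.parameters()[itmCameras] = "456";
    store.parameters()[itmLink] = true; // assumed flag for storing the link
    store.execute();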

Link Tree

All links, whether stored in a camera or defined in the user defined Links node, are internally combined into a single link tree. The following example shows how the NxLib would assemble its LinkTree from two camera links and five user defined links representing a robot model. The first camera “123” is mounted on the robot and moves around with it, hence it has a static link to the robot hand. The second camera “456” is mounted somewhere in the world and has a workspace calibration with target “Workspace”.

    123 -> Hand -> Joint3 -> Joint2 -> Joint1 -> RobotOrigin -> Workspace
    456 -> Workspace

(Camera links: “123” and “456”; user defined robot model links: Hand, Joint3, Joint2, Joint1, RobotOrigin; root: “Workspace”.)


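A sketch of how this example could be entered into the NxLib tree (same assumed Links layout as above; the joints' Rotation/Translation values are omitted and would come from the robot model):

    // Robot model of the figure as user defined links.
    NxLibItem links = NxLibItem()[itmLinks];
    links["Hand"][itmTarget] = "Joint3";
    links["Joint3"][itmTarget] = "Joint2";
    links["Joint2"][itmTarget] = "Joint1";
    links["Joint1"][itmTarget] = "RobotOrigin";
    links["RobotOrigin"][itmTarget] = "Workspace";

    // The camera links live in the cameras themselves:
    //   camera "123": Link/Target = "Hand"      (statically mounted on the hand)
    //   camera "456": Link/Target = "Workspace" (workspace calibration)
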
If your link tree contains loops, the links that are part of a loop are ignored; cameras whose link chain ends in a loop therefore use their own camera coordinate system as the world coordinate system. User defined links that refer to a link that is part of a loop are not valid.

World Coordinate System

The root of the link tree is called the world coordinate system. If your link tree consists of several subtrees, all of their root nodes are assumed to be the world coordinate system and are internally linked by an identity transformation. The world coordinate system is always the target system of at least one link, and its name can either be a proper name or the empty string. A named world coordinate system is created by a link whose target system is neither a camera serial nor defined in the user defined Links node; the “Workspace” node in the above graph is an example of a named world coordinate system. A nameless world coordinate system is created by a link with an empty target system.
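
As an illustration (hypothetical serials; in practice a camera's link is usually written by the calibration commands rather than assigned by hand):

    // "Workspace" is neither a camera serial nor a child of the Links node,
    // so this link creates a named world coordinate system:
    NxLibItem()[itmCameras]["456"][itmLink][itmTarget] = "Workspace";

    // An empty target creates a nameless world coordinate system:
    NxLibItem()[itmCameras]["789"][itmLink][itmTarget] = "";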

Whenever a camera computes 3D data, the resulting coordinates are in the world coordinate system. This is achieved by resolving the link chain from the camera to the root of the link tree, chaining the transformations along the path; the chain stops as soon as a world coordinate system is reached. The result of the resolution is a Transformation from the camera to the world coordinate system, which is then applied to the computed 3D data of the camera. If the camera has no link, the 3D data remains in the camera coordinate system (which is then at the same time the world coordinate system).
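
For the example above, resolving camera “123” composes the transformations along its path to the root (a sketch, writing each link as a homogeneous transformation T(A -> B) and · for composition):

    p_Workspace = T(RobotOrigin -> Workspace) · T(Joint1 -> RobotOrigin)
                · T(Joint2 -> Joint1) · T(Joint3 -> Joint2)
                · T(Hand -> Joint3) · T(123 -> Hand) · p_123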

Applying this to the above example, “Workspace” is the world coordinate system and both cameras “123” and “456” compute their 3D data in the coordinate system “Workspace”. If the robot model had not been specified and “123” were still linked to “Hand”, its 3D data would be in the “Hand” coordinate system. Note that the link from “RobotOrigin” to “Workspace” has to be specified, otherwise “RobotOrigin” would be considered the world coordinate system for “123”.
