Vision Parameter Adjustment Guide for Planar Target Objects
title: Vision Parameter Adjustment Guide for Planar Target Objects
createTime: 2026/01/26 15:37:35
permalink: /en/PickWiz/V1.8.2/bs617fuu/
This chapter explains how to adjust vision parameters for real-world scenarios involving ordered loading and unloading, random picking, and positioning assembly of planar target objects.
Getting Started:
Background Introduction


Build a Project
(1) Create a new project for ordered loading and unloading, random picking, positioning assembly, or positioning assembly (matching only) of planar target objects. The project name and project path can be customized, but the project name cannot contain Chinese characters.
Target Object type: planar target object (not circular, cylindrical, or quadrilateral, and with only small differences between the front and back sides)



(2) Configure the camera and robot
(3) Add a Target Object

- Target Object Information
The Target Object name can be customized. The Target Object type defaults to standard Target Object and cannot be changed. The Target Object ID can also be customized; it is used to automatically switch Target Objects during robot picking.
Point Cloud file: the Target Object Point Cloud. Create the Target Object Point Cloud in Point Cloud Template Creation.
Fine matching Point Cloud Template: used for fine matching
Camera parameters: no need to fill in

- Model Information
Vision model: The 2D recognition solution applied to planar target objects is CAD-based synthetic data training (One-click Connection), and the 3D matching solution is 3D Registration. The vision models for different planar target object applications need to be obtained through One-click Connection training.



Mesh file: usually the Target Object CAD. To eliminate some noise, normalize the mesh file; the mesh file can also be normalized in Point Cloud Template Creation.
Target Object attributes: elongated, symmetrical, highly reflective, Low solidity
Incoming material form: for a custom incoming material form, enter the form; for tightly fitted material, enter the range of the number of Target Objects in each row and column
Operating environment: enter the environment file. In One-click Connection, the environment used for data generation will be automatically replaced with the entered environment to improve recognition performance
Target Object texture: enter the Target Object texture. In One-click Connection, the entered Target Object texture will be used for data augmentation during model training to improve recognition performance
Mixed random Scene data: after enabling, when training a model with One-click Connection, synthetic data for both random Scenes and ordered Scenes will be generated simultaneously for model training to improve recognition performance
Maximum number of model recognitions: default 20; adjust according to Scene requirements
- Pick Point: Set the Pick Point according to the Target Object

Absolute coordinate system: uses the initial point as the origin; the initial point comes with the Target Object Point Cloud and CAD.
Pick Point coordinate system (offset): uses the current Pick Point as the origin.
(4) Add the end effector, hand-eye calibration, and ROI
(5) Optional features: instance optimization, collision detection, collision detection (new version), visual classification, front/back recognition (via Point Cloud Template)
Instance optimization: optimize model-generated instances and process instance masks.
Collision detection (new version): The collision detection function is used to detect collisions between the end effector and the container, filtering out Picking Poses that may collide. Collision Detection User Guide
Visual classification: used to identify features such as different textures and different orientations of the same Target Object. Visual Classification User Guide
Front/back recognition (via Point Cloud Template): Point Cloud Templates of the front and back sides of the Target Object can be imported to match the front or back side of the picked Target Object, and Pick Points can be configured separately for the front and back sides of the Target Object. Front/Back Recognition (via Point Cloud Template) User Guide
(6) Test data (historical data is provided for subsequent practice; the 2D images and 3D Point Clouds in the historical data foreground\input folder can be used instead of camera image capture when configuring the ROI)
Ordered loading and unloading data for planar target objects:
Point Cloud file:
Mesh file:
Vision model:
End effector:
Historical data:

Vision Parameters
- 2D recognition: identify and segment instances from actual Scenes
Preprocessing: process 2D images before instance segmentation (commonly used: fill holes in depth maps & edge enhancement & extract top-layer texture & remove image Background outside roi3d)
Instance segmentation: segment instances (scaling ratio & lower Confidence Threshold & auto enhancement); unchecking Return mask speeds up processing
Point Cloud generation: generate instance point clouds from either the segmented or the filtered instance masks or bounding boxes
Instance filtering: filter segmented instances
Instance sorting: sort instances
- 3D computation: calculate the pose of instances in the camera coordinate system and generate Pick Points
Preprocessing: preprocess 3D Point Clouds before calculating Pick Points
Pose estimation: calculate the pose of instances in the camera coordinate system (coarse matching, fine matching) and generate Pick Points
- Pick Point processing: filter, adjust, and sort Pick Points
Pick Point filtering: filter Pick Points
Pick Point adjustment: adjust Pick Points
Pick Point sorting: sort Pick Points
1. 2D Recognition
1.1 Preprocessing
The preprocessing for 2D recognition processes 2D images before instance segmentation

1.1.1 Bilateral Filtering

- Function
Image smoothing based on bilateral filtering
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Maximum depth difference | Maximum depth difference for bilateral filtering | 0.03 | [0.01, 1] |
| Filter kernel size | Convolution kernel size for bilateral filtering | 7 | [1, 3000] |
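As an illustration of how these two parameters interact, the following is a minimal, naive bilateral filter for a depth map (the product's exact implementation is not documented; this sketch maps "Filter kernel size" to the neighborhood size and "Maximum depth difference" to the range sigma, so depth jumps much larger than it are treated as edges and preserved):

```python
import numpy as np

def bilateral_filter_depth(depth, kernel_size=7, max_depth_diff=0.03, sigma_space=2.0):
    """Naive bilateral filter for a depth map (illustration only)."""
    r = kernel_size // 2
    h, w = depth.shape
    out = np.empty_like(depth)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_space**2))   # distance weight
    padded = np.pad(depth, r, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + kernel_size, j:j + kernel_size]
            # Depth-difference weight: neighbors whose depth differs by much
            # more than max_depth_diff contribute almost nothing.
            range_w = np.exp(-((patch - depth[i, j])**2) / (2 * max_depth_diff**2))
            wgt = spatial * range_w
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

# A gently sloped surface with a sharp 0.2 m step edge between two layers
depth = np.tile(np.linspace(0.5, 0.52, 32), (32, 1)).astype(np.float32)
depth[16:, :] += 0.2
smooth = bilateral_filter_depth(depth)   # smooths the slope, keeps the step
```

Because of the range weight, the filter smooths noise within each layer while leaving the depth step between the two layers intact, which is the reason for preferring it over plain Gaussian smoothing on depth data.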
1.1.2 Convert the Depth Map to a Normal Map

- Function
Compute pixel normal vectors from the depth map and convert the image into a normal vector map
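A minimal sketch of this conversion, assuming depth gradients are taken per pixel (the product's exact method is not documented):

```python
import numpy as np

def depth_to_normal_map(depth):
    """Convert a depth map to a per-pixel normal-vector image (sketch;
    assumes an orthographic approximation with one-pixel x/y steps)."""
    dz_dy, dz_dx = np.gradient(depth)                 # axis 0 = rows (y), axis 1 = cols (x)
    # Surface normal of z = f(x, y) is (-df/dx, -df/dy, 1), normalized.
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

depth = np.fromfunction(lambda y, x: 0.001 * x, (4, 4))  # plane tilted slightly in x
n = depth_to_normal_map(depth)                            # normals point almost straight up
```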
1.1.3 Image Enhancement

- Function
Common image enhancement functions, such as color saturation, contrast, brightness, and sharpness
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Image enhancement type | Enhance a specific element of the image | Contrast | Color saturation, contrast, brightness, sharpness |
| Image enhancement threshold | How much to enhance a specific element of the image | 1.5 | [0.1, 100] |
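The four enhancement types behave like the standard Pillow enhancers, where the threshold acts as a factor: 1.0 leaves the image unchanged and larger values strengthen the selected element. A sketch (the mapping to Pillow is an assumption, not the product's documented implementation):

```python
from PIL import Image, ImageEnhance

img = Image.new("RGB", (8, 8), (100, 120, 140))  # synthetic test image

factor = 1.5  # the documented default "Image enhancement threshold"
enhancers = {
    "Color saturation": ImageEnhance.Color,
    "Contrast": ImageEnhance.Contrast,
    "Brightness": ImageEnhance.Brightness,
    "Sharpness": ImageEnhance.Sharpness,
}
# Brightness with factor 1.5 scales each channel by 1.5
enhanced = enhancers["Brightness"](img).enhance(factor)
```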
1.1.4 Histogram Equalization

- Function
Improve image contrast
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Local mode | Local or global histogram equalization. When checked, local histogram equalization is used; when unchecked, global histogram equalization is used. | Checked | / |
| Contrast threshold | Contrast threshold | 3 | [1,1000] |
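"Local mode" corresponds to tile-based (CLAHE-style) equalization with the contrast threshold as the clip limit; the global variant can be sketched in a few lines (a simplified illustration, not the product's implementation):

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for an 8-bit grayscale image (sketch)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

# Low-contrast image: all values squeezed into [100, 131]
gray = (np.arange(64, dtype=np.uint8).reshape(8, 8) // 2) + 100
eq = equalize_hist(gray)   # values stretched across the full 0..255 range
```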
1.1.5 Filter Depth Map by Color

- Function
Filter the depth map based on color values
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | Size of color filling | 3 | [1,99] |
| Filter depth by HSV - maximum color range value | Maximum color value | [180,255,255] | [[0,0,0],[255,255,255]] |
| Filter depth by HSV - minimum color range value | Minimum color value | [0,0,0] | [[0,0,0],[255,255,255]] |
| Keep the area within the color range | When checked, keeps the area within the color range; when unchecked, keeps the area outside the color range | / | / |
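The filtering logic can be sketched as follows: keep depth only where the pixel's HSV value falls inside [minimum, maximum]. The HSV image would normally come from converting the 2D color image; here a tiny synthetic HSV array stands in for it, and setting filtered-out depth to zero is an assumed way of marking it invalid:

```python
import numpy as np

hsv = np.zeros((4, 4, 3), dtype=np.uint8)
hsv[:2] = (30, 200, 200)     # target-colored region
hsv[2:] = (90, 50, 50)       # background-colored region
depth = np.full((4, 4), 0.8, dtype=np.float32)

hsv_min = np.array([20, 100, 100])   # minimum color range value
hsv_max = np.array([40, 255, 255])   # maximum color range value
keep_inside = True                   # "Keep the area within the color range"

mask = np.all((hsv >= hsv_min) & (hsv <= hsv_max), axis=2)
if not keep_inside:
    mask = ~mask
filtered_depth = np.where(mask, depth, 0.0)   # filtered-out pixels become invalid
```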
1.1.6 Gamma Image Correction

- Function
Gamma correction changes image brightness
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Gamma compensation coefficient | When this value is less than 1, the image becomes darker; when it is greater than 1, the image becomes brighter | 1 | [0.1,100] |
| Gamma correction coefficient | When this value is less than 1, the image becomes darker and is suitable for overly bright images; when greater than 1, the image becomes brighter and is suitable for overly dark images | 2.2 | [0.1,100] |
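A sketch of the correction direction described above: with the conventional `out = in^(1/gamma)` mapping, a coefficient above 1 brightens a dark image and a coefficient below 1 darkens a bright one. Treating the compensation coefficient as an extra linear gain is an assumed interpretation:

```python
import numpy as np

def gamma_correct(img, gamma=2.2, compensation=1.0):
    """Gamma-correct an 8-bit image (sketch). gamma > 1 brightens
    (for overly dark images); gamma < 1 darkens (for overly bright ones)."""
    normalized = img.astype(np.float32) / 255.0
    corrected = compensation * normalized ** (1.0 / gamma)
    return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

img = np.full((2, 2), 64, dtype=np.uint8)      # a dark image
brighter = gamma_correct(img, gamma=2.2)       # documented default: brightens
```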
1.1.7 Fill Holes in the Depth Map

- Function
Fill hole regions in the depth map and smooth the filled depth map
- Applicable Scenarios
Because of issues such as structural occlusion of the Target Object itself and uneven lighting, parts of the Target Object may be missing in the depth map
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | Size of hole filling | 3 | [1,99] |
The fill kernel size can only be an odd number
- Parameter Tuning
Adjust according to the detection result. If filling is excessive, reduce the Parameter; if filling is insufficient, increase the Parameter.
- Example
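A minimal numeric sketch of this kind of hole filling (the product's exact algorithm is not documented; here invalid zero pixels are replaced by the mean of the valid pixels inside an odd-sized kernel window, which also shows why a larger kernel fills more aggressively):

```python
import numpy as np

def fill_depth_holes(depth, fill_kernel_size=3):
    """Fill invalid (zero) depth pixels from their valid neighbors (sketch)."""
    assert fill_kernel_size % 2 == 1, "fill kernel size must be odd"
    r = fill_kernel_size // 2
    padded = np.pad(depth, r, mode="constant")
    out = depth.copy()
    for i, j in zip(*np.nonzero(depth == 0)):          # hole pixels only
        patch = padded[i:i + fill_kernel_size, j:j + fill_kernel_size]
        valid = patch[patch > 0]
        if valid.size:
            out[i, j] = valid.mean()                   # mean of valid neighbors
    return out

depth = np.full((5, 5), 0.7, dtype=np.float32)
depth[2, 2] = 0.0                   # a one-pixel hole
filled = fill_depth_holes(depth)    # hole filled with ~0.7
```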
1.1.8 Edge Enhancement

- Function
Set the edge portions of the texture in the image to the Background color or to a color with high contrast from the Background color, so as to highlight the edge information of the Target Object
- Applicable Scenarios
Edges are unclear because Target Objects occlude or overlap each other
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Tuning Recommendation |
|---|---|---|---|---|
| Normal z-direction filtering threshold | The filtering threshold for the angle between the normal vector corresponding to each point in the depth map and the positive Z-axis of the Camera coordinate system. If the angle between the normal vector of a point and the positive Z-axis of the Camera coordinate system is greater than this threshold, the color at the corresponding location of that point in the 2D image will be set to the Background color or to a color with high contrast from the Background color. | 30 | [0,180] | For flat Target Object surfaces, this threshold can be smaller. For curved-surface Target Objects, increase it appropriately according to the degree of surface tilt. |
| Background color | RGB color threshold of the Background color | 128 | [0,255] | |
| Automatically adjust contrast Background | Checked: points in the 2D image whose normal angles exceed the filtering threshold are set to a color with high contrast from the Background color. Unchecked: those points are set to the Background color. | Unchecked | / |
- Example
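The thresholding step can be sketched as follows, assuming per-pixel unit normals in the camera frame are already available (e.g. from a depth-to-normal conversion); this is an illustration, not the product's implementation:

```python
import numpy as np

def edge_enhance(color_img, normals, angle_threshold_deg=30, background=128):
    """Paint pixels whose normal deviates from the camera +Z axis by more
    than the threshold with the background color, so edges stand out."""
    cos_limit = np.cos(np.deg2rad(angle_threshold_deg))
    tilted = normals[..., 2] < cos_limit     # angle to +Z exceeds threshold
    out = color_img.copy()
    out[tilted] = background
    return out

color = np.full((2, 2, 3), 200, dtype=np.uint8)
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0                  # facing the camera
normals[1, 1] = [np.sin(np.deg2rad(60)), 0, np.cos(np.deg2rad(60))]   # a 60-degree tilt
out = edge_enhance(color, normals)    # flat pixels kept, tilted pixel -> 128
```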
1.1.9 Extract the Topmost Texture

- Function
Extract the texture of the topmost or bottommost Target Object, while setting other areas to the Background color or to a color with high contrast from the Background color.
- Applicable Scenarios
Factors such as poor lighting conditions, similar color textures, tightly stacked piles, interleaved stacking, or occlusion may make it difficult for the model to distinguish texture differences between upper-layer and lower-layer Target Objects, which can easily lead to false detections.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Tuning Recommendation |
|---|---|---|---|---|---|
| Distance threshold (mm) | If the distance between a point and the topmost plane (or bottommost plane) is lower than this threshold, the point is considered to be within the topmost plane (or bottommost plane) and should be retained. Otherwise, it is considered to belong to the lower layer (or upper layer), and the color of the lower-layer (or upper-layer) point is set to the Background color or to a color with high contrast from the Background color. | 50 | [0.1, 1000] | mm | Generally set to 1/2 of the Target Object height |
| Number of clustered Point Cloud points | The expected number of points participating in clustering, i.e. the number of sampled Point Cloud points within the ROI 3D area | 10000 | [1,10000000] | / | More points: slower model inference but higher accuracy. Fewer points: faster inference but lower accuracy. |
| Minimum number of category points | Minimum number of points used to filter categories | 1000 | [1, 10000000] | / | / |
| Automatically calculate contrast Background | Checked: areas outside the topmost (or bottommost) layer in the 2D image are set to a color with high contrast from the Background color threshold. Unchecked: those areas are set to the color corresponding to the Background color threshold. | Checked | / | / | / |
| Background color threshold | RGB color threshold of the Background color | 128 | [0,255] | / | / |
- Example
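A simplified sketch of the idea: treat the smallest depth as the topmost plane, keep pixels within the distance threshold of it, and paint everything else the background color. (The product clusters a sampled Point Cloud rather than thresholding raw depth; this only illustrates the distance-threshold behavior.)

```python
import numpy as np

def extract_topmost(color_img, depth, distance_threshold_mm=50, background=128):
    """Keep the texture of the topmost layer; paint lower layers background."""
    valid = depth > 0
    top = depth[valid].min()                                # topmost plane depth
    lower = valid & (depth - top > distance_threshold_mm)   # lower-layer pixels
    out = color_img.copy()
    out[lower] = background
    return out

depth = np.full((4, 4), 600.0)     # mm from the camera
depth[2:, :] = 700.0               # a layer 100 mm lower
color = np.full((4, 4, 3), 200, dtype=np.uint8)
out = extract_topmost(color, depth, distance_threshold_mm=50)
```

With the documented default of roughly half the Target Object height as the threshold, a 100 mm layer gap cleanly separates the two layers here.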
1.1.10 Remove Image Background Outside roi3d

- Function
Remove the Background in the 2D image outside the ROI3D area
- Applicable Scenarios
Too much image Background noise affects detection results
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | Size of hole filling | 5 | [1,99] |
| Number of iterations | Number of image dilation iterations | 1 | [1,99] |
| Automatically calculate contrast Background | Checked: areas outside the ROI in the 2D image are set to a color with high contrast from the Background color threshold. Unchecked: those areas are set to the color corresponding to the Background color threshold. | Checked | / |
| Background color threshold | RGB color threshold of the Background color | 128 | [0,255] |
The fill kernel size can only be an odd number
- Parameter Tuning
If more Background noise needs to be removed from the image, reduce the fill kernel size.
- Example
1.2 Instance Segmentation
1.2.1 Scaling Ratio

- Function
Proportionally scale the original image before inference to improve the accuracy and recall of 2D recognition.
- Applicable Scenarios
Adjust this function when detection performance is poor (for example, no instance is detected, an instance is missed, one bounding box covers multiple instances, or a bounding box does not fully cover an instance).
- Parameter Description
Default value: 1.0
Value range: [0.01, 3.00]
Step size: 0.01
Parameter Tuning
- Run with the default value and check the detection results in the visualization window. If no instance is detected, an instance is missed, one bounding box covers multiple instances, or a bounding box does not fully cover an instance, this function should be adjusted.


In 2D recognition, the percentage shown on an instance is the Confidence score, and the number is the instance ID (the order in which the instance is recognized).
In 2D recognition, the colored shading on an instance is the Mask, and the rectangle surrounding the instance is the bounding box.
- Try different scaling ratios and observe the changes in the detection results to gradually determine the scaling ratio range. If the detection effect improves significantly at a certain scaling ratio, use that scaling ratio as the lower bound; if the detection effect degrades significantly at a certain scaling ratio, use that scaling ratio as the upper bound.
If satisfactory detection results cannot be obtained after trying all scaling ratios, the ROI area can be adjusted
As shown below, when the scaling ratio is 0.33, the detection effect improves significantly, so 0.33 can be determined as the lower bound of the scaling ratio range.



When the scaling ratio is 3, the detection effect is still good, so 3 can be determined as the upper bound of the scaling ratio range.



- If the actual scenario does not require high picking accuracy, a scaling ratio with good detection results can be selected within the [0.33,3] range. If the actual scenario requires higher picking accuracy, the scaling ratio range should be refined further and adjusted with a smaller step size until the scaling ratio with the best detection results is found.
1.2.2 Lower Confidence Threshold

- Function
Retain only recognition results whose deep learning model scores are higher than the lower Confidence threshold
- Applicable Scenarios
Adjust this function when the instances selected by the detection boxes do not meet expectations
- Parameter Description
Default value: 0.5
Value range: [0.01, 1.00]
Parameter Tuning
- If the model detects too few instances, reduce this threshold. If the value is too small, the accuracy of image recognition may be affected.


- If an excessively low lower Confidence threshold causes incorrect instances to be detected and these incorrect instances need to be removed, increase this threshold. If the value is too large, the number of retained detection results may become zero, resulting in no output.
1.2.3 Enable Automatic Enhancement

- Function
Combine all values in the input scaling ratios and rotation angles for inference, and return all results above the configured lower Confidence threshold after combination. This can improve model inference accuracy, but it also increases processing time.
- Applicable Scenarios
A single scaling ratio cannot meet the requirements of the actual scenario, resulting in incomplete detection, or the object is placed with a large tilt angle.
- Example
If Automatic Enhancement - Scaling Ratio is set to [0.8, 0.9, 1.0] and Automatic Enhancement - Rotation Angle is set to [0, 90.0], the values in the scaling ratios and rotation angles are combined pairwise. The model automatically generates 6 images internally for inference, merges the results of these 6 inferences, and outputs the results above the lower Confidence threshold.
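Where the 6 internally generated images come from can be seen with a pairwise combination of the two lists:

```python
from itertools import product

# Every (scaling ratio, rotation angle) pair becomes one inference pass,
# and the results of all passes are merged afterwards.
scales = [0.8, 0.9, 1.0]
angles = [0.0, 90.0]
combos = list(product(scales, angles))
print(len(combos))   # 3 scales x 2 angles = 6 inference passes
```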
Automatic Enhancement - Scaling Ratio

- Function
Scale the original image multiple times and run inference multiple times to output consolidated inference results
- Applicable Scenarios
A single scaling ratio cannot meet actual scenario requirements, resulting in incomplete detection
- Parameter Description
Default value: [1.0]
Value range: the range of each scaling ratio is [0.1, 3.0]
Multiple scaling ratios can be set, separated by commas
- Parameter Tuning
Enter multiple scaling ratios from 1.2.1 Scaling Ratio that produce good detection results
Automatic Enhancement - Rotation Angle

- Function
Rotate the original image multiple times and run inference multiple times to output consolidated inference results
- Applicable Scenarios
Use when the object placement deviates significantly from the coordinate axes
- Parameter Description
Default value: [0.0]
Value range: the value range of each rotation angle is [0, 360]
Multiple rotation angles can be set, separated by commas
- Parameter Tuning
Adjust Automatic Enhancement - Rotation Angle according to the object angle in the actual scenario. The tilt angle can be determined based on sack patterns and bag opening shapes, or carton edges and brand logos.
1.3 Point Cloud Generation

| Instance Point Cloud generation form | Sub-parameter | Description |
|---|---|---|
| Mask form (after segmentation) | — | Generate the Point Cloud using the segmented instance mask |
| Bounding box form (after segmentation) | Bounding box scaling ratio (after segmentation) | Generate the Point Cloud using the segmented instance bounding box |
| Whether color is required when generating the Point Cloud (after segmentation) | — | Whether color needs to be attached to the generated instance Point Cloud |
| Mask form (after filtering) | — | Generate the Point Cloud using the filtered instance mask |
| Bounding box form (after filtering) | Bounding box scaling ratio (after filtering) | Generate the Point Cloud using the filtered instance bounding box |
| Whether color is required when generating the Point Cloud (after filtering) | — | Whether color needs to be attached to the generated instance Point Cloud |
If acceleration is not required, there is no need to use the Instance Filtering function. Use Mask form (after segmentation) or Bounding box form (after segmentation) to generate the instance Point Cloud. The generated instance Point Cloud can be viewed in the project storage folder: \Project Name\data\PickLight\Historical Data Timestamp\Builder\pose\input

If acceleration is required, the Instance Filtering function can be used to filter instances. Use Mask form (after filtering) or Bounding box form (after filtering) to generate the instance Point Cloud. The generated instance Point Cloud can be viewed in the same project storage folder: \Project Name\data\PickLight\Historical Data Timestamp\Builder\pose\input
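The back-projection behind both forms can be sketched as follows: the depth pixels selected by the instance mask (or by a bounding-box mask) are lifted into 3D through the pinhole camera model. The intrinsics here are illustrative values, not the product's:

```python
import numpy as np

def instance_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into a 3D instance Point Cloud."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel coordinates of the instance
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole model back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

depth = np.full((4, 4), 0.5, dtype=np.float32)   # meters
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # a 2x2 instance mask
cloud = instance_point_cloud(depth, mask, fx=600, fy=600, cx=2, cy=2)
```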

1.4 Instance Filtering

1.4.1 Filter Based on Bounding Box Area

- Function Overview
Filter according to the pixel area of the bounding box of the detected instance.
- Applicable Scenarios
Applicable to scenarios where the areas of instance bounding boxes differ greatly. By setting the upper and lower limits of the bounding box area, image noise can be filtered out, improving image recognition accuracy and preventing noise from increasing the time required for subsequent processing.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Minimum area (pixels) | This parameter sets the minimum filtering area of the bounding box. Instances whose bounding box area is lower than this value will be filtered out. | 1 | [1, 10000000] | pixels |
| Maximum area (pixels) | This parameter sets the maximum filtering area of the bounding box. Instances whose bounding box area is higher than this value will be filtered out. | 10000000 | [2, 10000000] | pixels |
- Example
Run with the default values. The bounding box area of each instance can be viewed in the logs, as shown below.


Adjust Minimum area and Maximum area according to the bounding box area of each instance. For example, set Minimum area to 20000 and Maximum area to 30000 to filter out instances whose pixel area is less than 20000 or greater than 30000. The instance filtering process can be viewed in the logs.
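The filtering step from the example above can be sketched in a few lines; the instance dicts and field names are illustrative, not the product's actual data structure:

```python
# Keep instances whose bounding-box pixel area lies inside [minimum, maximum].
instances = [
    {"id": 0, "bbox": (10, 10, 160, 160)},   # (x1, y1, x2, y2) -> 22500 px
    {"id": 1, "bbox": (0, 0, 50, 40)},       # 2000 px, below the minimum
    {"id": 2, "bbox": (0, 0, 400, 300)},     # 120000 px, above the maximum
]

def bbox_area(bbox):
    x1, y1, x2, y2 = bbox
    return (x2 - x1) * (y2 - y1)

min_area, max_area = 20000, 30000            # values from the example above
kept = [inst for inst in instances
        if min_area <= bbox_area(inst["bbox"]) <= max_area]
print([inst["id"] for inst in kept])  # only instance 0 survives
```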


1.4.2 Filter Based on Bounding Box Aspect Ratio

- Function Overview
Instances whose bounding box aspect ratios are outside the specified range will be filtered out
- Applicable Scenarios
Applicable to scenarios where instance bounding box aspect ratios differ greatly
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Minimum aspect ratio | Minimum value of the bounding box aspect ratio. Instances whose bounding box aspect ratio is lower than this value will be filtered out. | 0 | [0, 10000000] |
| Maximum aspect ratio | Maximum value of the bounding box aspect ratio. Instances whose bounding box aspect ratio is higher than this value will be filtered out. | 10000000 | [0, 10000000] |
| Use X/Y axis side lengths as the aspect ratio | Unchecked (default): the ratio of the longer side to the shorter side of the bounding box is used as the aspect ratio; suitable when the longer and shorter sides differ greatly in length. Checked: the ratio of the bounding box side length on the X-axis to that on the Y-axis in the pixel coordinate system is used; suitable when most normal instance bounding boxes have similar longer-side/shorter-side ratios, but some abnormally recognized instances differ greatly in their X/Y side-length ratio. | Unchecked | / |
1.4.3 Filter Instances Based on Category ID

- Function Overview
Filter by instance category
- Applicable Scenarios
Applicable to scenarios where the incoming materials contain multiple types of Target Objects
- Parameter Description
| Parameter | Description | Default Value |
|---|---|---|
| Retained category IDs | Retain instances whose category IDs are in the list; instances whose category IDs are not in the list will be filtered out | [0] |
- Example
1.4.4 Filter Based on Side Lengths of the Instance Point Cloud

- Function Overview
Filter according to the long side and short side of the instance Point Cloud
- Applicable Scenarios
Applicable to scenarios where the distances of the instance Point Cloud on the x-axis or y-axis differ greatly. By setting the distance range of the instance Point Cloud, image noise can be filtered out, improving image recognition accuracy and preventing noise from increasing the time required for subsequent processing.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Short side length range (mm) | Side length range of the short side of the Point Cloud | [0, 10000] | [0, 10000] | mm |
| Long side length range (mm) | Side length range of the long side of the Point Cloud | [0, 10000] | [0, 10000] | mm |
| Lower bound for edge denoising (%) | Extract the lower percentile bound of X/Y values (in the Camera coordinate system) from the instance Point Cloud, and remove Point Clouds outside the upper and lower bounds to avoid noise affecting length calculation | 5 | [0, 100] | / |
| Upper bound for edge denoising (%) | Extract the upper percentile bound of X/Y values (in the Camera coordinate system) from the instance Point Cloud, and remove Point Clouds outside the upper and lower bounds to avoid noise affecting length calculation | 95 | [0, 100] | / |
| Side length type | Filter according to the long side and/or short side of the instance Point Cloud. Instances whose long-side and/or short-side lengths are outside the range will be filtered out. | Instance Point Cloud short side | Instance Point Cloud short side; Instance Point Cloud long side; Instance Point Cloud long side and short side | / |
- Example
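As a rough illustration of this filter, the sketch below trims an instance Point Cloud to its 5th–95th percentile band on X/Y before measuring side lengths, so a stray noise point cannot inflate the extents. This is a minimal NumPy sketch; the function names and the demo cloud are invented for illustration, not PickWiz's API.

```python
import numpy as np

def side_lengths(points, lower_pct=5.0, upper_pct=95.0):
    """Estimate the long/short side of an instance Point Cloud (camera frame).

    X/Y values outside the [lower_pct, upper_pct] percentile band are
    discarded first ("edge denoising"), so isolated noise does not
    inflate the measured extents.
    """
    xy = points[:, :2]
    lo = np.percentile(xy, lower_pct, axis=0)
    hi = np.percentile(xy, upper_pct, axis=0)
    keep = np.all((xy >= lo) & (xy <= hi), axis=1)
    extent = xy[keep].max(axis=0) - xy[keep].min(axis=0)
    return max(extent), min(extent)   # (long side, short side)

def passes_filter(points, short_range=(0, 10000), long_range=(0, 10000)):
    long_side, short_side = side_lengths(points)
    return (short_range[0] <= short_side <= short_range[1]
            and long_range[0] <= long_side <= long_range[1])

# A flat 100 mm x 40 mm instance plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 100, 500),
                         rng.uniform(0, 40, 500),
                         np.zeros(500)])
cloud = np.vstack([cloud, [500.0, 500.0, 0.0]])   # noise point
long_side, short_side = side_lengths(cloud)
```

With the percentile trim in place, the single noise point at (500, 500) no longer stretches the measured long side to ~500 mm.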
1.4.5 Category ID Filtering Based on the Classifier

- Function Overview
Filter instances based on classifier category IDs. Instances not in the reference categories will be filtered out.
- Applicable Scenarios
In multi-category Target Object scenarios, the Vision Model may detect multiple types of Target Objects, but the actual operation may require only one specific category. In such cases, this function can be used to filter out unnecessary Target Objects.
- Parameter Description
The default value is [0], which means that instances with category ID 0 are retained by default. Instances whose category IDs are not in the list will be filtered out.
1.4.6 Filter Based on Three-Channel Color

- Function Overview
Instances can be filtered out by three-channel color thresholds (HSV or RGB).
- Applicable Scenarios
Cases where incorrect instances and correct instances can be clearly distinguished by color.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Maximum color range value | Maximum color value | [180,255,255] | [[0,0,0],[255,255,255]] |
| Minimum color range value | Minimum color value | [0,0,0] | [[0,0,0],[255,255,255]] |
| Filtering percentage threshold | Color pass-rate threshold | 0.05 | [0,1] |
| Reverse filtering | When checked, removes instances whose proportion outside the color range is lower than the threshold. When unchecked, removes instances whose proportion within the color range in the instance image is lower than the threshold. | Unchecked | / |
| Color mode | The color space selected for color filtering | HSV color space | RGB color space; HSV color space |
- Example
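The pass-rate logic above can be sketched as follows in RGB mode (the HSV conversion is omitted to keep the sketch self-contained). Function names and thresholds are illustrative, not the product's implementation.

```python
import numpy as np

def color_pass_ratio(pixels, lo, hi):
    """Fraction of instance pixels whose three channels all fall in [lo, hi]."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    in_range = np.all((pixels >= lo) & (pixels <= hi), axis=-1)
    return in_range.mean()

def keep_instance(pixels, lo, hi, threshold=0.05, reverse=False):
    ratio = color_pass_ratio(pixels, lo, hi)
    # reverse=False: drop the instance when too few pixels fall inside the range
    # reverse=True:  drop the instance when too few pixels fall outside the range
    return (1.0 - ratio) >= threshold if reverse else ratio >= threshold

# A mostly-red instance vs. a mostly-blue instance, filtered by a red range.
red_obj  = np.tile([200, 30, 30], (100, 1))
blue_obj = np.tile([30, 30, 200], (100, 1))
red_lo, red_hi = [150, 0, 0], [255, 80, 80]
```

With the red range above, `keep_instance(red_obj, …)` passes and `keep_instance(blue_obj, …)` is filtered out.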

1.4.7 Filter Based on Confidence

- Function Overview
Filter according to the Confidence score of the instance
- Applicable Scenarios
Applicable to scenarios where instance Confidence values differ greatly
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Reference Confidence value | Retain instances whose Confidence is greater than the threshold, and filter out instances whose Confidence is less than the threshold. | 0.5 | [0,1] |
| Invert filtering result | After inversion, retain instances whose Confidence is less than the threshold, and filter out instances whose Confidence is greater than the threshold. | Unchecked | / |
- Example
1.4.8 Filter Based on Point Cloud Quantity

- Function Overview
Filter according to the number of points in the downsampled instance Point Cloud
- Applicable Scenarios
The instance Point Cloud contains a large amount of noise
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Minimum Point Cloud quantity | Minimum value of the Point Cloud quantity | 3500 | [1, 10000000] |
| Maximum Point Cloud quantity | Maximum value of the Point Cloud quantity | 8500 | [2, 10000000] |
| Filter instances whose quantity falls within the interval | When checked, filters out instances whose Point Cloud quantity is within the interval between the minimum and maximum values; when unchecked, filters out instances whose Point Cloud quantity is outside the interval | Unchecked | / |
1.4.9 Filter Based on Mask Area

- Function Overview
Filter image masks according to the sum of mask pixels (that is, the pixel area) of the detected instances.
- Applicable Scenarios
Applicable to scenarios where instance mask areas differ greatly. By setting the upper and lower limits of the mask area, noise in image masks can be filtered out, improving image recognition accuracy and preventing noise from increasing the time required for subsequent processing.
- Parameter Setting Description
| Parameter Name | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Reference minimum area | This parameter sets the minimum filtering area of the mask. Instances whose mask area is lower than this value will be filtered out. | 1 | [1, 10000000] | pixels |
| Reference maximum area | This parameter sets the maximum filtering area of the mask. Instances whose mask area is higher than this value will be filtered out. | 10000000 | [2, 10000000] | pixels |
- Example
1.4.10 Filter Based on Visibility

- Function Overview
Filter according to the visibility score of the instance
- Applicable Scenarios
Applicable to scenarios where instance visibility values differ greatly
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Reference visibility threshold | Retain instances whose visibility is greater than the threshold, and filter out instances whose visibility is less than the threshold. Visibility is used to judge the degree to which an instance is visible in the image: the more the Target Object is occluded, the lower its visibility. | 0.5 | [0,1] |
| Invert filtering result | After inversion, retain instances whose visibility is less than the threshold, and filter out instances whose visibility is greater than the threshold. | Unchecked | / |
1.4.11 Filter Instances with Overlapping Bounding Boxes

- Function Overview
Filter out instances whose bounding boxes intersect and overlap
- Applicable Scenarios
Applicable to scenarios where instance bounding boxes intersect each other
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Bounding box overlap ratio threshold | Threshold for the ratio of the intersecting area of bounding boxes to the area of the instance bounding box | 0.05 | [0, 1] |
| Filter the instance with the larger bounding box area | When checked, filters out the instance with the larger area among the two instances whose bounding boxes intersect; when unchecked, filters out the instance with the smaller area among the two instances whose bounding boxes intersect | Checked | / |
- Example

New feature: filter enclosed instances. Run with the default values and view the bounding box intersection status of instances in the logs. After instance filtering, 2 instances remain.

The logs show that 12 instances were filtered out because of bounding box intersections, leaving 2 instances whose bounding boxes do not intersect.

Set Bounding box overlap ratio threshold to 0.1 and check Filter the instance with the larger bounding box area. View the instance filtering process in the logs. Nine instances are filtered out because the ratio of the intersecting area of the bounding boxes to the area of the instance bounding box is greater than 0.1; three instances are retained because the ratio is less than 0.1; and two instances have no bounding box intersections.


Set Bounding box overlap ratio threshold to 0.1 and uncheck Filter the instance with the larger bounding box area. View the instance filtering process in the logs. For 9 instances, the ratio of the intersecting area of the bounding boxes to the area of the instance bounding box is greater than 0.1, but 2 of these instances are retained because their bounding box areas are smaller than those of the instances intersecting them. Therefore, 7 instances are filtered out; 3 instances are retained because the ratio of the intersecting area of the bounding boxes to the area of the instance bounding box is less than 0.1; and 2 instances have no bounding box intersections.


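The overlap test walked through in this example can be approximated with the sketch below. The guide does not specify which instance's bounding-box area the ratio is taken against, so this sketch assumes that either per-box ratio exceeding the threshold triggers filtering; all names are illustrative, not PickWiz's API.

```python
import numpy as np

def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def filter_overlapping(boxes, overlap_thresh=0.05, drop_larger=True):
    """Return indices of boxes kept after pairwise overlap filtering.

    For each intersecting pair, if intersection / box area exceeds the
    threshold, the larger (or smaller, if drop_larger=False) box is dropped.
    """
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    dropped = set()
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            inter = intersection_area(boxes[i], boxes[j])
            if inter == 0:
                continue
            if max(inter / areas[i], inter / areas[j]) > overlap_thresh:
                pair = (i, j) if areas[i] > areas[j] else (j, i)
                dropped.add(pair[0] if drop_larger else pair[1])
    return [k for k in range(len(boxes)) if k not in dropped]

boxes = [(0, 0, 10, 10),    # large box
         (8, 8, 12, 12),    # small box overlapping the large one
         (20, 20, 30, 30)]  # isolated box, no intersections
kept = filter_overlapping(boxes, overlap_thresh=0.05, drop_larger=True)
```

Here the small box covers 25% of its own area in overlap, so the pair is flagged and the larger box (index 0) is dropped; with `drop_larger=False` the smaller box is dropped instead.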
1.4.12 [Expert] Filter Instances with Concave/Convex Masks Based on the Area Ratio of Mask / Circumscribed Polygon of the Mask

- Function Overview
Calculate the area ratio of the mask to the circumscribed polygon of the mask. If the ratio is less than the configured threshold, the instance will be filtered out.
- Applicable Scenarios
Applicable to cases where the Target Object mask has serrations / concave-convex irregularities.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Area ratio threshold | Threshold for the mask / convex hull area ratio. If the ratio is less than the configured threshold, the instance will be filtered out. | 0.1 | [0,1] |
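A minimal sketch of the mask / convex-hull area ratio follows, with the hull computed from scratch so the example needs only NumPy. Because pixel counting and polygon area are discretized differently, a convex mask can score slightly above 1; what matters is that concave masks score much lower. All names are illustrative.

```python
import numpy as np

def convex_hull_area(points):
    """Area of the 2-D convex hull (Andrew's monotone chain + shoelace)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half_hull(pts)[:-1] + half_hull(pts[::-1])[:-1]
    x = np.array([p[0] for p in hull], dtype=float)
    y = np.array([p[1] for p in hull], dtype=float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def hull_area_ratio(mask):
    """Mask pixel count divided by the area of the mask's convex hull."""
    ys, xs = np.nonzero(mask)
    return len(xs) / max(convex_hull_area(np.column_stack([xs, ys])), 1e-9)

# A solid square mask vs. a hollow frame (strongly concave) of the same size.
solid = np.ones((20, 20), dtype=bool)
ring = solid.copy()
ring[2:18, 2:18] = False
```

The solid square scores near 1, while the hollow frame scores around 0.4, so a threshold such as 0.5 would filter the concave instance.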
1.4.13 [Expert] Filter Based on the Average Point Cloud Distance

- Function Overview
Filter based on the average value of the distances from points in the Point Cloud to the fitted plane, removing non-flat instance Point Clouds
- Applicable Scenarios
Applicable to scenarios where the Point Cloud of a planar Target Object is bent
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Plane segmentation distance threshold (mm) | Extract a plane from the bent instance Point Cloud. Points whose distance from the plane is less than this threshold are regarded as points on the plane. | 10 | [-1000, 1000] | mm |
| Average distance threshold (mm) | Average value of the distances from points in the instance Point Cloud to the extracted plane | 20 | [-1000, 1000] | mm |
| Remove instances whose average distance is less than the threshold | When checked, filters out instances whose average distance from the points to the extracted plane is less than the average distance threshold. When unchecked, filters out instances whose average distance from the points to the extracted plane is greater than the average distance threshold. | Unchecked | / | / |
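The underlying computation can be sketched as: fit a plane, keep near-plane points (the plane-segmentation step), refit, then average the point-to-plane distances. The two-step fit and the demo sheets are assumptions for illustration; names are not PickWiz's API.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a cloud: returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def mean_plane_distance(points, seg_threshold=10.0):
    """Average point-to-plane distance; the plane is refit on near-plane points."""
    c, n = fit_plane(points)
    d = np.abs((points - c) @ n)
    c, n = fit_plane(points[d < seg_threshold])   # "plane segmentation" step
    return float(np.abs((points - c) @ n).mean())

# A flat sheet vs. the same sheet bent upward over half its length (mm).
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 40, 10))
flat = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
bent = flat.copy()
bent[:, 2] = np.where(flat[:, 0] > 50, flat[:, 0] - 50, 0.0)
```

The flat sheet averages essentially zero distance, while the bent sheet averages several millimetres, so a 20 mm average-distance threshold separates the two as the table describes.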
1.4.14 [Expert] Filter Occluded Instances Based on the Area Ratio of Mask / Bounding Box

- Function Overview
Calculate the area ratio of the mask to the bounding box. Instances whose ratios are outside the minimum and maximum ranges will be filtered out.
- Applicable Scenarios
Used to filter instances of occluded Target Objects
- Parameter Description
The closer the mask / bounding box area ratio is to 1, the less the instance is occluded; conversely, a smaller ratio indicates that the instance may be occluded.
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Minimum area ratio | Lower bound of the mask / bounding box area ratio range. The smaller the ratio, the more severely the instance is occluded. | 0.1 | [0,1] |
| Maximum area ratio | Upper bound of the mask / bounding box area ratio range. The closer the ratio is to 1, the less the instance is occluded. | 1.0 | [0,1] |
1.4.15 [Expert] Determine Whether All Top-Layer Instances Have Been Fully Detected

- Function Overview
As one of the foolproof mechanisms, determine whether all top-layer instances have been fully detected. If any top-layer instance has not been detected, an error will be reported and the Workflow will be terminated.
- Applicable Scenarios
Applicable to scenarios where one image is captured for multiple picks or where picking must be performed in sequence, preventing missed picks from affecting subsequent operations due to incomplete instance detection
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Parameter Tuning |
|---|---|---|---|---|---|
| Distance threshold | Used to determine top-layer Target Objects. If the distance between a point and the highest point of the Target Object Point Cloud is less than the distance threshold, the point is considered part of the top-layer Point Cloud; otherwise, it is not considered part of the top-layer Point Cloud. | 5 | [0.1, 1000] | mm | Should be less than the height of the Target Object |
1.5 Instance Sorting

- Function Overview
Group, sort, and extract instances according to the selected strategy
- Applicable Scenarios
Common to depalletizing, random picking, and ordered loading/unloading scenarios
If sorting is not required, you do not need to configure a specific strategy.
1.5.1 Reference Coordinate System

- Function Overview
Set a unified coordinate system for all instances to group and sort instances
- Applicable Scenarios
Common to depalletizing scenarios, random picking scenarios, and ordered loading/unloading scenarios
Coordinate-related strategies should be configured only after setting the reference coordinate system
- Parameter Description
| Parameter | Description | Illustration |
|---|---|---|
| Camera coordinate system | The coordinate system origin is above the object, and the positive Z-axis points downward; XYZ values are the values of the object center point in this coordinate system | ![]() |
| ROI coordinate system | The coordinate system origin is approximately at the center of the stack, and the positive Z-axis points upward; XYZ values are the values of the object center point in this coordinate system | ![]() |
| Robot coordinate system | The coordinate system origin is on the Robot itself, and the positive Z-axis generally points upward; XYZ values are the values of the object center point in this coordinate system | ![]() |
| Pixel coordinate system | The coordinate system origin is at the top-left vertex of the RGB image and is a two-dimensional planar coordinate system; X and Y values are the x value and y value of the bbox recognition box, and Z is 0 | ![]() |
1.5.2 General Grasping Strategy

- Parameter Description
| Parameter | Description | Default Value |
|---|---|---|
| Strategy | Select which value is used for grouping and sorting and how to sort, including the XYZ coordinates of the center of the instance Point Cloud, bounding box aspect ratio, distance from the center of the instance Point Cloud to the ROI center, and so on. Multiple items can be superimposed and executed sequentially in order. | Center X coordinate of instance Point Cloud from small to large (mm) |
| Grouping step size | According to the selected strategy, instances are divided into several groups by the step size. The grouping step size is the interval between groups. For example, if the strategy is "Center Z coordinate of instance Point Cloud from large to small (mm)", then the Z coordinates of all instance Point Cloud centers are sorted from large to small and grouped according to the step size, and the corresponding instances are also divided into several groups. | / |
| Extract the first several groups | How many groups of instances need to be retained after grouping and sorting | 10000 |
| Strategy name | Description | Grouping step size (default value) | Grouping step size (value range) | Extract the first several groups (default value) |
|---|---|---|---|---|
| Center XYZ coordinate values of instance Point Cloud from large to small / from small to large (mm) | Use the XYZ coordinate values of the center of each instance Point Cloud for grouping and sorting. Before using this strategy, the reference coordinate system should be set first | 200.000 | (0, 10000000] | 10000 |
| From the middle to both sides / from both sides to the middle along the XY coordinate axes of the center of the instance Point Cloud (mm) | Use the XY coordinate values of the center of each instance Point Cloud and group and sort them in the direction of "from the middle to both sides" or "from both sides to the middle". Before using this strategy, the reference coordinate system should be set first | 200.000 | (0, 10000000] | 10000 |
| Center XY coordinate values of bounding boxes from large to small / from small to large (mm) | Use the XY coordinate values of the center point of each instance bounding box in the pixel coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
| Bounding box aspect ratio from large to small / from small to large | Use the ratio of the long side to the short side of the bounding box for grouping and sorting | 1 | (0, 10000] | 10000 |
| From the middle to both sides / from both sides to the middle along the XY coordinate axes of the bounding box center (mm) | Use the XY coordinate values of the center point of the bounding box and group and sort them in the direction of "from the middle to both sides" or "from both sides to the middle" | 200.000 | (0, 10000000] | 10000 |
| Target Object type ID from large to small / from small to large | Use the ID of the Target Object type for grouping and sorting, suitable for multi-category Target Object scenarios | 1 | [1, 10000] | 10000 |
| Local feature ID from large to small / from small to large | Use the ID of the local feature for grouping and sorting | 1 | [1, 10000] | 10000 |
| Confidence from large to small / from small to large | Use the Confidence of each instance for grouping and sorting | 1 | (0, 1] | 10000 |
| Visibility from small to large / from large to small | Use the visibility of each instance for grouping and sorting | 1 | (0, 1] | 10000 |
| Mask area from large to small / from small to large | Use the mask area of each instance for grouping and sorting | 10000 | [1, 10000000] | 10000 |
| Distance from the center of the instance Point Cloud to the ROI center from near to far / from far to near (mm) | Use the distance between the center of each instance Point Cloud and the center of the ROI coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
| Distance from the center of the instance Point Cloud to the origin of the Robot coordinate system from near to far / from far to near (mm) | Use the distance between the center of each instance Point Cloud and the origin of the Robot coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
- Example
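The grouping step size can be pictured with a small sketch: sort the instances by the chosen key, then place instances whose keys fall within the same step-wide band into one group. Banding from the extreme value is an assumption about how the grouping works; the function name and demo values are illustrative.

```python
import numpy as np

def group_and_sort(values, step=200.0, descending=True, first_groups=10000):
    """Group instance indices by a sort key using a fixed step size.

    Instances whose key falls within the same `step`-wide band (measured
    from the extreme value) share a group; groups come back in sort order,
    truncated to the first `first_groups` groups.
    """
    values = np.asarray(values, dtype=float)
    order = np.argsort(-values if descending else values)
    ref = values[order[0]]
    groups = {}
    for idx in order:
        band = int(abs(values[idx] - ref) // step)
        groups.setdefault(band, []).append(int(idx))
    return [groups[b] for b in sorted(groups)][:first_groups]

# Strategy "center Z from large to small, step 200 mm": three height layers.
z = [950, 930, 700, 690, 450]          # instance centre Z in mm
layers = group_and_sort(z, step=200.0, descending=True)
```

Instances 0 and 1 (within 200 mm of the highest Z) form the first group, 2 and 3 the second, and 4 the third, which is exactly the layer-by-layer order a depalletizing pick wants.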
1.5.3 Custom Grasping Strategy

(1) Function Description
Switch Grasping Strategy to Custom Grasping Strategy, then click Add to add one custom grasping strategy.
Customize the grasping order for each Target Object. If the general grasping strategy is difficult to use for picking, or if suitable parameters are difficult to tune because of Point Cloud noise and other issues, you can consider using a custom grasping strategy.
The custom grasping strategy is suitable for depalletizing scenarios and ordered loading/unloading scenarios, but not for random picking scenarios, because the Target Objects for a custom grasping strategy must be ordered (that is, the Target Object order is fixed).
A custom grasping strategy can only be combined with a single general grasping strategy, and that strategy can only use the center Z coordinate of the instance Point Cloud
(2) Parameter Description
| Parameter | Description | Default Value | Value Range | Parameter Tuning |
|---|---|---|---|---|
| IOU threshold | Represents the overlap threshold between the annotated bbox and the detected bbox. The overlap is used to determine which image's sorting method should be selected when sorting the current Target Object instance. | 0.7 | [0,1] | The larger the threshold, the stricter the matching, but the worse the anti-interference capability. Small shape or position changes may cause matching to fail, which may lead to matching the wrong custom strategy and sorting in the wrong order. |
| Pixel distance threshold | Represents the size difference between a matched bbox and the detected bbox. | 100 | [0,1000] | The smaller the threshold, the stricter the matching, and the better the anti-interference capability. However, if the Target Object placement between different layers is similar, the wrong custom strategy may still be matched, resulting in incorrect sorting order. |
(3) Select the Reference Coordinate System
When using a custom grasping strategy, only the Camera coordinate system or the pixel coordinate system can be selected
If there are multiple layers of Target Objects, select the Camera coordinate system; if there is only one layer of Target Objects, select the pixel coordinate system
(4) Strategy, Grouping Step Size, and Extract the First Several Groups
| Parameter | Description | Default Value |
|---|---|---|
| Strategy | Only Center Z coordinate value of instance Point Cloud from large to small / from small to large (mm) can be selected | / |
| Grouping step size | According to the strategy of ordering Z coordinates from small to large, the Z coordinates of instances are sorted from small to large and divided into groups according to the step size | 10000 |
| Extract the first several groups | How many groups of instances need to be retained after grouping and sorting | 10000 |
(5) Capture Image / Add Local Image
Click Capture Image to acquire an image from the currently connected Camera, or click Add Local Image to import an image locally. Capture or add one image for each layer or for each distinct placement pattern of the Target Objects; if every layer is identical, a single image is sufficient. Right-click an image to delete it.
On the acquired image, click and hold the left mouse button and drag to annotate a bbox. The DELETE key can be used to delete annotated bboxes step by step.
2. 3D Computation
2.1 Preprocessing
Preprocessing for 3D computation processes the 3D Point Cloud before pose estimation and Pick Point generation for instances.
2.1.1 Point Cloud Clustering Denoising

- Function
Remove noise through Point Cloud clustering
- Applicable Scenarios
There is a large amount of noise in the instance Point Cloud
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit | Tuning Recommendation |
|---|---|---|---|---|---|
| Distance threshold for Point Cloud clustering (mm) | Determines whether Point Clouds in space belong to the same category. If the distance between Point Clouds is lower than this threshold, they are considered the same category. | 5 | [0.1, 1000] | mm | Generally does not need to be changed. It should be greater than the point spacing of the Target Object Point Cloud and smaller than the minimum distance between the Target Object Point Cloud and the noise Point Cloud. |
| Minimum point count threshold | Point Cloud clusters with fewer than this number of points will be filtered out | 100 | [1,10000000] | / | Generally does not need to be changed. Increase the minimum point count threshold according to the amount of noise in the instance Point Cloud. |
| Maximum point count threshold | Point Cloud clusters with more than this number of points will be filtered out | 100000 | [1,10000000] | / | Generally does not need to be changed. If the number of points in the Target Object Point Cloud exceeds 100000, increase the maximum point count threshold. |
| Select the top Point Cloud in the ROI | When checked, calculates and sorts the average Z coordinate in the ROI coordinate system for Point Clouds of the same category, and retains the Point Cloud category with the largest average Z coordinate (top Point Cloud). When unchecked, retains all Point Clouds that meet the conditions. | Unchecked | / | / | If the Target Object Point Cloud is above the noise Point Cloud, checking this retains the Target Object Point Cloud; if the Target Object Point Cloud is below the noise Point Cloud, checking this also requires adjusting the Z-axis of the ROI coordinate system downward so that the average Z coordinate of the Target Object Point Cloud is the largest, thereby retaining the Target Object Point Cloud. |
| Visualize process data | When checked, the denoised Point Cloud is saved and can be found in C:_data | Unchecked | / | / | In debugging mode, this can be checked if visualized data needs to be saved |
- Example
(1)Without using Point Cloud clustering denoising, the generated instance Point Clouds are shown below. Instances 0, 1, 2, 3, 4, and 5 all contain noise.
(2)Check Point Cloud clustering denoising and run with the default values. If clustering succeeds, categories in the instance Point Cloud whose point counts are greater than 100 and less than 100000 will all be retained, while categories whose point counts are less than 100 or greater than 100000 will be filtered out.
The Point Cloud after clustering denoising can be viewed in the visualization window
(3)Increase Minimum point count threshold to 400. The 2nd and 3rd Point Cloud categories of instance 0 will be filtered out, the 1st Point Cloud category of instance 1 will be filtered out, the Point Cloud of instance 2 will not be filtered, the 3rd Point Cloud category of instance 3 will be filtered out, the Point Cloud of instance 4 will not be filtered, and the 1st Point Cloud category of instance 5 will be filtered out.
(4)Check whether to select the top Point Cloud in the ROI to retain only the Point Cloud category with the largest average Z coordinate in the ROI coordinate system. For instances 0, 1, 2, 3, 4, and 5, only the top Point Cloud will be retained, and the lower-layer noise will all be filtered out.
If the noise is above and the Target Object Point Cloud is below, the Z-axis of the ROI should be adjusted downward in the visualization interface so that the average Z coordinate of the Target Object Point Cloud is the largest and the Target Object Point Cloud is retained; otherwise, the retained top Point Cloud will be noise
(5)If clustering fails, the log error is shown below. In that case, increase the distance threshold for Point Cloud clustering. The distance threshold for Point Cloud clustering should be greater than the point spacing of the Target Object Point Cloud and smaller than the minimum distance between the Target Object Point Cloud and the noise Point Cloud.

Use MeshLab to measure the point spacing of the Target Object Point Cloud in the generated instance Point Cloud and the minimum distance between the Target Object Point Cloud and the noise Point Cloud, as shown below



If the number of points in the Target Object Point Cloud is greater than the maximum point count threshold, the Target Object Point Cloud may be completely filtered out because its point count exceeds 100000, causing clustering to fail. In that case, increase the maximum point count threshold.
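The clustering-denoising behaviour walked through in steps (1)–(5) can be sketched as Euclidean clustering followed by point-count filtering and an optional top-cluster selection. Brute-force adjacency is used here for brevity (a KD-tree would be used at scale); all names and the demo cloud are illustrative, not PickWiz's implementation.

```python
import numpy as np

def cluster_denoise(points, dist_thresh=5.0, min_pts=100, max_pts=100000,
                    keep_top_only=False):
    """Euclidean clustering denoising: points closer than `dist_thresh` join
    the same cluster; clusters outside [min_pts, max_pts] are dropped as noise."""
    n = len(points)
    # Pairwise adjacency is fine for small demo clouds; use a KD-tree at scale.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    adj = d2 <= dist_thresh ** 2
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):                    # flood-fill connected components
        if labels[seed] != -1:
            continue
        frontier = [seed]
        labels[seed] = current
        while frontier:
            idx = frontier.pop()
            for nb in np.flatnonzero(adj[idx]):
                if labels[nb] == -1:
                    labels[nb] = current
                    frontier.append(nb)
        current += 1
    clusters = [np.flatnonzero(labels == c) for c in range(current)]
    clusters = [c for c in clusters if min_pts <= len(c) <= max_pts]
    if keep_top_only and clusters:
        # "Top" cluster = largest mean Z (Z pointing up, as in the ROI frame).
        clusters = [max(clusters, key=lambda c: points[c, 2].mean())]
    return clusters

# 200-point Target Object cluster 100 mm above a 50-point noise cluster.
rng = np.random.default_rng(1)
target = rng.uniform(0, 3, (200, 3)) + [0.0, 0.0, 100.0]
noise = rng.uniform(0, 3, (50, 3))
cloud = np.vstack([target, noise])
clusters = cluster_denoise(cloud, dist_thresh=5.0, min_pts=100)
```

With the default-style minimum of 100 points, the 50-point noise cluster is filtered out and only the Target Object cluster survives; lowering `min_pts` and setting `keep_top_only=True` instead keeps the cluster with the largest mean Z, mirroring "Select the top Point Cloud in the ROI".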
2.1.2 Point Cloud Downsampling

- Function
Sample the Point Cloud according to the specified point spacing to reduce the number of computed points and improve model inference speed, although accuracy may decrease
- Applicable Scenarios
When the number of Point Cloud points in the actual scenario is too large, check Point Cloud Downsampling.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Point spacing for downsampling (mm) | Sample the Point Cloud according to the specified point spacing | 5.0 | [0.1, 1000] | mm |
- Parameter Tuning
The larger the value of Point spacing for downsampling, the fewer points remain after downsampling, so Pick Point computation becomes faster, but accuracy may decrease
The smaller the value of Point spacing for downsampling, the more points remain after downsampling, so Pick Point computation becomes slower, but accuracy improves
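Voxel-grid downsampling is a common way to realize spacing-based downsampling and can be sketched as below; whether PickWiz uses voxel centroids specifically is an assumption, and the names are illustrative.

```python
import numpy as np

def voxel_downsample(points, spacing=5.0):
    """Keep one point (the voxel centroid) per cubic voxel of side `spacing`."""
    keys = np.floor(points / spacing).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.empty((len(counts), points.shape[1]))
    for dim in range(points.shape[1]):
        # Centroid per voxel: sum of coordinates divided by point count.
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# 5000 points in a 100 mm cube collapse to at most 10^3 voxel centroids.
dense = np.random.default_rng(2).uniform(0, 100, (5000, 3))
sparse = voxel_downsample(dense, spacing=10.0)
```

Doubling the spacing roughly divides the voxel count by eight, which is why a larger point spacing speeds up Pick Point computation at the cost of accuracy.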
2.1.3 Compute Normal Vectors

- Function
Compute Point Cloud normal vectors for subsequent Point Cloud processing
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Fix normal vector orientation | Whether to fix the orientation when computing normal vectors. When enabled, the orientation of the normal vector is determined by the orientation reference vector. | Checked | / |
| Neighbor point count for normal vector computation | The larger the value, the more neighboring points are referenced, but local variations may be ignored; the smaller the value, the opposite applies. | 30 | [1,200] |
| Orientation reference vector | Orientation reference vector for normal vector computation | [0,0,1] | / |
- Parameter Tuning
Cannot be changed
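Neighbourhood-PCA normal estimation with orientation fixing against a reference vector, as the parameters above describe, can be sketched as follows. The brute-force neighbour search and function names are illustrative, not the product's implementation.

```python
import numpy as np

def estimate_normals(points, k=30, reference=(0.0, 0.0, 1.0)):
    """Per-point normals from neighbourhood PCA, oriented toward `reference`."""
    ref = np.asarray(reference, dtype=float)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    normals = np.zeros_like(points, dtype=float)
    for i in range(len(points)):
        nb = points[np.argsort(d2[i])[:k]]       # k nearest neighbours
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0))
        n = vt[-1]                               # smallest-variance direction
        normals[i] = n if n @ ref >= 0 else -n   # fix the orientation
    return normals

# Points sampled on the plane z = 0: every normal should come out as +Z.
gx, gy = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 10, 8))
plane = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
normals = estimate_normals(plane, k=8)
```

A larger `k` averages over more neighbours and smooths out local variation, matching the tuning note in the table.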
2.1.4 Point Cloud Contour Extraction

- Function
Extract the contour of the Target Object from the instance Point Cloud
- Applicable Scenarios
When using 2.2.4 Enable contour mode, Point Cloud contour extraction should also be checked
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit | Tuning Recommendation |
|---|---|---|---|---|---|
| Reference radius (mm) | Search radius for extracting contours in the instance Point Cloud | 10 | [0.1,10000000000] | mm | The reference radius is recommended to be set to 1/2 of the point spacing used for downsampling in 2.1.2 Point Cloud Downsampling, and it must be greater than the Point Cloud spacing |
| Point Cloud contour search mode | Mode for searching Point Cloud contours | Normal mode | Normal mode; plane mode | / | Normally select normal mode; for planar Target Objects, select plane mode |
- Example
2.1.5 Filter Point Clouds by HSV Color Range (Hue, Saturation, Value)

- Function
Filter Point Clouds according to hue, saturation, and value in the Point Cloud image, and screen out Point Cloud regions that match the target range
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Filter depth by HSV - maximum color range value | Maximum color value for filtering Point Clouds | [0.9,0.9,0.9] | [[0,0,0],[1,1,1]] |
| Filter depth by HSV - minimum color range value | Minimum color value for filtering Point Clouds | [0.0,0.0,0.0] | [[0,0,0],[1,1,1]] |
- Example
2.1.6 Filter Point Clouds by Three-Channel Color

- Function
Filter Point Clouds by three-channel color to screen out Point Cloud regions that match the target range
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Filter Point Clouds by three-channel color - maximum color value | Maximum color value for filtering Point Clouds | [0.9,0.9,0.9] | [[0,0,0],[1,1,1]] |
| Filter Point Clouds by three-channel color - minimum color value | Minimum color value for filtering Point Clouds | [0.0,0.0,0.0] | [[0,0,0],[1,1,1]] |
- Example
2.1.7 Select Point Clouds Within the ROI Area

- Function
Select Point Clouds within the ROI 3D area from the instance Point Cloud. This default function cannot be deleted.
- Example
2.1.8 Remove Points Whose Normals Exceed the Angle Threshold

- Function
Remove Point Cloud points whose angle between the normal vector and the axis direction of the standard normal vector is greater than the normal vector angle threshold
- Applicable Scenarios
Planar Target Object loading/unloading (materials are mutually isolated)
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit |
|---|---|---|---|---|
| Angle threshold | Points whose normal vector forms an angle with the standard normal vector axis direction greater than this threshold are removed | 15 | [-360, 360] | ° |
| Standard normal vector axis direction | The angle formed between the Point Cloud normal vector and the standard normal vector axis direction | Z-axis | X/Y/Z-axis | / |
| Whether to use the ROI coordinate system | When checked, calculates the angle between the normal vector and the axes of the ROI coordinate system; when unchecked, calculates the angle between the normal vector and the axes of the Camera coordinate system | Unchecked | / | / |
- Parameter Tuning
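The angle test can be sketched in a few lines; the axis, names, and demo normals are illustrative assumptions.

```python
import numpy as np

def remove_by_normal_angle(points, normals, angle_thresh_deg=15.0,
                           axis=(0.0, 0.0, 1.0)):
    """Drop points whose normal deviates from `axis` by more than the threshold."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    cos = np.clip(normals @ axis, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos))
    keep = angles <= angle_thresh_deg
    return points[keep], normals[keep]

# Normals at 0, 10 and 90 degrees from the Z-axis, 15-degree threshold:
pts = np.zeros((3, 3))
nrm = np.array([[0.0, 0.0, 1.0],
                [0.0, np.sin(np.radians(10)), np.cos(np.radians(10))],
                [0.0, 1.0, 0.0]])
kept_pts, kept_nrm = remove_by_normal_angle(pts, nrm, angle_thresh_deg=15.0)
```

The 90-degree normal (a vertical side face) is removed, while the near-vertical normals on the top face survive, which is the effect wanted for isolated planar materials.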
2.1.9 Point Cloud Plane Segmentation

- Function
Retain or remove the plane with the largest number of points in the instance Point Cloud
- Applicable Scenarios
The instance Point Cloud contains a noisy plane
- Parameter Description
| Parameter | Description | Default Value | Value Range | Unit | Tuning Recommendation |
|---|---|---|---|---|---|
| Reference distance for plane fitting (mm) | If the distance between a point and the plane is lower than the reference distance, the point is considered to be on the plane; otherwise, it is considered to be outside the plane | 3 | [0.001,10000] | mm | Generally does not need to be changed |
| Remove plane | When checked, removes the plane with the largest number of points; when unchecked, retains the plane with the largest number of points | Unchecked | / | / | If the plane with the largest number of points is the Target Object, retain the plane and leave this unchecked; if the plane with the largest number of points is noise, remove the plane and check this option |
- Example
2.1.10 Point Cloud Outlier Removal

- Function
Identify and remove outlier noise in the Point Cloud to improve Point Cloud quality
- Applicable Scenarios
The instance Point Cloud contains many outlier noise points
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Reference neighbor point count | The number of neighboring points around each point in the Point Cloud, that is, the neighborhood size. For dense Point Clouds, even a small neighborhood is sufficient to reflect the features of the Target Object, so a smaller value can be used; for sparse Point Clouds, a larger neighborhood is needed to reflect the features of the Target Object, so a larger value should be used. | 30 | [1, 10000000] |
| Standard deviation multiplier | Used to identify outlier noise. If the deviation of a point's coordinates from the mean coordinates of the instance Point Cloud exceeds the standard deviation multiplier, the point is considered an outlier. The smaller the value, the more points are considered outliers and removed, but this may cause misjudgment and remove important Target Object features; the larger the value, the fewer points are considered outliers and removed, but some outliers may be retained and affect recognition accuracy. | 0.005 | [0.0001, 2] |
- Parameter Tuning
Generally does not need to be changed. If the Point Cloud becomes too sparse after Point Cloud Outlier Removal, the standard deviation multiplier should be increased
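A minimal sketch of the classic statistical outlier-removal scheme this function is based on (plain NumPy; the exact formula and parameter scaling inside the product may differ):

```python
import numpy as np

def remove_outliers(points, k=30, std_multiplier=2.0):
    # Mean distance from each point to its k nearest neighbours
    # (brute force; fine for small clouds).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self at column 0
    # A point counts as an outlier if its mean neighbour distance deviates
    # from the global mean by more than std_multiplier standard deviations.
    cutoff = knn_mean.mean() + std_multiplier * knn_mean.std()
    return points[knn_mean <= cutoff]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(200, 3))        # dense inlier cloud
cloud = np.vstack([cloud, [[500.0, 500.0, 500.0]]])  # one far-away outlier
filtered = remove_outliers(cloud, k=10)
```

A larger `k` averages over a bigger neighbourhood (needed for sparse clouds); a larger multiplier removes fewer points, matching the tuning advice above.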
- Example
2.1.11 Filter Out Point Clouds Whose Object Distance Exceeds the Limit

- Function
Filter out Point Clouds in the specified direction to remove noise and improve image recognition accuracy
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Tuning Recommendation |
|---|---|---|---|---|---|
| Specified axis | Specified axis of the Point Cloud, used to filter out Point Clouds in the specified direction | Z-axis | X/Y/Z-axis | / | Specified axis generally does not need to be changed |
| Threshold (mm) | In the direction of the specified axis, if the distance between the lower-layer Point Cloud and the Target Object Point Cloud is greater than this threshold, the lower-layer Point Cloud will be filtered out; if the distance is less than this threshold, the lower-layer Point Cloud will be retained | 750 | [0, 1000] | mm | Adjust the threshold according to the actual scenario. The larger the threshold, the fewer Point Clouds are filtered out; the smaller the threshold, the more Point Clouds are filtered out. |
| Select coordinate system | Filter out Point Clouds under the selected coordinate system | ROI coordinate system | Camera coordinate system; ROI coordinate system; Target Object coordinate system | / | |
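The thresholding behaviour can be sketched as follows (an illustrative NumPy snippet, assuming the reference is the point closest to the camera along the chosen axis; the product's reference choice may differ):

```python
import numpy as np

def filter_far_points(points, axis=2, threshold=750.0):
    # Reference: the closest point along the chosen axis (the top of
    # the Target Object in camera coordinates).
    ref = points[:, axis].min()
    return points[(points[:, axis] - ref) <= threshold]

# Two layers 900 mm apart along Z: the lower layer exceeds the
# 750 mm threshold and is filtered out.
top = np.column_stack([np.arange(10.0), np.zeros(10), np.full(10, 1000.0)])
bottom = np.column_stack([np.arange(10.0), np.zeros(10), np.full(10, 1900.0)])
kept = filter_far_points(np.vstack([top, bottom]))
```

Raising the threshold above 900 would keep both layers, which is why a smaller threshold filters out more of the lower-layer cloud.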
- Example
2.1.12 Point Cloud Downsampling (Target Object Point Cloud)

- Function Overview
The scene Point Cloud used for fine matching can be downsampled (different from coarse matching). When using it, it is recommended to add it and move it to the top of the preprocessing function list.
- Applicable Scenarios
Scenarios that use fine matching but have high processing time and cannot meet the overall takt time requirement.
- Parameter Description
| Parameter | Description | Default Value | Value Range | Unit |
|---|---|---|---|---|
| Point spacing for downsampling (mm) | Sample the Point Cloud according to the specified point spacing to reduce the number of points and improve vision computation speed | 1 | [0.1,1000] | mm |
- Parameter Tuning
The larger the value, the larger the point spacing for downsampling, the fewer points remain after downsampling, and the faster the vision computation speed, but accuracy may decrease;
the smaller the value, the smaller the point spacing for downsampling, the more points remain after downsampling, and the slower the vision computation speed, but accuracy may improve.
After running, the Point Cloud template in the Target Object configuration can be updated to the downsampled Target Object Point Cloud in the historical data, so as to appropriately reduce coarse matching time.
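The point-spacing trade-off can be illustrated with a simple voxel-grid downsampling sketch (plain NumPy; not the product's exact algorithm):

```python
import numpy as np

def voxel_downsample(points, spacing):
    # Keep one point per cubic cell of side `spacing`: a larger spacing
    # means fewer surviving points and faster matching.
    keys = np.floor(points / spacing).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

rng = np.random.default_rng(1)
dense = rng.uniform(0.0, 100.0, size=(5000, 3))  # coordinates in mm
coarse = voxel_downsample(dense, spacing=10.0)   # fast, fewer points
fine = voxel_downsample(dense, spacing=2.0)      # slower, more points
```

With a 10 mm spacing the 100 mm cube collapses to at most 1000 occupied cells, while a 2 mm spacing keeps most of the original points.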
- Example
2.1.13 [Expert] Read Instance Point Cloud

- Function Overview
Read the instance Point Cloud
- Applicable Scenarios
Planar Target Object scenarios
- Parameter Description
| Parameter | Description | Default Value | Value Range | Unit |
|---|---|---|---|---|
| Point Cloud path | Target Object Point Cloud path. If not filled in, the Point Cloud uploaded in the Target Object interface is used. | / | / | / |
2.1.14 Optimize the Mask According to the Point Cloud

- Function
Based on the Point Cloud within ROI 3D, remove Point Cloud data in the mask that is not within ROI 3D, thereby improving mask accuracy
2.2 Point Cloud Matching Pose Estimation
2.2.1 Model Point Cloud Downsampling Size (mm)

- Function
Before coarse matching, the size of the sampling box used when downsampling the template Point Cloud
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, planar Target Object positioning and assembly (matching only)
- Parameter Description
Default value: 5
Value range: [0.001, 500]
Unit: mm
- Parameter Tuning
- The larger the size, the fewer points are retained during downsampling, and the fewer points remain in the template Point Cloud after downsampling
It is recommended that the number of points in the template Point Cloud after downsampling be less than 300


- If the log reports the error "The number of model points is greater than 1000. Please appropriately increase the model Point Cloud downsampling size Parameter in 3D matching", then model Point Cloud downsampling size should be increased to reduce the number of points in the template Point Cloud after downsampling;


2.2.2 Repeated Sampling Ratio Coefficient

- Function
The degree of repeated sampling for point pairs in the processed template Point Cloud
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, planar Target Object positioning and assembly (matching only)
- Parameter Description
Default value: 1
Value range: [0.1,1]
- Parameter Tuning
- The larger the value, the higher the success rate of coarse matching, but the longer the coarse matching time



- If the log reports the error "Please adjust parameters such as the point spacing for downsampling in 3D matching, model Point Cloud downsampling size, repeated sampling ratio coefficient, etc. For details, refer to the CPFV tuning guide", then Repeated sampling ratio coefficient should be increased. It is generally recommended to start tuning from 1. A value of 1 is the slowest but gives the highest fine matching success rate, while 0.1 is the fastest but gives the poorest fine matching success rate.

- If the log reports the error "Timeout warning triggered, please refer to the CPFV tuning guide to modify the parameters", then Repeated sampling ratio coefficient should be reduced

2.2.3 Pose Angle Prior

- Function
Before coarse matching, provide the approximate orientation of the Target Object in the ROI coordinate system in advance to improve the accuracy of coarse matching
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object positioning and assembly, and planar Target Object positioning and assembly (matching only), where the rotational symmetry of the planar Target Object itself leads to poor coarse matching and fine matching results
Not applicable to planar Target Object random picking scenarios
- Parameter Description
The value contains 4 numbers. The first 3 numbers represent the direction vector of the Target Object, and the 4th number uses 0/1/2 to represent the X/Y/Z axis of the ROI coordinate system respectively.
Default value: []
Value range: [1,0,0,0] indicates the positive X-axis direction, and [-1,0,0,0] indicates the negative X-axis direction;
[0,1,0,1] indicates the positive Y-axis direction, and [0,-1,0,1] indicates the negative Y-axis direction;
[0,0,1,2] indicates the positive Z-axis direction, and [0,0,-1,2] indicates the negative Z-axis direction.
Format: [[1,0,0,0]], with two square brackets; directions of the X/Y/Z axes can be combined, for example, [[1,0,0,0],[0,1,0,1],[0,0,1,2]] indicates the positive X-axis, positive Y-axis, and positive Z-axis directions
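A hypothetical helper (not a PickWiz API) can make the 4-number format concrete: each entry carries 3 direction components plus an axis index of 0/1/2:

```python
# Hypothetical validator for Pose Angle Prior entries, assuming each
# entry is [dx, dy, dz, axis_index] as described above.
def check_prior(prior):
    for entry in prior:
        assert len(entry) == 4, "each entry needs 4 numbers"
        direction, axis_index = entry[:3], entry[3]
        assert axis_index in (0, 1, 2), "4th number selects the X/Y/Z axis"
        assert sorted(abs(c) for c in direction) == [0, 0, 1], "unit axis direction"
    return True

# Positive X, positive Y, and positive Z directions combined:
check_prior([[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 2]])
```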
- Example
Determine according to the orientation of the planar Target Object. As shown below, the planar Target Object faces the positive X-axis direction, so Pose Angle Prior can be set to [[1,0,0,0]]

2.2.4 Enable Contour Mode

- Function
Extract the contour of the instance Point Cloud and use the downsampled contour of the template Point Cloud and the downsampled contour of the instance Point Cloud for coarse matching
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, and planar Target Object positioning and assembly (matching only). Since the contour features of the Target Object are relatively obvious, using contour mode can avoid poor coarse matching caused by the planar Target Object easily slipping.
When using contour mode, 2.1.4 Point Cloud Contour Extraction should also be checked to extract contours from the instance Point Cloud
- Example
- When contour mode is used, the log will display the prompt "Using contour mode, found number of contour points"

Contour Mode

- Function
Extract the contour of the template Point Cloud and use the downsampled contour of the template Point Cloud and the downsampled contour of the instance Point Cloud for coarse matching
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, and planar Target Object positioning and assembly (matching only). If the coarse matching result is poor, check this function to use contour Point Clouds for coarse matching again.
- Parameter Tuning
Normal mode: default
Plane mode: for planar Target Objects, select plane mode
- Example



2.2.5 Search Radius for Coarse Registration Contours

- Function
Use the downsampled contour of the template Point Cloud and the downsampled contour of the instance Point Cloud for coarse matching, and set the search radius for extracting contour Point Clouds from the template Point Cloud and instance Point Cloud
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, planar Target Object positioning and assembly (matching only)
- Parameter Description
Default value: 0.005
Value range: [0.0001,0.5]
Unit: m
2.2.6 Object Pose Correction

Fine Matching Search Radius (mm)

- Function
During fine matching, the template Point Cloud is matched with the instance Point Cloud, and every point in the template Point Cloud needs to search for its nearest point in the instance Point Cloud. The fine matching search radius represents both the search radius in the instance Point Cloud and the distance threshold between each point in the template Point Cloud and the nearest point in the instance Point Cloud. If the distance between a point and its nearest point is less than the fine matching search radius, the two points are considered matchable; otherwise, they are considered not matchable.
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, and planar Target Object positioning and assembly scenarios
- Parameter Description
Default value: 10
Value range: [1, 500]
Unit: mm
- Parameter Tuning
Usually does not need to be changed
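The role of the search radius can be sketched in a few lines of NumPy (an illustrative nearest-point correspondence test, not the product's implementation):

```python
import numpy as np

def match_within_radius(template, instance, radius):
    # Nearest instance point for every template point; a pair counts as
    # a valid correspondence only if it falls inside the search radius.
    d = np.linalg.norm(template[:, None, :] - instance[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    matchable = d[np.arange(len(template)), nearest] < radius
    return nearest, matchable

tpl = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
inst = np.array([[2.0, 0.0, 0.0], [500.0, 0.0, 0.0]])
nearest, matchable = match_within_radius(tpl, inst, radius=10.0)
```

The first template point finds a neighbour 2 mm away (matchable); the second template point's nearest neighbour is 98 mm away, beyond the 10 mm radius, so the pair is rejected.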
Fine Matching Search Mode

- Function
The way the template Point Cloud searches for nearest points in the instance Point Cloud during fine matching
- Applicable Scenarios
If the fine matching result between the template Point Cloud and the instance Point Cloud is poor, this function should be adjusted
- Parameter Description
| Parameter | Description |
|---|---|
| Point-to-point | Each point in the template Point Cloud searches for its nearest point in the instance Point Cloud (the point with the shortest straight-line distance within the search radius), which is suitable for all Target Objects |
| Point-to-plane | Each point in the template Point Cloud searches for its nearest point in the instance Point Cloud along its normal vector, which is suitable for Target Objects with obvious geometric features |
| Combination of point-to-point and point-to-plane | First use point-to-point mode to optimize the Target Object pose in the instance Point Cloud, and then use point-to-plane mode to optimize it further. This is suitable for Target Objects with obvious geometric features. |
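The difference between the two residuals can be shown for a single correspondence (an illustrative calculation, not the product's internal formulation): point-to-plane measures error only along the surface normal, so it ignores sliding along the surface, which is why it suits Target Objects with clear geometric features.

```python
import numpy as np

p = np.array([0.0, 0.0, 1.0])   # template point
q = np.array([3.0, 0.0, 0.0])   # nearest instance point
n = np.array([0.0, 0.0, 1.0])   # surface normal at q

point_to_point = np.linalg.norm(p - q)    # straight-line distance
point_to_plane = abs(np.dot(p - q, n))    # distance measured along the normal
```

Here the point-to-point residual is about 3.16 while the point-to-plane residual is only 1, because the 3 mm offset lies within the plane.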
Use Contour Mode

- Function
Extract contour Point Clouds from the template Point Cloud and the instance Point Cloud for coarse matching
- Applicable Scenarios
In planar Target Object ordered loading/unloading, planar Target Object random picking, and planar Target Object positioning and assembly scenarios, if the result of coarse matching using keypoints is poor, this function should be checked to use contour Point Clouds for coarse matching again
- Parameter Tuning
The coarse matching result affects the fine matching result. If the fine matching result is poor, you can check Use Contour Mode
Contour Search Range (mm)

- Function
Search radius for extracting contour Point Clouds from the template Point Cloud and the instance Point Cloud
- Applicable Scenarios
General Target Object ordered loading/unloading, general Target Object random picking, and general Target Object positioning and assembly scenarios
- Parameter Description
Default value: 5
Value range: [0.1, 500]
Unit: mm
- Parameter Tuning
If the value is smaller, the radius for searching contour Point Clouds is smaller, which is suitable for extracting fine Target Object contours, but the extracted contours may contain outlier noise;
If the value is larger, the radius for searching contour Point Clouds is larger, which is suitable for extracting wider Target Object contours, but the extracted contours may ignore some detailed features.
Save Pose Estimation [Fine Matching] Data

- Function
When checked, saves fine matching data
- Applicable Scenarios
Planar Target Object ordered loading/unloading, planar Target Object random picking, planar Target Object positioning and assembly, planar Target Object positioning and assembly (matching only)
- Example
Fine matching data is saved in the following project path: \Project Folder\data\PickLight\Historical Data Timestamp\Builder\pose\output folder.

2.2.7 Pose Adjustment Based on Axis Rotation

- Function
Rotate and translate the instance Point Cloud around the given axis based on the first Pick Point of the Target Object, calculate the matching score between the template Point Cloud and the instance Point Cloud after each rotation and translation, and select the instance Point Cloud with the highest matching score as the final pose of the rotationally symmetric Target Object.
- Applicable Scenarios
Deviation occurs when matching the instance Point Cloud of a rotationally symmetric Target Object with the template Point Cloud, and the Point Cloud can be fully matched only after rotating by a certain angle
Cannot be used together with Recognition Type, front/back recognition (via Point Cloud template), or local feature recognition in the function options
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Tuning Recommendation |
|---|---|---|---|---|
| Rotation angle interval | Rotate the instance Point Cloud by equal angular intervals. The difference between two adjacent rotations is the rotation angle interval. For example, if the first rotation is 30°, the second is 60°, and the third is 90°, then the rotation angle interval is 30°. | 5 | [1, 180] | If the Target Object has many features, is difficult to match, and high-precision matching is required, a smaller angle interval can be set to perform more matching operations, but the computation load will increase. If the Target Object shape is simple and has fewer features, a larger angle interval can be set to improve computation efficiency. |
| Rotation angle range | How large an angle range the instance Point Cloud can rotate from its initial state. | 90 | [1, 180] | |
| Template Point Cloud file path | Upload the template Point Cloud file of the Target Object. If not uploaded, the Point Cloud template uploaded in the Target Object interface is used. | / | / | |
| Evaluation mode | Evaluate the quality of the matching result from different perspectives | Iso-target | Iso-target; Iso-source; Average; Strict; Loose; Fast | |
| ICP threshold | The criterion for determining whether registration is successful. If the error of the matching result is less than this threshold, the matching is successful; if the error is greater than this threshold, the matching is unsuccessful. | 0.005 | [0.000001, 1] | Generally does not need to be changed. If the actual scenario requires higher matching precision, reduce this threshold; if the actual scenario requires higher matching speed and lower precision, increase this threshold. If the Target Object Point Cloud quality is good, the threshold can be reduced; if the Target Object Point Cloud quality is poor, the threshold can be increased. |
| Rotation axis selection | The axis around which the instance Point Cloud rotates | Z-axis | X/Y/Z-axis | Generally does not need to be changed. If one axis of the rotationally symmetric Target Object is critical for recognizing the Target Object pose, that feature axis can be used as the rotation axis. |
| Save visualization data | Whether to save visualization data | Unchecked | / | |
| Use edges for optimization | Use the edge contour of the Target Object to optimize matching, reduce matching result error, and make the matching between the instance Point Cloud and the template Point Cloud finer | Unchecked | / | If the edge contour of the Target Object has unique geometric features, check this function to improve matching accuracy. For example, for Target Objects with complex shapes and large differences in edge contours, using the edge contour Point Cloud can identify the Target Object pose more accurately. |
| Edge Point Cloud file path | Upload the edge Point Cloud file of the Target Object. If not uploaded, the edge Point Cloud is extracted from the Point Cloud template uploaded in the Target Object interface. | / | / | |
| Optimize rotation result | After finding the best pose during matching, optimize it again to reduce matching result error and make the matching between the instance Point Cloud and the template Point Cloud finer | Checked | / | Checked by default to improve matching accuracy and generally does not need to be changed |
| Optimization mode | Mode for optimizing the matching result | Point | Point; Plane; Full | |
| Optimization threshold | The criterion for determining whether registration has reached the expected accuracy during optimization. If the registration error is less than this threshold, optimization succeeds; if the registration error is greater than this threshold, optimization fails and iteration must continue. | 0.002 | [0.0001, 1] | Generally does not need to be changed. Reduce this threshold for scenarios with high registration precision requirements; increase this threshold for scenarios with lower registration precision requirements. |
| Enable center movement mode | When enabled, translation mode is also added, so translation is superimposed while rotating the instance Point Cloud | Unchecked | / | |
| x-axis movement range | Movement range along the Pick Point x-axis in center movement mode (mm) | 0 | [0,100] | |
| x-axis movement step size | Movement step size along the Pick Point x-axis in center movement mode (mm) | 2 | [0.01,10] | The step size must be set reasonably according to offset conditions and accuracy requirements. A step size that is too small will greatly increase takt time. |
| y-axis movement range | Movement range along the Pick Point y-axis in center movement mode (mm) | 0 | [0,100] | |
| y-axis movement step size | Movement step size along the Pick Point y-axis in center movement mode (mm) | 2 | [0.01,10] | The step size must be set reasonably according to offset conditions and accuracy requirements. A step size that is too small will greatly increase takt time. |
| z-axis movement range | Movement range along the Pick Point z-axis in center movement mode (mm) | 0 | [0,100] | |
| z-axis movement step size | Movement step size along the Pick Point z-axis in center movement mode (mm) | 2 | [0.01,10] | The step size must be set reasonably according to offset conditions and accuracy requirements. A step size that is too small will greatly increase takt time. |
- Example



Axial matching offset

After enabling center movement mode
- Description of the number of poses evaluated in pose adjustment based on axis rotation
Total number of pose computations = number of x-axis movement points * number of y-axis movement points * number of z-axis movement points * number of angle points
An excessively large movement range or an excessively small step size will increase the total number of poses that need to be evaluated and lengthen takt time. The range and step size parameters must be set reasonably.
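The multiplication above can be made concrete with a small estimator (the exact counting convention, inclusive of the zero offset, is an assumption; the point is how range and step choices multiply):

```python
# Each argument is a (range, step) pair; angle_range/angle_step cover the
# rotation sweep. Counting includes the zero offset as one point.
def n_steps(search_range, step):
    return int(search_range / step) + 1

def total_poses(x=(0, 2), y=(0, 2), z=(0, 2), angle_range=90, angle_step=5):
    return (n_steps(*x) * n_steps(*y) * n_steps(*z)
            * n_steps(angle_range, angle_step))

defaults = total_poses()                                 # movement ranges of 0 -> rotation only
dense = total_poses(x=(10, 2), y=(10, 2), angle_step=1)  # wider sweep, finer angles
```

Under this convention the defaults evaluate 19 poses, while a 10 mm x/y range with 1° angle steps jumps to 3276, illustrating how quickly takt time grows.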
2.3 Empty ROI Determination

- Function
Determine whether any Target Objects (Point Clouds) remain within ROI 3D. If the number of 3D points in ROI 3D is less than this value, it indicates that no Target Object Point Cloud remains, and in this case no Point Cloud is returned
- Parameter Description
Default value: 1000
Value range: [0, 100000]
- Usage Workflow
Set the minimum point-count threshold for ROI 3D. If it is lower than this threshold, the Target Object Point Cloud in ROI 3D is insufficient, and it is therefore determined that there is no Target Object in ROI 3D;
In the Robot configuration, add a new vision status code to facilitate subsequent Robot signal processing.
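The determination itself is a simple threshold test, sketched below (illustrative only; the status-code signalling happens in the Robot configuration):

```python
def roi_is_empty(num_points_in_roi, min_points=1000):
    # Fewer points than the threshold -> no Target Object remains in
    # ROI 3D, and a corresponding vision status code can be sent.
    return num_points_in_roi < min_points
```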
3. Pick Point Processing
This section mainly explains functions related to Pick Point filtering and adjustment, along with Parameter tuning recommendations.
3.1 Pick Point Adjustment

3.1.1 Rotate the Picking Pose when it is outside the angle range

- Function description
When the Picking Pose is outside the configured angle range, it is rotated counterclockwise by a certain angle around a fixed axis. If it is still outside the configured angle range after rotation, a warning is issued.
- Usage scenario
This function is only applicable to depalletizing scenarios. It can keep the robot's approach direction stable during picking and prevent the end effector from repeatedly rotating during the picking process. In cases where the pose would otherwise flip by 180°, it can prevent exceptions such as cable twisting.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Fixed axis | An axis of the Picking Pose. The pose is rotated counterclockwise around this fixed axis | Z-axis | X/Y/Z-axis | / |
| Rotation angle | The angle by which the pose is rotated counterclockwise around the fixed axis. Adjust this angle so the Picking Pose satisfies the angle range | 0 | [-360,360] | degree |
| Angle range | The angle range of the Picking Pose. Set the angle range according to factors such as material placement, end effector type, and cycle time | [0,180] | [-180,180] | degree |
| Use current robot Euler Angles | By default, pose calculation uses Euler Angles "XYZ". When selected, the Euler Angles configured for the current robot are used so the pose remains consistent with the robot teach pendant. | Unchecked | / | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Robot arm coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system | / |
- Example
Without using this function, the generated Pick Points are shown below.

When this function is used with the default values, the RZ angles of the Picking Poses for instances 0, 1, and 2 are all within the angle range [0,180], so no processing is performed. The RZ angle of the Picking Pose for instance 4 is -90°, which is outside the angle range [0,180], so the Picking Pose of instance 4 is rotated by 0° around the fixed Z-axis.




If you want to adjust the RZ angle of the Picking Pose for instance 4 into the angle range, you can change the rotation angle to 180 and rotate the Picking Pose of instance 4 by 180° around the fixed Z-axis.
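The check-then-rotate logic of this example can be sketched for the RZ angle alone (a simplified 1-D sketch; the product rotates the full pose about the fixed axis):

```python
def adjust_rz(rz_deg, angle_range=(0, 180), rotation_deg=0):
    lo, hi = angle_range
    if lo <= rz_deg <= hi:
        return rz_deg, False                 # already in range: no processing
    new_rz = (rz_deg + rotation_deg + 180) % 360 - 180  # rotate, wrap to [-180, 180)
    return new_rz, not (lo <= new_rz <= hi)  # True -> still outside: warn

# Instance 4 from the example: RZ = -90 is outside [0, 180].
# The default rotation angle of 0 leaves it outside (warning),
# while a rotation angle of 180 brings it to 90, inside the range.
rz, warn = adjust_rz(-90, rotation_deg=180)
```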


3.1.2 Rotate the Picking Pose so the rotation axis direction matches the target axis direction

- Function description
Rotate the Picking Pose once around the fixed axis so that the direction of the rotation axis (determined by the right-hand rule) matches the positive or negative direction of the target axis in the target coordinate system.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Rotation axis | An axis of the Picking Pose. Determined by the right-hand rule, the Picking Pose is rotated counterclockwise once around the fixed axis so that the direction of the rotation axis matches the positive or negative direction of the target axis in the target coordinate system | X-axis | X/Y/Z-axis |
| Fixed axis | The Picking Pose is rotated counterclockwise once around the fixed axis so that the direction of the rotation axis matches the positive or negative direction of the target axis in the target coordinate system | Z-axis | X/Y/Z-axis |
| Target axis | An axis of the target coordinate system. The Picking Pose is rotated counterclockwise once around the fixed axis so that the direction of the rotation axis matches the positive or negative direction of the target axis in the target coordinate system | X-axis | X/Y/Z-axis |
| Negative target axis direction | If selected, the direction of the rotation axis is aligned with the negative direction of the target axis in the target coordinate system; otherwise, it is aligned with the positive direction of the target axis in the target coordinate system | Unchecked | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system |
- Example
3.1.3 Rotate the Picking Pose so the angle between the rotation axis and the target axis is minimized

- Function description
Rotate the Picking Pose around the fixed axis by 0, 90, 180, and 270 degrees respectively, calculate the angle between the rotated rotation axis and the positive or negative direction of the target axis in the camera coordinate system, and finally output the Picking Pose with the smallest angle after rotation.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Fixed axis | An axis of the Picking Pose. Rotate the pose counterclockwise around this fixed axis | Z-axis | X/Y/Z-axis |
| Rotation axis | An axis of the Picking Pose. When rotating the pose, calculate the angle between this rotation axis and the positive or negative direction of the target axis | X-axis | X/Y/Z-axis |
| Target axis | An axis of the camera coordinate system. When rotating the pose, calculate the angle between the rotation axis and the positive or negative direction of this target axis | X-axis | X/Y/Z-axis |
| Negative target axis direction | If selected, calculate the angle between the rotation axis and the negative direction of the target axis; otherwise, calculate the angle between the rotation axis and the positive direction of the target axis | Selected | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system |
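The candidate-selection step can be sketched as follows (plain NumPy, assuming the Z-axis as the fixed axis and the X-axis as the rotation axis; the product additionally applies the winning rotation to the full pose):

```python
import numpy as np

def rot_z(deg):
    r = np.radians(deg)
    return np.array([[np.cos(r), -np.sin(r), 0.0],
                     [np.sin(r),  np.cos(r), 0.0],
                     [0.0,        0.0,       1.0]])

def best_rotation(rotation_axis_dir, target_axis_dir):
    # Try each candidate rotation about the fixed Z-axis and keep the
    # one whose rotated rotation-axis makes the smallest angle with
    # the target axis direction.
    def angle_after(a):
        cos = np.clip(np.dot(rot_z(a) @ rotation_axis_dir, target_axis_dir), -1.0, 1.0)
        return np.degrees(np.arccos(cos))
    return min((0, 90, 180, 270), key=angle_after)

x_axis = np.array([1.0, 0.0, 0.0])
best = best_rotation(x_axis, np.array([0.0, -1.0, 0.0]))  # target: negative Y
```

Here a 270° rotation about Z points the pose X-axis along negative Y, giving a zero angle and winning the comparison.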
- Example


3.1.4 Flip the Picking Pose so the angle between the rotation axis and the target axis is minimized

- Function description
Rotate the Picking Pose once around the fixed axis so that the angle formed between the rotation axis and the positive or negative direction of the target axis in the ROI coordinate system is acute.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Fixed axis | An axis of the Picking Pose. Rotate the Picking Pose counterclockwise around this fixed axis | Z-axis | X/Y/Z-axis |
| Rotation axis | An axis of the Picking Pose. Rotate the Picking Pose so that the direction of this rotation axis matches the positive or negative direction of the target axis | X-axis | X/Y/Z-axis |
| Target axis | An axis in the ROI coordinate system. Rotate the Picking Pose so that the direction of the rotation axis matches the positive or negative direction of this target axis | X-axis | X/Y/Z-axis |
| Negative target axis direction | If selected, rotate the Picking Pose so that the direction of the rotation axis matches the negative direction of the target axis; otherwise, rotate the Picking Pose so that the direction of the rotation axis matches the positive direction of the target axis | Selected | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system |
- Example


3.1.5 Point a Picking Pose axis toward the ROI center

- Function
Rotate the Picking Pose around a fixed axis so that the pointing axis of the Picking Pose points to the ROI center.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Pointing axis | The axis in the Picking Pose that needs to be adjusted | X-axis | X/Y/Z-axis |
| Fixed axis | The axis that remains unchanged during rotation | Z-axis | X/Y/Z-axis |
| Reverse align | If selected, reverse-align the pointing axis to the ROI center; otherwise, align the pointing axis to the ROI center | Selected | / |
| Strict pointing | If selected, force the Picking Pose to rotate so the pointing axis points to the ROI center | Unchecked | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system |
- Example


3.1.6 Rotate the Picking Pose so the Z-axis direction matches the Z-axis of the target coordinate system

- Function description
Rotate the Picking Pose so that its Z-axis direction matches the Z-axis of the target coordinate system.
- Usage scenario
Usually this is used by default only in depalletizing scenarios and cannot be deleted. It is used to make the Z-axis of the Picking Pose perpendicular to the Z-axis of the ROI coordinate system (4-axis) or consistent with the direction of the Target Object surface (6-axis).
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Robot configuration | Set according to the on-site robot configuration. You can choose 4-axis or 6-axis. If a 6-axis robot is actually used as a 4-axis robot, it should be set to 4-axis | 4-axis | 4-axis/6-axis |
| Use ROI Z-axis as target direction | When the robot configuration is set to 4-axis, if selected, the pose is rotated around the X-axis so that the Z-axis direction of the rotated pose matches the positive direction of the ROI Z-axis; if not selected, the pose is rotated around the X-axis so that the Z-axis direction of the rotated pose matches the positive direction of the Z-axis of the camera coordinate system. When the robot configuration is set to 6-axis, regardless of whether it is selected, the pose is rotated around the X-axis so that the Z-axis direction of the rotated pose matches the positive direction of the Z-axis of the object's own coordinate system | Unchecked | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Camera coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system |
- Example
3.1.7 Rotate the Picking Pose around a fixed axis

- Function description
Rotate the Picking Pose by a certain angle around a fixed axis.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Rotation angle | The angle by which the pose is rotated counterclockwise around the fixed axis | 90 | [-360, 360] | degree° |
| Fixed axis | An axis of the Picking Pose. Rotate the pose counterclockwise around this fixed axis | Z-axis | X/Y/Z-axis | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system | / |
- Example
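The rotation in this step amounts to post-multiplying the pose's rotation matrix by an elementary rotation about the chosen fixed axis, which spins the orientation while leaving the position fixed. A minimal pure-Python sketch (the helper names `rot_z` and `mat_mul` are ours, not a PickWiz API):

```python
import math

def rot_z(deg):
    """Rotation matrix for a counterclockwise rotation of `deg` degrees about Z."""
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Rotating the Picking Pose about its own Z-axis post-multiplies its
# rotation matrix; the Pick Point position is unchanged.
pose_rot = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity orientation
rotated = mat_mul(pose_rot, rot_z(90))
# After a 90° turn about Z, the pose X-axis points along the original Y-axis.
```

A negative rotation angle in the parameter table corresponds to a clockwise turn about the same axis.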


3.1.8 Translate the Picking Pose

- Function description
Move the Picking Pose by a certain distance along the translation axis.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Translation amount (mm) | The distance the Picking Pose moves along the translation axis. A positive translation amount means translating in the positive direction of the translation axis, and a negative translation amount means translating in the negative direction of the translation axis | 0 | [-1000, 1000] | mm |
| Translation axis | The direction in which the Picking Pose moves | X-axis | X/Y/Z-axis | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Robot arm coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system | / |
- Example
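The translation itself is a simple vector shift of the Pick Point origin along the selected axis, expressed in the chosen coordinate system. A minimal sketch (the function name `translate` is illustrative, not a PickWiz API):

```python
def translate(position, axis, amount_mm):
    """Shift a pose origin by `amount_mm` along a unit axis; a negative amount
    moves in the negative direction of the axis, as in the parameter table."""
    return [p + amount_mm * a for p, a in zip(position, axis)]

# Lift the Pick Point 50 mm along the reference frame's Z-axis, e.g. to keep
# the end effector clear of the bin wall during approach.
lifted = translate([100.0, 20.0, 300.0], [0.0, 0.0, 1.0], 50.0)
# lifted == [100.0, 20.0, 350.0]
```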


3.1.9 Pick Point Teaching

- Function description
Record the Pick Point coordinates generated by the software and the Pick Point coordinates taught under the current operating condition, then output the transformed Picking Pose based on the offset between the two.
- Usage scenario
When the Pick Points generated by the vision system have an obvious systematic offset and the robot TCP coordinate accuracy is limited or difficult to calibrate, this method can be used to directly map the same offset pattern to subsequent Pick Points, thereby avoiding robot TCP calibration.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Vision Pose | Pick coordinates of the detection result | ||
| X(mm) | X coordinate of the Vision Pose | 0.00 | ±10000000, meaning no limit. |
| Y(mm) | Y coordinate of the Vision Pose | 0.00 | ±10000000, meaning no limit. |
| Z(mm) | Z coordinate of the Vision Pose | 0.00 | ±10000000, meaning no limit. |
| RX(°) | X-axis rotation amount of the Vision Pose | 0.00 | ±180 |
| RY(°) | Y-axis rotation amount of the Vision Pose | 0.00 | ±180 |
| RZ(°) | Z-axis rotation amount of the Vision Pose | 0.00 | ±180 |
| Picking Pose | Manually taught Pick Point | ||
| X(mm) | X coordinate of the Picking Pose | 0.00 | ±10000000, meaning no limit. |
| Y(mm) | Y coordinate of the Picking Pose | 0.00 | ±10000000, meaning no limit. |
| Z(mm) | Z coordinate of the Picking Pose | 0.00 | ±10000000, meaning no limit. |
| RX(°) | X-axis rotation amount of the Picking Pose | 0.00 | ±180 |
| RY(°) | Y-axis rotation amount of the Picking Pose | 0.00 | ±180 |
| RZ(°) | Z-axis rotation amount of the Picking Pose | 0.00 | ±180 |
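The idea of Pick Point Teaching can be sketched for the translational part: record the offset between the vision result and the taught pose once, then apply the same offset to every subsequent vision result. This is a simplified translation-only illustration under our own function names (`record_offset`, `apply_offset`); the actual feature also compensates rotation:

```python
def record_offset(vision_xyz, taught_xyz):
    """Systematic offset = taught pose - vision pose, captured once during teaching."""
    return [t - v for t, v in zip(taught_xyz, vision_xyz)]

def apply_offset(vision_xyz, offset):
    """Map the recorded offset onto a new vision result to get the output Picking Pose."""
    return [v + o for v, o in zip(vision_xyz, offset)]

# Teaching step: the robot was taught 3 mm / -1.5 mm / 1 mm away from the vision pose.
offset = record_offset([500.0, 100.0, 300.0], [503.0, 98.5, 301.0])
# Runtime: the same offset corrects every later Pick Point without TCP calibration.
corrected = apply_offset([520.0, 110.0, 300.0], offset)
# corrected == [523.0, 108.5, 301.0]
```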
3.1.10 Refine Object Pose based on plane Normal

- Function description
Correct the Object Pose by fitting the plane Normal so that the Z-axis direction of the Object Pose remains consistent with the direction of the plane Normal of the Target Object.
- Usage scenario
When the Target Object contains a plane and there is a tilt deviation in the plane when the template Point Cloud is matched with the actual Point Cloud, use this function to fine-tune the Target Object plane and improve picking accuracy.
Not applicable to depalletizing scenarios
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Distance Threshold | Distance Threshold for fitting a plane from the Point Cloud | 10 | [-1000, 1000] | mm |
| Save visualization data | If selected, the visualization data will be saved under the historical data timestamp | Selected | / | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Camera coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system | / |
- Example
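The Distance Threshold parameter controls which points count as belonging to the fitted plane: points farther from the candidate plane than the threshold are excluded from the fit. A minimal inlier-selection sketch (the function `plane_inliers` is our illustration; the actual fitting algorithm is internal to PickWiz):

```python
def plane_inliers(points, normal, d, threshold_mm):
    """Keep points whose distance to the plane n·p + d = 0 is within the
    threshold. `normal` is assumed to be a unit vector."""
    def dist(p):
        return abs(sum(n * c for n, c in zip(normal, p)) + d)
    return [p for p in points if dist(p) <= threshold_mm]

# Two points near the z = 0 plane and one noise point 12 mm away:
cloud = [[0.0, 0.0, 0.1], [1.0, 2.0, -0.2], [0.5, 0.5, 12.0]]
inliers = plane_inliers(cloud, [0.0, 0.0, 1.0], 0.0, 10.0)
# With a 10 mm threshold, only the two near-plane points survive the fit.
```

A tighter threshold rejects more noise but also discards genuine surface points on rough parts, so the normal is fitted from fewer samples.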
3.1.11 Sort Pick Points by inter-axis angle

- Function
Sort Pick Points according to the angle between an axis of the Picking Pose and the target axis of the ROI.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Axis selection | An axis of the Picking Pose | Z-axis | X/Y/Z-axis |
| Target axis selection | An axis of the ROI coordinate system | Z-axis | X/Y/Z-axis |
| Select reverse direction | If selected, calculate the angle with the negative direction of the target axis; otherwise, calculate the angle with the positive direction of the target axis | Unchecked | / |
| Select descending order | If selected, sort Pick Points from large to small by angle; otherwise, sort Pick Points from small to large by angle | Unchecked | / |
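The sort key is the angle between the selected pose axis and the ROI target axis, computed from the dot product of the two unit vectors. A minimal sketch (the helper `axis_angle_deg` is our own name):

```python
import math

def axis_angle_deg(a, b):
    """Angle between two unit vectors, in degrees."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))  # clamp for acos
    return math.degrees(math.acos(dot))

# Z-axes of three candidate Picking Poses compared against the ROI Z-axis.
roi_z = [0.0, 0.0, 1.0]
pose_z_axes = [[0.7071, 0.0, 0.7071], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]
# Ascending order: the most upright pose (smallest angle) comes first.
order = sorted(pose_z_axes, key=lambda z: axis_angle_deg(z, roi_z))
```

Selecting the reverse direction simply negates the target axis before this comparison, and the descending option reverses the resulting order.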
3.1.12 [Advanced] Rotate the Picking Pose and automatically compensate for excessive angles to the specified axis

- Function description
Determine whether the angle formed between the specified axis of the Picking Pose and the target axis is within the specified range. If not, adjust the Picking Pose into the specified range.
- Usage scenario
Avoid collisions between the robot end effector and the bin.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Angle range | Adjust the Picking Pose into the angle range | 30 | [0, 180] | degree° |
| Specified axis | An axis of the Picking Pose. Adjust this axis so that it falls within the angle range relative to the target axis of the ROI coordinate system | Z-axis | X/Y/Z-axis | / |
| Target axis | An axis of the ROI coordinate system. Compare the angle range with the specified axis of the Picking Pose | Z-axis | X/Y/Z-axis | / |
| Compare with the negative half-axis of the ROI | If not selected, compare the angle range with the positive direction of the target axis of the ROI coordinate system; if selected, compare the angle range with the negative direction of the target axis of the ROI coordinate system | Unchecked | / | / |
| Custom coordinate system | The coordinate system of the Picking Pose | Default coordinate system | Default coordinate system; camera coordinate system; ROI coordinate system; robot arm coordinate system | / |
3.1.13 [Advanced] Symmetry center Object Pose optimization

- Function
Search for the symmetry center of the Target Object based on the instance Mask, then combine it with the plane of the instance or the pose of the ROI 3D center point to calculate the optimal Picking Pose.
Before using this function, first make sure the instance Mask is symmetrical
- Usage scenario
Applicable when the instance Mask of a symmetrical Target Object is also symmetrical, but the Picking Pose is not near the expected center; at the same time, the Target Object has a plane that can be used as a reference, for example, there is a plane on the top of the object, or ROI 3D can be used as a reference for the projected pose.
Applicable project scenarios include brake discs (general circles), refractory bricks (depalletizing), symmetrical irregular parts, fuel fillers, and so on.
- Parameter description
| Parameter | Description | Default | Range | Tuning recommendation |
|---|---|---|---|---|
| Target Object Symmetry type | Target Object Symmetry type of the instance Mask | Rotational symmetry | Rotational symmetry: after the Target Object rotates by a certain angle around the center point, its shape completely overlaps with the original position; mirror symmetry: the Target Object uses a certain axis / plane as the mirror, and the left-right or upper-lower sides are completely symmetrical. | Circles and rectangles are both rotationally symmetrical and mirror-symmetrical, so rotational symmetry is preferred; for trapezoids and other shapes that are symmetrical only along a certain axis or plane, choose mirror symmetry. |
| Gaussian blur level | Tolerance for determining whether the actual Point Cloud overlaps after rotation | 3 | [1, 99] | |
| Rotation angle setting | When the symmetry mode is rotational symmetry, this is the rotation angle interval, i.e. the angle difference between two adjacent rotations; when the symmetry mode is mirror symmetry, it is the rotation range, i.e. the angle interval within which the Point Cloud can rotate around the symmetry axis | 180 | [1, 360] | |
| Image scaling ratio | Adjusts the size of the Point Cloud image. The larger this ratio, the smaller the Point Cloud image and the lower the GPU memory usage, but more image detail is lost, reducing calculation accuracy | 2 | [1, 10000000] | |
| Search range | Based on the initially determined center of the Target Object, this defines the range expanded outward to search for Point Cloud features. The actual range is (search range × 2 × image scaling ratio) | 10 | [1, 10000000] | For example, for a square Target Object whose initially determined center is point O: if the search range is 10 and the image scaling ratio is 1, the actual search range is a square region centered at O with side length 10×2×1=20, and Point Cloud features are searched within it to refine the symmetry center and the optimal Picking Pose. For a circular Target Object, if the search range is 8 and the image scaling ratio is 2, the actual search range is a circular region centered at the initially determined center with diameter 8×2×2=32 |
| Use ROI3D as the reference projection plane | If selected, ROI3D is used as the reference projection plane | Unchecked | / | Select this when the Point Cloud has no obvious plane and the projection plane is difficult to determine; leave it unchecked when the Point Cloud has a clear plane |
| Save symmetry center process data | If selected, the debug data generated while searching for the symmetry center is saved. You can view it in the `\ProjectName\data\PickLight\HistoricalDataTimestamp\find_symmetry_center` folder | Unchecked | / | Select this when you need to inspect the detailed process images |
| Symmetry axis prior type | Effective in mirror symmetry mode. Specifies the known symmetry axis of the Target Object and fixes the asymmetric orientation | Automatic search | Automatic search; Symmetric along the long axis; Symmetric along the short axis | If the symmetry axis of the Target Object is the long axis, choose "Symmetric along the long axis"; if it is the short axis, choose "Symmetric along the short axis"; if uncertain, choose "Automatic search" |
| Pose adjustment type | Whether to inherit pose-related information from the input pose | Default pose | Default pose; Inherit rotation; Inherit translation | / |
| Symmetry score Threshold | Symmetry results with a symmetry score lower than this Threshold are treated as abnormal. When set to 0, no filtering is performed | 0.0 | [0.0, 1.0] | / |
- Example
3.2 Pick Point Filtering

3.2.1 Filter by fine matching score

- Function description
Filter Pick Points based on the pose fine matching score.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Score Threshold | Retain Pick Points whose fine matching score is greater than this Threshold | 0.5 | [0, 1] |
- Example
3.2.2 Filter Pick Points of occluded Target Objects

- Function description
Determine whether there are too many occluding object Point Clouds in the target detection area along the specified ROI axis or the Picking Pose axis at the Pick Point of the grasped Target Object. If so, the Target Object is considered occluded and the Pick Point is filtered out.
- Usage scenario
Applicable to depalletizing and ordered scenarios in which Target Objects are picked layer by layer, but the model recognizes lower-layer Target Objects. When picking a lower-layer Target Object, the gripper may collide with the upper-layer Target Object.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Cuboid length in X direction | Set the cuboid length in the X direction of the Picking Pose | 1500 | [1, 10000] | mm |
| Cuboid length in Y direction | Set the cuboid length in the Y direction of the Picking Pose | 1500 | [1, 10000] | mm |
| Cuboid length in Z direction | Set the cuboid length in the Z direction of the Picking Pose | 800 | [1, 10000] | mm |
| Distance Threshold between detection area and Pick Point origin | Along the ROI axis, the nearby cuboid surface area farther than this distance Threshold from the Pick Point origin is regarded as the target detection area | 50 | [1, 1000] | mm |
| Point Cloud count Threshold in detection area | If the number of occluding object Point Clouds in the target detection area exceeds this Threshold, the Pick Point is considered occluded | 1000 | [0, 100000] | / |
| Specified axis direction | Based on the pose reference specified axis direction, set the specific location of the target detection area within the cuboid space (for example, near the front/back/left/right/top/bottom surface of the cuboid) | [0,0,-1] | [1,0,0]: positive X-axis[-1,0,0]: negative X-axis[0,1,0]: positive Y-axis[0,-1,0]: negative Y-axis[0,0,1]: positive Z-axis[0,0,-1]: negative Z-axis | / |
| Use ROI 3D pose reference | If selected, adjust the collision detection area according to the ROI 3D pose reference | Unchecked | / | / |
| Save visualization data | If selected, the visualization data is stored according to the saved data path to help observe whether the generated cuboid is reasonable; if not selected, it is not saved | Unchecked | / | / |
- Example
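The occlusion check above can be sketched as counting scene points that fall inside the cuboid around the Pick Point and lie farther than the distance threshold from the pick origin along the specified axis. A simplified axis-aligned illustration (the function `points_in_detection_area` is our own name; PickWiz works in the pose/ROI frame):

```python
def points_in_detection_area(points, pick_origin, half_lengths, axis, distance_mm):
    """Count points inside the cuboid centered on the Pick Point whose signed
    distance from the pick origin along `axis` exceeds `distance_mm`."""
    count = 0
    for p in points:
        rel = [c - o for c, o in zip(p, pick_origin)]
        in_cuboid = all(abs(r) <= h for r, h in zip(rel, half_lengths))
        along = sum(r * a for r, a in zip(rel, axis))  # projection onto the axis
        if in_cuboid and along > distance_mm:
            count += 1
    return count

# A point 100 mm above the pick origin falls in the detection area along +Z;
# a point 100 mm below it does not.
pts = [[0.0, 0.0, 100.0], [0.0, 0.0, -100.0]]
n = points_in_detection_area(pts, [0.0, 0.0, 0.0],
                             [750.0, 750.0, 400.0], [0.0, 0.0, 1.0], 50.0)
```

If the resulting count exceeds the Point Cloud count Threshold, the Target Object is treated as occluded and its Pick Point is filtered out.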
3.2.3 Filter by Picking Pose angle range

- Function description
Determine whether the angle of the Picking Pose is within the constrained angle range, and filter out all Pick Points that do not meet the condition.
- Usage scenario
Prevent collisions caused by abnormal robot arm Picking Pose angles.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Angle filtering Threshold | Calculate the maximum angle between the specified axis of the ROI and the specified axis of the Picking Pose. Pick Points whose angle is greater than the current Threshold will be filtered out | 30 | [-360, 360] | degree° |
| Invert ROI specified axis direction | If selected, use the negative direction of the specified ROI axis for angle calculation; otherwise, use the positive direction of the specified ROI axis for angle calculation | Selected | / | / |
| Specified Picking Pose axis | Specify an axis of the Picking Pose for angle calculation | Z-axis | X/Y/Z-axis | / |
| Specified ROI axis | Specify an axis of the ROI coordinate system for angle calculation | Z-axis | X/Y/Z-axis | / |
- Example
3.2.4 Filter Pick Points outside the ROI 3D type region

- Function description
Determine whether the Pick Point is within the ROI 3D range, and remove Pick Points that are outside the ROI 3D area.
- Usage scenario
Prevent picking outside the ROI area, which may cause collisions between the robot arm and the target object.
- Parameter description
| Parameter | Description | Default |
|---|---|---|
| ROI3D type region | Usually "workspace"; "pick area" is a smaller ROI region than "workspace", which can restrict Pick Points to an ROI region smaller than the "workspace" to avoid some collision cases. | Workspace |
- Example
As shown in the figure below, when the ROI3D area and ROI2D are area a, the corresponding Pick Point is in the upper-right corner.



When the ROI3D area and ROI2D are changed to area b, the original Pick Point is outside the ROI area, so that Pick Point is removed and a new Pick Point is generated within area b.



3.2.5 [New] Filter Pick Points where the Target Object collides with the gripper (including the original function)
[New] Filter Pick Points where the Target Object collides with the gripper

- Function description
Collision detection between the gripper and the Point Cloud near the Pick Point. If the number of Point Clouds in contact with the gripper exceeds the pick collision Threshold, the Pick Point of the Target Object is considered to have a collision risk.
- Usage scenario
Used when collision detection is required between the gripper and the Point Cloud near the Target Object being picked.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Collision Threshold | Collision distance Threshold. If the distance between the scene and the gripper surface is smaller than this Threshold, it is considered a collision. The larger the Threshold, the stricter it is. Unit: mm | 7 | 1-1000 |
| Collision Point Cloud sampling | Sampling size for collision Point Clouds. The larger the value, the shorter the cycle time; the smaller the value, the longer the cycle time. Effective only in "Target Object scene Point Cloud only" and "bin + Target Object scene Point Cloud" modes. Unit: mm | 5 | 1 - 1000 |
| Save visualization data for gripper collision detection | Save visualization data for collision detection between the gripper and the picked Target Object | Unchecked | Selected/Unchecked |
Filter Pick Points where the Target Object collides with the gripper

- Function description
Collision detection between the gripper and the Point Cloud near the Pick Point. If the number of Point Clouds in contact with the gripper exceeds the pick collision Threshold, the Pick Point of the Target Object is considered to have a collision risk.
- Usage scenario
Used when collision detection is required between the gripper and the Point Cloud near the Target Object being picked.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Pick collision Threshold | The maximum number of Point Clouds the gripper may contain near the Pick Point. For example, 20 means that if the number of scene Point Clouds contained by the gripper exceeds 20, it is considered a collision | 20 | 0-10000 |
| Collision Point Cloud sampling (m) | Downsampling size of the Point Cloud in the collision area. The larger the value, the faster the detection speed, but the lower the accuracy. Applicable scenario: scenarios requiring high cycle rates | 0.002 | 0.0001 - 0.5000 |
| Save visualization data for gripper collision detection | Save visualization data for collision detection between the gripper and the picked Target Object | Unchecked | Selected/Unchecked |
| Import gripper model | Select and import the gripper model used for collision detection from a folder | / | / |
**The gripper should be simplified to fewer than 500 faces**

3.2.6 [Advanced] Retain the one Pick Point with the largest/smallest pose value among instance Pick Points and filter the remaining Pick Points

- Function description
Convert the pose to the specified coordinate system, sort poses according to the value of the specified sorting axis, and retain the pose with the maximum or minimum value. This is suitable for cylindrical Target Objects when keeping the top or bottom Pick Point.
- Parameter description
| Parameter | Description | Default | Range |
|---|---|---|---|
| Specified coordinate system | Select which coordinate system the pose should be converted to for processing | ROI coordinate system | ROI coordinate system/camera coordinate system |
| Specified sorting axis | Select which axis value of the pose to sort by | Z-axis | X/Y/Z-axis |
| Take minimum value | If selected, retain the pose with the minimum value on the sorting axis; otherwise, retain the pose with the maximum value on the sorting axis | Unchecked | / |
- Example
3.2.7 [Advanced] Filter Pick Points close to the previous N Pick Points

- Function description
If the variation between the current Pick Point and any Pick Point in the cache is within the Threshold range, the Pick Point will be filtered out.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Upper limit of Pick Point change (+) | ||||
| X(mm) | Upper limit of X coordinate | 2 | [0, 10000000] | mm |
| Y(mm) | Upper limit of Y coordinate | 2 | [0, 10000000] | mm |
| Z(mm) | Upper limit of Z coordinate | 2 | [0, 10000000] | mm |
| RX(°) | Upper limit of RX rotation amount | 1 | [0, 180] | degree° |
| RY(°) | Upper limit of RY rotation amount | 1 | [0, 180] | degree° |
| RZ(°) | Upper limit of RZ rotation amount | 1 | [0, 180] | degree° |
| Lower limit of Pick Point change (-) | ||||
| X(mm) | Lower limit of X coordinate | 2 | [0, 10000000] | mm |
| Y(mm) | Lower limit of Y coordinate | 2 | [0, 10000000] | mm |
| Z(mm) | Lower limit of Z coordinate | 2 | [0, 10000000] | mm |
| RX(°) | Lower limit of RX rotation amount | 1 | [0, 180] | degree° |
| RY(°) | Lower limit of RY rotation amount | 1 | [0, 180] | degree° |
| RZ(°) | Lower limit of RZ rotation amount | 1 | [0, 180] | degree° |
| Pick Point cache count | Number of Pick Points cached. After the current Pick Point comparison is completed, it will be added to the cache in real time | 5 | [1, 100] | / |
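The caching behavior described in the table can be sketched as follows: each incoming Pick Point is compared against the last N points, filtered if all six coordinates fall within tolerance of any cached point, and then appended to the cache. This is an illustrative sketch (function names are ours), with a fixed-size `deque` standing in for the Pick Point cache:

```python
from collections import deque

def similar(p, q, xyz_tol, rot_tol):
    """True if every coordinate of pose p = (x, y, z, rx, ry, rz) stays within
    tolerance of the corresponding coordinate of q."""
    return all(abs(a - b) <= (xyz_tol if i < 3 else rot_tol)
               for i, (a, b) in enumerate(zip(p, q)))

def filter_repeats(poses, cache_size=5, xyz_tol=2.0, rot_tol=1.0):
    """Drop a Pick Point if it is within tolerance of any of the cached points."""
    cache, kept = deque(maxlen=cache_size), []
    for p in poses:
        if not any(similar(p, c, xyz_tol, rot_tol) for c in cache):
            kept.append(p)
        cache.append(p)  # per the table: compared first, then added to the cache
    return kept

poses = [(0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
         (1.0, 1.0, 1.0, 0.5, 0.5, 0.5),    # within 2 mm / 1° of the first -> filtered
         (100.0, 0.0, 0.0, 0.0, 0.0, 0.0)]  # clearly different -> kept
kept = filter_repeats(poses)
```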
3.2.8 [Advanced] Filter Object Poses close to the previous N Object Poses

- Function description
If the variation between the current Object Pose and any Object Pose in the cache is within the Threshold range, the Object Pose will be filtered out. When an Object Pose is determined to be similar, all Pick Points on that Target Object will be filtered out.
- Parameter description
| Parameter | Description | Default | Range | Unit |
|---|---|---|---|---|
| Upper limit of Object Pose change (+) | ||||
| X(mm) | Upper limit of X coordinate | 2 | [0, 10000000] | mm |
| Y(mm) | Upper limit of Y coordinate | 2 | [0, 10000000] | mm |
| Z(mm) | Upper limit of Z coordinate | 2 | [0, 10000000] | mm |
| RX(°) | Upper limit of RX rotation amount | 1 | [0, 180] | degree° |
| RY(°) | Upper limit of RY rotation amount | 1 | [0, 180] | degree° |
| RZ(°) | Upper limit of RZ rotation amount | 1 | [0, 180] | degree° |
| Lower limit of Object Pose change (-) | ||||
| X(mm) | Lower limit of X coordinate | 2 | [0, 10000000] | mm |
| Y(mm) | Lower limit of Y coordinate | 2 | [0, 10000000] | mm |
| Z(mm) | Lower limit of Z coordinate | 2 | [0, 10000000] | mm |
| RX(°) | Lower limit of RX rotation amount | 1 | [0, 180] | degree° |
| RY(°) | Lower limit of RY rotation amount | 1 | [0, 180] | degree° |
| RZ(°) | Lower limit of RZ rotation amount | 1 | [0, 180] | degree° |
| Object Pose cache count | Number of vision Object Poses cached. After the comparison of the current Object Pose is completed, it will be added to the cache in real time | 5 | [1, 100] | / |
3.2.9 [Advanced] Filter Pick Points outside the upper and lower limits of Pick coordinates

- Function description
Retain other Pick Points within the specified range of a reference Pick Point and filter out abnormal Pick Points.
- Usage scenario
Prevent incorrect robot picking and ensure picking accuracy.
This function is not applicable to depalletizing scenarios
- Parameter description
| Parameter | Description | Default | Unit |
|---|---|---|---|
| Reference Pick coordinates | |||
| X(mm) | X coordinate of the reference Pick Point | 0 | mm |
| Y(mm) | Y coordinate of the reference Pick Point | 0 | mm |
| Z(mm) | Z coordinate of the reference Pick Point | 0 | mm |
| RX(°) | RX rotation amount of the reference Pick Point | 0 | degree |
| RY(°) | RY rotation amount of the reference Pick Point | 0 | degree |
| RZ(°) | RZ rotation amount of the reference Pick Point | 0 | degree |
| Upper limit of Pick coordinates (+) | |||
| X(mm) | Upper limit of the X coordinate. For example, if the X coordinate of the reference Pick Point is 100 and the upper limit is set to 10, the allowed range is: [100-lower limit, 110] | 10000000, meaning no limit. | mm |
| Y(mm) | Upper limit of the Y coordinate. For example, if the Y coordinate of the reference Pick Point is 100 and the upper limit is set to 10, the allowed range is: [100-lower limit, 110] | 10000000 | mm |
| Z(mm) | Upper limit of the Z coordinate. For example, if the Z coordinate of the reference Pick Point is 100 and the upper limit is set to 10, the allowed range is: [100-lower limit, 110] | 10000000 | mm |
| RX(°) | Upper limit of the RX rotation amount. For example, if the RX rotation amount of the reference Pick Point is 180 and the upper limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -170], [180-lower limit, 180]] | 180, meaning no limit. | degree° |
| RY(°) | Upper limit of the RY rotation amount. For example, if the RY rotation amount of the reference Pick Point is 180 and the upper limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -170], [180-lower limit, 180]] | 180 | degree° |
| RZ(°) | Upper limit of the RZ rotation amount. For example, if the RZ rotation amount of the reference Pick Point is 180 and the upper limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -170], [180-lower limit, 180]] | 180 | degree° |
| Lower limit of Pick coordinates (-) | |||
| X(mm) | Lower limit of the X coordinate. For example, if the X coordinate of the reference Pick Point is 100 and the lower limit is set to 10, the allowed range is: [90, 100+upper limit] | 10000000 | mm |
| Y(mm) | Lower limit of the Y coordinate. For example, if the Y coordinate of the reference Pick Point is 100 and the lower limit is set to 10, the allowed range is: [90, 100+upper limit] | 10000000 | mm |
| Z(mm) | Lower limit of the Z coordinate. For example, if the Z coordinate of the reference Pick Point is 100 and the lower limit is set to 10, the allowed range is: [90, 100+upper limit] | 10000000 | mm |
| RX(°) | Lower limit of the RX rotation amount. For example, if the RX rotation amount of the reference Pick Point is 180 and the lower limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -180+upper limit], [170, 180]] | 180, meaning no limit. | degree° |
| RY(°) | Lower limit of the RY rotation amount. For example, if the RY rotation amount of the reference Pick Point is 180 and the lower limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -180+upper limit], [170, 180]] | 180 | degree° |
| RZ(°) | Lower limit of the RZ rotation amount. For example, if the RZ rotation amount of the reference Pick Point is 180 and the lower limit is set to 10, the allowed range is (default angle wraparound applies): [[-180, -180+upper limit], [170, 180]] | 180 | degree° |
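The angle wraparound in the RX/RY/RZ examples is the subtle part of this filter: 180° + 10° wraps to -170°, so the allowed band can split into two intervals. A compact way to test membership is to reduce the signed difference to (-180°, 180°] first; a sketch under our own function name `angle_in_range`:

```python
def angle_in_range(angle, ref, upper, lower):
    """Check whether `angle` lies within [ref - lower, ref + upper] degrees,
    wrapping at ±180° the way the RX/RY/RZ limits in the table do."""
    diff = (angle - ref + 180.0) % 360.0 - 180.0  # signed difference in (-180, 180]
    return -lower <= diff <= upper

# Reference RX = 180°, upper limit 10°: 180 + 10 wraps around to -170°,
# so -175° is inside the allowed band while -160° is not.
inside = angle_in_range(-175.0, 180.0, 10.0, 0.0)
outside = angle_in_range(-160.0, 180.0, 10.0, 0.0)
```

Handling the wraparound through the signed difference avoids enumerating the split intervals explicitly.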
3.3 Pick Point Sorting

3.3.1 Reference coordinate system

- Function description
Set a unified coordinate system for all instances to group and sort instances.
- Usage scenario
Common to depalletizing scenarios, random picking scenarios, and ordered loading/unloading scenarios
Strategies related to coordinates should first set the reference coordinate system
- Parameter description
| Parameter | Description | Illustration |
|---|---|---|
| Camera coordinate system | The coordinate system origin is above the object, and the positive Z-axis direction points downward; the XYZ values are the values of the center point of the object in this coordinate system | ![]() |
| ROI coordinate system | The coordinate system origin is approximately at the center of the pallet stack, and the positive Z-axis direction points upward; the XYZ values are the values of the center point of the object in this coordinate system | ![]() |
| Robot arm coordinate system | The coordinate system origin is on the robot arm itself, and the positive Z-axis direction generally points upward; the XYZ values are the values of the center point of the object in this coordinate system | ![]() |
| Pixel coordinate system | The coordinate system origin is at the top-left vertex of the RGB image and is a 2D planar coordinate system; the X and Y values are the x value of the bbox detection box and the y value of the bbox detection box, and Z is 0 | ![]() |
3.3.2 General picking strategy

- Parameter description
| Parameter | Description |
|---|---|
| Strategy | Select which value is used for grouping and sorting and how to sort it, including Pick Point center X/Y/Z coordinate values from large to small/from small to large (mm), from the middle to the sides / from the sides to the middle along the Pick Point XY coordinate axis (mm). Multiple items can be superimposed and executed in order. |
| Grouping step size | According to the selected strategy, divide Pick Points into several groups based on the step size. The grouping step size is the interval between two groups of Pick Points |
| Number of leading groups to keep | After grouping and sorting, how many groups of instances need to be retained |
| Strategy name | Description | Grouping step size (default) | Grouping step size (range) | Number of leading groups to keep (default) |
|---|---|---|---|---|
| Pick Point center X/Y/Z coordinate values from large to small / from small to large (mm) | Use the X/Y/Z coordinate values of the Pick Point center for grouping and sorting | 200.000 | [0, 10000000] | 10000 |
| From the middle to the sides / from the sides to the middle along the Pick Point XY coordinate axis (mm) | Use the X/Y coordinate values of the Pick Point center and perform grouping and sorting in the direction of "middle to sides" or "sides to middle" | 200.000 | [0, 10000000] | 10000 |
3.3.3 Carton combination strategy
To solve the problems of low efficiency and limited applicable scenarios in traditional single-pick depalletizing, PickWiz adds a carton combination strategy in depalletizing scenarios to support picking multiple Target Objects in a single operation. It supports the core scenarios of "cartons with consistent dimensions" and "rectangular suction cups", covering more real project scenarios.

3.3.3.1 Multi-pick runtime configuration
(1)In sack single depalletizing or carton single depalletizing scenarios, enable Vision computation configuration - Vision computation acceleration.
(2)Under the Pick Point sorting module, select the carton combination strategy;

(3)Strategy selection: available options are the default combination strategy or combination along a specified carton pose axis. These are two methods for finding the largest number of cartons that can be combined.

Default combination strategy: Find the largest number of cartons that can be combined along the X-axis and Y-axis directions of a carton Picking Pose.
Combine along a certain carton pose axis: Find the largest number of cartons that can be combined along the X-axis or Y-axis direction of a carton Picking Pose. This is suitable for scenarios where cartons are arranged in a straight line. When using this strategy, you need to choose the carton combination direction, namely the Picking Pose X-axis or the Picking Pose Y-axis.

Note:
The combination direction is related to the positive and negative axis directions. Cartons can be combined only when they are on the same axis and in the same direction, and after combination, the orientation of the whole stack of cartons remains consistent with the orientation of a single carton before combination. Therefore, before combining cartons, make sure all cartons to be combined are placed in the same orientation, and unify the coordinates of the cartons to be combined to the same axis direction.
(4)Combination conditions: determine which cartons can be combined and how many can be combined at most.
Maximum cartons per row: the maximum number of cartons that can be combined in one row, default is 2.
Maximum number of combination rows: the maximum number of carton rows that can be combined, default is 1.
Maximum spacing (mm): cartons to be combined cannot be too far apart in the combination direction. In the combination direction (axis direction), when the spacing between two adjacent cartons, or between cartons in different rows, does not exceed this value, they can be combined into one group. The default is 10.
- Example: when searching for the maximum number of cartons along the Picking Pose X-axis, if the spacing between two adjacent cartons in the Picking Pose X-axis direction is 8 mm (≤10), they can be combined; if the spacing is 12 mm (>10), they cannot be combined.
Maximum misalignment distance (mm): cartons to be combined cannot be offset too far in the direction perpendicular to the combination direction. In that perpendicular direction, when the misalignment distance between two adjacent cartons, or between cartons in different rows, does not exceed this value, they can be combined into one group. The default is 10.
- Example: when searching for the maximum number of cartons along the Picking Pose X-axis, if two adjacent cartons are offset in the Picking Pose Y-axis direction by 8 mm (≤10), they can be combined; if they are offset by 15 mm (>10), they are no longer aligned and cannot be picked together.
Maximum angular deviation (°): cartons to be combined should face almost the same direction. When the rotational deviation angle between cartons in the combination direction (axis direction) does not exceed this value, they can be combined into one group. The default is 10.
- Example: if a carton is rotated by 5° relative to the combination direction (≤10), it can still be combined; if it is rotated by 15° (>10), the orientation differs too much, the robot would pick at a skewed angle, and it cannot be combined.
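The three combination conditions above amount to a pairwise check. Below is a minimal sketch, assuming hypothetical carton records with center coordinates (mm), a yaw angle (deg), and the carton extent along the combination axis; the name `can_combine` is illustrative, not a PickWiz API.

```python
def can_combine(a, b, axis="x",
                max_spacing=10.0,       # Maximum spacing (mm), default 10
                max_misalignment=10.0,  # Maximum misalignment distance (mm), default 10
                max_angle=10.0):        # Maximum angular deviation (deg), default 10
    """Return True when two adjacent cartons satisfy all three
    combination conditions along the given pose axis.
    a, b: dicts with "x", "y" (center, mm), "angle" (yaw, deg),
    and "size" (carton extent along the combination axis, mm)."""
    along = abs(a["x"] - b["x"]) if axis == "x" else abs(a["y"] - b["y"])
    across = abs(a["y"] - b["y"]) if axis == "x" else abs(a["x"] - b["x"])
    gap = along - a["size"]               # edge-to-edge spacing in the combination direction
    angle = abs(a["angle"] - b["angle"])  # rotational deviation between the two cartons
    return (gap <= max_spacing
            and across <= max_misalignment
            and angle <= max_angle)
```

With the defaults, this reproduces the worked examples above: a pair with an 8 mm gap, an 8 mm offset, and a 5° rotation passes, while a 12 mm gap, a 15 mm offset, or a 15° rotation each fails on its own.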

3.3.3.2 Robot configuration
(1)On the robot configuration page, add new placeholders in Vision computation communication message - Robot to PickWiz commands - Vision detection send command: maximum cartons per row and maximum number of combination rows, as shown below.

(2)In Vision computation communication message - PickWiz to robot commands - Pick-related information - Returned information when picking Target Objects, add Object Dimensions length, Object Dimensions width, and Target Object orientation.

After the robot configuration is completed, click the Run button.
3.3.3.3 View multi-pick runtime results
(1)In the 3D Matching window, hover the mouse over an instance to view the combined picking information of a single instance after carton combination, including 2D recognition results, Picking Pose, and instance combination information.

In the visualization window, click the Settings button in the upper right corner to set how the combined instance information is displayed.

Right-click an instance to view the combined picking information and Target Object information of the single instance.

(2)In the 2D recognition window, you can use the relevant combination buttons in the menu bar to view the combined ID, combined Mask, and combined bounding box.


