Guide to Adjusting Visual Parameters for Cylindrical Workpieces
This document explains how to adjust visual parameters to the actual scenario for ordered loading and unloading and unordered picking of cylindrical workpieces.
1. 2D Recognition
1.1 Preprocessing
The preprocessing for 2D recognition processes the 2D image before Instance Segmentation.

1.1.1 Bilateral Filtering

- Function
Image smoothing based on bilateral filtering.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Maximum depth difference | The maximum depth difference for bilateral filtering | 0.03 | [0.01, 1] |
| Filter kernel size | The convolution kernel size for bilateral filtering | 7 | [1, 3000] |
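The interplay of these two parameters can be sketched with a simplified, numpy-only depth filter. The function name and implementation below are illustrative, not the product's actual code: each pixel is averaged only with kernel neighbors whose depth differs by less than the maximum depth difference, so step edges between objects are preserved while flat regions are smoothed.

```python
import numpy as np

def bilateral_depth_filter(depth, kernel_size=7, max_depth_diff=0.03):
    """Illustrative sketch: average each pixel with the kernel neighbours
    whose depth differs by less than max_depth_diff."""
    r = kernel_size // 2
    padded = np.pad(depth, r, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + kernel_size, j:j + kernel_size]
            keep = np.abs(win - depth[i, j]) < max_depth_diff
            out[i, j] = win[keep].mean()  # depth-similar neighbours only
    return out
```

Because neighbors across a large depth step are excluded from the average, a clean step edge passes through unchanged.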
1.1.2 Depth to Normal Map

- Function
Calculate the pixel Normal from the depth map and convert the image into a Normal map.
1.1.3 Image Enhancement

- Function
Common image enhancement, such as saturation, contrast, brightness, and sharpness.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Image enhancement type | Enhance a specific element of the image | Contrast | Saturation, contrast, brightness, sharpness |
| Image enhancement threshold | How much to enhance a specific element of the image | 1.5 | [0.1, 100] |
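As a rough sketch of how the contrast variant of this enhancement might behave (the exact formula used by the product is not documented here; this is an assumption), pixel deviations from the image mean are scaled by the enhancement threshold and clipped back to the 8-bit range:

```python
import numpy as np

def enhance_contrast(img, factor=1.5):
    """Illustrative contrast enhancement: scale deviations from the mean
    by `factor`, then clip to the valid 8-bit range."""
    mean = img.mean()
    out = (img.astype(np.float64) - mean) * factor + mean
    return np.clip(out, 0, 255).astype(np.uint8)
```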
1.1.4 Histogram Equalization

- Function
Improve image contrast.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Local mode | Local or global histogram equalization. When selected, local histogram equalization is used; when cleared, global histogram equalization is used | Selected | / |
| Contrast threshold | The contrast limiting threshold for histogram equalization; higher values allow stronger contrast amplification | 3 | [1,1000] |
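Global histogram equalization can be sketched in a few lines of numpy: the gray-level cumulative distribution is normalized and used as a lookup table, spreading the used gray levels across the full range. (Local mode additionally splits the image into tiles; that part is omitted here.)

```python
import numpy as np

def equalize_hist(gray):
    """Illustrative global histogram equalization for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    return (cdf * 255).astype(np.uint8)[gray]          # apply as a LUT
```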
1.1.5 Filter Depth Map by Color

- Function
Filter the depth map based on color values.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | The size of color filling | 3 | [1,99] |
| Filter depth by HSV - maximum color range value | Maximum color value | [180,255,255] | [[0,0,0],[255,255,255]] |
| Filter depth by HSV - minimum color range value | Minimum color value | [0,0,0] | [[0,0,0],[255,255,255]] |
| Keep regions within the color range | If selected, keep the regions within the color range; if cleared, keep the regions outside the color range | / | / |
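The core of this filter can be sketched as follows. The function name is illustrative, and the input is assumed to already be in the chosen color space (HSV here); depth values whose color falls outside the kept region are invalidated (set to 0):

```python
import numpy as np

def filter_depth_by_hsv(depth, hsv, lo=(0, 0, 0), hi=(180, 255, 255),
                        keep_inside=True):
    """Illustrative sketch: keep depth only where the HSV pixel lies
    inside (or outside) the configured color range."""
    lo, hi = np.array(lo), np.array(hi)
    inside = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
    if not keep_inside:
        inside = ~inside
    out = depth.copy()
    out[~inside] = 0  # invalidate depth outside the kept region
    return out
```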
1.1.6 Gamma Image Correction

- Function
Gamma correction changes image brightness.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Gamma compensation coefficient | If this value is less than 1, the image becomes darker; if it is greater than 1, the image becomes brighter | 1 | [0.1,100] |
| Gamma correction coefficient | If this value is less than 1, the image becomes darker and is suitable for overly bright images; if it is greater than 1, the image becomes brighter and is suitable for overly dark images | 2.2 | [0.1,100] |
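Standard gamma correction maps each gray level through `out = 255 * (in / 255) ** (1 / gamma)`, usually precomputed as a lookup table. The sketch below (illustrative, not the product's code) shows why a coefficient greater than 1 brightens an overly dark image and one less than 1 darkens an overly bright image:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Illustrative gamma correction via a 256-entry lookup table."""
    lut = (255 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[img]
```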
1.1.7 Fill Holes in the Depth Map

- Function
Fill the hollow regions in the depth map and smooth the filled depth map.
- Use Scenario
Due to issues such as occlusion caused by the Target Object structure itself and uneven lighting, the depth map may miss parts of the Target Object.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | The size of hole filling | 3 | [1,99] |
Only odd numbers can be entered for the fill kernel size.
- Parameter Tuning
Adjust according to the detection result. If overfilling occurs, decrease the parameter; if filling is insufficient, increase the parameter.
- Example
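Hole filling of this kind is commonly implemented as a morphological closing (grey dilation followed by erosion). The sketch below is an assumption about the mechanism, not the product's implementation; it also enforces the odd-kernel rule noted above:

```python
import numpy as np

def fill_depth_holes(depth, kernel=3):
    """Illustrative hole filling: morphological closing with an odd kernel."""
    assert kernel % 2 == 1, "only odd kernel sizes are accepted"
    r = kernel // 2

    def _filt(img, func):
        p = np.pad(img, r, mode="edge")
        return np.array([[func(p[i:i + kernel, j:j + kernel])
                          for j in range(img.shape[1])]
                         for i in range(img.shape[0])])

    return _filt(_filt(depth, np.max), np.min)  # dilate, then erode
```

A larger kernel fills larger holes but risks overfilling, which matches the tuning advice above.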
1.1.8 Edge Enhancement

- Function
Set the edge areas of the texture in the image to the Background color or to a color with a large difference from the Background color, so as to highlight the edge information of the Target Object.
- Use Scenario
The edges are unclear because Target Objects occlude or overlap each other.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Parameter Tuning Recommendation |
|---|---|---|---|---|
| Normal Z-direction filtering threshold | The filtering threshold for the angle between the Normal corresponding to each point in the depth map and the positive Z-axis direction of the Camera coordinate system. If the angle between a point's Normal and the positive Z-axis direction of the Camera coordinate system is greater than this threshold, the color at the corresponding position of that point in the 2D image will be set to the Background color or to a color with a large difference from the Background color | 30 | [0,180] | For flat Target Object surfaces, this threshold can be smaller. For curved Target Objects, increase it appropriately according to the surface inclination |
| Background color | The RGB color threshold of the Background color | 128 | [0,255] | / |
| Automatically adjust contrast background | Selected: the colors of points in the 2D image whose angles are greater than the filtering threshold are set to colors that differ greatly from the Background color. Cleared: those points are set to the color corresponding to the Background color | Cleared | / | / |
- Example
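The thresholding step can be sketched as follows (function name and details are illustrative): pixels whose surface normal deviates from the camera's +Z axis by more than the threshold are painted with the Background color, which highlights object edges.

```python
import numpy as np

def suppress_steep_edges(rgb, normals, angle_thresh_deg=30, background=128):
    """Illustrative sketch: paint pixels whose normal deviates from the
    camera +Z axis by more than the threshold with the background colour."""
    z = np.array([0.0, 0.0, 1.0])
    cos_angle = normals @ z / np.linalg.norm(normals, axis=-1).clip(min=1e-9)
    steep = np.degrees(np.arccos(np.clip(cos_angle, -1, 1))) > angle_thresh_deg
    out = rgb.copy()
    out[steep] = background
    return out
```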
1.1.9 Extract the Highest Layer Texture

- Function
Extract the texture of the highest layer or the lowest layer of the Target Object, and set the other areas to the Background color or to a color that differs greatly from the Background color.
- Use Scenario
Factors such as poor lighting conditions, similar color textures, dense stacking, interlaced stacking, or occlusion may make it difficult for the model to distinguish the texture differences between upper-layer and lower-layer Target Objects, which can easily lead to false detections.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Parameter Tuning Recommendation |
|---|---|---|---|---|---|
| Distance threshold (mm) | If the distance between a point and the highest-layer plane (lowest-layer plane) is lower than this threshold, the point is considered to be on the highest-layer plane (lowest-layer plane) and should be retained. Otherwise, it is considered a lower-layer (upper-layer) point, and its color is set to the Background color or to a color with a large difference from the Background color | 50 | [0.1, 1000] | mm | It is generally adjusted to 1/2 of the Target Object height |
| Cluster point cloud count | The expected number of points participating in clustering, that is, the number of sampled point clouds within the ROI 3D area | 10000 | [1,10000000] | / | The larger the cluster point cloud count, the slower the model Inference speed and the higher the accuracy; the smaller the cluster point cloud count, the faster the model Inference speed and the lower the accuracy |
| Minimum number of points per category | The minimum number of points used to filter categories | 1000 | [1, 10000000] | / | / |
| Automatically calculate contrast background | Selected: the regions other than the highest layer (lowest layer) in the 2D image are set to colors that differ greatly from the Background color threshold. Cleared: those regions are set to the color corresponding to the Background color threshold | Selected | / | / | / |
| Background color threshold | RGB color threshold for the Background color | 128 | [0,255] | / | / |
- Example
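A simplified version of the distance-threshold test (ignoring the clustering step) might look like this. The sketch assumes the highest layer corresponds to the smallest valid camera-frame depth, and paints everything farther away with the Background color:

```python
import numpy as np

def keep_top_layer(rgb, depth, dist_thresh_mm=50, background=128):
    """Illustrative sketch: treat the smallest valid depth as the top layer
    and paint pixels more than dist_thresh_mm below it with the background."""
    top = depth[depth > 0].min()            # nearest valid depth in mm
    lower = depth - top > dist_thresh_mm    # farther than the threshold
    out = rgb.copy()
    out[lower] = background
    return out
```

Setting the threshold to roughly half the Target Object height, as recommended above, keeps one layer per pass.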
1.1.10 Remove the Image Background Outside the ROI 3D Area

- Function
Remove the background outside the ROI3D area from the 2D image.
- Use Scenario
There is a lot of background noise in the image that affects the detection results.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Fill kernel size | The size of hole filling | 5 | [1,99] |
| Number of iterations | The number of image Dilation iterations | 1 | [1,99] |
| Automatically calculate contrast background | Selected: the region outside the ROI in the 2D image is set to a color that differs greatly from the Background color threshold. Cleared: that region is set to the color corresponding to the Background color threshold | Selected | / |
| Background color threshold | The RGB color threshold of the Background color | 128 | [0,255] |
Only odd numbers can be entered for the fill kernel size.
- Parameter Tuning
If you need to remove more background noise from the image, decrease the fill kernel size.
- Example
1.2 Instance Segmentation
1.2.1 Scale Ratio

- Function
Scale the original image proportionally before Inference to improve the accuracy and recall of 2D Recognition.
- Use Scenario
If the detection result is poor (for example, no instance is detected, instances are missed, one bounding box covers multiple instances, or the bounding box does not fully cover an instance), this function should be adjusted.
- Parameter Description
Default value: 1.0
Value range: [0.01, 3.00]
Step size: 0.01
- Parameter Tuning
- Run with the default value and check the detection result in the visualization window. If no instance is detected, instances are missed, one bounding box covers multiple instances, or the bounding box does not fully cover an instance, adjust this function.
In 2D Recognition, the percentage on an instance is the Confidence score, and the number is the instance ID (the recognition order of the instance).
In 2D Recognition, the colored shadow on an instance is the Mask, and the rectangular box around the instance is the bounding box.
- Try different scale ratios and observe the changes in the detection results to gradually determine the range of scale ratios. If the detection effect improves significantly at a certain scale ratio, use that scale ratio as the lower bound; if the detection effect drops significantly at a certain scale ratio, use that scale ratio as the upper bound.
If good detection results cannot be obtained after trying all scale ratios, you can adjust the ROI area.
As shown in the figure below, when the scale ratio is 0.8, the detection effect improves significantly, so 0.8 can be used as the lower bound of the scale ratio range.
When the scale ratio is 1.2, the detection effect drops significantly, so 1.2 can be used as the upper bound of the scale ratio range.
- If the actual scenario does not require high picking accuracy, you can select a scale ratio with a good detection effect within the [0.8,1.2] interval. If the actual scenario requires high picking accuracy, you should further refine the scale ratio range and adjust it with a smaller step size until the scale ratio with the best detection effect is found.
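The sweep described above can be scripted to keep notes consistent across trials. In this sketch, `run_inference` is a purely hypothetical stand-in for the 2D recognition model; only the bookkeeping loop is shown:

```python
def sweep_scale_ratios(run_inference, ratios):
    """Illustrative tuning loop: record how many instances are detected at
    each scale ratio; `run_inference` is a hypothetical model call."""
    return {r: len(run_inference(r)) for r in ratios}
```

Comparing the counts (and the visualization) across ratios makes it easier to bracket the good range before refining with a smaller step size.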
1.2.2 Lower Confidence Threshold

- Function
Retain only the recognition results whose Deep Learning model scores are higher than the lower Confidence threshold.
- Use Scenario
This function can be adjusted when the instances enclosed by the detection results do not meet expectations.
- Parameter Description
Default value: 0.5
Value range: [0.01, 1.00]
- Parameter Tuning
If the model detects too few instances, decrease this threshold; if the value is too small, it may affect the accuracy of image recognition.
If an excessively small lower Confidence threshold causes incorrect instances to be detected and these incorrect instances need to be removed, increase this threshold; if the value is too large, it may result in zero retained detection results and no output.
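The filtering rule itself is simple; the sketch below uses an illustrative instance structure (a dict with a `"score"` key) to show the retain/discard decision:

```python
def filter_by_confidence(instances, lower_thresh=0.5):
    """Illustrative sketch: retain only detections whose model score
    exceeds the lower Confidence threshold."""
    return [inst for inst in instances if inst["score"] > lower_thresh]
```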
1.2.3 Enable Auto Enhancement

- Function
Combine all values in the input scale ratios and rotation angles for Inference, then return all combined results whose scores are greater than the configured lower Confidence threshold. This can improve model Inference accuracy, but it increases processing time.
- Use Scenario
A single scale ratio cannot satisfy actual scenario requirements, resulting in incomplete detection, or the object placement tilt is relatively large.
- Example
If Auto Enhancement - Scale Ratio is set to [0.8, 0.9, 1.0] and Auto Enhancement - Rotation Angle is set to [0, 90.0], the values in the scale ratios and rotation angles are combined pairwise. The model automatically generates 6 images for Inference, merges the 6 Inference results, and outputs the results whose scores are greater than the lower Confidence threshold.
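The pairwise combination in this example is a Cartesian product of the two lists, which is why 3 scale ratios and 2 rotation angles yield 6 inference passes:

```python
from itertools import product

scales = [0.8, 0.9, 1.0]   # Auto Enhancement - Scale Ratio
angles = [0.0, 90.0]       # Auto Enhancement - Rotation Angle
combos = list(product(scales, angles))  # one inference pass per pair
```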
Auto Enhancement - Scale Ratio

- Function
Scale the original image multiple times and perform Inference multiple times to output a comprehensive Inference result.
- Use Scenario
A single scale ratio cannot satisfy actual scenario requirements, resulting in incomplete detection.
- Parameter Description
Default value: [1.0]
Value range: the range of each scale ratio is [0.1, 3.0]
Multiple scale ratios can be set, separated by English commas.
- Parameter Tuning
Enter multiple scale ratios from 1.2.1 Scale Ratio that produce good detection results.
Auto Enhancement - Rotation Angle

- Function
Rotate the original image multiple times and perform Inference multiple times to output a comprehensive Inference result.
- Use Scenario
Use this when the object placement deviates significantly from the coordinate axes.
- Parameter Description
Default value: [0.0]
Value range: the value range of each rotation angle is [0, 360]
Multiple rotation angles can be set, separated by English commas.
- Parameter Tuning
Adjust Auto Enhancement - Rotation Angle according to the object angle in the actual scenario. The tilt angle can be judged based on sack patterns and bag opening shapes, or carton edges and brand logos.
1.3 Point Cloud Generation

| Parameter | Option | Sub-parameter | Description |
|---|---|---|---|
| Instance point cloud generation mode | Mask mode (after segmentation) | - | Generate the point cloud using the segmented instance Mask |
| | Bounding box mode (after segmentation) | Bounding box scale ratio (after segmentation) | Generate the point cloud using the segmented instance bounding box |
| | Whether color is required for point cloud generation (after segmentation) | - | Whether the generated instance point cloud needs attached color |
| | Mask mode (after filtering) | - | Generate the point cloud using the filtered instance Mask |
| | Bounding box mode (after filtering) | Bounding box scale ratio (after filtering) | Generate the point cloud using the filtered instance bounding box |
| | Whether color is required for point cloud generation (after filtering) | - | Whether the generated instance point cloud needs attached color |
If acceleration is not required, there is no need to use the Instance Filtering function. Use Mask mode (after segmentation) or Bounding box mode (after segmentation) to generate the instance point cloud, which can be viewed in the generated instance point cloud folder under the project storage folder \ProjectName\data\PickLight\HistoricalDataTimestamp\Builder\pose\input.
If acceleration is required, you can use the Instance Filtering function to filter instances and use Mask mode (after filtering) or Bounding box mode (after filtering) to generate the instance point cloud, which can be viewed in the generated instance point cloud folder under the project storage folder \ProjectName\data\PickLight\HistoricalDataTimestamp\Builder\pose\input.
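Conceptually, generating an instance point cloud from a Mask amounts to back-projecting the masked depth pixels through the camera intrinsics. The sketch below is an assumption about the mechanism (pinhole model; `fx`, `fy`, `cx`, `cy` are illustrative intrinsics), not the product's actual implementation:

```python
import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Illustrative sketch: back-project masked, valid depth pixels into
    camera-frame 3D points with pinhole intrinsics."""
    v, u = np.nonzero(mask & (depth > 0))  # pixel rows/cols to keep
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)     # one (x, y, z) row per pixel
```

Bounding box mode works the same way but takes all pixels inside the (scaled) box instead of the Mask.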
1.4 Instance Filtering

1.4.1 Filter Based on Bounding Box Area

- Function Introduction
Filter based on the pixel area of the bounding boxes of the detected instances.
- Use Scenario
Suitable for scenarios where instance bounding box areas differ significantly. By setting the upper and lower limits of the bounding box area, noise in the image can be filtered out, improving image recognition accuracy and preventing noise from increasing the processing time of subsequent steps.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Minimum area (pixels) | This parameter is used to set the minimum filtering area for the bounding box. Instances whose bounding box area is lower than this value will be filtered out | 1 | [1, 10000000] | pixel |
| Maximum area (pixels) | This parameter is used to set the maximum filtering area for the bounding box. Instances whose bounding box area is higher than this value will be filtered out | 10000000 | [2, 10000000] | pixel |
- Example
Run with the default values to view the bounding box area of each instance in the log, as shown below.


Adjust **Minimum area** and **Maximum area** according to the bounding box area of each instance. For example, set **Minimum area** to 20000 and **Maximum area** to 30000 to filter out instances with pixel areas smaller than 20000 or larger than 30000. The instance filtering process can be viewed in the log.


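The filtering rule can be sketched directly (the instance structure with a `bbox` tuple is illustrative), using the 20000/30000 limits from the example above:

```python
def filter_by_bbox_area(instances, min_area=1, max_area=10_000_000):
    """Illustrative sketch: drop instances whose bounding box pixel area
    falls outside [min_area, max_area]."""
    kept = []
    for inst in instances:
        x0, y0, x1, y1 = inst["bbox"]
        area = (x1 - x0) * (y1 - y0)
        if min_area <= area <= max_area:
            kept.append(inst)
    return kept
```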
1.4.2 Filter Based on Bounding Box Aspect Ratio

- Function Introduction
Instances whose bounding box aspect ratio is outside the specified range will be filtered out.
- Use Scenario
Suitable for scenarios where the aspect ratios of instance bounding boxes differ significantly.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Minimum aspect ratio | The minimum value of the bounding box aspect ratio. Instances whose bounding box aspect ratio is lower than this value will be filtered out | 0 | [0, 10000000] |
| Maximum aspect ratio | The maximum value of the bounding box aspect ratio. Instances whose bounding box aspect ratio is higher than this value will be filtered out | 10000000 | [0, 10000000] |
| Use X/Y-axis side lengths as the aspect ratio | Cleared (default): the ratio of the longer side to the shorter side of the bounding box is used as the aspect ratio, which is suitable when the longer and shorter sides of the bounding box differ greatly. Selected: the ratio of the bounding box's X-axis side length to its Y-axis side length in the pixel coordinate system is used, which is suitable when most normal instance bounding boxes share a similar long/short-side ratio but some abnormally recognized bounding boxes have a clearly different X/Y ratio | Cleared | / |
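The two aspect-ratio definitions differ only in whether the sides are ordered. A minimal sketch (function name illustrative):

```python
def bbox_aspect_ratio(w, h, use_xy_axes=False):
    """Illustrative sketch of the two definitions: long side over short
    side (default), or X-axis side over Y-axis side when selected."""
    if use_xy_axes:
        return w / h            # orientation-sensitive
    return max(w, h) / min(w, h)  # always >= 1
```

Note that the default definition is always at least 1, while the X/Y definition can distinguish a 50x100 box from a 100x50 one.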
1.4.3 Filter Instances by Category ID

- Function Introduction
Filter according to the instance category.
- Use Scenario
Suitable for scenarios where incoming materials contain multiple types of Target Objects.
- Parameter Description
| Parameter | Description | Default Value |
|---|---|---|
| Category IDs to retain | Retain instances whose category IDs are in the list. Instances whose category IDs are not in the list will be filtered out | [0] |
- Example
1.4.4 Filter Based on Side Lengths of the Instance Point Cloud

- Function Introduction
Filter based on the long side and short side of the instance point cloud.
- Use Scenario
Suitable for scenarios where the distances of the instance point cloud on the x-axis or y-axis differ greatly. By setting the distance range of the instance point cloud, noise in the image can be filtered out, improving image recognition accuracy and preventing noise from increasing the processing time of subsequent steps.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Short side length range (mm) | The side length range of the short side of the point cloud | [0, 10000] | [0, 10000] | mm |
| Long side length range (mm) | The side length range of the long side of the point cloud | [0, 10000] | [0, 10000] | mm |
| Lower edge denoising limit (%) | The lower percentile of X/Y values in the instance point cloud (Camera coordinate system); points below this percentile are removed so that noise does not affect the length calculation | 5 | [0, 100] | % |
| Upper edge denoising limit (%) | The upper percentile of X/Y values in the instance point cloud (Camera coordinate system); points above this percentile are removed so that noise does not affect the length calculation | 95 | [0, 100] | % |
| Side length type | Filter by the long side and short side of the instance point cloud. Instances whose long-side or short-side lengths are not within the range will be filtered out | Short side of instance point cloud | Short side of instance point cloud; Long side of instance point cloud; Long and short sides of instance point cloud | / |
- Example
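The effect of the denoising limits can be sketched as measuring the side length between the lower and upper percentiles of the coordinate values rather than between the raw extremes (illustrative implementation, not the product's code):

```python
import numpy as np

def denoised_extent(coords, lower_pct=5, upper_pct=95):
    """Illustrative sketch: side length between the lower/upper percentile
    of point coordinates, so isolated noise points cannot inflate it."""
    lo, hi = np.percentile(coords, [lower_pct, upper_pct])
    return hi - lo
```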
1.4.5 Filter by Category ID Based on the Classifier

- Function Introduction
Filter instances by category ID based on the classifier. Instances that are not in the reference categories will be filtered out.
- Use Scenario
In scenarios with multiple Target Object categories, the vision model may detect various types of Target Objects, but the actual task may require only one category. In this case, this function can be used to filter out unnecessary Target Objects.
- Parameter Description
The default value is [0], which means instances with category ID 0 are retained by default, and instances whose category ID is not in the list will be filtered out.
1.4.6 Filter Based on Three-Channel Colors

- Function Introduction
Instances can be filtered out by three-channel color thresholds (HSV or RGB).
- Use Scenario
Cases where the colors of incorrect instances and correct instances are clearly distinguishable.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Maximum color range value | Maximum color value | [180,255,255] | [[0,0,0],[255,255,255]] |
| Minimum color range value | Minimum color value | [0,0,0] | [[0,0,0],[255,255,255]] |
| Filtering percentage threshold | Color pass-rate threshold | 0.05 | [0,1] |
| Reverse filtering | If selected, remove instances whose proportion outside the color range is lower than the threshold. If cleared, remove instances whose proportion within the color range in the instance image is lower than the threshold | Cleared | / |
| Color mode | The color space selected in color filtering | HSV color space | RGB color space; HSV color space |
- Example

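The pass-rate computed by this filter can be sketched as the fraction of an instance's pixels that fall inside the three-channel range (function name illustrative; the pixels are assumed to already be in the selected color space). The instance is kept when this fraction exceeds the filtering percentage threshold; reverse filtering inverts the test.

```python
import numpy as np

def color_pass_rate(pixels, lo, hi):
    """Illustrative sketch: fraction of pixels inside the three-channel
    color range [lo, hi]."""
    lo, hi = np.array(lo), np.array(hi)
    inside = np.all((pixels >= lo) & (pixels <= hi), axis=-1)
    return float(inside.mean())
```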
1.4.7 Filter Based on Confidence

- Function Introduction
Filter based on the Confidence score of the instance.
- Use Scenario
Suitable for scenarios where instance Confidence differs greatly.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Reference Confidence | Retain instances with Confidence greater than the threshold, and filter out instances with Confidence lower than the threshold. | 0.5 | [0,1] |
| Reverse filtering result | After reversal, retain instances with Confidence lower than the threshold, and filter out instances with Confidence greater than the threshold. | Cleared | / |
- Example
1.4.8 Filter Based on Point Cloud Count

- Function Introduction
Filter based on the downsampled instance point cloud count.
- Use Scenario
The instance point cloud contains a large amount of noise.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Minimum point cloud count | The minimum point cloud count | 3500 | [1, 10000000] |
| Maximum point cloud count | The maximum point cloud count | 8500 | [2, 10000000] |
| Filter instances whose count is within the range | If selected, filter instances whose point cloud count is within the interval between the minimum and maximum values. If cleared, filter instances whose point cloud count is outside the interval | Cleared | / |
1.4.9 Filter Based on Mask Area

- Function Introduction
Filter image masks according to the sum of mask pixels (that is, the pixel area) of detected instances.
- Use Scenario
Suitable for scenarios where instance Mask areas differ greatly. By setting the upper and lower area limits of the Mask, noise in image masks can be filtered out, improving image recognition accuracy and preventing noise from increasing the processing time of subsequent steps.
- Parameter Setting Description
| Parameter Name | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Reference minimum area | This parameter is used to set the minimum filtering area for the Mask. Instances whose Mask area is lower than this value will be filtered out | 1 | [1, 10000000] | pixel |
| Reference maximum area | This parameter is used to set the maximum filtering area for the Mask. Instances whose Mask area is higher than this value will be filtered out | 10000000 | [2, 10000000] | pixel |
- Example
1.4.10 Filter Based on Visibility

- Function Introduction
Filter according to the visibility score of the instance.
- Use Scenario
Suitable for scenarios where the visibility of instances differs greatly.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Reference visibility threshold | Retain instances whose visibility is greater than the threshold, and filter out instances whose visibility is lower than the threshold. Visibility is used to determine how visible an instance is in the image. The more the Target Object is occluded, the lower the visibility. | 0.5 | [0,1] |
| Reverse filtering result | After reversal, retain instances whose visibility is lower than the threshold, and filter out instances whose visibility is greater than the threshold. | Cleared | / |
1.4.11 Filter Instances with Overlapping Bounding Boxes

- Function Introduction
Filter instances whose bounding boxes intersect and overlap.
- Use Scenario
Suitable for scenarios where instance bounding boxes intersect each other.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range |
|---|---|---|---|
| Bounding box overlap ratio threshold | The threshold for the ratio of the intersection area of bounding boxes to the area of the instance bounding box | 0.05 | [0, 1] |
| Filter the instance with the larger bounding box area | If selected, filter out the instance with the larger area among two instances whose bounding boxes intersect. If cleared, filter out the instance with the smaller area among two instances whose bounding boxes intersect | Selected | / |
- Example

Run with the default values and check the overlapping instance bounding boxes in the log. After instance filtering, 2 instances remain.

The log shows that 12 instances were filtered out because of overlapping bounding boxes, leaving 2 instances whose bounding boxes do not overlap.

Set Bounding box overlap ratio threshold to 0.1 and select Filter the instance with the larger bounding box area. Check the instance filtering process in the log. 9 instances are filtered out because the ratio of the bounding box overlap area to the instance bounding box area is greater than 0.1, 3 instances are retained because the ratio is less than 0.1, and 2 instances have non-overlapping bounding boxes.


Set Bounding box overlap ratio threshold to 0.1 and clear Filter the instance with the larger bounding box area. Check the instance filtering process in the log. For 9 instances, the ratio of the bounding box overlap area to the instance bounding box area is greater than 0.1, but 2 of them are retained because their bounding box areas are smaller than those of the overlapping instances, so 7 instances are filtered out. 3 instances are retained because the ratio of the bounding box overlap area to the instance bounding box area is less than 0.1, and 2 instances have non-overlapping bounding boxes.


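The overlap ratio used in this filter can be sketched for axis-aligned boxes given as `(x0, y0, x1, y1)` tuples (illustrative helper, not the product's code): intersection area divided by the area of the instance's own bounding box.

```python
def overlap_ratio(a, b):
    """Illustrative sketch: intersection area of boxes a and b divided by
    the area of box a (boxes as (x0, y0, x1, y1))."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    return (ix * iy) / ((a[2] - a[0]) * (a[3] - a[1]))
```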
1.4.12 [Master] Filter Concave/Convex Mask Instances Based on the Area Ratio of Mask / Mask Circumscribed Polygon

- Function Introduction
Calculate the area ratio of the Mask to the polygon circumscribed around the Mask. If it is smaller than the configured threshold, the instance will be filtered out.
- Use Scenario
Suitable for cases where the Target Object Mask contains jagged edges or concave-convex irregularities.
- Parameter Description
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Area ratio threshold | The threshold for the Mask / convex hull area ratio. If it is smaller than the configured threshold, the instance will be filtered out. | 0.1 | [0,1] |
1.4.13 [Master] Filter Based on Average Point Cloud Distance

- Function Introduction
Filter based on the average distance from points in the point cloud to the fitted plane, removing uneven instance point clouds.
- Use Scenario
Suitable for scenarios where the point cloud of planar Target Objects is bent.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Plane segmentation distance threshold (mm) | Extract a plane from the bent instance point cloud. Points whose distance to the plane is less than this threshold are regarded as points on that plane | 10 | [-1000, 1000] | mm |
| Average distance threshold (mm) | The average value of the distances from points in the instance point cloud to the extracted plane | 20 | [-1000, 1000] | mm |
| Remove instances whose average distance is smaller than the threshold | If selected, filter out instances whose average distance from points to the extracted plane is smaller than the average distance threshold. If cleared, filter out instances whose average distance from points to the extracted plane is greater than the average distance threshold | Cleared | / | / |
1.4.14 [Master] Filter Occluded Instances Based on the Area Ratio of Mask / Bounding Box

- Function Introduction
Calculate the area ratio of the Mask to the bounding box. Instances whose ratio is outside the minimum and maximum range will be filtered out.
- Use Scenario
Used to filter instances of occluded Target Objects.
- Parameter Description
The closer the Mask / bounding box area ratio is to 1, the less the instance is occluded; conversely, a smaller ratio indicates that the instance may be occluded.
| Parameter | Description | Default Value | Value Range |
|---|---|---|---|
| Minimum area ratio | The lower limit of the Mask / bounding box area ratio range. The smaller the ratio, the higher the degree of occlusion of the instance | 0.1 | [0,1] |
| Maximum area ratio | The upper limit of the Mask / bounding box area ratio range. The closer the ratio is to 1, the lower the degree of occlusion of the instance | 1.0 | [0,1] |
1.4.15 [Master] Determine Whether All Highest-Layer Instances Have Been Fully Detected

- Function Introduction
One of the foolproof mechanisms. It determines whether all instances in the highest layer have been fully detected. If any highest-layer instance has not been detected, an error will be reported and the Workflow will end.
- Use Scenario
Suitable for scenarios where one shot is used for multiple picks or where picking must be performed in sequence, to prevent missed picks in subsequent tasks caused by incomplete instance detection.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Parameter Tuning |
|---|---|---|---|---|---|
| Distance threshold | Used to determine the highest-layer Target Object. If the distance between a point and the highest point of the Target Object point cloud is less than the distance threshold, the point is considered a highest-layer point cloud point; otherwise, it is not considered a highest-layer point cloud point. | 5 | [0.1, 1000] | mm | It should be smaller than the height of the Target Object |
1.5 Instance Sorting

- Function Introduction
Group, sort, and extract instances according to the selected strategy.
- Use Scenario
Applicable to Depalletizing, unordered picking, and ordered loading and unloading scenarios.
If sorting is not required, you do not need to configure a specific strategy.
1.5.1 Reference Coordinate System

- Function Introduction
Set a unified coordinate system for all instances to group and sort instances.
- Use Scenario
Applicable to Depalletizing, unordered picking, and ordered loading and unloading scenarios.
Coordinate-related strategies should be used only after the reference coordinate system is set.
- Parameter Description
| Parameter | Description | Illustration |
|---|---|---|
| Camera coordinate system | The origin of the coordinate system is above the object, and the positive Z-axis points downward; the XYZ values are the values of the object center point in this coordinate system | ![]() |
| ROI coordinate system | The origin of the coordinate system is approximately at the center of the stack, and the positive Z-axis points upward; the XYZ values are the values of the object center point in this coordinate system | ![]() |
| Robot coordinate system | The origin of the coordinate system is on the robot itself, and the positive Z-axis generally points upward; the XYZ values are the values of the object center point in this coordinate system | ![]() |
| Pixel coordinate system | The origin of the coordinate system is the upper-left vertex of the RGB image and is a two-dimensional plane coordinate system; the X and Y values are the x and y values of the bbox detection box, and Z is 0 | ![]() |
1.5.2 General grasping strategy

- Parameter Description
| Parameter | Description | Default Value |
|---|---|---|
| Strategy | Select which value is used for grouping and sorting and how to sort, including the XYZ coordinate values of the instance point cloud center, bounding box aspect ratio, distance from the instance point cloud center to the ROI center, etc. Multiple items can be overlaid and will be executed in sequence | X coordinate value of the instance point cloud center from small to large (mm) |
| Grouping step size | According to the selected strategy, instances are divided into several groups based on the step size. The grouping step size is the interval between two groups of instances. For example, if the strategy selected is "Z coordinate value of the instance point cloud center from large to small (mm)", then the Z coordinates of all instance point cloud centers are sorted from large to small, and then the Z coordinates are grouped according to the step size, so the corresponding instances are also divided into several groups | / |
| Extract the first several groups | After grouping and sorting, how many groups of instances need to be retained | 10000 |
| Strategy name* | Description | Grouping step size (default value) | Grouping step size (value range) | Extract the first several groups (default value) |
|---|---|---|---|---|
| XYZ coordinate values of the instance point cloud center from large to small / from small to large (mm) | Use the XYZ coordinate values of the point cloud center of each instance for grouping and sorting. This strategy should be used only after the reference coordinate system is set | 200.000 | (0, 10000000] | 10000 |
| From the middle to both sides / from both sides to the middle along the XY coordinate axes of the instance point cloud center (mm) | Use the XY coordinate values of the point cloud center of each instance and perform grouping and sorting in the direction of "from the middle to both sides" or "from both sides to the middle". This strategy should be used only after the reference coordinate system is set | 200.000 | (0, 10000000] | 10000 |
| XY coordinate values of the bounding box center from large to small / from small to large (mm) | Use the XY coordinate values of the center point of each instance's bounding box in the pixel coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
| Bounding box aspect ratio from large to small / from small to large | Use the ratio of the longer side to the shorter side of the bounding box for grouping and sorting | 1 | (0, 10000] | 10000 |
| From the middle to both sides / from both sides to the middle along the XY coordinate axes of the bounding box center (mm) | Use the XY coordinate values of the center point of the bounding box and perform grouping and sorting in the direction of "from the middle to both sides" or "from both sides to the middle" | 200.000 | (0, 10000000] | 10000 |
| Target Object type ID from large to small / from small to large | Use the ID of the Target Object type for grouping and sorting, suitable for multi-category Target Object scenarios | 1 | [1, 10000] | 10000 |
| Local feature ID from large to small / from small to large | Use the ID of the local feature for grouping and sorting | 1 | [1, 10000] | 10000 |
| Confidence from large to small / from small to large | Use the Confidence of each instance for grouping and sorting | 1 | (0, 1] | 10000 |
| Visibility from small to large / from large to small | Use the visibility of each instance for grouping and sorting | 1 | (0, 1] | 10000 |
| Mask area from large to small / from small to large | Use the Mask area of each instance for grouping and sorting | 10000 | [1, 10000000] | 10000 |
| Distance from the instance point cloud center to the ROI center from near to far / from far to near (mm) | Use the distance from the point cloud center of each instance to the center of the ROI coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
| Distance from the instance point cloud center to the robot coordinate origin from near to far / from far to near (mm) | Use the distance from the point cloud center of each instance to the origin of the Robot coordinate system for grouping and sorting | 200.000 | (0, 10000000] | 10000 |
- Example
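As a worked example of the grouping step size, the sketch below (hypothetical code, assuming the "Z coordinate value of the instance point cloud center from large to small" strategy) sorts Z values in descending order and starts a new group whenever a value falls more than one step below the first value of the current group:

```python
def group_by_step(values, step):
    """Group values sorted from large to small into bins of width `step`,
    as the grouping step size does for the Z-coordinate strategy.
    Illustrative sketch, not the product's algorithm."""
    groups = []
    for v in sorted(values, reverse=True):
        if groups and groups[-1][0] - v <= step:
            groups[-1].append(v)   # still within one step of the group start
        else:
            groups.append([v])     # farther than one step: open a new group
    return groups

# Z coordinates of instance centers (mm); a 50 mm step separates two layers.
z_values = [321.0, 318.5, 250.2, 248.9]
print(group_by_step(z_values, 50))  # [[321.0, 318.5], [250.2, 248.9]]
```

With "Extract the first several groups" set to 1, only the first group (the top layer) would then be retained for picking.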
1.5.3 Custom grasping strategy

(1) Function Description
Switch the grasping strategy to Custom grasping strategy, then click Add to add a custom grasping strategy.
A custom grasping strategy customizes the picking order of each Target Object. If picking is difficult to achieve with the general grasping strategy, or if issues such as point cloud noise make it hard to tune appropriate parameters, consider using a custom grasping strategy.
The custom grasping strategy is suitable for Depalletizing and ordered loading and unloading scenarios, but not for unordered picking scenarios, because the Target Objects in a custom grasping strategy must be ordered (that is, the order of the Target Objects is fixed).
A custom grasping strategy can be combined with only a single general grasping strategy, and that strategy must be "Z coordinate value of the instance point cloud center from small to large".
(2) Parameter Description
| Parameter | Description | Default Value | Value Range | Parameter Tuning |
|---|---|---|---|---|
| IOU threshold | Represents the overlap threshold between the annotated bbox and the detected bbox. The overlap is used to determine which image's sorting method should be selected for sorting the current Target Object instance. | 0.7 | [0,1] | The larger the threshold, the stricter the matching, the worse the anti-interference capability, and slight shape or position changes may cause matching failure, resulting in a wrong custom strategy being matched and sorting being performed in the wrong order |
| Pixel distance threshold | Represents the size difference between a bbox that can be matched and the detected bbox. | 100 | [0,1000] | The smaller the threshold, the stricter the matching and the better the anti-interference capability. If the placement of Target Objects between different layers is similar, a custom strategy may also be matched incorrectly, causing an incorrect sorting order. |
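The IOU threshold compares the annotated bbox with the detected bbox using the standard intersection-over-union measure. A minimal sketch (boxes as `(x1, y1, x2, y2)` pixel tuples; `bbox_iou` is a hypothetical helper name):

```python
def bbox_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    Sketch of the overlap test behind the IOU threshold parameter."""
    # Corners of the intersection rectangle (empty if boxes are disjoint).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

annotated = (100, 100, 200, 200)
detected = (110, 110, 210, 210)
print(round(bbox_iou(annotated, detected), 3))  # 0.681
```

With the default threshold of 0.7, this pair (IoU ≈ 0.68) would not be matched; lowering the threshold loosens the match, at the cost of the anti-interference behaviour described in the tuning column.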
(3) Select the Reference Coordinate System
When using a custom grasping strategy, only the Camera coordinate system or the pixel coordinate system can be selected.
If there are multiple layers of Target Objects, select the Camera coordinate system; if there is only one layer of Target Objects, select the pixel coordinate system.
(4) Strategy, Grouping Step Size, and Extract the First Several Groups
| Parameter | Description | Default Value |
|---|---|---|
| Strategy | Only Z coordinate value of the instance point cloud center from large to small / from small to large (mm) can be selected | / |
| Grouping step size | According to the strategy of Z coordinates from small to large, sort the Z coordinates of the instances from small to large and divide the instances into several groups according to the step size | 10000 |
| Extract the first several groups | After grouping and sorting, how many groups of instances need to be retained | 10000 |
(5) Take Photo to Acquire Image / Add Local Image
Click Take Photo to Acquire Image to obtain an image from the currently connected Camera, or click Add Local Image to import an image from a local drive. Acquire one image for each layer and for each distinct placement pattern of the Target Objects; if every layer is identical, a single image is enough. Right-click an image to delete it.
On the acquired image, press and hold the left mouse button and drag to annotate a bbox. Press the DELETE key to delete annotated bboxes one at a time.
2. 3D Calculation
2.1 Preprocessing
The preprocessing for 3D Calculation processes the 3D point cloud before the Deep Learning model performs calculations.
2.1.1 Point Cloud Clustering Denoising

- Function
Remove noise by point cloud clustering.
- Use Scenario
There is a large amount of noise in the instance point cloud.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit | Parameter Tuning Recommendation |
|---|---|---|---|---|---|
| Distance threshold for point cloud clustering (mm) | Determines whether point clouds in space belong to the same category. If the distance between point clouds is lower than this threshold, they are considered the same category | 5 | [0.1, 1000] | mm | Generally does not need to be changed. It should be greater than the point spacing of the Target Object point cloud and smaller than the minimum distance between the Target Object point cloud and the noise point cloud |
| Minimum point count threshold | Point cloud clusters with fewer points than this threshold will be filtered out | 100 | [1,10000000] | / | Generally does not need to be changed. Increase the minimum point count threshold according to the amount of noise in the instance point cloud |
| Maximum point count threshold | Point cloud clusters with more points than this threshold will be filtered out | 100000 | [1,10000000] | / | Generally does not need to be changed. If the number of Target Object point cloud points is greater than 100000, increase the maximum point count threshold |
| Whether to select the top point cloud of the ROI | If selected, calculate and sort the average Z coordinate of the same category of point clouds in the ROI coordinate system, and retain the point cloud category with the largest average Z coordinate (top point cloud). If cleared, retain all point clouds that meet the conditions | Cleared | / | / | If the Target Object point cloud is above the noise point cloud, selecting this option retains the Target Object point cloud; if the Target Object point cloud is below the noise point cloud, select this option and adjust the Z-axis of the ROI coordinate system downward to retain the Target Object point cloud |
| Whether to visualize process data | If selected, save the denoised point cloud, which can be viewed in C:_data | Cleared | / | / | In debug mode, select this option if visualized data needs to be saved |
- Example
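As a worked example of the three thresholds above, the sketch below clusters points by single linkage within the distance threshold and then drops clusters whose point count falls outside the allowed range. This is illustrative Python, not the product's algorithm, and deliberately uses small thresholds so the toy data stays readable:

```python
import numpy as np

def cluster_denoise(points, dist_mm=5.0, min_pts=3, max_pts=100000):
    """Single-linkage clustering: points closer than dist_mm join the same
    cluster; clusters outside [min_pts, max_pts] are filtered out.
    Illustrative sketch (the product's defaults are 5 mm, 100, 100000)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:                      # flood-fill the current cluster
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < dist_mm) & (labels == -1))[0]:
                labels[k] = cluster
                stack.append(k)
        cluster += 1
    keep = np.zeros(n, dtype=bool)
    for c in range(cluster):              # apply the point-count thresholds
        size = np.sum(labels == c)
        if min_pts <= size <= max_pts:
            keep |= labels == c
    return points[keep]

# A dense 4-point object cluster plus two isolated noise points far away.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [50, 50, 0], [60, 60, 0]], float)
print(len(cluster_denoise(pts, dist_mm=5.0, min_pts=3)))  # 4
```

The two stray points each form a single-point cluster below `min_pts` and are removed, which is exactly the behaviour the minimum point count threshold controls.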
2.1.2 Point Cloud Downsampling

- Function
Sample the point cloud according to the specified point spacing during downsampling.
- Use Scenario
If the Camera accuracy is high and causes the instance point cloud count to be too large, and the log reports the error "The number of instance point cloud points input to pose estimation exceeds the PMFE algorithm limit", this option should be selected.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit |
|---|---|---|---|---|
| Point spacing for downsampling (mm) | Sample the point cloud according to the specified point spacing | 5.0 | [0.1, 1000] | mm |
Parameter Tuning
- Set according to the point spacing of the instance point cloud. The larger the value, the fewer the downsampled point cloud points
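Downsampling by point spacing can be approximated with a voxel grid: one representative point (here the centroid) is kept per cube whose side equals the point spacing. A hedged sketch, since the product's exact sampling method is not documented here:

```python
import numpy as np

def voxel_downsample(points, spacing_mm=5.0):
    """Keep one centroid per voxel of side spacing_mm.
    Sketch of downsampling by point spacing; not the product's exact method."""
    keys = np.floor(points / spacing_mm).astype(int)  # voxel index per point
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in buckets.values()])

pts = np.array([[0.0, 0, 0], [1, 1, 0], [2, 2, 0], [10, 10, 0]])
print(len(voxel_downsample(pts, 5.0)))  # 2
```

This also shows the tuning rule above: the larger the spacing, the fewer voxels, and therefore the fewer downsampled points.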
2.1.3 Calculate Normal

- Function
Calculate the point cloud Normal for use in subsequent cylindrical fitting.
- Use Scenario
Cylinder-based ordered loading and unloading and cylinder-based unordered picking.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Fix Normal orientation | Whether to fix the orientation when calculating the Normal. After it is enabled, the Normal orientation is determined by the orientation reference vector | Selected | / |
| Number of neighboring points for Normal calculation | The larger the value, the more neighboring points are referenced, but local changes may be ignored. The opposite is true for smaller values | 30 | [1,200] |
| Orientation reference vector | Orientation reference vector for Normal calculation | [0,0,1] | / |
- Parameter Tuning
Cannot be changed.
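The interaction between Fix Normal orientation and the orientation reference vector can be illustrated as follows (hypothetical helper): any Normal whose dot product with the reference vector is negative is flipped, so all Normals end up pointing to the same side.

```python
import numpy as np

def orient_normals(normals, reference=(0.0, 0.0, 1.0)):
    """Flip every normal whose dot product with the orientation reference
    vector is negative. Sketch of the "Fix Normal orientation" behaviour."""
    normals = np.asarray(normals, float)
    ref = np.asarray(reference, float)
    flip = normals @ ref < 0          # pointing away from the reference
    normals[flip] *= -1
    return normals

n = orient_normals([[0, 0, 1], [0, 0, -1]])
print(n.tolist())  # [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
```

Consistent orientation matters for the later cylinder fitting, since a cloud with half its Normals flipped would not fit a single smooth surface.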
2.1.4 Point Cloud Contour Extraction

- Function
Extract the Target Object contour from the instance point cloud.
- Use Scenario
When **Use contour mode** (described under 2.2.4 Object Pose Correction) is used, Point Cloud Contour Extraction should also be selected.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit | Parameter Tuning Recommendation |
|---|---|---|---|---|---|
| Reference radius (mm) | The search radius for extracting the contour from the instance point cloud | 10 | [0.1,10000000000] | mm | The reference radius is recommended to be set to half of the downsampling point spacing in 2.1.2 Point Cloud Downsampling, and it must be greater than the point cloud spacing |
- Example
2.1.5 Filter Point Clouds by HSV Color Range (Hue, Saturation, Value)

- Function
Filter point clouds according to hue, saturation, and brightness in the point cloud image, and screen out point cloud regions that match the target range.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Filter depth by HSV - maximum color range value | Maximum color value for filtering point clouds | [0.9,0.9,0.9] | [[0,0,0],[1,1,1]] |
| Filter depth by HSV - minimum color range value | Minimum color value for filtering point clouds | [0.0,0.0,0.0] | [[0,0,0],[1,1,1]] |
- Example
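A minimal sketch of HSV-range filtering, assuming each point already has an HSV triple normalized to [0, 1] as in the parameter table (`filter_by_hsv` is a hypothetical helper name):

```python
import numpy as np

def filter_by_hsv(points, hsv, lo=(0.0, 0.0, 0.0), hi=(0.9, 0.9, 0.9)):
    """Keep points whose per-point HSV triple lies inside [lo, hi] on all
    three channels (defaults match the parameter table above)."""
    hsv = np.asarray(hsv, float)
    mask = np.all((hsv >= lo) & (hsv <= hi), axis=1)
    return np.asarray(points)[mask]

pts = np.array([[0, 0, 0], [1, 1, 1]], float)
hsv = np.array([[0.5, 0.5, 0.5],     # inside the range: kept
                [0.95, 0.2, 0.2]])   # hue above 0.9: dropped
print(len(filter_by_hsv(pts, hsv)))  # 1
```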
2.1.6 Filter Point Clouds by Three-Channel Colors

- Function
Filter point clouds by three-channel colors and screen out point cloud regions that match the target range.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Filter point clouds by three-channel colors - maximum color value | Maximum color value for filtering point clouds | [0.9,0.9,0.9] | [[0,0,0],[1,1,1]] |
| Filter depth by three-channel colors - minimum color value | Minimum color value for filtering point clouds | [0.0,0.0,0.0] | [[0,0,0],[1,1,1]] |
- Example
2.1.7 Select Point Clouds Within the ROI Area

- Function
Select the point clouds within the ROI 3D area from the instance point cloud. This default function cannot be deleted.
- Example
2.1.8 Remove Points Whose Normals Exceed the Angle Threshold

- Function
Remove point clouds whose angle between the Normal and the axis direction of the standard Normal is greater than the Normal angle threshold.
- Use Scenario
Cylinder-based unordered picking and cylinder-based ordered loading and unloading.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range | Unit |
|---|---|---|---|---|
| Angle threshold | Points whose Normal forms an angle with the standard Normal axis greater than this threshold are removed | 15 | [-360, 360] | ° |
| Standard Normal axis direction | The axis used as the standard Normal when calculating the angle with the point cloud Normal | Z-axis | X/Y/Z-axis | / |
| Whether to use the ROI coordinate system | If selected, calculate the angle between the Normal and the axis of the ROI coordinate system. If cleared, calculate the angle between the Normal and the axis of the Camera coordinate system | Cleared | / | / |
- Parameter Tuning
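The removal rule above can be sketched as follows (illustrative Python; assumes the angle is measured in degrees against a unit-length standard axis):

```python
import numpy as np

def remove_by_normal_angle(points, normals, axis=(0, 0, 1), threshold_deg=15.0):
    """Keep only points whose Normal is within threshold_deg degrees of the
    standard Normal axis (defaults: Z-axis, 15 degrees, as in the table).
    Hypothetical helper; normals need not be unit length."""
    pts = np.asarray(points, float)
    nrm = np.asarray(normals, float)
    ax = np.asarray(axis, float) / np.linalg.norm(axis)
    cos = np.clip((nrm @ ax) / np.linalg.norm(nrm, axis=1), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos))
    return pts[angle_deg <= threshold_deg]

pts = np.array([[0, 0, 0], [1, 0, 0]], float)
nrm = np.array([[0, 0, 1], [1, 0, 0]], float)  # 0 deg and 90 deg from Z
print(len(remove_by_normal_angle(pts, nrm)))  # 1
```

For a cylinder lying on its side, Normals on the top of the curved surface point near the Z-axis, so this filter keeps the upward-facing strip of the surface and discards the steep sides.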
2.1.9 Point Cloud Plane Segmentation

- Function
Retain or remove the plane with the largest number of points in the instance point cloud.
- Use Scenario
There is a noisy plane in the instance point cloud.
- Parameter Description
| Parameter | Description | Default Value | Value Range | Unit | Parameter Tuning Recommendation |
|---|---|---|---|---|---|
| Reference distance for plane fitting (mm) | If the distance from a point to the plane is lower than the reference distance, it is considered a point on the plane; otherwise, it is considered a point outside the plane | 3 | [0.001,10000] | mm | Generally unchanged |
| Remove plane | If selected, remove the plane with the largest number of point cloud points. If cleared, retain the plane with the largest number of point cloud points | Cleared | / | / | If the plane with the largest number of point cloud points is the Target Object, retain the plane and leave it cleared; if the plane with the largest number of point cloud points is noise, remove the plane and select it |
- Example
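For illustration, the sketch below splits points into on-plane and off-plane sets for a known plane n·p + d = 0 using the plane-fitting reference distance. The real step also estimates the plane from the data (typically via RANSAC); here, for brevity, the plane is given:

```python
import numpy as np

def split_by_plane(points, normal, d, ref_dist_mm=3.0):
    """Split points into (on_plane, off_plane) for the plane n.p + d = 0.
    A point closer than ref_dist_mm to the plane counts as on-plane
    (default 3 mm, matching the table). Illustrative sketch only."""
    pts = np.asarray(points, float)
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    dist = np.abs(pts @ n + d)        # unsigned point-to-plane distance
    on = dist < ref_dist_mm
    return pts[on], pts[~on]

pts = np.array([[0, 0, 0.0], [1, 2, 0.5], [3, 1, 40.0]])
on_plane, off_plane = split_by_plane(pts, normal=[0, 0, 1], d=0.0)
print(len(on_plane), len(off_plane))  # 2 1
```

Whether the on-plane set is then retained or removed corresponds directly to the "Remove plane" checkbox above.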
2.1.10 Remove Outliers from the Point Cloud

- Function
Identify and remove outlier noise in the point cloud to improve point cloud quality.
- Use Scenario
The instance point cloud contains many outlier noise points.
- Parameter Description
| Parameter Name | Description | Default Value | Value Range |
|---|---|---|---|
| Reference neighbor point count | The number of neighboring points around each point in the point cloud, that is, the neighborhood size. For dense point clouds, even a small neighborhood is sufficient to reflect the Target Object features, so a smaller value can be used; for sparse point clouds, a larger neighborhood is needed to reflect the Target Object features, so a larger value should be used. | 30 | [1, 10000000] |
| Standard deviation multiplier | Used to identify outlier noise. If the deviation of a point's coordinates from the average coordinates of the instance point cloud exceeds the standard deviation multiplier, the point is considered an outlier. The smaller the value, the more points are considered outliers and removed, but this may lead to misjudgment and removal of important Target Object features; the larger the value, the fewer points are considered outliers and removed, but some outliers may be retained and affect Target Object recognition accuracy. | 0.005 | [0.0001, 2] |
- Parameter Tuning
Generally unchanged. If the point cloud becomes too sparse after Remove outliers from the point cloud, increase the standard deviation multiplier.
- Example
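A common formulation of statistical outlier removal is sketched below: each point's mean distance to its k nearest neighbours is compared against the global mean plus a standard-deviation multiple. Note that the product's "standard deviation multiplier" (default 0.005) may be defined on a different scale; the helper and its defaults here are illustrative only.

```python
import numpy as np

def remove_outliers(points, k=3, std_mult=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean by more than std_mult standard deviations.
    Illustrative defaults, not the product's parameterization."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance (0)
    limit = mean_knn.mean() + std_mult * mean_knn.std()
    return pts[mean_knn <= limit]

# Five clustered points plus one stray point far away.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [0.5, 0.5, 0], [100, 100, 0]], float)
print(len(remove_outliers(pts)))  # 5: the stray point is removed
```

The tuning advice above maps directly onto `std_mult`: increasing it raises the cutoff, so fewer points are treated as outliers and a sparse but valid point cloud is preserved.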
2.1.11 Filter Out Point Clouds That Exceed the Object Distance Limit

- Function
Filter out point clouds in a specified direction to remove noise and improve image recognition accuracy.
- Parameter Description
| Parameter | Description | Default Value | Parameter Range | Unit | Parameter Tuning Recommendation |
|---|---|---|---|---|---|
| Specified axis | The specified axis of the point cloud, used to filter out point clouds in the specified direction | Z-axis | X/Y/Z-axis | / | Specified axis generally does not need to be changed |
| Threshold (mm) | Along the specified axis, if the distance between the lower-layer point cloud and the Target Object point cloud is greater than this threshold, the lower-layer point cloud will be filtered out; if the distance between the lower-layer point cloud and the Target Object point cloud is less than this threshold, the lower-layer point cloud will be retained | 750 | [0, 1000] | mm | Adjust the threshold according to the actual scenario. The larger the threshold, the fewer point clouds are filtered out; the smaller the threshold, the more point clouds are filtered out |
| Select coordinate system | Filter out point clouds in the selected coordinate system | ROI coordinate system | Camera coordinate system; ROI coordinate system; Object's own coordinate system | / |
- Example
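The threshold rule along the specified axis can be sketched as follows (hypothetical helper; it takes the top of the Target Object point cloud as the reference and drops everything farther than the threshold below it):

```python
import numpy as np

def filter_by_distance_limit(points, axis=2, threshold_mm=750.0):
    """Along the given axis (0=X, 1=Y, 2=Z), drop points farther than
    threshold_mm below the top of the point cloud. Illustrative sketch."""
    pts = np.asarray(points, float)
    top = pts[:, axis].max()
    return pts[top - pts[:, axis] <= threshold_mm]

# Target Object near Z=1000 mm; a lower-layer patch at Z=100 mm is removed.
pts = np.array([[0, 0, 1000.0], [0, 0, 900.0], [0, 0, 100.0]])
print(len(filter_by_distance_limit(pts, threshold_mm=750.0)))  # 2
```

This matches the tuning note: a larger threshold keeps more of the lower point cloud, a smaller one filters more aggressively.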
2.1.12 Optimize the Mask Based on the Point Cloud

- Function
Based on the point cloud within ROI 3D, remove the point cloud in the Mask that is not within ROI 3D to improve Mask accuracy.
2.2 Cylinder Pose Estimation

2.2.1 Fitting Reference Distance (mm)

- Function
The model calculates an ideal cylinder based on the instance point cloud, and points whose distance to the ideal cylinder is less than the fitting reference distance are fitted onto the cylinder.
- Use Scenario
Cylinder-based ordered loading and unloading and cylinder-based unordered picking.
- Parameter Description
Default value: 2
Value range: [0.1, 1000]
Unit: mm
Parameter Tuning
- The log for a fitted cylinder is shown below.



2.2.2 Fitting Score Threshold

- Function
Calculate the ratio of the number of points fitted onto a cylinder to the number of points in the instance point cloud. Fitted cylinders whose ratio is lower than the fitting score threshold will be filtered out.
- Use Scenario
Cylinder-based ordered loading and unloading and cylinder-based unordered picking.
- Parameter Description
Default value: 0.5
Value range: [0,1]
- Parameter Tuning
If the log reports "No cylinder point cloud meeting the requirements was detected", it indicates that a cylinder cannot be fitted, and the fitting score threshold should be decreased.
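Sections 2.2.1 and 2.2.2 combine into a single score: the fraction of instance points lying within the fitting reference distance of the ideal cylinder. A hedged sketch with a synthetic ring of points (`cylinder_fit_score` is a hypothetical helper; the product's fitting internals are not shown here):

```python
import numpy as np

def cylinder_fit_score(points, center, axis, radius, ref_dist_mm=2.0):
    """Fraction of instance points within ref_dist_mm of an ideal cylinder
    (axis through `center` along unit vector `axis`, with the given radius).
    Combines the fitting reference distance (2.2.1) and score (2.2.2)."""
    pts = np.asarray(points, float) - center
    ax = np.asarray(axis, float) / np.linalg.norm(axis)
    # Radial distance of each point from the cylinder axis.
    radial = np.linalg.norm(pts - np.outer(pts @ ax, ax), axis=1)
    inliers = np.abs(radial - radius) < ref_dist_mm
    return inliers.mean()

# Eight points on a radius-20 cylinder around Z, plus one stray point.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.c_[20 * np.cos(theta), 20 * np.sin(theta), np.zeros(8)]
pts = np.vstack([ring, [[50, 0, 0]]])
score = cylinder_fit_score(pts, center=[0, 0, 0], axis=[0, 0, 1], radius=20)
print(score >= 0.5)  # True: 8 of 9 points fit, so the cylinder is kept
```

Lowering the fitting score threshold (default 0.5) accepts noisier fits; raising it demands that more of the instance point cloud lie on the cylinder surface.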

2.2.3 Enable Size Prior

- Function
When enabled, the size of the fitting result is constrained.
- Use Scenario
Cylinder-based unordered picking and cylinder-based ordered loading and unloading.
- Parameter Tuning
Selected by default.
2.2.4 Object Pose Correction

Fine Matching Search Radius (mm)

- Function
During fine matching, the template point cloud is matched with the instance point cloud, and each point in the template point cloud needs to search for the nearest point in the instance point cloud. The fine matching search radius represents both the search radius in the instance point cloud and the distance threshold between each point in the template point cloud and its nearest point in the instance point cloud. If the distance between a point and its nearest point is smaller than the fine matching search radius, the two points are considered matched; otherwise, they are considered unmatched.
- Use Scenario
Ordered loading and unloading of planar Target Objects, unordered picking of planar Target Objects, and positioning assembly scenarios for planar Target Objects.
- Parameter Description
Default value: 10
Value range: [1, 500]
Unit: mm
- Parameter Tuning
Usually unchanged.
Fine Matching Search Mode

- Function
The method by which the template point cloud searches for the nearest point in the instance point cloud during fine matching.
- Use Scenario
If the fine matching effect between the template point cloud and the instance point cloud is poor, this function should be adjusted.
- Parameter Description
| Parameter | Description |
|---|---|
| Point-to-point | Each point in the template point cloud searches for the nearest point in the instance point cloud (the point with the shortest straight-line distance within the search radius). Suitable for all Target Objects |
| Point-to-plane | Each point in the template point cloud searches for the nearest point in the instance point cloud along its Normal. Suitable for Target Objects with obvious geometric features |
| Combination of point-to-point and point-to-plane | First use point-to-point mode to optimize the pose of the Target Object in the instance point cloud, then use point-to-plane mode to optimize it further. Suitable for Target Objects with obvious geometric features |
Use Contour Mode

- Function
Extract contour point clouds from the template point cloud and the instance point cloud for coarse matching.
- Use Scenario
Ordered loading and unloading of planar Target Objects, unordered picking of planar Target Objects, and positioning assembly scenarios for planar Target Objects. If the result of coarse matching using keypoints is poor, this function should be selected to use contour point clouds for coarse matching again.
- Parameter Tuning
The result of coarse matching affects the fine matching result. If the fine matching result is poor, select Use Contour Mode
Contour Search Range (mm)

- Function
The search radius for extracting contour point clouds from the template point cloud and the instance point cloud.
- Use Scenario
Ordered loading and unloading of general Target Objects, unordered picking of general Target Objects, and positioning assembly scenarios for general Target Objects.
- Parameter Description
Default value: 5
Value range: [0.1, 500]
Unit: mm
- Parameter Tuning
If the value is small, the search radius for contour point clouds is small, which is suitable for extracting detailed Target Object contours, but the extracted contours may contain outlier noise;
If the value is large, the search radius for contour point clouds is large, which is suitable for extracting wider Target Object contours, but some detailed features may be ignored in the extracted contours.
Save Pose Estimation [Fine Matching] Data

- Function
If selected, save the fine matching data.
- Use Scenario
Ordered loading and unloading of planar Target Objects, unordered picking of planar Target Objects, positioning assembly of planar Target Objects, and positioning assembly of planar Target Objects (matching only).
- Example
The fine matching data is saved in the \ProjectFolder\data\PickLight\HistoricalDataTimestamp\Builder\pose\output folder under the project save path.

2.2.5 Cylinder Pose Normalization

- Function
Search for the optimal picking direction based on the projection of valid inliers along the axis direction, and perform direction normalization on the cylinder pose.
- Use Scenario
Cylinder-based ordered loading and unloading and cylinder-based unordered picking.
2.3 Empty ROI Determination

- Function
Determine whether any Target Object (point cloud) remains in ROI 3D. If the number of 3D points in ROI 3D is smaller than this value, it means that no Target Object point cloud remains, and no point cloud is returned in this case.
- Parameter Description
Default value: 1000
Value range: [0, 100000]
- Usage Process
- Set the minimum point count threshold for ROI 3D. If the number of points is smaller than this threshold, the Target Object point cloud in ROI 3D is insufficient, and it is determined that there is no Target Object in ROI 3D.
- In the robot configuration, add a vision status code to facilitate subsequent robot signal processing.
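The determination itself is a simple count check (sketch; `roi_is_empty` is a hypothetical name):

```python
def roi_is_empty(point_count, min_points=1000):
    """Empty-ROI determination: if fewer than min_points 3D points remain in
    ROI 3D, report that no Target Object is left (default threshold 1000)."""
    return point_count < min_points

print(roi_is_empty(420))   # True: the ROI is treated as empty
print(roi_is_empty(5000))  # False: Target Objects remain
```

The boolean result would then drive the vision status code mentioned above, so the robot can branch between "continue picking" and "bin empty" behaviour.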



