FINCH Series Camera User Manual
1. Product Introduction
The binocular structured-light 3D Camera embedded software is built around MEMS-galvanometer structured-light projection. It is based on the core hardware of an Ainstec MEMS chip, a Sony imaging chip, and an NVIDIA Jetson Nano computing module. Data transmission uses a GigE interface, and multiple exposure modes are supported. This software is suitable for 3D scanning and industrial 3D defect inspection, and can be used with industrial robots in scenarios such as industrial random bin picking and loading/unloading.
2. Safety Precautions
Please read the safety precautions carefully and operate in strict accordance with the following specifications. Otherwise, the camera may be seriously damaged. Dexforce is not responsible for any resulting maintenance issues.
Do not immerse in water
Do not expose to fire
Do not disassemble the device
Do not connect the power supply privately
Do not extend the network cable without authorization
Do not use the camera in humid environments, environments with condensation, or high-dust environments
Do not use the camera in environments with strong magnetic fields or high-voltage discharge equipment (such as electric welders)
Avoid external impact or drops. If this occurs, contact staff for inspection and repair
Keep away from equipment such as laser marking machines and engraving machines that may damage the camera. If such equipment must be used, contact company personnel for confirmation
Do not use beyond the specified range. Operating distance: XEMA-P: 100-150mm; XEMA-DCW: 0.3-0.5m; XEMA-SCW: 0.5-1m; XEMA-LCW: 1-2.5m; SPARROW: 0.3-0.5m; FINCH: 1.5-3.5m. For special customization, communicate requirements with the camera team in advance
Use the camera strictly within the allowable high- and low-temperature range specified in the product specification.
3. Specifications
| Category | Parameter | FINCH-W-2002 |
|---|---|---|
| Hardware Parameters | Field of View (H/V) | 57°/48° |
| | Dimensions (mm) | 465×130×69 |
| | Weight (kg) | 2.7 |
| | Baseline Length (mm) | 400 |
| | Resolution | Mono: 1624×1240; Color: 2272×1648 |
| | Interface Type | Gigabit Ethernet |
| | Computing Unit | NVIDIA Jetson Nano |
| | Rated Voltage | DC 24V, ≥2A |
| Technical Parameters | Near-end Field of View (mm) | 1844×1332 |
| | Far-end Field of View (mm) | 3796×3109 |
| | Recommended Working Distance (mm) | 1500~3500 |
| | Z-axis Accuracy | 0.65mm@3m |
| | Z-axis Repeatability σ (μm) | 72@1.5m-3.5m |
| | Pixel Interval | Mono: 1.13mm@1.5m, 2.63mm@3.5m; Color: 0.93mm@1.5m, 2.17mm@3.5m |
| | Typical Capture Time | 0.8-2.0s |
| | Output Data | Point Cloud, Depth Map, Grayscale Image (Color Image) |
| | Supported Operating Systems | Windows, Linux (Ubuntu 20.04) |
| | SDK Interface | C/C++/C#/Python |
4. Installation and Connection
4.1 Unboxing Inspection

4.2 Hardware Installation
There are two installation methods. One is direct communication through a network cable (use a network cable to connect the camera and industrial PC), and the other is interaction through a router or switch (connect one end of the network cable to the router and the other end to the camera).
- Connect through a router/switch

- Direct communication through a network cable

When there is no DHCP server on the network (that is, the camera is directly connected to the computer), the camera's DHCP mechanism will attempt to obtain an IP for 30 seconds. If it still fails, it switches to the AVAHI mechanism to negotiate an IP, and the negotiated IP will be in the 169.254.x.x range.
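The two address ranges are easy to tell apart programmatically. A minimal sketch using Python's standard `ipaddress` module (illustrative only, not part of the camera SDK):

```python
import ipaddress

def is_avahi_negotiated(ip: str) -> bool:
    """True if the address is in the 169.254.0.0/16 link-local block
    that the AVAHI fallback negotiates on a direct connection."""
    return ipaddress.ip_address(ip).is_link_local

print(is_avahi_negotiated("169.254.10.2"))   # direct connection, no DHCP
print(is_avahi_negotiated("192.168.1.20"))   # DHCP-assigned address
```

If the camera reports a 169.254.x.x address, make sure the industrial PC's network interface is also configured for link-local addressing (or DHCP) rather than a fixed address in a different subnet.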
- Connect the power supply to the camera and turn on the power
Before powering on the camera, first confirm that the power cable and network cable are securely connected. After the power is turned on, the "Power" indicator remains on. The camera starts up in about 30 seconds. At this time, the green indicator on the network port remains on and the orange indicator flashes, indicating gigabit network bandwidth. The camera's "Act" working indicator lights up during image capture and data transmission and is off at other times.
4.3 Connect the Camera
Before connecting the camera, download and install DexSense from http://cloud.open3dv.site:8087/nas/dexsense_release/
Open DexSense and search for the camera directly in the DexSense software interface.
Use a network cable to connect the camera and industrial PC, turn on the camera power, then click the Camera panel on the main interface to open the Camera interface.

- On the camera interface, select the camera brand to connect. Options include XEMA, FINCH, SPARROW, KINGFISHER, and KINGFISHER-R. Then click Search to find cameras of the current brand or cameras previously connected, and connect to the camera with the corresponding IP address.



You can also directly enter the camera's IP address and then click the Connect Camera button.

- After the camera is connected successfully, select a camera for the current task on the Task Information interface. After selecting the camera, the camera connection status in the status bar changes to "Connected", as shown below.

5. Camera Configuration
- Each camera has multiple default configurations, and none of the default configurations can be changed. Select the corresponding default configuration, click Trigger Capture, and use the imaging parameters to capture a 2D image, Point Cloud, and Depth Map. You can view the imaging quality in the visualization window on the left.





- The camera's default configurations cannot be changed. If the 2D images captured by all default configurations cannot reach normal exposure, click + to copy the current default camera configuration, add an identical camera configuration, and directly enter the camera configuration interface to modify the parameters.

After copying the current default camera configuration, you can switch to the newly added camera configuration and click the settings button to enter the camera configuration interface and modify the parameters.


Click — to delete the newly added camera configuration.

- Click Save Image to save the 2D image, Point Cloud, and Depth Map captured with the current configuration to the local machine, as shown below. The save path is project folder/config/camera/image/capture time. Files with the bmp suffix are 2D images, files with the ply suffix are Point Clouds, and files with the tiff suffix are Depth Maps.
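The suffix-to-data-type mapping can be checked in a few lines. A hedged sketch (the helper name and example path are illustrative, not part of DexSense):

```python
from pathlib import Path

# Documented mapping: bmp = 2D image, ply = Point Cloud, tiff = Depth Map.
SUFFIX_KIND = {".bmp": "2D image", ".ply": "Point Cloud", ".tiff": "Depth Map"}

def classify_capture(filename: str) -> str:
    """Classify a saved capture file by its suffix."""
    return SUFFIX_KIND.get(Path(filename).suffix.lower(), "unknown")

print(classify_capture("config/camera/image/2024-01-01_120000/scan.ply"))  # Point Cloud
```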


- Click View Intrinsics to view the camera Intrinsic Parameters, including lens focal length, principal point coordinates, distortion coefficients, etc.
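The focal length and principal point reported here define the standard pinhole camera model. A minimal sketch of how a 3D point in the camera frame maps to a pixel (distortion is ignored, and the numeric values below are made up for illustration; use the numbers shown by View Intrinsics):

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection: camera-frame point (X, Y, Z), Z > 0, to pixel (u, v)."""
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Illustrative intrinsics only, not real FINCH values.
u, v = project_point(0.1, 0.05, 1.5, fx=1400.0, fy=1400.0, cx=812.0, cy=620.0)
print(u, v)
```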


- Click Disconnect to disconnect the camera and select a camera to reconnect.

5.1 Import Camera Configuration
Enter the camera configuration interface and click Import Camera Configuration to import an existing camera configuration into the camera.

5.2 Functional Operations
The camera configuration interface provides the following operations:
- Show Overexposed Areas
After enabling Show Overexposed Areas, the visualization window will display the overexposed areas of the current image, as shown below.

- Trigger Capture
Click Trigger Capture to capture a 2D image, Point Cloud, and Depth Map using the current camera configuration. You can view the imaging quality of the current camera configuration in the visualization window.




- Continuous Acquisition
Click Continuous Acquisition, and the camera will keep taking pictures continuously. Click Stop Acquisition to stop.

- Save Image
Click Save Image to save the captured 2D image, Point Cloud, and Depth Map.


- Camera Accuracy
Click Camera Accuracy to view and verify the camera accuracy

5.3 Camera Accuracy
5.3.1 View Camera Accuracy
- Place the Calibration Board within the camera's field of view. In the camera configuration interface, click Camera Accuracy to open the camera accuracy interface, as shown below

- Select the corresponding Calibration Board. If the selected Calibration Board does not match the actual one, a warning dialog saying "Please check the Calibration Board type" will pop up, as shown below. The warning appears when the Calibration Board type is A3


- After selecting the correct Calibration Board type, click View Camera Accuracy. PickWiz will automatically calculate the current camera accuracy. If the camera accuracy meets the requirement, only adjust the camera imaging parameters; if the accuracy error is too large, a prompt saying "The error is too large. Camera accuracy calibration is recommended" will pop up.

A camera accuracy below 0.8mm for the FINCH Series Camera indicates that the accuracy meets the requirement.
5.3.2 Calibrate Camera Accuracy
During actual camera use, and after Extrinsic Parameter calibration, the current camera accuracy needs to be verified to determine whether it meets the requirements.
When viewing camera accuracy, if the camera accuracy error is abnormal, the camera accuracy should be calibrated;
If the Point Cloud captured by the current camera shows large fluctuations, the camera accuracy should be calibrated
Before calibrating camera accuracy, enable Show Overexposed Areas to check the exposure level of the camera imaging. If there are overexposed areas, adjust the exposure time of the single exposure to ensure normal camera imaging exposure. After calibration is complete, switch the camera imaging parameters back to the original configuration

- In the camera configuration interface, click Camera Accuracy

- Select the corresponding Calibration Board. If the selected Calibration Board does not match the actual one, a warning dialog saying "Please check the Calibration Board type" will pop up, as shown below. The warning appears when the Calibration Board type is A3


- After selecting the correct Calibration Board type, click Calibrate Camera Accuracy

- Place the Calibration Board horizontally in the center of the camera's field of view, then click Add Sample. The camera starts sampling and validating the sample. If the collected sample passes validation, it will be added below Collected Samples, as shown below


Click Sample x to view the collected sample. The centers of all concentric circles on the Calibration Board in a validated sample turn green

- Move the Calibration Board to the four corners within the camera's field of view and add samples




At each corner, prop up the Calibration Board at an angle. The placement angle of the Calibration Board should be 15-30°; the tilt angle should be neither too large nor too small
- After adding 5 samples, click Calculate Calibration Results

- After calibration is complete, a prompt saying "Camera accuracy calibration completed. The calibrated accuracy is x, the error is normal. Do you want to overwrite the current camera parameter configuration?" will pop up

If you choose to overwrite the current camera parameter configuration, the camera accuracy calibration result will be updated to the camera, and a camera restart is required for it to take effect

6. Camera Parameter Tuning
6.1 Required Parameters
6.1.1 Engine Mode
Engine modes include Normal, Highly Reflective, and Black. Among them, Highly Reflective mode is excellent for highly reflective parts, Black mode is excellent for black workpieces, and Normal mode can handle ordinary workpieces.
| Engine Mode | Description |
|---|---|
| Normal | Suitable for general workpieces |
| Highly Reflective | Suitable for highly reflective workpieces |
| Black | Suitable for black workpieces |


6.1.2 Exposure Type
(1)Single Exposure
Single exposure can be used for workpieces with ordinary textures
(2)High Dynamic Range:
For highly reflective workpieces, the HDR function can be used for Point Cloud fusion
The High Dynamic setting of the exposure type should be used together with the High Dynamic setting of the engine mode
High Dynamic Range Imaging (HDRI or HDR for short) is a set of techniques used to achieve a larger exposure dynamic range (that is, greater differences between light and dark) than ordinary digital imaging technology.
High dynamic range makes image layers more distinct and the contrast between light and dark more obvious (especially when dealing with reflective workpieces).
Parameter tuning recommendation:
- When using high dynamic range, you can choose the number of high dynamic exposures according to the specific Scene and workpiece, with a value range of 2~6 and a default of 2. If the 3D Point Cloud quality and 2D image quality are poor, increase the number of groups to expose the object multiple times to achieve the best imaging quality.
It is recommended to use fewer high dynamic exposures while meeting Point Cloud quality requirements.
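As a rough intuition for why extra exposure groups help, here is a toy fusion sketch: each exposure scales scene brightness differently and clips at the sensor maximum, and fusing keeps each pixel from whichever exposures are neither clipped nor too dark. This is illustrative only, not the camera's internal HDR algorithm.

```python
def fuse_hdr(exposures, low=10, high=245):
    """exposures: list of (gain, pixels) pairs; returns fused brightness estimates.
    Per pixel, only well-exposed samples (neither clipped nor too dark) are used."""
    n = len(exposures[0][1])
    fused = []
    for i in range(n):
        estimates = [px[i] / gain for gain, px in exposures
                     if low <= px[i] <= high]
        fused.append(sum(estimates) / len(estimates) if estimates else None)
    return fused

bright = (4.0, [255, 200, 40])   # long exposure: first pixel clipped
dark   = (1.0, [120, 50, 10])    # short exposure: last pixel barely exposed
print(fuse_hdr([bright, dark]))  # each pixel recovered from a usable exposure
```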
(3)Repeated Exposure: The number of times the camera repeats shooting. Its function is to improve the signal-to-noise ratio (the ratio of signal to noise). The higher the signal-to-noise ratio, the better, because random noise will be suppressed and effective information will be increased. The value range is 0~10.
Repeated exposure can be used for black objects to optimize the Point Cloud through multiple exposures
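The signal-to-noise benefit of repeated exposure can be simulated: averaging N noisy frames cuts the random-noise standard deviation by roughly √N. A toy simulation, not camera code:

```python
import random

random.seed(0)
signal = 100.0

def noisy_frame():
    """One simulated pixel reading: true signal plus Gaussian noise."""
    return signal + random.gauss(0, 5.0)

# 2000 single readings vs 2000 averages of 4 repeated readings each.
single = [noisy_frame() for _ in range(2000)]
averaged = [sum(noisy_frame() for _ in range(4)) / 4 for _ in range(2000)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(std(single), std(averaged))  # averaged spread is roughly half
```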
6.1.3 Exposure Time
Camera exposure time: the time the shutter remains open while reflected light from the Scene passes through the lens and reaches the imaging sensor. The longer the exposure time, the more light enters; if the exposure time is too long, overexposure will occur, which degrades the Point Cloud, so adjust it according to actual conditions.
Range: 1700-100000
Exposure time and projection brightness form a group; one high dynamic exposure count corresponds to one group. Set appropriate exposure time and projection brightness values for each group.
If overexposure occurs, you can enable Show Overexposed Areas; the red parts indicate overexposed areas.
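What the overexposure display flags can be sketched as a simple threshold test against the sensor's saturation value (assumed 8-bit here; illustrative only, not the DexSense implementation):

```python
def overexposed_mask(pixels, saturation=255):
    """Flag pixels at the saturation value, i.e. the red areas in the display."""
    return [p >= saturation for p in pixels]

row = [12, 130, 255, 254, 255]
print(overexposed_mask(row))  # [False, False, True, False, True]
```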


6.1.4 Projection Brightness
Projection brightness refers to the intensity of the projected light. The greater the light intensity, the brighter and clearer the image. Within a certain range, higher brightness makes the image appear clearer; beyond that limit, excessive brightness makes the image impossible to see clearly.
Range: 0-1023
The larger this value, the greater the projection brightness, which can effectively improve the signal-to-noise ratio. It is recommended to use the maximum value. Only consider reducing this value if the exposure time is adjusted to the minimum and overexposure still occurs
Note: It is recommended to set the brightness to 1023


6.1.5 Gain
Adjusts the brightness of the image.
Range: 0-24
The gain value of the 2D Camera can be increased appropriately to brighten the image, but as gain increases, noise also increases
6.1.6 Smoothing
Range: 0-5
Performs smoothing on the Point Cloud
6.1.7 Confidence
Confidence indicates the level of reliability and is used for an initial screening of the Point Cloud. Generally, 2-5 is sufficient, and customers can adjust it according to site conditions
Lowering the Confidence preserves more black regions in the Depth Map; conversely, increasing the Confidence removes black noise in the Depth Map.
Range: 0-100
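The screening behaviour described above can be sketched as a per-point threshold: depth values whose confidence falls below the threshold are dropped, which is why raising the Confidence removes black noise but also discards more data. Illustrative only; the camera applies this on-board.

```python
def filter_by_confidence(depth, confidence, threshold=3):
    """Keep depth values whose confidence meets the threshold; drop the rest (None)."""
    return [d if c >= threshold else None
            for d, c in zip(depth, confidence)]

depth      = [1500.0, 1510.0, 1490.0, 1505.0]
confidence = [80,     2,      45,     1]
print(filter_by_confidence(depth, confidence, threshold=3))
```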


6.1.8 Noise Filtering
In machine vision application scenarios, when inspecting objects such as metal surfaces, aluminum foil surfaces, reflective films, and smooth surfaces, specular reflection may cause excessively strong local reflected light, resulting in the loss of the object's original information and interfering with machine vision inspection. Noise filtering can remove the generated noise while preserving the original information of the object.
Range: 0-100
When recognizing such objects with PickWiz, increasing the noise filtering value can remove the generated noise while preserving the original information of the object.


6.2 Optional Parameters
6.2.1 Radius Filtering
For each point in the Point Cloud, determine a sphere with radius r and count the number of valid points. If the number of points inside is less than the valid-point threshold, it is considered a noise point and removed. The smaller the radius and the larger the valid-point threshold, the more obvious the filtering effect
Radius range: 0-99
Valid point range: 0-99
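The rule described above — count neighbours within radius r, drop points with too few — can be sketched directly (a naive O(n²) illustration; the camera performs this filtering on-board):

```python
import math

def radius_filter(points, r, min_points):
    """Keep a point only if at least min_points other points lie within radius r."""
    kept = []
    for p in points:
        neighbours = sum(1 for q in points
                         if p is not q and math.dist(p, q) <= r)
        if neighbours >= min_points:
            kept.append(p)
    return kept

cluster = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0)]   # dense, mutually close points
outlier = [(5, 5, 5)]                              # isolated noise point
print(radius_filter(cluster + outlier, r=0.5, min_points=2))  # outlier removed
```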
6.2.2 Depth Filtering
Filters floating noise points in the Z-axis direction. The larger the threshold, the more obvious the filtering effect. For filtering methods based on the Depth Map, a threshold of 33 is recommended at a distance of 1000mm.
Range: 0-100
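One common way to realize this kind of Z-axis filtering is a median-deviation test: points whose Z strays from the median by more than the threshold are treated as floating noise. This is a hedged sketch of the idea, not the camera's exact algorithm; the 33@1000mm recommendation above suggests the threshold should scale with working distance.

```python
def depth_filter(points, threshold):
    """Drop points whose Z deviates from the median Z by more than threshold."""
    zs = sorted(p[2] for p in points)
    median_z = zs[len(zs) // 2]
    return [p for p in points if abs(p[2] - median_z) <= threshold]

pts = [(0, 0, 1000), (1, 0, 1002), (0, 1, 998), (2, 2, 1500)]  # last is floating noise
print(depth_filter(pts, threshold=33))
```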
6.2.3 Reflection Filtering
Filters vertical-surface noise caused by mutual reflection of metals. The larger the threshold, the more obvious the filtering effect
Range: 0-100
6.2.4 Phase Correction
Phase correction, that is, Point Cloud grayscale compensation, is a method of correcting grayscale information in 3D Point Cloud data. Differences in surface reflectance can introduce spurious grayscale value differences in the Point Cloud; grayscale compensation eliminates these differences and converts the grayscale information in the Point Cloud into values corresponding to the actual surface reflectance of the object. The larger the threshold, the more obvious the correction
Range: 0-100

Usage: First place the Calibration Board and use the plane of the Calibration Board as the reference plane, as shown above.

As described in the Maximum Height and Minimum Height section, set the maximum height to 1 and the minimum height to -1 so that only the Calibration Board part is displayed, as shown above.

With phase correction disabled, the Calibration Board is shown as above. On the actual Calibration Board, the whole board is flat, and there is no vertical fluctuation between the circular and non-circular parts, but the actual captured result shows fluctuations in the circular areas.

After enabling phase correction, if the Calibration Board shows basically no color difference or the difference is not obvious, the correction is successful.
6.2.5 2D Image Overlay Exposure
This function can capture a separate exposure to replace the original 2D image after the Point Cloud is obtained. Sometimes the Point Cloud is good but the 2D image is too dark or overexposed and does not meet requirements. In this case, you can enable 2D overlay exposure to replace the original 2D image.
If you select illuminated, you can set the exposure time, gain, and capture method (single exposure or high dynamic) manually.
If you select non-illuminated, the capture uses only the ambient light brightness. You can also set the exposure time, gain, and capture method (single exposure or high dynamic) manually.
Overlay Gain
When using the 2D overlay exposure function, you can adjust this gain to make the image brighter
Overlay Exposure Time
When using the 2D overlay exposure function, you can adjust this exposure time to make the image brighter