SPARROW Series Camera User Manual
1. Product Introduction
The SPARROW Camera is a structured-light 3D Camera based on DLP projection. It uses a TI DLP2010 projector chip and a Sony IMX392 imaging chip as its core components, and its low-level controller has no built-in computing unit. Data is transmitted over a GigE interface, and multiple exposure modes are supported. It is suitable for 3D scanning and industrial 3D defect inspection and can be used with industrial robots in scenarios such as random bin picking and loading/unloading.
2. Safety Precautions
Please read the safety precautions carefully and operate strictly in accordance with the following specifications. Otherwise, the Camera may be seriously damaged. Dexforce is not responsible for any resulting maintenance issues.
Do not immerse the device in water
Do not expose to fire
Do not disassemble the device
Do not connect the Power Supply without authorization
Do not extend the network cable without authorization
Do not use the Camera in humid, condensing, or dusty environments
Do not use the Camera in environments with strong magnetic fields or high-voltage discharge equipment (such as electric welders)
Avoid external impacts and drops. If this occurs, contact staff for inspection and repair
Keep the Camera away from equipment such as laser marking machines and engraving machines that may damage it. If such equipment must be used, contact company personnel for confirmation
Do not use the Camera beyond the specified operating distance range. Operating distances: XEMA-P: 100-150mm; XEMA-DCW: 0.3-0.5m; XEMA-SCW: 0.5-1m; XEMA-LCW: 1-2.5m; SPARROW: 0.3-0.5m; FINCH: 1.5-3.5m. For special customization, communicate your requirements with the Camera team in advance
Use the Camera strictly within the allowable high- and low-temperature ranges specified in the product specifications.
3. Specifications
| Category | Parameter | SPARROW-D-1001 | SPARROW-SC-1100 |
|---|---|---|---|
| Hardware Parameters | Field of View (H/V) | 33°/23° | 34°/22° |
| | Dimensions (mm) | 130×90×37 | 226×90×38 |
| | Weight (kg) | 0.5 | 0.7 |
| | Baseline Length (mm) | 75 | 170 |
| | Resolution | 1440×1080 | 1440×1080 |
| | Interface Type | Gigabit Ethernet | Gigabit Ethernet |
| | Computing Unit | / | / |
| | Rated Voltage | DC 12V ≥6.3A | DC 12V ≥6.3A |
| Technical Parameters | Near-end Field of View (mm) | 181×122 | 207×193 |
| | Far-end Field of View (mm) | 300×209 | 615×393 |
| | Recommended Working Distance (mm) | 300~500 | 500~1000 |
| | Z-axis Accuracy | 0.05mm@0.4m | 0.05mm@0.6m |
| | Z-axis Repeatability σ (μm) | 18@0.3m-0.5m | / |
| | Pixel Interval | 0.13mm@0.3m, 0.21mm@0.5m | 0.2mm@0.5m, 0.4mm@1m |
| | Typical Capture Time | 0.4-1.0s | 0.4-1.0s |
| | Output Data | Point Cloud, Depth Map, Grayscale Image | Point Cloud, Depth Map, Grayscale Image (Color Image) |
| | Supported Operating Systems | Windows (CUDA >= 11.0) | Windows (CUDA >= 11.0) |
| | SDK Interface | C/C++/C#/Python | C/C++/C#/Python |
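The near-end field of view and pixel interval in the table can be roughly cross-checked with pinhole geometry. The sketch below is a back-of-the-envelope verification, not part of the SDK; small deviations from the table (about 178mm vs. the listed 181mm) are expected because the table reflects the real optics.

```python
import math

def field_width_mm(distance_mm: float, fov_deg: float) -> float:
    """Extent of the field of view at a given distance for a given angular FOV."""
    return 2.0 * distance_mm * math.tan(math.radians(fov_deg) / 2.0)

def pixel_interval_mm(distance_mm: float, fov_deg: float, pixels: int) -> float:
    """Approximate spacing between adjacent pixels on the scene surface."""
    return field_width_mm(distance_mm, fov_deg) / pixels

# SPARROW-D-1001: 33° horizontal FOV, 1440 px horizontal resolution, 300 mm near end
print(round(field_width_mm(300, 33)))              # 178 (table lists 181 mm)
print(round(pixel_interval_mm(300, 33, 1440), 2))  # 0.12 (table lists 0.13 mm @ 0.3 m)
```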
4. Installation and Connection
4.1 Unboxing Inspection

4.2 Hardware Installation
The SPARROW Series Camera supports only one installation method: connect the Camera network cable directly to the host network port. Routed connections are not allowed.

4.3 Connection
Before connecting the Camera, download and install DexSense. Please follow the DexSense Installation Guide to complete the installation.
The SPARROW server runs on the host and must be configured before DexSense can connect to the Camera. The procedure is as follows:
1) For first-time use, download the server package SPARROW_V2.0_C(G)PU.zip and the Daheng package GalaxySDK.zip from https://gitee.com/open3dv/sparrow/releases (choose the CPU or GPU version depending on your computer's configuration)

2) Extract the GalaxySDK archive from the release package and double-click install.bat. This silently installs the Daheng-related libraries; wait about 2 minutes for the installation to complete.

3) After installation, the GxGigE Daheng IP Configuration Tool appears on the desktop. Double-click to open it. If an IP address is marked with an exclamation mark, it is not in the same network segment as the host; click the Auto Configure IP button. If it is displayed as Available, no configuration is required


When Available appears without an exclamation mark, you can start the SPARROW server
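The "same network segment" check that the tool performs can be reproduced with the standard ipaddress module. The addresses and the /24 prefix below are placeholders for illustration, not values shipped with the camera.

```python
import ipaddress

def same_segment(host_ip: str, prefix_len: int, camera_ip: str) -> bool:
    """True if camera_ip falls inside the host interface's subnet."""
    network = ipaddress.ip_network(f"{host_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(camera_ip) in network

# Placeholder addresses:
print(same_segment("192.168.1.10", 24, "192.168.1.20"))  # True: same /24 segment
print(same_segment("192.168.1.10", 24, "192.168.2.20"))  # False: needs reconfiguration
```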

4) Start the SPARROW Camera server
Navigate to the SPARROW_v2.0-camera_server_2.0--camera_server folder and double-click the server program camera_server.exe.
If the projector emits light and the server status shows listening, the server is running. The client can then connect to the Camera normally.
Note: Do not close the camera_server program during use. If it must be closed, press Ctrl+C; otherwise the client will not exit properly, and the Camera must be restarted before it can be used again.


Open DexSense and search for the Camera connection directly in the DexSense software interface.
Use a network cable to connect the Camera to the Vision system industrial PC (Industrial Personal Computer), turn on the Camera Power Supply, then click the Camera panel on the main interface to open the Camera interface.

- On the Camera interface, select the Camera brand to connect. Options include XEMA, FINCH, SPARROW, KINGFISHER, and KINGFISHER-R. Then click Search to find Cameras of the current brand or previously connected Cameras, and connect to the Camera with the corresponding IP address.



You can also enter the Camera IP address directly and then click the Connect Camera button.

- After the Camera is connected successfully, select a Camera for the current task on the Task Information interface. After selection, the Camera connection status in the status bar changes to "Connected", as shown below.
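Before clicking Connect Camera, you can confirm that something is listening at the Camera's address. The sketch below uses only the standard socket module, with a throwaway local listener standing in for the SPARROW server; the real server's port is not documented here, so pass whatever port your installation uses.

```python
import socket

def server_reachable(ip: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True if the port accepts within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: a local listener stands in for the camera server.
demo = socket.socket()
demo.bind(("127.0.0.1", 0))          # the OS picks a free port
demo.listen(1)
port = demo.getsockname()[1]
print(server_reachable("127.0.0.1", port))   # True: a listener is present
demo.close()
print(server_reachable("127.0.0.1", port))   # False: nothing listening anymore
```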

5. Camera Configuration
- Each Camera has multiple default configurations, none of which can be modified. Select the corresponding default configuration, click Trigger Capture, and use the imaging Parameters to capture a 2D image, Point Cloud, and Depth Map. You can view the imaging quality in the visualization window on the left.





- The Camera's default configurations cannot be changed. If none of the default configurations produces a normally exposed 2D image, click + to copy the current default Camera configuration, add an identical Camera configuration, and then enter the Camera configuration interface to modify the Parameters.

After copying the current default Camera configuration, switch to the newly added Camera configuration and click the settings button to enter the Camera configuration interface and modify the Parameters.


Click — to delete the newly added Camera configuration.

- Click Save Image to save the 2D image, Point Cloud, and Depth Map captured with the current configuration to the local machine, as shown below. The save Path is project folder/config/camera/image/capture time. Files with the bmp suffix are 2D images, files with the ply suffix are Point Cloud files, and files with the tiff suffix are Depth Maps.
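The suffix convention above can be used to sort a capture folder programmatically. This plain-Python sketch assumes only the bmp/ply/tiff convention stated here; the temporary directory stands in for project folder/config/camera/image/capture time.

```python
import tempfile
from pathlib import Path

# Suffix convention from this section of the manual
SUFFIX_MEANING = {".bmp": "2D image", ".ply": "point cloud", ".tiff": "depth map"}

def index_capture(folder: str) -> dict:
    """Group the files in a capture folder by data type."""
    out = {meaning: [] for meaning in SUFFIX_MEANING.values()}
    for f in Path(folder).iterdir():
        meaning = SUFFIX_MEANING.get(f.suffix.lower())
        if meaning:
            out[meaning].append(f.name)
    return out

# Demo folder standing in for a real capture directory
with tempfile.TemporaryDirectory() as d:
    for name in ("scan.bmp", "scan.ply", "scan.tiff"):
        (Path(d) / name).touch()
    print(index_capture(d))
# {'2D image': ['scan.bmp'], 'point cloud': ['scan.ply'], 'depth map': ['scan.tiff']}
```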


- Click View Intrinsics to view the Camera Intrinsic Parameters, including lens focal length, principal point coordinates, distortion coefficients, etc.
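View Intrinsics reports the usual pinhole parameters. As a reminder of how they are used, here is the standard distortion-free projection of a camera-frame 3D point to pixel coordinates; the focal length and principal point values below are made-up placeholders, so read the real ones from the dialog.

```python
def project_point(X: float, Y: float, Z: float,
                  fx: float, fy: float, cx: float, cy: float):
    """Pinhole projection of a camera-frame point to pixel coordinates."""
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Placeholder intrinsics; not values from any real SPARROW unit
u, v = project_point(0.05, 0.02, 0.4, 2300.0, 2300.0, 720.0, 540.0)
print(u, v)  # 1007.5 655.0
```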


- Click Disconnect to disconnect the Camera and select another Camera to reconnect.

5.1 Import Camera Configuration
Enter the Camera configuration interface and click Import Camera Configuration to import an existing Camera configuration.

5.2 Functional Operations
The Camera configuration interface provides the following operations:
- Show Overexposed Areas
After enabling Show Overexposed Areas, the visualization window displays the overexposed areas of the current image, as shown below.

- Trigger Capture
Click Trigger Capture to capture a 2D image, Point Cloud, and Depth Map using the current Camera configuration. You can view the imaging quality of the current Camera configuration in the visualization window.




- Continuous Acquisition
Click Continuous Acquisition, and the Camera continuously captures images. Click Stop Acquisition to stop.

- Save Image
Click Save Image to save the captured 2D image, Point Cloud, and Depth Map.


- Camera Accuracy
Click Camera Accuracy to view and verify the Camera accuracy

5.3 Camera Accuracy
5.3.1 View Camera Accuracy
- Place the Calibration Board within the Camera field of view. In the Camera configuration interface, click Camera Accuracy to open the Camera accuracy interface, as shown below

- Select the corresponding Calibration Board. If the selected Calibration Board does not match the actual board, a warning dialog saying "Please check the Calibration Board type" will appear, as shown below. In the example shown, the warning appears because the selected Calibration Board type is A3


- After selecting the correct Calibration Board type, click View Camera Accuracy. PickWiz automatically calculates the current Camera accuracy. If the accuracy meets the requirement, only camera imaging quality adjustment is required; if the accuracy error is too large, a prompt saying "The error is too large. Camera accuracy calibration is recommended" will appear.

For the SPARROW Series Camera, a Camera accuracy below 0.2mm indicates that the accuracy meets the requirement.
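This acceptance rule is a simple threshold and can be encoded directly; the 0.2mm limit comes from this manual and applies to the SPARROW Series only.

```python
SPARROW_ACCURACY_LIMIT_MM = 0.2  # requirement stated in this manual

def accuracy_ok(measured_error_mm: float) -> bool:
    """True if the measured Camera accuracy meets the SPARROW requirement."""
    return measured_error_mm < SPARROW_ACCURACY_LIMIT_MM

print(accuracy_ok(0.05))  # True
print(accuracy_ok(0.35))  # False: calibration recommended
```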
5.3.2 Calibrate Camera Accuracy
During actual Camera use, and after Extrinsic Parameter calibration, verify the current Camera accuracy to determine whether it meets the requirements.
Calibrate the Camera accuracy if viewing the Camera accuracy shows an abnormal error, or if the Point Cloud captured by the current Camera shows large fluctuations.
Before calibrating Camera accuracy, enable Show Overexposed Areas to check the Camera exposure level. If there are overexposed areas, adjust the single-exposure time to ensure normal imaging exposure. After calibration is complete, switch the Camera imaging Parameters back to the original configuration.

- In the Camera configuration interface, click Camera Accuracy.

- Select the corresponding Calibration Board. If the selected Calibration Board does not match the actual board, a warning dialog saying "Please check the Calibration Board type" will appear, as shown below. In the example shown, the warning appears because the selected Calibration Board type is A3


- After selecting the correct Calibration Board type, click Calibrate Camera Accuracy.

- Place the Calibration Board horizontally at the center of the Camera field of view, then click Add Sample. The Camera starts sampling and validating the sample. If the collected sample passes validation, it is added below Collected Samples, as shown below


Click Sample x to view the collected sample. In a validated sample, the centers of all concentric circles on the Calibration Board turn green

- Move the Calibration Board to the four corners of the Camera field of view and add samples




At each corner, prop up the Calibration Board at an angle. The placement angle of the Calibration Board should be 15-30°; the tilt should be neither too large nor too small
- After adding 5 samples, click Calculate Calibration Results.

- After calibration is complete, a prompt saying "Camera accuracy calibration completed. The calibrated accuracy is x, the error is normal. Do you want to overwrite the current camera parameter configuration?" will appear

If you choose to overwrite the current Camera Parameter configuration, the Camera accuracy calibration result is updated to the Camera, and the Camera must be restarted for it to take effect
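The sampling recipe above (one flat center sample plus four tilted corner samples, tilt 15-30°) can be captured as a small pre-check. This is a sketch of the manual's rule only, not part of PickWiz.

```python
def corner_tilt_ok(angle_deg: float) -> bool:
    """Corner samples should be tilted 15-30°: neither too flat nor too steep."""
    return 15.0 <= angle_deg <= 30.0

def ready_to_calculate(num_samples: int, corner_tilts) -> bool:
    """One flat center sample plus four valid tilted corner samples makes 5."""
    return num_samples >= 5 and all(corner_tilt_ok(a) for a in corner_tilts)

print(ready_to_calculate(5, [20, 25, 18, 28]))  # True
print(ready_to_calculate(5, [10, 25, 18, 28]))  # False: 10° is too flat
```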

6. Camera Parameter Tuning
6.1 Required Parameters
6.1.1 Engine Mode
Engine modes include Normal, Highly Reflective, and Black. Highly Reflective mode is suitable for highly reflective parts, Black mode is suitable for black workpieces, and Normal mode is suitable for ordinary workpieces.
| Engine Mode | Description |
|---|---|
| Normal | Suitable for general workpieces |
| Highly Reflective | Suitable for highly reflective workpieces |
| Black | Suitable for black workpieces |


6.1.2 Exposure Type
(1) Single Exposure
Single exposure can be used for workpieces with ordinary textures.
(2) High Dynamic Range
For highly reflective workpieces, the HDR function can be used for Point Cloud fusion.
The High Dynamic setting of Exposure Type should be used together with the High Dynamic setting of Engine Mode.
High Dynamic Range Imaging (HDRI or HDR for short) is a set of techniques used to achieve a larger exposure dynamic range (that is, greater differences between light and dark) than ordinary digital imaging technology.
High dynamic range makes image layers more distinct and the contrast between light and dark more obvious, especially when dealing with reflective workpieces.
Parameter tuning recommendation:
- When using high dynamic range, you can choose the number of high dynamic exposures according to the specific Scene and workpiece. The value range is 2~6, and the default is 2. If the 3D Point Cloud quality and 2D image quality are poor, increase the number of groups to expose the object multiple times and achieve the best imaging quality.
It is recommended to use fewer high dynamic exposures while meeting Point Cloud quality requirements.
(3) Repeated Exposure: the number of times the Camera repeats image capture, used to improve the signal-to-noise ratio. A higher signal-to-noise ratio is better, because random noise is suppressed and effective information is increased. The value range is 0~10.
Repeated exposure can be used for black objects to optimize the Point Cloud through multiple exposures.
6.1.3 Exposure Time
Camera exposure time is the time for which the shutter remains open while reflected light from the Scene passes through the lens and reaches the photosensitive imaging material. The longer the exposure time, the more light enters. If the exposure time is too long, overexposure occurs, which affects the Point Cloud, so adjust it according to actual conditions.
Range: 1700-100000
Exposure time and projection brightness form one group. Each high dynamic exposure count corresponds to one group. Set appropriate exposure time and projection brightness values for each group.
Exposure time: This is the duration for which light enters the Camera while the shutter is open. The longer the exposure time, the more light enters and the clearer the image.
If the exposure time is too long, overexposure occurs. You can enable the overexposed-area display; the red regions indicate overexposed areas.
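Saturation masking of the kind the overexposed-area display performs can be sketched with NumPy. DexSense's exact criterion is not documented, so this sketch simply treats fully saturated 8-bit pixels (value 255) as overexposed.

```python
import numpy as np

def overexposed_mask(gray: np.ndarray, threshold: int = 255) -> np.ndarray:
    """Boolean mask of saturated pixels (the red regions in the display)."""
    return gray >= threshold

img = np.array([[10, 200],
                [255, 255]], dtype=np.uint8)
print(overexposed_mask(img))
# [[False False]
#  [ True  True]]
```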


6.1.4 Projection Brightness
Projection brightness refers to the intensity of the projected light. The greater the light intensity, the brighter and clearer the image. Within a certain range, the human eye may perceive the image as clearer because of the higher brightness. Beyond that limit, excessive brightness makes the image difficult to see clearly.
Range: 0-1023
A larger value means greater projection brightness, which can effectively improve the signal-to-noise ratio. It is recommended to use the maximum value. Reduce this value only if the exposure time has been adjusted to the minimum and overexposure still occurs
Note: It is recommended to set the brightness to 1023
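Each high dynamic exposure group pairs an exposure time with a projection brightness. Below is a minimal sketch of assembling such groups, clamped to the ranges given in 6.1.3 and 6.1.4 (exposure 1700-100000, brightness 0-1023, 2-6 groups); the clamping behavior is this sketch's assumption, and the real software may instead reject out-of-range values.

```python
EXPOSURE_RANGE = (1700, 100000)  # exposure time range from section 6.1.3
BRIGHTNESS_RANGE = (0, 1023)     # projection brightness range from section 6.1.4

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def make_hdr_groups(pairs):
    """One (exposure time, projection brightness) pair per high dynamic exposure."""
    if not 2 <= len(pairs) <= 6:
        raise ValueError("high dynamic range uses 2-6 exposure groups")
    return [(clamp(e, *EXPOSURE_RANGE), clamp(b, *BRIGHTNESS_RANGE))
            for e, b in pairs]

print(make_hdr_groups([(1000, 2000), (50000, 1023)]))
# [(1700, 1023), (50000, 1023)]
```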


6.1.5 Gain
Adjusts the brightness of the image.
Range: 0-24
The gain value of the 2D Camera can be adjusted and increased as appropriate; note that as gain increases, noise also increases
6.1.6 Smoothing
Range: 0-5
Applies smoothing to the Point Cloud
6.1.7 Confidence
Confidence indicates the reliability level and is used for an initial screening of the Point Cloud. Generally, 2-5 is sufficient, and customers can adjust it according to site conditions
Lowering the Confidence preserves more black regions in the Depth Map; conversely, increasing the Confidence removes black noise in the Depth Map.
Range: 0-100
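Confidence screening of a Point Cloud can be sketched as a per-point threshold. The per-point confidence array and the threshold of 3 below are illustrative only; the Camera applies this screening internally.

```python
import numpy as np

def confidence_filter(points: np.ndarray, conf: np.ndarray, threshold: float = 3):
    """Keep only points whose confidence meets the threshold (2-5 is typical)."""
    return points[conf >= threshold]

pts = np.array([[0.0, 0.0, 400.0],
                [1.0, 0.0, 401.0],
                [2.0, 0.0, 900.0]])
conf = np.array([80, 4, 1])
print(confidence_filter(pts, conf).shape)  # (2, 3): the low-confidence point is dropped
```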


6.1.8 Noise Filtering
In machine vision application scenarios, when inspecting objects such as metal surfaces, aluminum foil surfaces, reflective films, and smooth surfaces, specular reflection may cause excessively strong local reflected light, resulting in the loss of the object's original information and interfering with machine vision inspection. Noise filtering can remove the generated noise while preserving the original information of the object.
Range: 0-100
When recognizing objects such as metal surfaces, aluminum foil surfaces, reflective films, and smooth surfaces, specular reflection may cause excessively strong local reflected light, causing the loss of the object's original information and interfering with PickWiz recognition and image inspection. Increasing the noise filtering value can remove the generated noise while preserving the original information of the object.


6.2 Optional Parameters
6.2.1 Radius Filtering
For each point in the Point Cloud, define a sphere with radius r and count the number of valid points. If the number of points inside the sphere is less than the valid-point threshold, it is considered a noise point and removed. The smaller the radius and the larger the valid-point threshold, the more obvious the filtering effect
Radius range: 0-99
Valid point range: 0-99
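Radius filtering as described above can be sketched with a brute-force neighbor count (a real implementation would use a spatial index such as a k-d tree). The point cloud below is synthetic.

```python
import numpy as np

def radius_filter(points: np.ndarray, radius: float, min_neighbors: int) -> np.ndarray:
    """Drop points that have fewer than min_neighbors other points within radius."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    neighbor_counts = (dist <= radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

cluster = np.zeros((5, 3))                    # five coincident points: dense
outlier = np.array([[100.0, 100.0, 100.0]])   # isolated noise point
cloud = np.vstack([cluster, outlier])
print(radius_filter(cloud, radius=1.0, min_neighbors=2).shape)  # (5, 3)
```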
6.2.2 Depth Filtering
Filters floating noise points in the Z-axis direction. The larger the threshold, the more obvious the filtering effect. For filtering methods based on the Depth Map, a threshold of 33 is recommended at a distance of 1000mm.
Range: 0-100
6.2.3 Reflection Filtering
Filters vertical-surface noise caused by mutual reflection between metals. The larger the threshold, the more obvious the filtering effect
Range: 0-100
6.2.4 Phase Correction
Phase correction, that is, Point Cloud grayscale compensation, is a method for correcting grayscale information in 3D Point Cloud data. The purpose of Point Cloud grayscale compensation is to eliminate grayscale-value differences and convert the grayscale information in the Point Cloud into values corresponding to the actual surface reflectance of the object. The larger the threshold, the more obvious the correction
Range: 0-100

Usage: First place the Calibration Board and use its plane as the reference plane, as shown above.

As described in the Maximum Height and Minimum Height section, set the maximum height to 1 and the minimum height to -1 so that only the Calibration Board is displayed, as shown above.

With phase correction disabled, the Calibration Board is shown as above. On the actual Calibration Board, the entire board is flat, and there is no vertical fluctuation between the circular and non-circular areas, but the captured result shows fluctuations in the circular areas.

After enabling phase correction, if the Calibration Board shows almost no color difference or the difference is not obvious, the correction is successful.
6.2.5 2D Image Overlay Exposure
This function can separately re-expose the 2D image after the Point Cloud is obtained, replacing the original 2D image. Sometimes the Point Cloud is good, but the 2D image is too dark or overexposed and does not meet requirements. In this case, enable 2D image overlay exposure to replace the original 2D image.
If you select illuminated, you can manually set the exposure time, gain, and capture method (single exposure or high dynamic).
If you select non-illuminated, it refers to ambient light brightness. You can also manually set the exposure time, gain, and capture method (single exposure or high dynamic).
Overlay Gain
When using the 2D image overlay exposure function, you can adjust this gain to make the image brighter
Overlay Exposure Time
When using the 2D image overlay exposure function, you can adjust this exposure time to make the image brighter