Terminology and Concepts
| Scene Category | Term | Meaning |
|---|---|---|
| Project & Task | Project | A Project contains multiple Tasks, and you can freely switch between them. |
| | Task | A Task contains all configurations required for it to run, including the Camera, Robot, workpiece, vision parameters, ROI, etc. Each Task corresponds to exactly one Scene (Depalletizing, Ordered Loading and Unloading, Random Picking, or Positioning and Assembly), which cannot be changed afterward. |
| Workpiece Attributes | Mesh File | A file format used to store 3D model data; common formats include OBJ, STL, and PLY. |
| | Point Cloud | A collection of points with coordinate information in 3D space. Each point contains at least three coordinate values (X, Y, Z), and together the points accurately describe the geometric shape of an object's surface. |
| Eye-Hand Calibration | Eye-Hand Calibration | Determines the relative pose (position + orientation) between the Robot coordinate system (the "hand") and the Camera coordinate system (the "eye"), enabling the vision system to guide the Robot accurately through picking tasks. |
| | Intrinsic Parameters | Parameters that describe the Camera's internal optical properties, independent of the Camera's position in the world coordinate system. These parameters remain unchanged during Camera use. |
| | Extrinsic Parameters | The pose parameters (rotation matrix + translation vector) of the Camera in the world coordinate system, describing its spatial position and orientation. |
| | Euler Angles | A way to describe the orientation of an object in 3D space, using three angle parameters (roll, pitch, and yaw) to represent the object's rotation. |
| | TCP | The Tool Center Point, located at the tip of the Tool. When the Robot is moved to a point in space to pick a workpiece, it is in fact the Tool Center Point that is moved to that point. |
| Vision Model | Vision Model | A Deep Learning model that performs Inference on input images to produce results such as object Masks, bounding boxes, keypoints, and scores. |
| | Keypoint | A feature point in a 3D model with clear semantic or geometric significance, used to describe the local structure or global pose of the target. In keypoint-based pose estimation, the target's overall pose (position + orientation) is inferred from the detected positions of these points. |
| | Pose | The combination of an object's Position and Orientation in space. |
| | Mask | An image or graphic that covers all or part of the processed image, restricting which region of the image is processed. The specific image or object used for coverage is called a Mask. |
| | Bounding Box | A rectangular box used in computer vision and machine learning to locate a target object, annotating its position, size, and extent with coordinates. |
| Vision Workflow | Coarse Matching | The process of matching the keypoints of the template Point Cloud with the keypoints predicted by the model for the actual Point Cloud. |
| | Fine Matching | The process of aligning the template Point Cloud with the actual Point Cloud so that the workpiece poses in the two overlap as closely as possible, thereby refining the workpiece pose estimated from the actual Point Cloud. |
| | ROI | The Region of Interest: the area selected from the image that needs to be processed in machine vision and image processing. In PickWiz, ROI 3D and ROI 2D must be set separately. |
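To make the Pose and Euler Angles entries concrete, the following is a minimal sketch (not part of the PickWiz API; the function name and the ZYX angle convention are assumptions for illustration) of turning a pose given as a position plus Euler angles into a 4x4 homogeneous transform, the form in which a workpiece pose is typically handed to a robot:

```python
import numpy as np

def pose_to_matrix(position, rpy):
    """Build a 4x4 homogeneous transform from a pose.

    position: (x, y, z) translation.
    rpy: (roll, pitch, yaw) Euler angles in radians,
         applied as Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX convention).
    """
    roll, pitch, yaw = rpy
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Elementary rotations about the z, y, and x axes.
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # orientation part
    T[:3, 3] = position        # position part
    return T
```

Note that Euler angle conventions differ between robot vendors (rotation order and fixed vs. moving axes), so the ZYX order above is only one common choice and should be checked against the target robot's documentation.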