🟧 Generate Grasp Object Pose from Detection Results
Function Description
Uses the detection results output by a deep learning model, together with ordered point cloud data, to generate a grasp pose for each detected target.
Usage Scenarios
Suitable for scenarios that require deep learning for image detection and segmentation.
Input / Output
Input |
Detection Results: 2D detection results containing the bounding box, category, score, polygon mask, and other information for each detected target. |
Image: the original camera image corresponding to the point cloud data. |
Camera Coordinate System Point Cloud: an ordered point cloud, i.e., each point in the cloud corresponds one-to-one with a pixel in the image. Must be a point cloud in the camera coordinate system. |
Camera Intrinsics: the 3x3 camera intrinsic matrix, used to convert between pixel coordinates and camera coordinates. |
Camera Distortion: the camera distortion coefficient vector. |
Hand-Eye Calibration Matrix: the 4x4 homogeneous transformation matrix from the camera coordinate system to the robot base coordinate system. |
Output |
Grasp Object Pose Information: the list of grasp poses generated for each valid detection result. |
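The inputs above combine in the usual pinhole-camera way: the ordered point cloud supplies a depth for each pixel, the intrinsic matrix relates pixel and camera coordinates, and the hand-eye matrix moves the result into the robot base frame. A minimal sketch of that chain (the function name and values are illustrative, not this node's API):

```python
import numpy as np

def pixel_to_base(u, v, z, K, T_base_cam):
    """Back-project pixel (u, v) with depth z (taken from the ordered
    point cloud) into the camera frame, then transform the point into
    the robot base frame with the 4x4 hand-eye calibration matrix."""
    fx, fy = K[0, 0], K[1, 1]   # focal lengths (pixels)
    cx, cy = K[0, 2], K[1, 2]   # principal point
    # Pinhole model: camera-frame point from pixel coordinates and depth.
    p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
    return (T_base_cam @ p_cam)[:3]

# Example: the principal-point pixel at depth 1 m maps to (0, 0, 1) in
# the camera frame; with an identity hand-eye matrix it stays there.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
print(pixel_to_base(320, 240, 1.0, K, np.eye(4)))  # → [0. 0. 1.]
```

In practice the distortion coefficients would be applied before back-projection (e.g. by undistorting the image or the pixel coordinates); that step is omitted here for brevity.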
Parameter Description
Center Point Cloud Region Radius
Parameter Description |
Defines a small circular region at the center of the detection box; the point cloud collected inside this region is used to calculate the depth (Z value) of the pick point. The value is the ratio of the region's radius to half the length of the detection box's short side. |
Parameter Adjustment |
|
Parameter Range |
[0, 1]; default value: 0.3 |
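The radius definition above can be made concrete with a small sketch: the sampling radius is `ratio * min(box_width, box_height) / 2`, and the region is a circle around the box center. The function below (an assumption for illustration, not this node's API) builds a boolean mask selecting the pixels inside that region:

```python
import numpy as np

def center_region_mask(image_shape, box, ratio=0.3):
    """Boolean mask selecting the circular sampling region at the
    center of a detection box.

    image_shape: (height, width) of the image / ordered point cloud.
    box: (x_min, y_min, x_max, y_max) detection bounding box.
    ratio: radius as a fraction of half the box's short side
           (the "Center Point Cloud Region Radius" parameter).
    """
    h, w = image_shape
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    radius = ratio * min(x1 - x0, y1 - y0) / 2.0
    ys, xs = np.ogrid[:h, :w]
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
```

Because the point cloud is ordered, this pixel mask selects exactly the 3D points whose Z values feed the depth calculation of the next parameter.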
Pick Point Z Value Calculation Method
Parameter Description |
Sets how the final depth (Z value) is calculated from the points within the circular sampling region described above. |
Parameter Adjustment |
|
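The document does not list the available calculation methods, but typical choices for reducing a set of region depths to a single pick-point Z are the mean, the median, or the minimum (closest point to the camera). A hedged sketch, with hypothetical method names:

```python
import numpy as np

def pick_point_z(z_values, method="median"):
    """Reduce the Z values of the points inside the circular sampling
    region to a single pick-point depth. The method names here are
    assumptions; the node's actual options are not specified above."""
    z = np.asarray(z_values, dtype=float)
    z = z[np.isfinite(z)]          # drop invalid points (NaN/inf) from the ordered cloud
    if z.size == 0:
        return None                # no valid depth found in the region
    if method == "mean":
        return float(np.mean(z))
    if method == "median":
        return float(np.median(z))   # robust to outlier depths
    if method == "min":
        return float(np.min(z))      # closest point to the camera
    raise ValueError(f"unknown method: {method}")
```

The median is often a sensible default because stray depth noise inside the region then has little effect on the resulting pick point.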
Use Detection Result Angle
Parameter Description |
Controls how the rotation part of the output pose (in particular, the rotation about the Z axis) is determined. |
Parameter Adjustment |
|
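When this option is enabled, the in-plane angle from the 2D detection result presumably sets the grasp's rotation about the Z axis; when disabled, a fixed rotation would be used instead. The exact convention is not specified in this document, so the following is only a sketch:

```python
import numpy as np

def grasp_rotation(angle_rad, use_detection_angle=True):
    """Build the rotation part of a grasp pose as a 3x3 matrix.

    If use_detection_angle is True, rotate about the Z axis by the
    in-plane angle reported by the 2D detection result; otherwise
    fall back to a fixed (here, identity) rotation. This convention
    is an assumption for illustration."""
    if not use_detection_angle:
        return np.eye(3)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # Standard rotation about the Z axis.
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```

Combined with the pick-point position computed above, this rotation would complete the 6-DoF grasp pose reported in the output list.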