🟧 Generate Grasp Object Pose from Detection Results

Function Description

Generates a grasping pose for each detection target from deep-learning detection results and ordered point cloud data.

Usage Scenarios

Suitable for scenarios that use deep learning for image detection and segmentation.

Inputs and Outputs

Input

Detection Results: 2D detection results containing the bounding box, category, score, polygon mask, and other information for each detection target.

Image: The original camera image corresponding to the point cloud data.

Camera Coordinate System Point Cloud: An ordered point cloud, i.e., each point corresponds one-to-one with a pixel in the image. It must be expressed in the camera coordinate system.

Camera Intrinsics: The 3x3 camera intrinsic matrix, used to convert between pixel coordinates and camera coordinates.

Camera Distortion: The camera's distortion coefficient vector.

Hand-Eye Calibration Matrix: The 4x4 homogeneous transformation matrix from the camera coordinate system to the robot base coordinate system.
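The relationship between these inputs can be sketched as follows. This is a minimal illustration, not the operator's actual API: the matrix values, image size, and variable names are all assumptions.

```python
import numpy as np

# Ordered point cloud: shape (H, W, 3), indexed exactly like the image.
# NaN marks pixels with no valid depth. Values below are illustrative.
H, W = 480, 640
cloud_cam = np.full((H, W, 3), np.nan)
cloud_cam[240, 320] = [0.0, 0.0, 0.80]  # one valid point on the optical axis (meters)

# 3x3 camera intrinsic matrix (fx, fy, cx, cy are illustrative values).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 4x4 hand-eye matrix: camera frame -> robot base frame (illustrative).
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.3, 0.0, 1.0]

# Because the cloud is ordered, a pixel (u, v) looks up its 3D point directly.
u, v = 320, 240
p_cam = cloud_cam[v, u]

# Transform the camera-frame point into the robot base frame.
p_base = (T_base_cam @ np.append(p_cam, 1.0))[:3]

# Sanity check: projecting p_cam with K recovers the pixel coordinates.
uvw = K @ p_cam
u_reproj, v_reproj = uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Here the intrinsic matrix links pixel and camera coordinates, and the hand-eye matrix moves points from the camera frame into the robot base frame, which is where the output grasping poses live.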

Output

Grasp Object Pose Information: A list of grasping poses, one for each valid detection result.

Parameter Description

Center Point Cloud Region Radius

Parameter Description

Defines a small circular region at the center of the detection box from which point cloud data is collected to calculate the depth (Z value) of the pick point. The parameter is the ratio of the circular region's radius to half the length of the detection box's short side.

Parameter Adjustment

  • Increasing this value: The sampling region becomes larger and averages more point cloud data, which helps produce a more stable depth value. However, if the object is small or irregularly shaped, background points may be sampled, causing depth calculation errors.

  • Decreasing this value: The sampling region is concentrated at the object center and more accurately reflects the depth there, but if the center region happens to contain point cloud holes or noise, the result may be unstable.

Parameter Range

[0, 1]; default value: 0.3
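The mapping from this ratio to a pixel radius can be sketched as below. The function name and the axis-aligned box convention (width, height in pixels) are assumptions for illustration.

```python
def sampling_radius(bbox_w, bbox_h, ratio=0.3):
    """Radius of the central sampling circle, in pixels.

    ratio is the parameter in [0, 1]: the circle's radius expressed
    as a fraction of half the detection box's short side.
    """
    short_half = min(bbox_w, bbox_h) / 2.0
    return ratio * short_half

# A 100x60 px box with the default ratio 0.3 samples a circle of radius 9 px.
r = sampling_radius(100, 60)  # -> 9.0
```

At ratio 1.0 the circle just touches the short sides of the box; smaller ratios keep sampling well inside the object.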

Pick Point Z Value Calculation Method

Parameter Description

Sets how the final depth (Z value) is calculated from the points within the circular sampling region described above.

Parameter Adjustment

  • Mean: Calculates the average of all valid point depths in the sampling region. Suitable when point cloud quality is good.

  • Median: Calculates the median of the depth values. The median reduces interference from "flying points" and noise, making it suitable when point cloud quality is poor.

Use Detection Result Angle

Parameter Description

Controls how the rotation component of the output pose (especially rotation about the Z-axis) is determined.

Parameter Adjustment

  • On: Uses the angle provided in the 2D detection results directly as the rotation of the grasping pose.

  • Off (Default): The operator calculates the direction of the detection box's long side in the robot base coordinate system using the hand-eye calibration matrix, and uses it as the rotation of the grasping pose. This usually better ensures that the grasping pose aligns with the robot's motion axes.
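The "Off" behavior can be sketched as follows: take the two endpoints of the box's long side as 3D camera-frame points (e.g. looked up in the ordered cloud at the box corners), map them into the base frame, and read off the yaw. The function name, the planar-scene setup, and the matrix values are all assumptions for illustration.

```python
import numpy as np

def long_side_yaw(corner_a, corner_b, T_base_cam):
    """Yaw (rotation about the base Z-axis) of the box's long side.

    corner_a / corner_b: endpoints of the long side as 3D points in the
    camera frame. T_base_cam: 4x4 hand-eye matrix, camera -> base.
    """
    pa = (T_base_cam @ np.append(corner_a, 1.0))[:3]
    pb = (T_base_cam @ np.append(corner_b, 1.0))[:3]
    d = pb - pa  # long-side direction expressed in the base frame
    return float(np.arctan2(d[1], d[0]))

# Camera looking straight down with axes aligned to the base (identity
# rotation): a long side running diagonally yields a 45-degree yaw.
T = np.eye(4)
yaw = long_side_yaw(np.array([0.0, 0.0, 0.8]),
                    np.array([0.1, 0.1, 0.8]), T)
```

Because the direction is computed in the base frame rather than in image coordinates, the resulting rotation stays consistent with the robot's motion axes regardless of how the camera is mounted.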