Generate Grasp Pose from Detection Results
Function: Converts detection results from deep-learning methods, combined with point cloud information, into grasp poses for the detected objects.
Input Parameters:
Name | Type | Valid Range | Default Value | Meaning |
---|---|---|---|---|
Detection results | DetectInstance | None | {} | Detection results containing the bounding box, category, score, and polygon. |
Image | Image | None | None | Input image; can be grayscale or color. |
Camera coordinate system point cloud | XYZPoints | None | None | Point cloud in the camera coordinate system. Because it is converted using the camera intrinsics, it must be an ordered point cloud aligned with the image. |
Camera intrinsics | Matrix | None | None | Camera intrinsic matrix. |
Camera distortion | List | None | None | Camera distortion coefficient vector. |
Hand-eye calibration matrix | Matrix | None | [[1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 0. 1.]] | Hand-eye calibration matrix; defaults to the identity matrix. |
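To illustrate how these inputs fit together, the following is a minimal sketch, not the node's actual implementation: it reads the 3D point at a detection's bounding-box center from the ordered point cloud and maps it from the camera frame to the robot frame with the hand-eye calibration matrix. All function and variable names here are hypothetical.

```python
import numpy as np

def grasp_point_from_detection(bbox, cloud, hand_eye):
    """Hypothetical sketch: look up the ordered-cloud point at the
    bounding-box center and transform it camera frame -> robot frame.

    bbox     : (x_min, y_min, x_max, y_max) in pixels
    cloud    : (H, W, 3) ordered point cloud in the camera frame,
               aligned with the image as the node requires
    hand_eye : (4, 4) hand-eye calibration matrix
    """
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)      # bbox center column
    v = int((y_min + y_max) / 2)      # bbox center row
    p_cam = cloud[v, u]               # 3D point in the camera frame
    p_hom = np.append(p_cam, 1.0)     # homogeneous coordinates
    p_robot = hand_eye @ p_hom        # apply hand-eye transform
    return p_robot[:3]

# With the default identity hand-eye matrix, the camera-frame
# point is returned unchanged.
cloud = np.zeros((4, 4, 3))
cloud[2, 2] = [0.1, 0.2, 0.5]
print(grasp_point_from_detection((1, 1, 3, 3), cloud, np.eye(4)))
```

This shows why the point cloud must be ordered and image-aligned: pixel coordinates from the detection index directly into the cloud.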
Output Parameters:
Name | Type | Valid Range | Default Value | Meaning |
---|---|---|---|---|
Grasp pose information | PosesList | None | [] | List of grasp poses for the detected objects. |
Parameter Settings:
Name | Type | Valid Range | Default Value | Meaning |
---|---|---|---|---|
Circle radius scale | Float | [0, 1] | 0.3 | Ratio of the circle radius to half the short side of the bounding box; the circle selects the object-surface point cloud used for depth calculation. |
Z-value calculation method for the grasp point | String | ['Mean', 'Median'] | Mean | The Z value of the grasp point is normally computed as the mean. In scenes where the point cloud has many outliers, the median can avoid inaccuracies caused by those outliers. |
Use detection result angle | Bool | None | False | If enabled, the angle from the detection result is used to compute the rotation of the output pose. If disabled, Rz is computed from the angle between the long side of the detection result's minimum bounding box, mapped into the robot coordinate system point cloud, and the coordinate axis. |
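The circle-radius selection, the mean/median Z choice, and the long-side Rz fallback can be sketched as follows. This is a hypothetical illustration under the assumptions stated in the comments, not the node's source code.

```python
import numpy as np

def grasp_z(depth_roi, center, bbox_short_side,
            radius_scale=0.3, method="Mean"):
    """Hypothetical sketch: select depth pixels inside a circle of
    radius radius_scale * (bbox_short_side / 2) around the grasp
    point, then reduce them with the mean or the median."""
    h, w = depth_roi.shape
    cy, cx = center
    radius = radius_scale * bbox_short_side / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    vals = depth_roi[mask]
    vals = vals[np.isfinite(vals) & (vals > 0)]   # drop invalid depths
    return np.median(vals) if method == "Median" else np.mean(vals)

def rz_from_long_side(p1, p2):
    """Hypothetical sketch of the disabled-angle case: Rz as the angle
    between the min-area-rect long side (p1 -> p2) and the X axis."""
    return np.arctan2(p2[1] - p1[1], p2[0] - p1[0])

# A single outlier spike inside the circle barely moves the median,
# but pulls the mean above the true surface depth of 0.5.
depth = np.full((20, 20), 0.50)
depth[10, 10] = 5.0                               # outlier spike
print(grasp_z(depth, (10, 10), 20, 0.5, "Median"))  # stays at 0.5
print(grasp_z(depth, (10, 10), 20, 0.5, "Mean"))    # pulled above 0.5
```

This is why the parameter table recommends the median for point clouds with many outliers: the circle mask keeps the sample local to the object surface, and the median is robust to the spikes that would bias the mean.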