🟠 Add Model to Scene Point Cloud

Function Description

Given a model point cloud and a set of matching results (poses), this operator transforms a copy of the model according to each pose and "adds" the transformed model instances to the scene, i.e., it reconstructs the matched models on the original scene point cloud.
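
The core operation can be illustrated with a minimal sketch. This is not the operator's implementation; it assumes Open3D for point cloud handling, poses given as 4×4 homogeneous matrices (model frame to scene frame), and placeholder file paths:

```python
import copy
import numpy as np
import open3d as o3d

# Hypothetical inputs: a model point cloud, a scene point cloud,
# and matching results given as 4x4 pose matrices (model -> scene).
model = o3d.io.read_point_cloud("model.ply")   # placeholder path
scene = o3d.io.read_point_cloud("scene.ply")   # placeholder path
poses = [np.eye(4)]                            # replace with real matching results

# Transform a copy of the model by each pose and "add" it to the scene.
result = copy.deepcopy(scene)
for pose in poses:
    instance = copy.deepcopy(model)
    instance.transform(pose)   # apply the matched pose
    result += instance         # concatenate the instance points into the scene
```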

Usage Scenarios

  • Object detection verification: Place the models of objects identified by a detection algorithm back into the original scene, and verify the accuracy of the detection results by checking how well the models overlap the actual point cloud.

  • Matching result visualization: Place models in the scene according to the matched poses so that matching quality can be viewed and debugged intuitively.

  • Simulation scene construction: Combine the identified objects (as model point clouds) with the actual scene data to generate a more complete scene point cloud containing precisely positioned objects, providing data for robot path planning, grasping strategies, and so on.

Input Output

Input

Matching results: A list of poses.

Scene point cloud: Original scene point cloud data.

Output

Scene point cloud with model: A point cloud or a point cloud list (depending on the Merge Output Point Clouds parameter).

Parameter Description

This operator has two versions:

  • Add Model to Scene Point Cloud: Processes point clouds without normal information.

  • Add Model to Scene Point Cloud (with normals): Processes point clouds with normal information.

Both have identical core functionality and parameters, differing only in the type of point cloud data they process.
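
As a rough illustration of the difference, the "(with normals)" version expects each point to carry a normal vector. The check below is only a sketch (assuming Open3D; the normal estimation call is not part of this operator):

```python
import open3d as o3d

model = o3d.io.read_point_cloud("model.ply")   # placeholder path

# The "(with normals)" version operates on point clouds that carry normals.
if not model.has_normals():
    # Estimate normals if the file does not provide them (illustrative only).
    model.estimate_normals()
```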

Model Point Cloud File

Parameter Description

Upload a PLY format model point cloud file.

Enable Downsampling

Parameter Description

Choose whether to downsample the model point cloud when loading it, which reduces its number of points.

Parameter Adjustment

  • Enabled (default): Reduce the number of model points to speed up data processing.

  • Disabled: Disable downsampling if the model already has few points, or if you need to preserve all of its detail.

Voxel Downsampling Size

Parameter Description

Defines the edge length of downsampling voxels.

Parameter Adjustment

This value controls the degree of downsampling. The larger the value, the coarser the downsampling: fewer model points remain and processing is faster, but more model detail is lost. The smaller the value, the more detail is preserved, but the less the point count is reduced. Balance this value against the model size and the application's requirements.

Parameter Range

[0,10000], Default value: 2, Unit: mm
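
The effect of the voxel size can be reproduced with a minimal sketch (assuming Open3D; Open3D is unit-agnostic, so the voxel sizes below are simply expressed in the same unit as the model, assumed here to be millimetres):

```python
import open3d as o3d

model = o3d.io.read_point_cloud("model.ply")   # placeholder path

# Larger voxel size -> coarser downsampling, fewer points, less detail.
coarse = model.voxel_down_sample(voxel_size=5.0)  # 5 mm voxels (assumed mm units)
fine = model.voxel_down_sample(voxel_size=2.0)    # 2 mm voxels, like the default

print(len(model.points), len(coarse.points), len(fine.points))
```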

Result Retains Input Point Cloud

Parameter Description

Controls whether the final output includes the original scene point cloud.

Parameter Adjustment

  • Enabled (default): The output includes the original scene point cloud and all newly added model instances.

  • Disabled: The output includes only the model instances added according to the matching poses; the original scene is excluded.

Merge Output Point Clouds

Parameter Description

Controls whether output point clouds are merged.

Parameter Adjustment

  • Enabled (default): Merge all point clouds (original scene and all model instances) into a single point cloud.

  • Disabled: Output a point cloud list in which each item is an independent point cloud (one for the original scene, and one for each model instance).
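
The two output options above can be mimicked as follows. This is a sketch, assuming Open3D and the same hypothetical model, scene, and pose inputs as earlier; the flag names mirror the parameters but are placeholders:

```python
import copy
import numpy as np
import open3d as o3d

model = o3d.io.read_point_cloud("model.ply")   # placeholder path
scene = o3d.io.read_point_cloud("scene.ply")   # placeholder path
poses = [np.eye(4)]                            # replace with real matching results

retain_input = True   # mirrors "Result Retains Input Point Cloud"
merge_output = True   # mirrors "Merge Output Point Clouds"

# Build the list of output clouds: optionally the scene, then one instance per pose.
clouds = [copy.deepcopy(scene)] if retain_input else []
for pose in poses:
    instance = copy.deepcopy(model)
    instance.transform(pose)
    clouds.append(instance)

if merge_output:
    # Single merged point cloud.
    merged = o3d.geometry.PointCloud()
    for cloud in clouds:
        merged += cloud
    output = merged
else:
    # Point cloud list: each item remains an independent point cloud.
    output = clouds
```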