Terminology

Before using the software, familiarize yourself with the following concepts.

  • Project solution

    A project solution ("solution" for short) is the complete configuration for implementing a 3D vision-guided project and can cover one or more grasping scenarios. A solution consists of one or more workspaces.

  • Workspace

    A workspace ("space" for short) is the configuration for a specific grasping scenario. It includes scene settings, such as robots and cameras, as well as grasping settings, such as vision algorithms and motion planning.

  • ROI

    An ROI (region of interest) is a specific area of the data to be processed. Setting an ROI reduces the amount of data processed, which improves processing efficiency and accuracy and saves computing resources.

  • Hand-eye calibration

    Hand-eye calibration calculates the transformation between the robot coordinate system and the camera coordinate system from pairs of calibration-object poses captured by the camera and the corresponding robot poses. With the resulting transformation matrix, a target point in the camera coordinate system can be converted to the robot coordinate system, allowing the robot to operate under 3D vision guidance. Calibration accuracy is crucial: it directly affects the accuracy and stability of the whole system.

  • Tools

    An end-effector tool is a device mounted on the robot's mechanical interface that enables the robot to complete its task, such as a gripper or a suction cup.

  • Scene objects

    Scene objects refer to various objects in the real robot working scene, generally including pallets, material bins, brackets, etc.

  • Motion Planning

    The Motion Planning module calculates an optimal path sequence that takes the robot from its starting position to the target grasping position. It supports setting multiple pre-grasp points and pre-retraction points, so the robot can select the most suitable approach and a safe withdrawal route, helping it deal with obstacles in complex working environments. The module's planning details panel shows the detection result of each path, together with precise trajectory display and collision-area visualization; this simplifies path configuration and debugging and lets engineers build and optimize robot application solutions more quickly.

  • Collision Grid Edge Length

    During collision detection, the point cloud itself does not participate directly. Instead, the three-dimensional space occupied by the point cloud is recursively subdivided into eight sub-cubes (an octree) until the cube edge length reaches the set collision grid edge length. If any of these cubes collides with another object, the point cloud is considered to have collided.

    The collision grid edge length therefore directly controls the trade-off between accuracy and efficiency: the smaller the edge length, the more precise the collision detection, but the longer the computation time.
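The ROI idea above can be sketched in code: restrict a point cloud to an axis-aligned box so that downstream steps only process points inside it. This is a minimal illustration with NumPy, not the software's actual ROI implementation; the function name and box representation are assumptions for the example.

```python
import numpy as np

def crop_to_roi(points, roi_min, roi_max):
    """Keep only the points that fall inside an axis-aligned ROI box.

    points: (N, 3) array of XYZ coordinates.
    roi_min, roi_max: lower and upper corners of the box.
    """
    points = np.asarray(points, dtype=float)
    lo = np.asarray(roi_min, dtype=float)
    hi = np.asarray(roi_max, dtype=float)
    # A point is kept only if all three coordinates lie within the box.
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Example: a tiny cloud where one point lies outside the ROI in x.
cloud = np.array([[0.1, 0.2, 0.5],
                  [0.9, 0.9, 0.9],
                  [2.0, 0.1, 0.3]])
roi_cloud = crop_to_roi(cloud, roi_min=[0, 0, 0], roi_max=[1, 1, 1])
print(len(roi_cloud))  # 2 points remain
```

Processing only the cropped cloud is what yields the efficiency and accuracy gains mentioned above: less data to search, and no clutter from irrelevant regions.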
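The use of the hand-eye calibration result can be shown with a homogeneous transformation: once calibration has produced a 4x4 matrix relating the camera frame to the robot base frame, any target point seen by the camera can be mapped into robot coordinates. The matrix values below are made up for illustration; real values come from the calibration procedure itself.

```python
import numpy as np

def camera_to_robot(point_cam, T_robot_cam):
    """Convert a 3D point from the camera frame to the robot base frame
    using the 4x4 homogeneous matrix from hand-eye calibration."""
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_robot_cam @ p)[:3]

# Illustrative matrix (assumed, not from a real calibration): the camera is
# rotated 180 degrees about X relative to the base and offset by a translation.
T = np.array([[1.0,  0.0,  0.0, 0.50],
              [0.0, -1.0,  0.0, 0.10],
              [0.0,  0.0, -1.0, 1.20],
              [0.0,  0.0,  0.0, 1.00]])

p_robot = camera_to_robot([0.0, 0.0, 0.8], T)
print(p_robot)  # [0.5  0.1  0.4]
```

Because every grasp pose passes through this matrix, any calibration error is propagated to every robot motion, which is why calibration accuracy directly affects system accuracy and stability.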
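The recursive octree subdivision described under "Collision Grid Edge Length" can be sketched as follows: split the space containing the point cloud into eight sub-cubes, recurse into occupied ones, and stop when the edge length reaches the configured grid size. Only the resulting occupied leaf cubes would be tested for collisions. This is a simplified sketch, not the software's actual collision-detection code; the function and parameter names are assumptions.

```python
import numpy as np

def occupied_leaf_cubes(points, bounds_min, bounds_max, edge_length):
    """Recursively split the space holding a point cloud into eight
    sub-cubes until the edge length reaches `edge_length`; return the
    (min-corner, size) of every leaf cube containing at least one point."""
    points = np.asarray(points, dtype=float)
    lo = np.asarray(bounds_min, dtype=float)
    size = float(np.max(np.asarray(bounds_max, dtype=float) - lo))
    leaves = []

    def subdivide(corner, size, pts):
        if len(pts) == 0:
            return  # empty cube: nothing here can collide
        if size <= edge_length:
            leaves.append((corner, size))  # occupied leaf at target resolution
            return
        half = size / 2.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    c = corner + half * np.array([dx, dy, dz], dtype=float)
                    inside = np.all((pts >= c) & (pts < c + half), axis=1)
                    subdivide(c, half, pts[inside])

    subdivide(lo, size, points)
    return leaves

# Two well-separated points in a 1 m cube, with a 0.25 m grid edge:
cubes = occupied_leaf_cubes([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]],
                            [0, 0, 0], [1, 1, 1], edge_length=0.25)
print(len(cubes))  # 2 occupied leaf cubes
```

Halving the edge length deepens the recursion by one level, so the number of candidate cubes can grow roughly eightfold per level; this is the accuracy-versus-time trade-off noted above.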