🔷 YOLOv8/v10 Detection and Segmentation

Functional Description

This operator uses YOLOv8 or YOLOv10 deep learning models to perform object detection, instance segmentation, or rotated object detection on input color images. It supports multiple model formats: .pt, .onnx, and .epicnn.

Usage Scenarios

  • Object Detection: Quickly locate and identify multiple objects in images, outputting their bounding boxes and categories. Suitable for part recognition, defect location, object counting, etc.

  • Instance Segmentation: On the basis of object detection, further generate precise pixel-level segmentation masks (contours) for each identified object instance. Suitable for scenarios requiring precise shape information, such as grasping positioning, area measurement, etc.

  • Rotated Object Detection: Detect objects with arbitrary orientations and output rotated bounding boxes that can tightly enclose the object along with their angles. Suitable for detecting tilted or arbitrarily placed objects.
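The three task types above differ mainly in the shape information they return. A rough sketch of the result shapes is below; the field names are illustrative assumptions, not the operator's actual output schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    # Axis-aligned bounding box: (x, y, width, height) in pixels (assumed layout)
    box: Tuple[float, float, float, float]
    label: str
    score: float

@dataclass
class InstanceSegmentation(Detection):
    # Pixel-level contour of the instance, as (x, y) polygon points
    mask: List[Tuple[float, float]]

@dataclass
class RotatedDetection(Detection):
    # Rotation angle of the box in degrees
    angle: float = 0.0

# Example instances for each task type
det = Detection(box=(10, 20, 50, 40), label="part", score=0.91)
seg = InstanceSegmentation(box=(10, 20, 50, 40), label="part", score=0.90,
                           mask=[(10, 20), (60, 20), (60, 60), (10, 60)])
rot = RotatedDetection(box=(10, 20, 50, 40), label="part", score=0.88, angle=30.0)
```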

Input Output

Input

Image: Color image to be detected or segmented (must be in RGB format). Currently only a single image input is supported.

(Figure: DLYolo input)

Output

Detection Results: List containing detection/segmentation results.

(Figure: DLYolo output)

Parameter Description

  • Input Image: The input must be a color RGB image. Because Epic series cameras apply special processing to their data, all images they output can be used with YOLO algorithms for detection and segmentation.

  • Single Image Processing: The current operator implementation only supports processing one image at a time.

  • GPU Environment: If GPU is enabled, especially when using ONNX models, ensure that the CUDA environment and the onnxruntime-gpu library are correctly installed and compatible.

  • Epicnn Model: When using .epicnn models, the "Inference Type" parameter must be set correctly.

Weight File

Parameter Description

Specifies the YOLO model weight file used for inference. PyTorch (.pt), ONNX (.onnx), and epicnn (.epicnn) formats are supported. A valid model file must be selected.

Parameter Adjustment

Select an appropriate model file based on the task requirements.

  • .pt files are typically used for training and debugging;

  • .onnx files offer better cross-platform compatibility and relatively fast CPU inference;

  • .epicnn files are designed for smart camera platforms and achieve optimal performance there.
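The format choice above can be sketched as a simple dispatch on the file extension. The backend names and the mapping itself are hypothetical, since the operator's internals are not documented:

```python
from pathlib import Path

# Hypothetical mapping from weight-file extension to inference backend.
SUPPORTED_FORMATS = {
    ".pt": "pytorch",
    ".onnx": "onnxruntime",
    ".epicnn": "epicnn",
}

def backend_for_weights(path: str) -> str:
    """Return the assumed backend for a weight file, or raise if unsupported."""
    ext = Path(path).suffix.lower()
    if ext not in SUPPORTED_FORMATS:
        raise ValueError(f"Unsupported weight file format: {ext}")
    return SUPPORTED_FORMATS[ext]
```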

Enable GPU

Parameter Description

Chooses whether the GPU is used for model inference. If checked, ensure the computer has an available NVIDIA graphics card and a matching CUDA environment.

Parameter Adjustment

Checking this option can significantly improve processing speed, especially for large models or high-resolution images.

  • If GPU is enabled with an .onnx model, you must install an onnxruntime-gpu library that matches your CUDA version, as prompted;

  • If there is no compatible GPU, or the environment is configured incorrectly, uncheck this option (use the CPU).

  • For .epicnn models, this option has no effect.
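The fallback rules above can be sketched as a small selection function. `select_device` and its arguments are hypothetical names for illustration, not part of the operator's API:

```python
def select_device(weights_ext: str, enable_gpu: bool, cuda_ok: bool) -> str:
    """Pick an inference device per the documented rules (sketch)."""
    # .epicnn models run on the smart-camera runtime; the GPU checkbox has no effect.
    if weights_ext == ".epicnn":
        return "camera"
    # Use CUDA only when the checkbox is set AND the environment is compatible.
    if enable_gpu and cuda_ok:
        return "cuda"
    # Otherwise fall back to CPU.
    return "cpu"
```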

Inference Type

Parameter Description

Only effective when an .epicnn weight file is selected. It explicitly tells the operator which task the .epicnn model performs (detection, segmentation, or rotated detection) and which YOLO version (v8 or v10) it is based on.

Parameter Adjustment

When loading an .epicnn file, you must select the inference type that matches the task the model was actually trained for; otherwise post-processing errors may occur.

For example, if the loaded file is an .epicnn file converted from a YOLOv8 segmentation model, select "yolov8 segmentation". For .pt and .onnx models the operator recognizes the task type automatically, and this parameter is ignored.

Confidence Threshold

Parameter Description

Confidence score threshold used to filter detection/segmentation results. Only instances with scores above this threshold will be output.

Parameter Adjustment

This is the most commonly adjusted parameter. Increasing this value yields fewer output results, keeping only targets the model is very confident about, which effectively reduces false positives. Decreasing this value yields more detection results, possibly including less confident or lower-quality targets, but may also recover some missed targets. Balance recall against precision according to the actual application scenario; usually start from the default value and adjust based on the results.

Parameter Range

[0.005, 1], Default value: 0.8
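The effect of the threshold can be sketched in plain Python. `filter_by_confidence` and the sample scores are illustrative, not the operator's actual output:

```python
def filter_by_confidence(detections, threshold=0.8):
    """Keep only detections whose score reaches the threshold (default 0.8)."""
    return [d for d in detections if d["score"] >= threshold]

# Hypothetical raw model output before thresholding
raw = [
    {"label": "part", "score": 0.97},
    {"label": "part", "score": 0.82},
    {"label": "part", "score": 0.41},
    {"label": "noise", "score": 0.07},
]

# A very low threshold keeps weak candidates (more recall, more false positives);
# the default 0.8 keeps only confident detections; 0.95 keeps only the strongest.
low = filter_by_confidence(raw, 0.05)
default = filter_by_confidence(raw, 0.8)
strict = filter_by_confidence(raw, 0.95)
```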

(Figures: DLYolo 1, 2, 3 — example results at Confidence Threshold = 0.05, 0.8, and 0.95, respectively)