Detections Merge¶
Class: DetectionsMergeBlockV1
Source: inference.core.workflows.core_steps.transformations.detections_merge.v1.DetectionsMergeBlockV1
Combine multiple detection predictions into a single merged detection whose union bounding box encompasses all input detections. This simplifies many detections into one larger detection region, supporting overlapping-object consolidation, region creation from multiple objects, and detection-simplification workflows.
How This Block Works¶
This block merges multiple detections into a single detection by calculating a union bounding box that contains all input detections. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) containing multiple detections
- Validates input (handles empty detections by returning an empty detection result)
- Calculates the union bounding box from all input detections:
    - Extracts all bounding box coordinates (xyxy format) from input detections
    - Finds the minimum x and y coordinates (leftmost and topmost points) across all boxes
    - Finds the maximum x and y coordinates (rightmost and bottommost points) across all boxes
    - Creates a single bounding box that completely encompasses all input detections
- Determines the merged detection's confidence:
    - Finds the detection with the lowest confidence score among all input detections
    - Uses this lowest confidence as the merged detection's confidence (conservative approach)
    - Handles cases where confidence scores may not be present
- Creates a new merged detection with:
    - The calculated union bounding box (encompasses all input detections)
    - A customizable class name (default: "merged_detection", configurable via class_name parameter)
    - The lowest confidence from input detections (conservative confidence assignment)
    - A fixed class_id of 0 for the merged detection
    - A newly generated detection ID (unique identifier for the merged detection)
- Returns the single merged detection containing all input detections within its bounding box
The block creates a unified bounding box representation of multiple detections, useful for consolidating overlapping or nearby detections into a single region. The union bounding box approach ensures all original detections are completely contained within the merged detection. By using the lowest confidence, the block adopts a conservative approach, ensuring the merged detection's confidence reflects the least certain input detection. The merged detection can be customized with a class name to indicate its merged nature or to represent a specific category.
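The merge logic described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the block's actual source: it takes the `xyxy` box array and `confidence` array (the same fields a `sv.Detections` object carries) and returns the union box and the lowest confidence.

```python
import numpy as np


def merge_detections(xyxy: np.ndarray, confidence: np.ndarray):
    """Sketch of the merge: union bounding box plus lowest input confidence.

    xyxy: (N, 4) array of boxes in [x1, y1, x2, y2] format.
    confidence: (N,) array of scores.
    """
    if len(xyxy) == 0:
        # Mirrors the block's behavior: empty input yields an empty result.
        return None
    union_box = np.array(
        [
            xyxy[:, 0].min(),  # leftmost x1 across all boxes
            xyxy[:, 1].min(),  # topmost y1 across all boxes
            xyxy[:, 2].max(),  # rightmost x2 across all boxes
            xyxy[:, 3].max(),  # bottommost y2 across all boxes
        ]
    )
    merged_confidence = float(confidence.min())  # conservative: lowest score
    return union_box, merged_confidence


boxes = np.array([[10, 20, 50, 60], [40, 10, 90, 70]], dtype=float)
scores = np.array([0.9, 0.6])
box, conf = merge_detections(boxes, scores)
# box -> [10., 10., 90., 70.], conf -> 0.6
```

Note how the union box `[10, 10, 90, 70]` fully contains both inputs, and the merged confidence is the lower of the two scores.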
Common Use Cases¶
- Overlapping Detection Consolidation: Merge multiple overlapping detections of the same or related objects into a single unified detection (e.g., merge overlapping detections of the same person from multiple frames, consolidate duplicate detections from different models, combine overlapping object parts into one detection), enabling overlapping detection simplification
- Multi-Object Region Creation: Create a single bounding box region that encompasses multiple detected objects for area-based analysis (e.g., create a region containing multiple people for crowd analysis, merge detections of objects in a scene into one region, combine multiple detections into a single monitoring zone), enabling multi-object region workflows
- Nearby Detection Grouping: Group nearby detections together into a single merged detection (e.g., merge detections of objects close to each other, group nearby detections into clusters, combine adjacent detections for simplified processing), enabling spatial grouping workflows
- Detection Simplification: Simplify multiple detections into one larger detection for downstream processing (e.g., reduce multiple detections to one for simpler analysis, consolidate detections for easier visualization, merge detections for streamlined workflows), enabling detection simplification workflows
- Zone Definition from Detections: Create zone boundaries from multiple detection locations (e.g., define zones based on detection locations, create regions from detected object positions, establish boundaries from detection clusters), enabling zone creation from detections
- Redundant Detection Removal: Merge redundant or duplicate detections into a single representation (e.g., combine duplicate detections from different stages, merge redundant object detections, consolidate repeated detections), enabling redundant detection consolidation workflows
Connecting to Other Blocks¶
This block receives multiple detection predictions and produces a single merged detection:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to merge multiple detections into one unified detection for simplified processing, enabling detection consolidation workflows
- After filtering blocks (e.g., Detections Filter) to merge filtered detections that meet specific criteria into a single detection (e.g., merge filtered detections by class, combine detections after filtering, consolidate filtered results), enabling filtered detection consolidation
- Before crop blocks to create a single crop region from multiple detections (e.g., crop a region containing multiple objects, extract area encompassing multiple detections, create unified crop region), enabling multi-detection region extraction
- Before zone-based blocks (e.g., Polygon Zone, Dynamic Zone) to define zones based on merged detection regions (e.g., create zones from merged detection areas, establish monitoring zones from merged detections, define regions from consolidated detections), enabling zone creation from merged detections
- Before visualization blocks to display simplified merged detections instead of multiple individual detections (e.g., visualize consolidated detection regions, display merged bounding boxes, show simplified detection representation), enabling simplified visualization outputs
- Before analysis blocks that benefit from simplified detection representation (e.g., analyze merged detection regions, process consolidated detections, work with simplified detection data), enabling simplified detection analysis workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detections_merge@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `class_name` | `str` | Class name to assign to the merged detection. The merged detection will use this class name in its data. Default is 'merged_detection' to indicate that this is a merged detection. You can customize this to represent a specific category or to indicate the purpose of the merged detection (e.g., 'crowd', 'group', 'region'). This class name will be stored in the detection's data dictionary. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Merge in version v1.
- inputs: Moondream2, Detections Consensus, Detections Merge, Seg Preview, Instance Segmentation Model, Dynamic Zone, Dynamic Crop, VLM As Detector, VLM As Detector, SAM 3, Path Deviation, Detection Offset, Byte Tracker, Line Counter, Byte Tracker, SAM 3, Segment Anything 2 Model, Detections List Roll-Up, Object Detection Model, Template Matching, Path Deviation, Google Vision OCR, Time in Zone, Keypoint Detection Model, Bounding Rectangle, Detections Stitch, Time in Zone, Instance Segmentation Model, Detections Filter, Gaze Detection, Detections Stabilizer, EasyOCR, PTZ Tracking (ONVIF), Perspective Correction, Detections Combine, Object Detection Model, SAM 3, Keypoint Detection Model, Byte Tracker, Detection Event Log, Detections Classes Replacement, OCR Model, YOLO-World Model, Overlap Filter, Time in Zone, Motion Detection, Velocity, Detections Transformation
- outputs: Circle Visualization, Detections Consensus, Detections Merge, Florence-2 Model, Blur Visualization, Dynamic Crop, Label Visualization, Path Deviation, Detection Offset, Corner Visualization, Ellipse Visualization, Byte Tracker, Byte Tracker, Line Counter, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Detections List Roll-Up, Model Comparison Visualization, Background Color Visualization, Size Measurement, Trace Visualization, Path Deviation, Time in Zone, Line Counter, Triangle Visualization, Detections Stitch, Detections Filter, Roboflow Custom Metadata, Detections Stabilizer, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Stitch OCR Detections, Camera Focus, Perspective Correction, Color Visualization, Detections Combine, Pixelate Visualization, Dot Visualization, Bounding Box Visualization, Detection Event Log, Detections Transformation, Byte Tracker, Distance Measurement, Detections Classes Replacement, Overlap Filter, Icon Visualization, Crop Visualization, Time in Zone, Stitch OCR Detections, Time in Zone, Velocity, Florence-2 Model, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detections Merge in version v1 has.
Bindings
- input `predictions` (`Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]`): Detection predictions containing multiple detections to merge into a single detection. Supports object detection, instance segmentation, or keypoint detection predictions. All input detections will be combined into one merged detection with a union bounding box that encompasses all input detections. If empty detections are provided, the block returns an empty detection result. The merged detection will contain all input detections within its bounding box boundaries.
- output `predictions` (`object_detection_prediction`): Prediction with detected bounding boxes in form of sv.Detections(...) object.
Example JSON definition of step Detections Merge in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detections_merge@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "class_name": "<block_does_not_provide_example>"
}
```
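To illustrate how the step sits in a larger workflow definition, here is a hedged sketch wiring a Roboflow object detection step into Detections Merge. The upstream step's type identifier, the `model_id` alias, and the `$inputs.image` selector are assumptions for illustration; only the `roboflow_core/detections_merge@v1` step is taken verbatim from this page.

```json
{
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640"
        },
        {
            "type": "roboflow_core/detections_merge@v1",
            "name": "detections_merge",
            "predictions": "$steps.object_detection_model.predictions",
            "class_name": "group"
        }
    ]
}
```

The merged detection can then be referenced downstream as `$steps.detections_merge.predictions`, for example by a crop or visualization step.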