Detections Merge¶
Class: DetectionsMergeBlockV1
Source: inference.core.workflows.core_steps.transformations.detections_merge.v1.DetectionsMergeBlockV1
Combine multiple detection predictions into a single merged detection whose union bounding box encompasses every input detection. This simplifies several detections into one larger region, which is useful for consolidating overlapping objects, creating a region from multiple objects, and other detection-simplification workflows.
How This Block Works¶
This block merges multiple detections into a single detection by calculating a union bounding box that contains all input detections. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) containing multiple detections
- Validates input (handles empty detections by returning an empty detection result)
- Calculates the union bounding box from all input detections:
    - Extracts all bounding box coordinates (xyxy format) from input detections
    - Finds the minimum x and y coordinates (leftmost and topmost points) across all boxes
    - Finds the maximum x and y coordinates (rightmost and bottommost points) across all boxes
    - Creates a single bounding box that completely encompasses all input detections
- Determines the merged detection's confidence:
    - Finds the detection with the lowest confidence score among all input detections
    - Uses this lowest confidence as the merged detection's confidence (conservative approach)
    - Handles cases where confidence scores may not be present
- Creates a new merged detection with:
    - The calculated union bounding box (encompasses all input detections)
    - A customizable class name (default: "merged_detection", configurable via the class_name parameter)
    - The lowest confidence from input detections (conservative confidence assignment)
    - A fixed class_id of 0 for the merged detection
    - A newly generated detection ID (unique identifier for the merged detection)
- Returns the single merged detection containing all input detections within its bounding box
The block creates a unified bounding box representation of multiple detections, useful for consolidating overlapping or nearby detections into a single region. The union bounding box approach ensures all original detections are completely contained within the merged detection. By using the lowest confidence, the block adopts a conservative approach, ensuring the merged detection's confidence reflects the least certain input detection. The merged detection can be customized with a class name to indicate its merged nature or to represent a specific category.
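The core of this logic can be sketched with the supervision library. The snippet below is an illustrative reimplementation for clarity, not the block's actual source; the `merge_detections` helper name is assumed, and the freshly generated detection ID mentioned above is omitted here.

```python
import numpy as np
import supervision as sv


def merge_detections(
    detections: sv.Detections,
    class_name: str = "merged_detection",
) -> sv.Detections:
    """Sketch of the merge described above: union box + lowest confidence."""
    # Empty input -> empty output, mirroring the block's validation step.
    if len(detections) == 0:
        return sv.Detections.empty()

    xyxy = detections.xyxy
    # Union box: min of the left/top edges, max of the right/bottom edges.
    union_box = np.array(
        [[xyxy[:, 0].min(), xyxy[:, 1].min(), xyxy[:, 2].max(), xyxy[:, 3].max()]]
    )

    # Conservative confidence: lowest score among the inputs, if scores exist.
    confidence = (
        np.array([float(detections.confidence.min())])
        if detections.confidence is not None
        else None
    )

    return sv.Detections(
        xyxy=union_box,
        confidence=confidence,
        class_id=np.array([0]),  # fixed class_id of 0 for the merged detection
        data={"class_name": np.array([class_name])},
    )
```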
Common Use Cases¶
- Overlapping Detection Consolidation: Merge multiple overlapping detections of the same or related objects into a single unified detection (e.g., merge overlapping detections of the same person from multiple frames, consolidate duplicate detections from different models, combine overlapping object parts into one detection), enabling overlapping detection simplification
- Multi-Object Region Creation: Create a single bounding box region that encompasses multiple detected objects for area-based analysis (e.g., create a region containing multiple people for crowd analysis, merge detections of objects in a scene into one region, combine multiple detections into a single monitoring zone), enabling multi-object region workflows
- Nearby Detection Grouping: Group nearby detections together into a single merged detection (e.g., merge detections of objects close to each other, group nearby detections into clusters, combine adjacent detections for simplified processing), enabling spatial grouping workflows
- Detection Simplification: Simplify multiple detections into one larger detection for downstream processing (e.g., reduce multiple detections to one for simpler analysis, consolidate detections for easier visualization, merge detections for streamlined workflows), enabling detection simplification workflows
- Zone Definition from Detections: Create zone boundaries from multiple detection locations (e.g., define zones based on detection locations, create regions from detected object positions, establish boundaries from detection clusters), enabling zone creation from detections
- Redundant Detection Removal: Merge redundant or duplicate detections into a single representation (e.g., combine duplicate detections from different stages, merge redundant object detections, consolidate repeated detections), enabling redundant detection consolidation workflows
Connecting to Other Blocks¶
This block receives multiple detection predictions and produces a single merged detection (a minimal workflow sketch follows this list):
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to merge multiple detections into one unified detection for simplified processing, enabling detection consolidation workflows
- After filtering blocks (e.g., Detections Filter) to merge filtered detections that meet specific criteria into a single detection (e.g., merge filtered detections by class, combine detections after filtering, consolidate filtered results), enabling filtered detection consolidation
- Before crop blocks to create a single crop region from multiple detections (e.g., crop a region containing multiple objects, extract area encompassing multiple detections, create unified crop region), enabling multi-detection region extraction
- Before zone-based blocks (e.g., Polygon Zone, Dynamic Zone) to define zones based on merged detection regions (e.g., create zones from merged detection areas, establish monitoring zones from merged detections, define regions from consolidated detections), enabling zone creation from merged detections
- Before visualization blocks to display simplified merged detections instead of multiple individual detections (e.g., visualize consolidated detection regions, display merged bounding boxes, show simplified detection representation), enabling simplified visualization outputs
- Before analysis blocks that benefit from simplified detection representation (e.g., analyze merged detection regions, process consolidated detections, work with simplified detection data), enabling simplified detection analysis workflows
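As an example, a minimal workflow definition could run an object detection model and merge its detections into one region. Only the roboflow_core/detections_merge@v1 identifier comes from this page; the model step type, the images/model_id field names, the placeholder model ID, and the output selector follow common Roboflow workflow conventions and may need adjusting for your setup.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "object_detection_model",
      "images": "$inputs.image",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/detections_merge@v1",
      "name": "detections_merge",
      "predictions": "$steps.object_detection_model.predictions",
      "class_name": "group"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "merged_detection",
      "selector": "$steps.detections_merge.predictions"
    }
  ]
}
```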
Type identifier¶
Use the identifier roboflow_core/detections_merge@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `class_name` | `str` | Class name to assign to the merged detection. Default is `'merged_detection'`, indicating that this is a merged detection. You can customize it to represent a specific category or the purpose of the merged detection (e.g., `'crowd'`, `'group'`, `'region'`). The class name is stored in the detection's data dictionary. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Merge in version v1.
- inputs: OCR Model, Google Vision OCR, Byte Tracker, Velocity, Object Detection Model, VLM as Detector, Gaze Detection, Time in Zone, Time in Zone, SAM 3, SAM 3, PTZ Tracking (ONVIF), Dynamic Zone, Detections List Roll-Up, EasyOCR, Detections Classes Replacement, VLM as Detector, Dynamic Crop, Detections Stitch, Instance Segmentation Model, Byte Tracker, Segment Anything 2 Model, Keypoint Detection Model, Path Deviation, Template Matching, YOLO-World Model, Line Counter, Detection Event Log, Detections Consensus, Time in Zone, Detections Filter, Detection Offset, Seg Preview, Perspective Correction, Detections Combine, Detections Stabilizer, Moondream2, Bounding Rectangle, Overlap Filter, Detections Transformation, Motion Detection, Path Deviation, Object Detection Model, Keypoint Detection Model, SAM 3, Instance Segmentation Model, Byte Tracker, Detections Merge
- outputs: Roboflow Dataset Upload, Byte Tracker, Velocity, Model Comparison Visualization, Time in Zone, Time in Zone, PTZ Tracking (ONVIF), Detections List Roll-Up, Dynamic Crop, Detections Stitch, Icon Visualization, Byte Tracker, Byte Tracker, Circle Visualization, Segment Anything 2 Model, Trace Visualization, Path Deviation, Florence-2 Model, Line Counter, Pixelate Visualization, Triangle Visualization, Label Visualization, Detection Event Log, Detections Consensus, Time in Zone, Detections Filter, Detection Offset, Blur Visualization, Ellipse Visualization, Dot Visualization, Crop Visualization, Perspective Correction, Detections Combine, Detections Stabilizer, Distance Measurement, Color Visualization, Corner Visualization, Overlap Filter, Detections Transformation, Background Color Visualization, Path Deviation, Roboflow Dataset Upload, Size Measurement, Florence-2 Model, Bounding Box Visualization, Roboflow Custom Metadata, Stitch OCR Detections, Line Counter, Camera Focus, Detections Classes Replacement, Model Monitoring Inference Aggregator, Detections Merge
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Detections Merge in version v1 has.
Bindings
- input
    - predictions (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Detection predictions containing multiple detections to merge into a single detection. Supports object detection, instance segmentation, or keypoint detection predictions. All input detections are combined into one merged detection whose union bounding box encompasses them; if empty detections are provided, the block returns an empty detection result.
- output
    - predictions (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Detections Merge in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detections_merge@v1",
"predictions": "$steps.object_detection_model.predictions",
"class_name": "<block_does_not_provide_example>"
}
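A workflow containing this step can then be executed with the inference SDK. The sketch below assumes a workflow saved to a Roboflow workspace; the API URL, API key, workspace name, workflow ID, and image path are placeholders.

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# Run the saved workflow; `result` contains the fields declared in the
# workflow's `outputs` section, including the merged detection.
result = client.run_workflow(
    workspace_name="your-workspace",
    workflow_id="your-workflow-id",
    images={"image": "path/to/image.jpg"},
)
```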