Detections Merge¶
Class: DetectionsMergeBlockV1
Source: inference.core.workflows.core_steps.transformations.detections_merge.v1.DetectionsMergeBlockV1
Combine multiple detection predictions into a single merged detection with a union bounding box that encompasses all input detections, simplifying multiple detections into one larger detection region for overlapping object consolidation, region creation from multiple objects, and detection simplification workflows.
How This Block Works¶
This block merges multiple detections into a single detection by calculating a union bounding box that contains all input detections. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) containing multiple detections
- Validates input (handles empty detections by returning an empty detection result)
- Calculates the union bounding box from all input detections:
    - Extracts all bounding box coordinates (xyxy format) from input detections
    - Finds the minimum x and y coordinates (leftmost and topmost points) across all boxes
    - Finds the maximum x and y coordinates (rightmost and bottommost points) across all boxes
    - Creates a single bounding box that completely encompasses all input detections
- Determines the merged detection's confidence:
    - Finds the detection with the lowest confidence score among all input detections
    - Uses this lowest confidence as the merged detection's confidence (conservative approach)
    - Handles cases where confidence scores may not be present
- Creates a new merged detection with:
    - The calculated union bounding box (encompasses all input detections)
    - A customizable class name (default: "merged_detection", configurable via the class_name parameter)
    - The lowest confidence from the input detections (conservative confidence assignment)
    - A fixed class_id of 0 for the merged detection
    - A newly generated detection ID (unique identifier for the merged detection)
- Returns the single merged detection containing all input detections within its bounding box
The block creates a unified bounding box representation of multiple detections, useful for consolidating overlapping or nearby detections into a single region. The union bounding box approach ensures all original detections are completely contained within the merged detection. By using the lowest confidence, the block adopts a conservative approach, ensuring the merged detection's confidence reflects the least certain input detection. The merged detection can be customized with a class name to indicate its merged nature or to represent a specific category.
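The union-box and lowest-confidence logic described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the block's actual source; the function name and return shape are assumptions:

```python
import numpy as np

def merge_detections(xyxy, confidences=None):
    """Merge detections into one union box with the lowest input confidence.

    `xyxy` is an (N, 4) array of [x1, y1, x2, y2] boxes; `confidences` is an
    optional (N,) array of scores. Names and return shape are illustrative.
    """
    if len(xyxy) == 0:
        return None  # the block returns an empty detection result for empty input
    union_box = np.array([
        xyxy[:, 0].min(),  # leftmost x1 across all boxes
        xyxy[:, 1].min(),  # topmost y1
        xyxy[:, 2].max(),  # rightmost x2
        xyxy[:, 3].max(),  # bottommost y2
    ])
    # Conservative confidence: take the lowest score among the inputs
    confidence = float(confidences.min()) if confidences is not None else None
    return union_box, confidence

# Two overlapping boxes merge into one box spanning both
box, conf = merge_detections(
    np.array([[10, 20, 50, 60], [40, 10, 90, 70]], dtype=float),
    np.array([0.9, 0.6]),
)
# box -> [10, 10, 90, 70], conf -> 0.6 (the lower of the two scores)
```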
Common Use Cases¶
- Overlapping Detection Consolidation: Merge multiple overlapping detections of the same or related objects into a single unified detection (e.g., merge overlapping detections of the same person from multiple frames, consolidate duplicate detections from different models, combine overlapping object parts into one detection), enabling overlapping detection simplification
- Multi-Object Region Creation: Create a single bounding box region that encompasses multiple detected objects for area-based analysis (e.g., create a region containing multiple people for crowd analysis, merge detections of objects in a scene into one region, combine multiple detections into a single monitoring zone), enabling multi-object region workflows
- Nearby Detection Grouping: Group nearby detections together into a single merged detection (e.g., merge detections of objects close to each other, group nearby detections into clusters, combine adjacent detections for simplified processing), enabling spatial grouping workflows
- Detection Simplification: Simplify multiple detections into one larger detection for downstream processing (e.g., reduce multiple detections to one for simpler analysis, consolidate detections for easier visualization, merge detections for streamlined workflows), enabling detection simplification workflows
- Zone Definition from Detections: Create zone boundaries from multiple detection locations (e.g., define zones based on detection locations, create regions from detected object positions, establish boundaries from detection clusters), enabling zone creation from detections
- Redundant Detection Removal: Merge redundant or duplicate detections into a single representation (e.g., combine duplicate detections from different stages, merge redundant object detections, consolidate repeated detections), enabling redundant detection consolidation workflows
Connecting to Other Blocks¶
This block receives multiple detection predictions and produces a single merged detection:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to merge multiple detections into one unified detection for simplified processing, enabling detection consolidation workflows
- After filtering blocks (e.g., Detections Filter) to merge filtered detections that meet specific criteria into a single detection (e.g., merge filtered detections by class, combine detections after filtering, consolidate filtered results), enabling filtered detection consolidation
- Before crop blocks to create a single crop region from multiple detections (e.g., crop a region containing multiple objects, extract area encompassing multiple detections, create unified crop region), enabling multi-detection region extraction
- Before zone-based blocks (e.g., Polygon Zone, Dynamic Zone) to define zones based on merged detection regions (e.g., create zones from merged detection areas, establish monitoring zones from merged detections, define regions from consolidated detections), enabling zone creation from merged detections
- Before visualization blocks to display simplified merged detections instead of multiple individual detections (e.g., visualize consolidated detection regions, display merged bounding boxes, show simplified detection representation), enabling simplified visualization outputs
- Before analysis blocks that benefit from simplified detection representation (e.g., analyze merged detection regions, process consolidated detections, work with simplified detection data), enabling simplified detection analysis workflows
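To illustrate these connections, a workflow fragment might wire an object detection model into Detections Merge and then into a crop block. Only the detections_merge step below is confirmed by this page; the detection and crop step type identifiers, step names, and model ID are assumptions for the sake of the example:

```json
{
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "your-model/1"
        },
        {
            "type": "roboflow_core/detections_merge@v1",
            "name": "merge",
            "predictions": "$steps.detector.predictions",
            "class_name": "group"
        },
        {
            "type": "roboflow_core/dynamic_crop@v1",
            "name": "crop",
            "image": "$inputs.image",
            "predictions": "$steps.merge.predictions"
        }
    ]
}
```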
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detections_merge@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| class_name | str | Class name to assign to the merged detection. The merged detection will use this class name in its data. Default is 'merged_detection' to indicate that this is a merged detection. You can customize this to represent a specific category or the purpose of the merged detection (e.g., 'crowd', 'group', 'region'). This class name will be stored in the detection's data dictionary. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Merge in version v1.
- inputs: Detections Consensus, Time in Zone, Detections Stabilizer, Detections Stitch, Detections Classes Replacement, Segment Anything 2 Model, Velocity, Bounding Rectangle, SAM 3, Byte Tracker, Detections Transformation, Dynamic Zone, Template Matching, Seg Preview, Detection Offset, Perspective Correction, Detections Filter, Instance Segmentation Model, Object Detection Model, OCR Model, Keypoint Detection Model, VLM As Detector, Google Vision OCR, EasyOCR, Moondream2, YOLO-World Model, Path Deviation, Overlap Filter, Motion Detection, Line Counter, Detection Event Log, Gaze Detection, Detections List Roll-Up, Dynamic Crop, PTZ Tracking (ONVIF), Detections Combine, Detections Merge, Mask Area Measurement
- outputs: Trace Visualization, Detections Consensus, Detections Stabilizer, Circle Visualization, Background Color Visualization, Size Measurement, Detections Stitch, Color Visualization, Florence-2 Model, Time in Zone, Detections Classes Replacement, Segment Anything 2 Model, Corner Visualization, Velocity, Byte Tracker, Detections Transformation, Detection Offset, Roboflow Dataset Upload, Perspective Correction, Detections Filter, Model Monitoring Inference Aggregator, Pixelate Visualization, Ellipse Visualization, Label Visualization, Camera Focus, Model Comparison Visualization, Bounding Box Visualization, Crop Visualization, Stitch OCR Detections, Distance Measurement, Detections Merge, Path Deviation, Overlap Filter, Line Counter, Icon Visualization, Triangle Visualization, Detection Event Log, Dot Visualization, Heatmap Visualization, Roboflow Custom Metadata, Blur Visualization, Detections List Roll-Up, Dynamic Crop, Detections Combine, PTZ Tracking (ONVIF), Mask Area Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Detections Merge in version v1 has.
Bindings
- input
    - predictions (Union[keypoint_detection_prediction, object_detection_prediction, instance_segmentation_prediction]): Detection predictions containing multiple detections to merge into a single detection. Supports object detection, instance segmentation, or keypoint detection predictions. All input detections will be combined into one merged detection with a union bounding box that encompasses them. If empty detections are provided, the block returns an empty detection result. The merged detection will contain all input detections within its bounding box boundaries.
- output
    - predictions (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Detections Merge in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detections_merge@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "class_name": "<block_does_not_provide_example>"
}