Detections Stitch¶
Class: DetectionsStitchBlockV1
Source: inference.core.workflows.core_steps.fusion.detections_stitch.v1.DetectionsStitchBlockV1
This block merges detections that were inferred for multiple sub-parts of the same input image into a single prediction.
The block may be helpful in the following scenarios:

- to apply the Slicing Adaptive Inference (SAHI) technique as the final step of a procedure that involves the Image Slicer block and a model block at earlier stages (see the sketch after this list);
- to merge detections produced by a precise, high-resolution model applied as a secondary model after a coarse first-stage detection followed by Dynamic Crop.
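To make the first scenario concrete, a minimal SAHI-style fragment of a workflow definition is sketched below. The Image Slicer and Object Detection Model type identifiers, the property names, the $steps.slicer.slices selector and the model_id placeholder are illustrative assumptions, not values taken from this page; check them against the documentation of the respective blocks.

```json
{
    "steps": [
        {
            "name": "slicer",
            "type": "roboflow_core/image_slicer@v1",
            "image": "$inputs.image"
        },
        {
            "name": "detection",
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "images": "$steps.slicer.slices",
            "model_id": "<your_model_id>"
        },
        {
            "name": "stitch",
            "type": "roboflow_core/detections_stitch@v1",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detection.predictions",
            "overlap_filtering_strategy": "nms",
            "iou_threshold": 0.4
        }
    ]
}
```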
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/detections_stitch@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| overlap_filtering_strategy | str | Which strategy to employ when filtering overlapping boxes. None does nothing, NMS discards lower-confidence detections, NMM combines them. | ✅ |
| iou_threshold | float | Minimum overlap threshold between boxes. If intersection over union (IoU) is above this ratio, the lower-confidence box is discarded or merged. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
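Properties marked ✅ in the Refs column can be bound to workflow inputs rather than hard-coded. A sketch of such a binding is shown below; the overlap_strategy and iou input names, and the detection step they reference, are illustrative assumptions that would need to be declared elsewhere in the workflow definition.

```json
{
    "name": "stitch",
    "type": "roboflow_core/detections_stitch@v1",
    "reference_image": "$inputs.image",
    "predictions": "$steps.detection.predictions",
    "overlap_filtering_strategy": "$inputs.overlap_strategy",
    "iou_threshold": "$inputs.iou"
}
```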
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Stitch in version v1.
- inputs:
YOLO-World Model
,VLM as Detector
,OpenAI
,Bounding Rectangle
,Keypoint Detection Model
,Circle Visualization
,Roboflow Dataset Upload
,PTZ Tracking (ONVIF)
,Roboflow Custom Metadata
,Depth Estimation
,SIFT
,Florence-2 Model
,Template Matching
,Grid Visualization
,Detections Stitch
,Dynamic Zone
,Instance Segmentation Model
,Color Visualization
,CSV Formatter
,Detections Consensus
,Overlap Filter
,Object Detection Model
,Identify Outliers
,Perspective Correction
,Path Deviation
,Model Monitoring Inference Aggregator
,Image Slicer
,OpenAI
,Model Comparison Visualization
,Clip Comparison
,Stitch Images
,Dynamic Crop
,Image Contours
,Moondream2
,Webhook Sink
,Pixelate Visualization
,Llama 3.2 Vision
,Byte Tracker
,Camera Calibration
,Line Counter
,Reference Path Visualization
,Image Blur
,Time in Zone
,Local File Sink
,Blur Visualization
,OCR Model
,Ellipse Visualization
,Trace Visualization
,Velocity
,Corner Visualization
,Camera Focus
,Polygon Zone Visualization
,Detections Classes Replacement
,Google Gemini
,OpenAI
,Triangle Visualization
,Stability AI Inpainting
,Classification Label Visualization
,Detections Transformation
,Single-Label Classification Model
,Bounding Box Visualization
,Detections Stabilizer
,Detections Merge
,CogVLM
,Image Convert Grayscale
,Halo Visualization
,LMM
,Email Notification
,Polygon Visualization
,Absolute Static Crop
,Object Detection Model
,Slack Notification
,Dot Visualization
,Label Visualization
,Byte Tracker
,Stability AI Outpainting
,Crop Visualization
,Detections Filter
,Google Vision OCR
,Stability AI Image Generation
,Detection Offset
,Image Threshold
,Stitch OCR Detections
,Image Preprocessing
,Identify Changes
,SIFT Comparison
,Mask Visualization
,Time in Zone
,Florence-2 Model
,Segment Anything 2 Model
,Twilio SMS Notification
,Roboflow Dataset Upload
,Byte Tracker
,Line Counter Visualization
,Path Deviation
,VLM as Classifier
,Instance Segmentation Model
,Background Color Visualization
,Anthropic Claude
,LMM For Classification
,Image Slicer
,Keypoint Visualization
,Multi-Label Classification Model
,VLM as Detector
,Relative Static Crop
- outputs:
Bounding Rectangle
,Corner Visualization
,Detections Classes Replacement
,Roboflow Dataset Upload
,Circle Visualization
,PTZ Tracking (ONVIF)
,Triangle Visualization
,Roboflow Custom Metadata
,Stability AI Inpainting
,Detections Transformation
,Bounding Box Visualization
,Size Measurement
,Florence-2 Model
,Distance Measurement
,Detections Stabilizer
,Detections Merge
,Halo Visualization
,Detections Stitch
,Dynamic Zone
,Polygon Visualization
,Ellipse Visualization
,Byte Tracker
,Color Visualization
,Label Visualization
,Dot Visualization
,Detections Consensus
,Crop Visualization
,Overlap Filter
,Detections Filter
,Perspective Correction
,Path Deviation
,Model Monitoring Inference Aggregator
,Detection Offset
,Stitch OCR Detections
,Model Comparison Visualization
,Dynamic Crop
,Mask Visualization
,Time in Zone
,Florence-2 Model
,Segment Anything 2 Model
,Pixelate Visualization
,Byte Tracker
,Roboflow Dataset Upload
,Line Counter
,Byte Tracker
,Time in Zone
,Path Deviation
,Background Color Visualization
,Line Counter
,Blur Visualization
,Velocity
,Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Detections Stitch in version v1 has.
Bindings
- input
  - reference_image (image): Original image that was cropped to produce the predictions.
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to be merged into the original image.
  - overlap_filtering_strategy (string): Which strategy to employ when filtering overlapping boxes. None does nothing, NMS discards lower-confidence detections, NMM combines them.
  - iou_threshold (float_zero_to_one): Minimum overlap threshold between boxes. If intersection over union (IoU) is above this ratio, the lower-confidence box is discarded or merged.
- output
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Detections Stitch in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detections_stitch@v1",
"reference_image": "$inputs.image",
"predictions": "$steps.my_object_detection_model.predictions",
"overlap_filtering_strategy": "nms",
"iou_threshold": 0.4
}
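Downstream blocks can consume the stitched result through a $steps.<step_name>.predictions selector. The sketch below assumes a Bounding Box Visualization step; its type identifier and property names are illustrative assumptions, not taken from this page.

```json
{
    "name": "visualization",
    "type": "roboflow_core/bounding_box_visualization@v1",
    "image": "$inputs.image",
    "predictions": "$steps.<your_step_name_here>.predictions"
}
```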