Detections Stitch¶
Class: DetectionsStitchBlockV1
Source: inference.core.workflows.core_steps.fusion.detections_stitch.v1.DetectionsStitchBlockV1
This block merges detections that were inferred for multiple sub-parts of the same input image into a single prediction for the original image.
The block may be helpful in the following scenarios:
* to apply the Slicing Aided Hyper Inference (SAHI) technique as the final step of a procedure that uses the Image Slicer block and a model block at earlier stages (a sketch follows this list);
* to merge detections produced by a precise, high-resolution model applied as a secondary model after a coarse first-stage detection followed by Dynamic Crop.
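A minimal sketch of such a SAHI-style workflow is shown below. Only the detections_stitch step's type identifier and properties come from this page; the step names, the Image Slicer and object detection model type identifiers, the slices output name, and the model_id placeholder are illustrative assumptions:

```json
{
  "steps": [
    {
      "type": "roboflow_core/image_slicer@v1",
      "name": "slicer",
      "image": "$inputs.image"
    },
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "detector",
      "images": "$steps.slicer.slices",
      "model_id": "<your_model_id>"
    },
    {
      "type": "roboflow_core/detections_stitch@v1",
      "name": "stitch",
      "reference_image": "$inputs.image",
      "predictions": "$steps.detector.predictions",
      "overlap_filtering_strategy": "nms",
      "iou_threshold": 0.4
    }
  ]
}
```

The detector runs once per slice, and Detections Stitch maps each slice-level detection back into the coordinate system of the original image before filtering overlapping boxes.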
Type identifier¶
Use the following identifier in the step "type" field to add this block as a step in your workflow:
roboflow_core/detections_stitch@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| overlap_filtering_strategy | str | Which strategy to employ when filtering overlapping boxes. None does nothing, NMS discards lower-confidence detections, NMM combines them. | ✅ |
| iou_threshold | float | Minimum overlap threshold between boxes. If intersection over union (IoU) is above this ratio, discard or merge the lower-confidence box. | ✅ |
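For reference, the overlap measure used by iou_threshold is the standard intersection over union of two boxes A and B:

$$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

With a threshold of 0.4, as in the example at the bottom of this page, any pair of boxes whose IoU exceeds 0.4 is deduplicated (NMS) or merged (NMM), depending on the selected strategy.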
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
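Properties marked ✅ can therefore be bound to values supplied at runtime instead of being hard-coded. A minimal sketch of such a binding, assuming the standard workflow input declarations (the WorkflowImage and WorkflowParameter type names and the default_value field are assumptions, not taken from this page):

```json
{
  "inputs": [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "filtering_strategy", "default_value": "nms"}
  ],
  "steps": [
    {
      "type": "roboflow_core/detections_stitch@v1",
      "name": "stitch",
      "reference_image": "$inputs.image",
      "predictions": "$steps.my_object_detection_model.predictions",
      "overlap_filtering_strategy": "$inputs.filtering_strategy",
      "iou_threshold": 0.4
    }
  ]
}
```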
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Stitch in version v1.
- inputs: Segment Anything 2 Model, Image Slicer, Stability AI Inpainting, Clip Comparison, Perspective Correction, Object Detection Model, Roboflow Custom Metadata, Object Detection Model, SIFT Comparison, Detection Offset, Grid Visualization, Ellipse Visualization, SIFT, VLM as Detector, CogVLM, Image Contours, OpenAI, Absolute Static Crop, Camera Focus, Trace Visualization, Multi-Label Classification Model, VLM as Detector, Dot Visualization, Google Vision OCR, Identify Changes, Polygon Zone Visualization, Roboflow Dataset Upload, Identify Outliers, Classification Label Visualization, Corner Visualization, Byte Tracker, Llama 3.2 Vision, Dynamic Crop, Reference Path Visualization, Label Visualization, Detections Stabilizer, Mask Visualization, Triangle Visualization, Line Counter Visualization, Template Matching, Dynamic Zone, Detections Transformation, Time in Zone, Model Monitoring Inference Aggregator, Blur Visualization, Line Counter, Anthropic Claude, Instance Segmentation Model, Webhook Sink, Time in Zone, Instance Segmentation Model, Slack Notification, Detections Filter, Stitch OCR Detections, Pixelate Visualization, Path Deviation, OpenAI, Relative Static Crop, Detections Consensus, Twilio SMS Notification, VLM as Classifier, Roboflow Dataset Upload, Google Gemini, Model Comparison Visualization, Halo Visualization, Crop Visualization, Byte Tracker, Image Blur, Circle Visualization, Velocity, Keypoint Detection Model, Image Preprocessing, Background Color Visualization, Bounding Rectangle, Florence-2 Model, Bounding Box Visualization, Byte Tracker, Florence-2 Model, Image Slicer, Local File Sink, LMM For Classification, Stitch Images, Stability AI Image Generation, Image Threshold, Detections Stitch, OCR Model, LMM, Keypoint Visualization, Email Notification, Color Visualization, Path Deviation, Single-Label Classification Model, YOLO-World Model, CSV Formatter, Image Convert Grayscale, Detections Classes Replacement, Polygon Visualization
- outputs: Segment Anything 2 Model, Detections Filter, Stitch OCR Detections, Stability AI Inpainting, Pixelate Visualization, Perspective Correction, Path Deviation, Roboflow Custom Metadata, Detections Consensus, Detection Offset, Roboflow Dataset Upload, Ellipse Visualization, Model Comparison Visualization, Halo Visualization, Crop Visualization, Byte Tracker, Trace Visualization, Distance Measurement, Circle Visualization, Velocity, Dot Visualization, Background Color Visualization, Bounding Rectangle, Roboflow Dataset Upload, Size Measurement, Florence-2 Model, Byte Tracker, Corner Visualization, Bounding Box Visualization, Florence-2 Model, Byte Tracker, Dynamic Crop, Line Counter, Detections Stabilizer, Label Visualization, Mask Visualization, Triangle Visualization, Dynamic Zone, Detections Transformation, Detections Stitch, Model Monitoring Inference Aggregator, Time in Zone, Color Visualization, Path Deviation, Blur Visualization, Line Counter, Time in Zone, Detections Classes Replacement, Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Detections Stitch in version v1 has.
Bindings
- input
  - reference_image (image): Original image that was cropped to produce the predictions.
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to be merged into the original image.
  - overlap_filtering_strategy (string): Which strategy to employ when filtering overlapping boxes. None does nothing, NMS discards lower-confidence detections, NMM combines them.
  - iou_threshold (float_zero_to_one): Minimum overlap threshold between boxes. If intersection over union (IoU) is above this ratio, discard or merge the lower-confidence box.
- output
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Detections Stitch in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detections_stitch@v1",
    "reference_image": "$inputs.image",
    "predictions": "$steps.my_object_detection_model.predictions",
    "overlap_filtering_strategy": "nms",
    "iou_threshold": 0.4
}
```
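The stitched predictions output can then be referenced by downstream steps like any other detection output, for example by one of the visualization blocks listed in the connections above. A minimal sketch, assuming the Bounding Box Visualization block's type identifier and its image/predictions properties (not documented on this page):

```json
{
  "type": "roboflow_core/bounding_box_visualization@v1",
  "name": "bbox_viz",
  "image": "$inputs.image",
  "predictions": "$steps.stitch.predictions"
}
```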