Detections Stitch¶
Class: DetectionsStitchBlockV1
Source: inference.core.workflows.core_steps.fusion.detections_stitch.v1.DetectionsStitchBlockV1
Merge detections from multiple image slices or crops back into a single, unified detection result. The block converts coordinates from slice/crop space to original-image coordinates, combines all detections, and optionally filters overlapping ones. This enables SAHI workflows, multi-stage detection pipelines, and any coordinate-space merging workflow where detections from sub-images need to be reconstructed as if they were detected on the original image.
How This Block Works¶
This block merges detections that were made on multiple sub-parts (slices or crops) of the same input image, reconstructing them as a single detection result in the original image coordinate space. The block:
- Receives the reference image and slice/crop predictions:
    - Takes the original reference image that was sliced or cropped
    - Receives predictions from detection models that processed each slice/crop
    - Predictions must contain parent coordinate metadata indicating slice/crop position
- Retrieves crop offsets for each detection:
    - Extracts parent coordinates from each detection's metadata
    - Gets the offset (x, y position) indicating where each slice/crop was located in the original image
    - Uses this offset to transform coordinates from slice space to original-image space
- Manages crop metadata:
    - Updates image dimensions in detection metadata to match the reference image dimensions
    - Validates that detections were not scaled (scaled detections are not supported)
    - Attaches parent coordinate information to detections for proper coordinate transformation
- Transforms coordinates to original-image space:
    - Moves bounding box coordinates (xyxy) from slice/crop coordinates to original-image coordinates
    - Transforms segmentation masks from slice/crop space to original-image space (if present)
    - Applies the offset to align detections with their position in the original image
- Merges all transformed detections:
    - Combines all re-aligned detections from all slices/crops into a single detection result
    - Creates a unified detection output containing all detections from all sub-images
- Applies overlap filtering (optional):
    - None strategy: returns all merged detections without filtering (may contain duplicates from overlapping slices)
    - NMS (Non-Maximum Suppression): removes lower-confidence detections when IoU exceeds the threshold, keeping only the highest-confidence detection for each overlapping region
    - NMM (Non-Maximum Merge): combines overlapping detections instead of discarding them, merging detections that exceed the IoU threshold
- Returns merged detections:
    - Outputs a unified detection result in the original-image coordinate space
    - Reduces dimensionality by 1 (multiple slice detections → single image detections)
    - All detections are now referenced to the original image's dimensions and coordinates
This block is essential for SAHI (Slicing Aided Hyper Inference) workflows, where an image is sliced, each slice is processed separately, and the results are merged back together. Overlapping slices can produce duplicate detections of the same object, so overlap filtering (NMS/NMM) helps clean up these duplicates. The coordinate transformation ensures that detection coordinates are positioned correctly relative to the original image, not the slices. A minimal sketch of this shift-merge-filter logic is shown below.
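The core logic can be approximated with supervision primitives. This is an illustrative sketch, not the block's actual implementation: it assumes each slice's predictions arrive as an `sv.Detections` object together with the (x, y) top-left offset of that slice in the original image (which the real block reads from parent-coordinate metadata), and it handles bounding boxes only (masks would additionally need to be pasted into a full-size canvas):

```python
import copy

import numpy as np
import supervision as sv


def stitch_detections(slice_predictions, offsets, iou_threshold=0.3):
    """Shift per-slice detections into original-image space, merge, de-duplicate.

    slice_predictions: list of sv.Detections, one per slice/crop.
    offsets: list of (x_min, y_min) top-left corners of each slice in the
             original image (assumed inputs for this sketch).
    """
    shifted = []
    for detections, (x_min, y_min) in zip(slice_predictions, offsets):
        moved = copy.deepcopy(detections)
        # translate boxes from slice coordinates to original-image coordinates
        moved.xyxy = moved.xyxy + np.array([x_min, y_min, x_min, y_min])
        shifted.append(moved)
    # combine detections from every slice into a single sv.Detections object
    merged = sv.Detections.merge(shifted)
    # the "nms" strategy: drop lower-confidence duplicates from overlapping slices
    return merged.with_nms(threshold=iou_threshold)
```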
Common Use Cases¶
- SAHI Workflows: Complete the SAHI technique by merging detections from image slices back to original image coordinates (e.g., merge slice detections from SAHI processing, reconstruct full-image detections from slices, combine small object detection results), enabling SAHI detection workflows
- Multi-Stage Detection: Merge detections from secondary high-resolution models applied to dynamically cropped regions (e.g., coarse detection → crop → precise detection → merge, two-stage detection pipelines, hierarchical detection workflows), enabling multi-stage detection workflows
- Small Object Detection: Combine detection results from sliced images processed separately for small object detection (e.g., merge detections from aerial image slices, combine slice detection results, reconstruct detections from tiled images), enabling small object detection workflows
- High-Resolution Processing: Merge detections from high-resolution images processed in smaller chunks (e.g., merge detections from satellite image tiles, combine results from medical image regions, reconstruct detections from large image segments), enabling high-resolution detection workflows
- Coordinate Space Unification: Convert detections from multiple coordinate spaces (slice/crop space) to a single unified coordinate space (original image space) for consistent processing (e.g., unify detection coordinates, merge coordinate spaces, standardize detection positions), enabling coordinate unification workflows
- Overlapping Region Handling: Handle duplicate detections from overlapping slices or crops by applying overlap filtering (e.g., remove duplicate detections from overlapping slices, merge overlapping detections, clean up overlapping results), enabling overlap resolution workflows
Connecting to Other Blocks¶
This block receives slice/crop predictions and reference images, and produces merged detections; a workflow sketch of the canonical SAHI pattern follows this list:
- After detection models in SAHI workflows following Image Slicer → Detection Model → Detections Stitch pattern to merge slice detections (e.g., merge SAHI slice detections, reconstruct full-image detections, combine slice results), enabling SAHI completion workflows
- After secondary detection models in multi-stage pipelines following Dynamic Crop → Detection Model → Detections Stitch pattern to merge cropped detections (e.g., merge cropped region detections, combine two-stage detection results, unify multi-stage outputs), enabling multi-stage detection workflows
- Before visualization blocks to visualize merged detection results on the original image (e.g., visualize merged detections, display stitched results, show unified detection output), enabling visualization workflows
- Before filtering or analytics blocks to process merged detection results (e.g., filter merged detections, analyze stitched results, process unified outputs), enabling analysis workflows
- Before sink or storage blocks to store or export merged detection results (e.g., save merged detections, export stitched results, store unified outputs), enabling storage workflows
- In workflow outputs to provide merged detections as final workflow output (e.g., return merged detections, output stitched results, provide unified detection output), enabling output workflows
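To make the Image Slicer → Detection Model → Detections Stitch pattern concrete, here is a minimal workflow-specification sketch, written as a Python dict. The detections_stitch step mirrors the example at the bottom of this page; the slicer and model steps (their field names and the model_id placeholder) are illustrative and should be checked against those blocks' own documentation:

```python
# A minimal SAHI-style workflow specification (sketch, not a verified spec).
SAHI_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # slice the input image into overlapping tiles (fields assumed)
            "type": "roboflow_core/image_slicer@v1",
            "name": "slicer",
            "image": "$inputs.image",
        },
        {
            # run detection on every slice (fields and model_id assumed)
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "detection",
            "images": "$steps.slicer.slices",
            "model_id": "your-project/1",
        },
        {
            # merge slice detections back into original-image coordinates
            "type": "roboflow_core/detections_stitch@v1",
            "name": "stitch",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detection.predictions",
            "overlap_filtering_strategy": "nms",
            "iou_threshold": 0.3,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.stitch.predictions",
        }
    ],
}
```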
Requirements¶
This block requires a reference image (the original image that was sliced/cropped) and predictions from detection models that processed the slices/crops. The predictions must contain parent coordinate metadata (PARENT_COORDINATES_KEY) indicating the position of each slice/crop in the original image. The block does not support scaled detections (detections that were resized relative to the parent image). Predictions should come from object detection or instance segmentation models. The block supports three overlap filtering strategies: "none" (no filtering, may include duplicates), "nms" (Non-Maximum Suppression, removes lower-confidence overlapping detections; the default), and "nmm" (Non-Maximum Merge, combines overlapping detections). The IoU threshold (default 0.3) determines when detections are considered overlapping for filtering purposes; the worked example below shows how the threshold interacts with each strategy. For more information on the SAHI technique, see: https://ieeexplore.ieee.org/document/9897990.
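To make the threshold concrete, here is a small, self-contained worked example (the box coordinates and confidences are made up for illustration):

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes in original-image coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# the same object detected in two overlapping slices, after shifting
# both boxes into original-image coordinates:
box_a = (100, 100, 200, 200)  # confidence 0.9
box_b = (110, 105, 205, 200)  # confidence 0.7

print(round(iou(box_a, box_b), 2))  # 0.82 -> above the default 0.3 threshold
# "nms":  keep only box_a (the higher-confidence detection)
# "nmm":  merge the pair, e.g. into the enclosing box (100, 100, 205, 200)
# "none": keep both; downstream steps must tolerate duplicates
```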
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/detections_stitch@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `overlap_filtering_strategy` | `str` | Strategy for handling overlapping detections when merging results from overlapping slices/crops. 'none': no filtering applied, all detections are kept (may include duplicates from overlapping regions). 'nms' (Non-Maximum Suppression, default): removes lower-confidence detections when IoU exceeds the threshold, keeping only the highest-confidence detection for each overlapping region. 'nmm' (Non-Maximum Merge): combines overlapping detections instead of discarding them, merging detections that exceed the IoU threshold. Use 'none' to preserve all detections, 'nms' to remove duplicates (recommended for most cases), or 'nmm' to combine overlapping detections. | ✅ |
| `iou_threshold` | `float` | Intersection over Union (IoU) threshold for overlap filtering. Range: 0.0 to 1.0. When the overlap filtering strategy is 'nms' or 'nmm', detections with IoU above this threshold are considered overlapping: for NMS, the lower-confidence detection is removed; for NMM, the overlapping detections are merged. Lower values (e.g., 0.2-0.3) are more aggressive, removing/merging more detections; higher values (e.g., 0.5-0.7) are more permissive, only handling highly overlapping detections. The default of 0.3 works well for most use cases with overlapping slices. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Stitch in version v1.
- inputs:
Anthropic Claude, Mask Visualization, Classification Label Visualization, Detections Consensus, Detections Merge, Instance Segmentation Model, Webhook Sink, Multi-Label Classification Model, Dynamic Zone, Email Notification, QR Code Generator, Dynamic Crop, VLM As Detector, VLM As Detector, Google Gemini, LMM, SAM 3, Path Deviation, Image Blur, Detection Offset, Corner Visualization, Image Convert Grayscale, Line Counter, Byte Tracker, Stability AI Outpainting, Segment Anything 2 Model, Halo Visualization, Stability AI Inpainting, Object Detection Model, Template Matching, Image Contours, Path Deviation, Trace Visualization, Google Vision OCR, Morphological Transformation, Triangle Visualization, Bounding Rectangle, Detections Stitch, Instance Segmentation Model, Relative Static Crop, CSV Formatter, Text Display, Stitch Images, Detections Filter, Camera Calibration, Grid Visualization, Google Gemini, Detections Stabilizer, Local File Sink, Slack Notification, VLM As Classifier, PTZ Tracking (ONVIF), Camera Focus, Roboflow Dataset Upload, Color Visualization, Dot Visualization, Image Slicer, Polygon Visualization, Detections Combine, Object Detection Model, Anthropic Claude, LMM For Classification, Line Counter Visualization, Llama 3.2 Vision, Keypoint Detection Model, Byte Tracker, Contrast Equalization, Identify Changes, Detections Classes Replacement, SIFT Comparison, Camera Focus, Time in Zone, Background Subtraction, Velocity, Image Slicer, Circle Visualization, Moondream2, Seg Preview, Identify Outliers, Halo Visualization, Florence-2 Model, Blur Visualization, Florence-2 Model, Label Visualization, Twilio SMS/MMS Notification, Clip Comparison, Email Notification, Ellipse Visualization, OpenAI, SIFT, Byte Tracker, Image Preprocessing, SAM 3, Model Monitoring Inference Aggregator, Single-Label Classification Model, Detections List Roll-Up, OpenAI, Image Threshold, Background Color Visualization, Model Comparison Visualization, Depth Estimation, OpenAI, Time in Zone, CogVLM, Absolute Static Crop, Roboflow Custom Metadata, EasyOCR, Stitch OCR Detections, Perspective Correction, Anthropic Claude, Pixelate Visualization, Stability AI Image Generation, Reference Path Visualization, Google Gemini, Keypoint Visualization, Polygon Visualization, SAM 3, Twilio SMS Notification, Bounding Box Visualization, Detection Event Log, Polygon Zone Visualization, OCR Model, YOLO-World Model, Overlap Filter, Icon Visualization, Crop Visualization, Time in Zone, Stitch OCR Detections, Motion Detection, OpenAI, Detections Transformation, Roboflow Dataset Upload
- outputs:
Mask Visualization, Circle Visualization, Detections Consensus, Detections Merge, Halo Visualization, Dynamic Zone, Florence-2 Model, Blur Visualization, Dynamic Crop, Label Visualization, Path Deviation, Detection Offset, Corner Visualization, Ellipse Visualization, Byte Tracker, Byte Tracker, Line Counter, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Halo Visualization, Stability AI Inpainting, Detections List Roll-Up, Model Comparison Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Size Measurement, Time in Zone, Line Counter, Triangle Visualization, Bounding Rectangle, Detections Stitch, Detections Filter, Roboflow Custom Metadata, Detections Stabilizer, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Camera Focus, Stitch OCR Detections, Perspective Correction, Color Visualization, Detections Combine, Dot Visualization, Polygon Visualization, Pixelate Visualization, Polygon Visualization, Bounding Box Visualization, Byte Tracker, Detection Event Log, Distance Measurement, Detections Transformation, Detections Classes Replacement, Overlap Filter, Icon Visualization, Crop Visualization, Time in Zone, Stitch OCR Detections, Time in Zone, Velocity, Florence-2 Model, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detections Stitch in version v1 has.
Bindings
- input:
    - `reference_image` (image): Original reference image that was sliced or cropped to produce the input predictions. This image determines the target coordinate space and image dimensions for the merged detections; all detection coordinates are transformed to match its coordinate system. Use the same image that was provided to the Image Slicer or Dynamic Crop block to ensure proper coordinate alignment.
    - `predictions` (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions (object detection or instance segmentation) from detection models that processed image slices or crops. These predictions must contain parent coordinate metadata indicating the position of each slice/crop in the original image. Predictions are collected from multiple slices/crops and merged into a single unified detection result; the block converts coordinates from slice/crop space to original-image space and combines all detections.
    - `overlap_filtering_strategy` (string): Strategy for handling overlapping detections when merging results from overlapping slices/crops ('none', 'nms', or 'nmm'); see the Properties table above.
    - `iou_threshold` (float_zero_to_one): IoU threshold for overlap filtering, range 0.0 to 1.0; see the Properties table above.
- output:
    - `predictions` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction in the form of an sv.Detections(...) object: detected bounding boxes if object_detection_prediction, or detected bounding boxes and segmentation masks if instance_segmentation_prediction.
Example JSON definition of step Detections Stitch in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detections_stitch@v1",
    "reference_image": "$inputs.image",
    "predictions": "$steps.object_detection.predictions",
    "overlap_filtering_strategy": "none",
    "iou_threshold": 0.2
}
```
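A hypothetical invocation of a workflow containing this step through the inference SDK might look like the sketch below. `InferenceHTTPClient` and `run_workflow` exist in `inference_sdk`, but the exact keyword arguments may differ between SDK versions, so treat this as orientation rather than a verified recipe:

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # placeholder: your inference server
    api_key="YOUR_API_KEY",           # placeholder
)

result = client.run_workflow(
    specification=SAHI_WORKFLOW,            # e.g. the dict sketched earlier
    images={"image": "path/to/image.jpg"},  # placeholder image path
)

# one result entry per input image; "predictions" holds the stitched
# detections in original-image coordinates
stitched = result[0]["predictions"]
```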