
Detections Stitch

Class: DetectionsStitchBlockV1

Source: inference.core.workflows.core_steps.fusion.detections_stitch.v1.DetectionsStitchBlockV1

Merge detections from multiple image slices or crops back into a single unified detection result. The block converts coordinates from slice/crop space to original image coordinates, combines all detections, and optionally filters overlapping detections. This enables SAHI workflows, multi-stage detection pipelines, and other coordinate-space merging workflows where detections from sub-images need to be reconstructed as if they were detected on the original image.

How This Block Works

This block merges detections that were made on multiple sub-parts (slices or crops) of the same input image, reconstructing them as a single detection result in the original image coordinate space. The block:

  1. Receives reference image and slice/crop predictions:
     • Takes the original reference image that was sliced or cropped
     • Receives predictions from detection models that processed each slice/crop
     • Predictions must contain parent coordinate metadata indicating slice/crop position
  2. Retrieves crop offsets for each detection:
     • Extracts parent coordinates from each detection's metadata
     • Gets the offset (x, y position) indicating where each slice/crop was located in the original image
     • Uses this offset to transform coordinates from slice space to original image space
  3. Manages crop metadata:
     • Updates image dimensions in detection metadata to match reference image dimensions
     • Validates that detections were not scaled (scaled detections are not supported)
     • Attaches parent coordinate information to detections for proper coordinate transformation
  4. Transforms coordinates to original image space:
     • Moves bounding box coordinates (xyxy) from slice/crop coordinates to original image coordinates
     • Transforms segmentation masks from slice/crop space to original image space (if present)
     • Applies the offset to align detections with their position in the original image
  5. Merges all transformed detections:
     • Combines all re-aligned detections from all slices/crops into a single detection result
     • Creates a unified detection output containing all detections from all sub-images
  6. Applies overlap filtering (optional):
     • None strategy: returns all merged detections without filtering (may contain duplicates from overlapping slices)
     • NMS (Non-Maximum Suppression): removes lower-confidence detections when IoU exceeds the threshold, keeping only the highest-confidence detection for each overlapping region
     • NMM (Non-Maximum Merge): combines overlapping detections instead of discarding them, merging detections that exceed the IoU threshold
  7. Returns merged detections:
     • Outputs a unified detection result in original image coordinate space
     • Reduces dimensionality by 1 (multiple slice detections → single image detections)
     • All detections are now referenced to the original image dimensions and coordinates

This block is essential for SAHI (Slicing Aided Hyper Inference) workflows, where an image is sliced, each slice is processed separately, and results need to be merged back. Overlapping slices can produce duplicate detections for the same object, so overlap filtering (NMS/NMM) helps clean up these duplicates. The coordinate transformation ensures that detection coordinates are correctly positioned relative to the original image, not the slices.
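At its core, the stitching step is a coordinate shift followed by a merge and optional de-duplication. The sketch below illustrates the idea with the supervision library; it is a minimal approximation of the behavior described above, not the block's actual implementation. It assumes each slice's (x, y) offset in the original image is already known, and it omits segmentation mask shifting for brevity:

import numpy as np
import supervision as sv

def stitch(slice_detections, offsets, iou_threshold=0.3):
    """Shift per-slice detections into original-image coordinates, merge, de-duplicate."""
    shifted = []
    for detections, (offset_x, offset_y) in zip(slice_detections, offsets):
        shift = np.array([offset_x, offset_y, offset_x, offset_y])
        shifted.append(
            sv.Detections(
                xyxy=detections.xyxy + shift,  # move boxes from slice space to image space
                confidence=detections.confidence,
                class_id=detections.class_id,
            )
        )
    merged = sv.Detections.merge(shifted)  # combine detections from all slices
    # NMS drops duplicate detections of the same object produced by overlapping slices
    return merged.with_nms(threshold=iou_threshold)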

Common Use Cases

  • SAHI Workflows: Complete the SAHI technique by merging detections from image slices back to original image coordinates (e.g., merge slice detections from SAHI processing, reconstruct full-image detections from slices, combine small object detection results)
  • Multi-Stage Detection: Merge detections from secondary high-resolution models applied to dynamically cropped regions (e.g., coarse detection → crop → precise detection → merge, two-stage detection pipelines, hierarchical detection workflows)
  • Small Object Detection: Combine detection results from sliced images processed separately for small object detection (e.g., merge detections from aerial image slices, reconstruct detections from tiled images)
  • High-Resolution Processing: Merge detections from high-resolution images processed in smaller chunks (e.g., satellite image tiles, medical image regions, large image segments)
  • Coordinate Space Unification: Convert detections from multiple coordinate spaces (slice/crop space) to a single unified coordinate space (original image space) for consistent processing
  • Overlapping Region Handling: Handle duplicate detections from overlapping slices or crops by applying overlap filtering (NMS to remove duplicates, NMM to merge them)

Connecting to Other Blocks

This block receives slice/crop predictions and reference images, and produces merged detections:

  • After detection models in SAHI workflows, following the Image Slicer → Detection Model → Detections Stitch pattern, to merge slice detections back into full-image detections (a sketch of this pattern follows this list)
  • After secondary detection models in multi-stage pipelines, following the Dynamic Crop → Detection Model → Detections Stitch pattern, to merge detections from cropped regions (e.g., coarse detection → crop → precise detection → merge)
  • Before visualization blocks to visualize merged detection results on the original image
  • Before filtering or analytics blocks to process merged detection results
  • Before sink or storage blocks to store or export merged detection results
  • In workflow outputs to provide merged detections as the final workflow output
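As a sketch of the SAHI pattern above, a workflow definition chains Image Slicer, a detection model, and Detections Stitch. The Detections Stitch fields below mirror the example at the bottom of this page; the Image Slicer type identifier, its slices output name, the detection model block's fields, and the model_id are assumptions to verify against those blocks' own documentation:

# Hypothetical SAHI workflow definition: slice -> detect -> stitch.
sahi_workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/image_slicer@v1",  # assumed identifier
            "name": "slicer",
            "image": "$inputs.image",
        },
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "detection",
            "image": "$steps.slicer.slices",  # assumed output field name
            "model_id": "<your_model_id>",
        },
        {
            "type": "roboflow_core/detections_stitch@v1",
            "name": "stitch",
            "reference_image": "$inputs.image",  # the same image given to the slicer
            "predictions": "$steps.detection.predictions",
            "overlap_filtering_strategy": "nms",
            "iou_threshold": 0.3,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "predictions", "selector": "$steps.stitch.predictions"}
    ],
}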

Requirements

This block requires:

  • A reference image: the original image that was sliced or cropped.
  • Predictions with parent coordinate metadata: predictions from the detection models that processed each slice/crop must contain parent coordinate metadata (PARENT_COORDINATES_KEY) indicating the position of that slice/crop in the original image.
  • Unscaled detections: the block does not support scaled detections (detections that were resized relative to the parent image).
  • Compatible prediction types: predictions should come from object detection or instance segmentation models.

The block supports three overlap filtering strategies: "none" (no filtering, may include duplicates), "nms" (Non-Maximum Suppression, removes lower-confidence overlapping detections, default), and "nmm" (Non-Maximum Merge, combines overlapping detections). The IoU threshold (default 0.3) determines when detections are considered overlapping for filtering purposes. For more information on the SAHI technique, see: https://ieeexplore.ieee.org/document/9897990.
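To make the IoU threshold concrete, the minimal computation below shows two boxes (in xyxy format) that might come from the same object detected in two overlapping slices; the box values are illustrative. With the default threshold of 0.3, this pair counts as overlapping, so 'nms' would drop the lower-confidence box and 'nmm' would merge the pair:

import numpy as np

def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

box_a = np.array([100.0, 100.0, 200.0, 200.0])
box_b = np.array([120.0, 110.0, 210.0, 205.0])
print(box_iou(box_a, box_b))  # ~0.63, above the 0.3 default -> treated as overlapping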

Type identifier

Use the following identifier in the step "type" field: roboflow_core/detections_stitch@v1 to add the block as a step in your workflow.

Properties

  • name (str): Enter a unique identifier for this step.
  • overlap_filtering_strategy (str): Strategy for handling overlapping detections when merging results from overlapping slices/crops. 'none': No filtering applied, all detections are kept (may include duplicates from overlapping regions). 'nms' (Non-Maximum Suppression, default): Removes lower-confidence detections when IoU exceeds threshold, keeping only the highest confidence detection for each overlapping region. 'nmm' (Non-Maximum Merge): Combines overlapping detections instead of discarding them, merging detections that exceed IoU threshold. Use 'none' when you want to preserve all detections, 'nms' to remove duplicates (recommended for most cases), or 'nmm' to combine overlapping detections.
  • iou_threshold (float): Intersection over Union (IoU) threshold for overlap filtering. Range: 0.0 to 1.0. When overlap filtering strategy is 'nms' or 'nmm', detections with IoU above this threshold are considered overlapping. For NMS: overlapping detections with IoU above threshold result in the lower-confidence detection being removed. For NMM: overlapping detections with IoU above threshold are merged. Lower values (e.g., 0.2-0.3) are more aggressive, removing/merging more detections. Higher values (e.g., 0.5-0.7) are more permissive, only handling highly overlapping detections. Default 0.3 works well for most use cases with overlapping slices.

Properties can be parametrised with dynamic values available in workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Detections Stitch in version v1.

Input and Output Bindings

The available connections depend on this block's binding kinds. Check what binding kinds Detections Stitch in version v1 has.

Bindings
  • input

    • reference_image (image): Original reference image that was sliced or cropped to produce the input predictions. This image is used to determine the target coordinate space and image dimensions for the merged detections. All detection coordinates will be transformed to match this reference image's coordinate system. The same image that was provided to Image Slicer or Dynamic Crop blocks should be used here to ensure proper coordinate alignment.
    • predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions (object detection or instance segmentation) from detection models that processed image slices or crops. These predictions must contain parent coordinate metadata indicating the position of each slice/crop in the original image. Predictions are collected from multiple slices/crops and merged into a single unified detection result. The block converts coordinates from slice/crop space to original image space and combines all detections.
    • overlap_filtering_strategy (string): Strategy for handling overlapping detections when merging results from overlapping slices/crops. 'none': No filtering applied, all detections are kept (may include duplicates from overlapping regions). 'nms' (Non-Maximum Suppression, default): Removes lower-confidence detections when IoU exceeds threshold, keeping only the highest confidence detection for each overlapping region. 'nmm' (Non-Maximum Merge): Combines overlapping detections instead of discarding them, merging detections that exceed IoU threshold. Use 'none' when you want to preserve all detections, 'nms' to remove duplicates (recommended for most cases), or 'nmm' to combine overlapping detections.
    • iou_threshold (float_zero_to_one): Intersection over Union (IoU) threshold for overlap filtering. Range: 0.0 to 1.0. When overlap filtering strategy is 'nms' or 'nmm', detections with IoU above this threshold are considered overlapping. For NMS: overlapping detections with IoU above threshold result in the lower-confidence detection being removed. For NMM: overlapping detections with IoU above threshold are merged. Lower values (e.g., 0.2-0.3) are more aggressive, removing/merging more detections. Higher values (e.g., 0.5-0.7) are more permissive, only handling highly overlapping detections. Default 0.3 works well for most use cases with overlapping slices.
  • output

    • predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Merged prediction in the form of an sv.Detections(...) object: detected bounding boxes for object_detection_prediction, or detected bounding boxes plus segmentation masks for instance_segmentation_prediction.
Example JSON definition of step Detections Stitch in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detections_stitch@v1",
    "reference_image": "$inputs.image",
    "predictions": "$steps.object_detection.predictions",
    "overlap_filtering_strategy": "none",
    "iou_threshold": 0.2
}