Image Slicer¶
v2¶
Class: ImageSlicerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Split an input image into overlapping, equal-sized tiles using the Slicing Aided Hyper Inference (SAHI) technique, with duplicate slices removed. Processing smaller regions makes small objects appear larger relative to the model input, improving detection accuracy for small objects in large images in tiled inference workflows.
How This Block Works¶
This block implements the first step of the SAHI (Slicing Aided Hyper Inference) technique by dividing large images into smaller overlapping tiles, which helps detect small objects that might be missed when processing the entire image at once. The block:
- Receives an input image and slicing configuration:
    - Takes an input image to be sliced
    - Receives slice dimensions (width and height in pixels)
    - Receives overlap ratios for width and height (controls overlap between adjacent slices)
- Calculates slice positions:
    - Generates a grid of slice coordinates across the image
    - Positions slices with the specified overlap between consecutive slices
    - Overlap helps ensure objects at slice boundaries are not missed
    - Adjusts border slice positions so all slices are equal size (pushes border slices toward the image center)
- Creates image slices:
    - Extracts each slice from the original image using the calculated coordinates
    - Creates WorkflowImageData objects for each slice with crop metadata
    - Stores offset information (x, y coordinates) for each slice relative to the original image
    - Maintains a parent image reference for coordinate mapping
- Deduplicates slices:
    - Removes duplicate slice coordinates that may occur from overlap calculations
    - Ensures each unique slice position appears only once in the output
    - Prevents redundant processing of identical image regions
- Handles edge cases:
    - Filters out empty slices (if any occur)
    - Ensures all slices fit within image boundaries
    - Creates crop identifiers for tracking each slice
- Returns a list of slices:
    - Outputs all unique slices as a list of images
    - All slices have equal dimensions (border slices adjusted to match)
    - Increases dimensionality by 1 (one image becomes multiple slices)
    - Each slice can be processed independently by downstream blocks
The SAHI technique works by making small objects appear larger relative to the slice size. When an object is only a few pixels in a large image, scaling the image down to model input size makes the object too small to detect. By slicing the image and processing each slice separately, the same object occupies more pixels in each slice, making detection more reliable. Overlapping slices ensure objects near slice boundaries are detected in at least one slice.
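The coordinate logic can be pictured with a short sketch. This is a minimal illustration of the v2 behavior described above (equal-sized slices, border slices pushed toward the image center, deduplication), not the block's actual source code:

```python
def generate_slice_offsets(
    image_wh: tuple[int, int],
    slice_wh: tuple[int, int] = (640, 640),
    overlap_ratio_wh: tuple[float, float] = (0.2, 0.2),
) -> list[tuple[int, int]]:
    """Approximate v2 slice-offset generation: equal-sized slices plus dedup."""
    image_w, image_h = image_wh
    slice_w, slice_h = slice_wh
    # The overlap ratio shrinks the stride between consecutive slices.
    stride_w = max(1, int(slice_w * (1 - overlap_ratio_wh[0])))
    stride_h = max(1, int(slice_h * (1 - overlap_ratio_wh[1])))
    offsets = []
    for y in range(0, image_h, stride_h):
        # Push border slices toward the image center so every slice
        # keeps the full slice_w x slice_h size.
        y = min(y, max(0, image_h - slice_h))
        for x in range(0, image_w, stride_w):
            x = min(x, max(0, image_w - slice_w))
            offsets.append((x, y))
    # Deduplicate: the border adjustment can map several grid cells
    # onto the same (x, y) position.
    return sorted(set(offsets))

# 1920x1080 image with the defaults: stride is 512 in both dimensions.
print(generate_slice_offsets((1920, 1080)))
```

With these numbers the raw grid has 4 x 3 positions, but the two bottom rows collapse onto y = 440 after the border adjustment, so deduplication leaves a 4 x 2 grid of eight 640x640 slices.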
Common Use Cases¶
- Small Object Detection: Detect small objects in large images using the SAHI technique (e.g., detect small vehicles in aerial images, find license plates in wide-angle camera views, detect insects in high-resolution photos)
- High-Resolution Image Processing: Process high-resolution images by slicing them into manageable pieces (e.g., process satellite imagery, analyze medical imaging scans, process large document images)
- Aerial and Drone Imagery: Detect objects in aerial photography where objects are small relative to image size (e.g., detect vehicles in drone footage, find people in aerial surveillance, detect structures in satellite images)
- Wide-Angle Camera Monitoring: Improve detection in wide-angle camera views where objects appear small (e.g., monitor large parking lots, detect objects in panoramic views, analyze traffic in wide camera coverage)
- Medical Imaging Analysis: Analyze medical images by processing regions separately (e.g., detect lesions in large scans, find anomalies in medical images, analyze radiology images)
- Document and Text Processing: Process large documents by slicing them into regions (e.g., OCR large documents, detect text regions in scanned documents, analyze document layouts)
Connecting to Other Blocks¶
This block receives images and produces image slices:
- After image input or preprocessing blocks, to slice images for SAHI processing
- Before detection model blocks (Object Detection Model, Instance Segmentation Model), so each slice is processed for small object detection
- Before the Detections Stitch block (required after the detection models), to merge detections from slices back into original-image coordinates
- In SAHI workflows following the pattern Image Slicer → Detection Model → Detections Stitch, which implements the complete SAHI technique for small object detection (see the sketch after this list)
- Before filtering or analytics blocks, to process slice-level results before stitching (e.g., filter detections per slice, analyze slice results)
- As part of multi-stage detection pipelines where slices are processed independently and results are combined (e.g., multi-scale detection, hierarchical detection, parallel slice processing)
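A minimal sketch of that slicer → detect → stitch pattern follows. The Image Slicer step matches this page; the type identifiers and fields of the detection and stitch steps are assumptions for illustration, so check the Object Detection Model and Detections Stitch documentation for their exact schemas:

```json
{
  "steps": [
    {
      "type": "roboflow_core/image_slicer@v2",
      "name": "slicer",
      "image": "$inputs.image"
    },
    {
      "type": "roboflow_core/roboflow_object_detection_model@v2",
      "name": "detection",
      "images": "$steps.slicer.slices",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/detections_stitch@v1",
      "name": "stitch",
      "reference_image": "$inputs.image",
      "predictions": "$steps.detection.predictions"
    }
  ]
}
```

Because the slicer increases dimensionality by 1, the detection step runs once per slice, and the stitch step reduces the results back to one prediction set per input image.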
Version Differences¶
This version (v2) includes the following enhancements over v1:
- Equal-Sized Slices: All slices generated by the slicer have equal dimensions. Border slices that would normally be smaller in v1 are adjusted by pushing them toward the image center, ensuring consistent slice sizes. This provides more predictable processing behavior and ensures all slices are processed with the same dimensions, which can be important for model inference consistency.
- Deduplication: Duplicate slice coordinates are automatically removed, ensuring each unique slice position appears only once in the output. This prevents redundant processing of identical image regions that could occur due to overlap calculations, improving efficiency and preventing duplicate detections.
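As a concrete example of both changes: slicing a 1000x1000 image with 640x640 slices and 0.2 overlap gives a stride of 512 pixels. v1 produces slices starting at offsets 0 and 512 on each axis, so the border slices are only 488 pixels wide and tall; v2 instead shifts the border slices back to start at offset 360, so every slice is a full 640x640, and any offsets that collapse onto the same position are removed.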
Requirements¶
This block requires an input image. The slice dimensions (width and height) should ideally match the model's input size; if they differ, slices will be resized during inference, which may affect accuracy. The default slice size is 640x640 pixels, but adjust it to your model's input size (e.g., 320x320 for models with 320px input, 1280x1280 for models with 1280px input). Overlap ratios (default 0.2, i.e., 20%) help ensure objects at slice boundaries are detected, but higher overlap increases processing time. Use this block with object detection or instance segmentation models, followed by the Detections Stitch block to merge results. For more information on the SAHI technique, see: https://ieeexplore.ieee.org/document/9897990. For a practical guide, visit: https://blog.roboflow.com/how-to-use-sahi-to-detect-small-objects/.
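Since every slice is a separate model inference, it is worth estimating the slice count before choosing an overlap ratio. A rough per-axis estimate, consistent with the equal-stride tiling described above (illustrative only):

```python
import math

def approx_slice_count(image_dim: int, slice_dim: int, overlap: float) -> int:
    """Rough per-axis slice count for equal-stride tiling."""
    stride = int(slice_dim * (1 - overlap))
    return 1 + max(0, math.ceil((image_dim - slice_dim) / stride))

for overlap in (0.1, 0.2, 0.5):
    cols = approx_slice_count(3840, 640, overlap)
    rows = approx_slice_count(2160, 640, overlap)
    print(f"overlap={overlap}: {cols} x {rows} = {cols * rows} slices")
```

For a 3840x2160 image with 640x640 slices this prints 28, 32, and 66 slices for overlaps of 0.1, 0.2, and 0.5, so raising the overlap from 0.2 to 0.5 roughly doubles the per-image inference cost.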
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice in pixels. Should ideally match your detection model's input width for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. All slices will have equal width (border slices adjusted to match). | ✅ |
| slice_height | int | Height of each slice in pixels. Should ideally match your detection model's input height for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. All slices will have equal height (border slices adjusted to match). | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width (horizontal) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice width overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time since more slices are created; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. Duplicate slices created by overlap are automatically removed. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height (vertical) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice height overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time since more slices are created; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. Duplicate slices created by overlap are automatically removed. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs: Contrast Equalization, Image Contours, Image Slicer, Detections Consensus, Depth Estimation, SIFT Comparison, Clip Comparison, Pixel Color Count, Polygon Visualization, QR Code Generator, Image Blur, Stitch Images, Dynamic Crop, Bounding Box Visualization, Text Display, Detection Event Log, Model Comparison Visualization, Camera Focus, SIFT, Line Counter Visualization, Blur Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Line Counter, Mask Visualization, Relative Static Crop, Keypoint Visualization, Distance Measurement, Circle Visualization, Trace Visualization, Pixelate Visualization, Color Visualization, Absolute Static Crop, Stability AI Inpainting, Reference Path Visualization, Dot Visualization, Identify Outliers, Label Visualization, Perspective Correction, Ellipse Visualization, Crop Visualization, Halo Visualization, Image Threshold, Grid Visualization, Template Matching, Image Convert Grayscale, Corner Visualization, Image Preprocessing, Classification Label Visualization, Background Color Visualization, Stability AI Outpainting, Identify Changes, Icon Visualization, Triangle Visualization, Stability AI Image Generation, Background Subtraction
- outputs: Contrast Equalization, Llama 3.2 Vision, Clip Comparison, Anthropic Claude, VLM as Detector, Polygon Visualization, Image Blur, SIFT Comparison, SmolVLM2, CLIP Embedding Model, Roboflow Dataset Upload, Text Display, Motion Detection, SIFT, Model Comparison Visualization, Camera Focus, Moondream2, LMM, Qwen3-VL, Single-Label Classification Model, Google Vision OCR, SAM 3, Relative Static Crop, Mask Visualization, Object Detection Model, Keypoint Detection Model, Circle Visualization, Seg Preview, EasyOCR, Pixelate Visualization, Stability AI Inpainting, Multi-Label Classification Model, Time in Zone, VLM as Classifier, Reference Path Visualization, Instance Segmentation Model, Perspective Correction, Halo Visualization, Image Threshold, Ellipse Visualization, Crop Visualization, Florence-2 Model, Detections Stabilizer, Image Convert Grayscale, Perception Encoder Embedding Model, Corner Visualization, Image Preprocessing, Barcode Detection, Icon Visualization, Background Subtraction, Segment Anything 2 Model, Qwen2.5-VL, Image Slicer, Image Contours, Depth Estimation, Pixel Color Count, Detections Stitch, Stitch Images, QR Code Detection, Dynamic Crop, Bounding Box Visualization, YOLO-World Model, Line Counter Visualization, Blur Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Email Notification, Stability AI Image Generation, Dominant Color, OCR Model, Keypoint Visualization, Google Gemini, OpenAI, Trace Visualization, CogVLM, Absolute Static Crop, Color Visualization, Dot Visualization, Label Visualization, Buffer, LMM For Classification, Template Matching, Classification Label Visualization, Background Color Visualization, Stability AI Outpainting, Byte Tracker, Twilio SMS/MMS Notification, Gaze Detection, Triangle Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Image Slicer in version v2 are listed below.
Bindings

- input
    - image (image): Input image to be sliced into smaller tiles. The image will be divided into overlapping slices based on the slice dimensions and overlap ratios. Each slice maintains metadata about its position in the original image for coordinate mapping. All slices will have equal dimensions (border slices are adjusted to match). Used in SAHI (Slicing Aided Hyper Inference) workflows to enable small object detection by processing image regions separately.
    - slice_width (integer): Width of each slice in pixels. Should ideally match your detection model's input width for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. All slices will have equal width (border slices adjusted to match).
    - slice_height (integer): Height of each slice in pixels. Should ideally match your detection model's input height for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. All slices will have equal height (border slices adjusted to match).
    - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width (horizontal) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice width overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. Duplicate slices created by overlap are automatically removed.
    - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height (vertical) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice height overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. Duplicate slices created by overlap are automatically removed.
- output
    - slices (image): List of equal-sized image slices cropped from the input image.
Example JSON definition of step Image Slicer in version v2:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/image_slicer@v2",
  "image": "$inputs.image",
  "slice_width": 320,
  "slice_height": 320,
  "overlap_ratio_width": 0.1,
  "overlap_ratio_height": 0.1
}
```
v1¶
Class: ImageSlicerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Split an input image into overlapping tiles using the Slicing Aided Hyper Inference (SAHI) technique. Processing smaller regions makes small objects appear larger relative to the model input, improving detection accuracy for small objects in large images in tiled inference workflows.
How This Block Works¶
This block implements the first step of the SAHI (Slicing Aided Hyper Inference) technique by dividing large images into smaller overlapping tiles, which helps detect small objects that might be missed when processing the entire image at once. The block:
- Receives an input image and slicing configuration:
    - Takes an input image to be sliced
    - Receives slice dimensions (width and height in pixels)
    - Receives overlap ratios for width and height (controls overlap between adjacent slices)
- Calculates slice positions:
    - Generates a grid of slice coordinates across the image
    - Positions slices with the specified overlap between consecutive slices
    - Overlap helps ensure objects at slice boundaries are not missed
    - Border slices may be smaller than the specified size to fit within image bounds
- Creates image slices:
    - Extracts each slice from the original image using the calculated coordinates
    - Creates WorkflowImageData objects for each slice with crop metadata
    - Stores offset information (x, y coordinates) for each slice relative to the original image
    - Maintains a parent image reference for coordinate mapping
- Handles edge cases:
    - Filters out empty slices (if any occur)
    - Ensures all slices fit within image boundaries
    - Creates crop identifiers for tracking each slice
- Returns a list of slices:
    - Outputs all slices as a list of images
    - Increases dimensionality by 1 (one image becomes multiple slices)
    - Each slice can be processed independently by downstream blocks
The SAHI technique works by making small objects appear larger relative to the slice size. When an object is only a few pixels in a large image, scaling the image down to model input size makes the object too small to detect. By slicing the image and processing each slice separately, the same object occupies more pixels in each slice, making detection more reliable. Overlapping slices ensure objects near slice boundaries are detected in at least one slice.
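For contrast with v2, here is a similar minimal sketch of the v1 behavior described above, where border slices are clamped to the image bounds and may come out smaller than the requested size (an illustration, not the block's actual source code):

```python
def generate_v1_slices(
    image_wh: tuple[int, int],
    slice_wh: tuple[int, int] = (640, 640),
    overlap_ratio_wh: tuple[float, float] = (0.2, 0.2),
) -> list[tuple[int, int, int, int]]:
    """Approximate v1 tiling: (x, y, w, h) boxes, border slices clamped."""
    image_w, image_h = image_wh
    slice_w, slice_h = slice_wh
    stride_w = max(1, int(slice_w * (1 - overlap_ratio_wh[0])))
    stride_h = max(1, int(slice_h * (1 - overlap_ratio_wh[1])))
    boxes = []
    for y in range(0, image_h, stride_h):
        for x in range(0, image_w, stride_w):
            # Clamp to the image bounds: border slices keep their origin
            # but may end up narrower or shorter than slice_w x slice_h.
            w = min(slice_w, image_w - x)
            h = min(slice_h, image_h - y)
            if w > 0 and h > 0:
                boxes.append((x, y, w, h))
    return boxes

print(generate_v1_slices((1000, 1000)))
```

Running this on a 1000x1000 image with the defaults yields boxes of 640x640, 488x640, 640x488, and 488x488, matching the v1 border-slice behavior noted in the v2 Version Differences section.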
Common Use Cases¶
- Small Object Detection: Detect small objects in large images using the SAHI technique (e.g., detect small vehicles in aerial images, find license plates in wide-angle camera views, detect insects in high-resolution photos)
- High-Resolution Image Processing: Process high-resolution images by slicing them into manageable pieces (e.g., process satellite imagery, analyze medical imaging scans, process large document images)
- Aerial and Drone Imagery: Detect objects in aerial photography where objects are small relative to image size (e.g., detect vehicles in drone footage, find people in aerial surveillance, detect structures in satellite images)
- Wide-Angle Camera Monitoring: Improve detection in wide-angle camera views where objects appear small (e.g., monitor large parking lots, detect objects in panoramic views, analyze traffic in wide camera coverage)
- Medical Imaging Analysis: Analyze medical images by processing regions separately (e.g., detect lesions in large scans, find anomalies in medical images, analyze radiology images)
- Document and Text Processing: Process large documents by slicing them into regions (e.g., OCR large documents, detect text regions in scanned documents, analyze document layouts)
Connecting to Other Blocks¶
This block receives images and produces image slices:
- After image input or preprocessing blocks, to slice images for SAHI processing
- Before detection model blocks (Object Detection Model, Instance Segmentation Model), so each slice is processed for small object detection
- Before the Detections Stitch block (required after the detection models), to merge detections from slices back into original-image coordinates
- In SAHI workflows following the pattern Image Slicer → Detection Model → Detections Stitch, which implements the complete SAHI technique for small object detection
- Before filtering or analytics blocks, to process slice-level results before stitching (e.g., filter detections per slice, analyze slice results)
- As part of multi-stage detection pipelines where slices are processed independently and results are combined (e.g., multi-scale detection, hierarchical detection, parallel slice processing)
Requirements¶
This block requires an input image. The slice dimensions (width and height) should ideally match the model's input size; if they differ, slices will be resized during inference, which may affect accuracy. The default slice size is 640x640 pixels, but adjust it to your model's input size (e.g., 320x320 for models with 320px input, 1280x1280 for models with 1280px input). Overlap ratios (default 0.2, i.e., 20%) help ensure objects at slice boundaries are detected, but higher overlap increases processing time. Use this block with object detection or instance segmentation models, followed by the Detections Stitch block to merge results. For more information on the SAHI technique, see: https://ieeexplore.ieee.org/document/9897990. For a practical guide, visit: https://blog.roboflow.com/how-to-use-sahi-to-detect-small-objects/.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice in pixels. Should ideally match your detection model's input width for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. | ✅ |
| slice_height | int | Height of each slice in pixels. Should ideally match your detection model's input height for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width (horizontal) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice width overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time since more slices are created; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height (vertical) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice height overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time since more slices are created; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Contrast Equalization, Image Contours, Image Slicer, Detections Consensus, Depth Estimation, SIFT Comparison, Clip Comparison, Pixel Color Count, Polygon Visualization, QR Code Generator, Image Blur, Stitch Images, Dynamic Crop, Bounding Box Visualization, Text Display, Detection Event Log, Model Comparison Visualization, Camera Focus, SIFT, Line Counter Visualization, Blur Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Line Counter, Mask Visualization, Relative Static Crop, Keypoint Visualization, Distance Measurement, Circle Visualization, Trace Visualization, Pixelate Visualization, Color Visualization, Absolute Static Crop, Stability AI Inpainting, Reference Path Visualization, Dot Visualization, Identify Outliers, Label Visualization, Perspective Correction, Ellipse Visualization, Crop Visualization, Halo Visualization, Image Threshold, Grid Visualization, Template Matching, Image Convert Grayscale, Corner Visualization, Image Preprocessing, Classification Label Visualization, Background Color Visualization, Stability AI Outpainting, Identify Changes, Icon Visualization, Triangle Visualization, Stability AI Image Generation, Background Subtraction
- outputs: Contrast Equalization, Llama 3.2 Vision, Clip Comparison, Anthropic Claude, VLM as Detector, Polygon Visualization, Image Blur, SIFT Comparison, SmolVLM2, CLIP Embedding Model, Roboflow Dataset Upload, Text Display, Motion Detection, SIFT, Model Comparison Visualization, Camera Focus, Moondream2, LMM, Qwen3-VL, Single-Label Classification Model, Google Vision OCR, SAM 3, Relative Static Crop, Mask Visualization, Object Detection Model, Keypoint Detection Model, Circle Visualization, Seg Preview, EasyOCR, Pixelate Visualization, Stability AI Inpainting, Multi-Label Classification Model, Time in Zone, VLM as Classifier, Reference Path Visualization, Instance Segmentation Model, Perspective Correction, Halo Visualization, Image Threshold, Ellipse Visualization, Crop Visualization, Florence-2 Model, Detections Stabilizer, Image Convert Grayscale, Perception Encoder Embedding Model, Corner Visualization, Image Preprocessing, Barcode Detection, Icon Visualization, Background Subtraction, Segment Anything 2 Model, Qwen2.5-VL, Image Slicer, Image Contours, Depth Estimation, Pixel Color Count, Detections Stitch, Stitch Images, QR Code Detection, Dynamic Crop, Bounding Box Visualization, YOLO-World Model, Line Counter Visualization, Blur Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Email Notification, Stability AI Image Generation, Dominant Color, OCR Model, Keypoint Visualization, Google Gemini, OpenAI, Trace Visualization, CogVLM, Absolute Static Crop, Color Visualization, Dot Visualization, Label Visualization, Buffer, LMM For Classification, Template Matching, Classification Label Visualization, Background Color Visualization, Stability AI Outpainting, Byte Tracker, Twilio SMS/MMS Notification, Gaze Detection, Triangle Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Image Slicer in version v1 are listed below.
Bindings

- input
    - image (image): Input image to be sliced into smaller tiles. The image will be divided into overlapping slices based on the slice dimensions and overlap ratios. Each slice maintains metadata about its position in the original image for coordinate mapping. Used in SAHI (Slicing Aided Hyper Inference) workflows to enable small object detection by processing image regions separately.
    - slice_width (integer): Width of each slice in pixels. Should ideally match your detection model's input width for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time.
    - slice_height (integer): Height of each slice in pixels. Should ideally match your detection model's input height for optimal performance; if different, slices will be resized during model inference, which may affect accuracy. Common values: 320 (for models with 320px input), 640 (default, for most YOLO models), 1280 (for high-resolution models). Larger slices mean fewer slices to process but may miss very small objects; smaller slices detect smaller objects but increase processing time.
    - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width (horizontal) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice width overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection.
    - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height (vertical) dimension. Range: 0.0 to <1.0. Specifies what fraction of the slice height overlaps with adjacent slices; the default 0.2 means 20% overlap. Higher overlap (e.g., 0.3-0.5) makes objects at slice boundaries more likely to be detected but increases processing time; lower overlap (e.g., 0.1) is faster but may miss objects at boundaries. Typical values: 0.1-0.3 for most use cases, 0.3-0.5 for critical small object detection.
- output
    - slices (image): List of image slices cropped from the input image.
Example JSON definition of step Image Slicer in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/image_slicer@v1",
  "image": "$inputs.image",
  "slice_width": 320,
  "slice_height": 320,
  "overlap_ratio_width": 0.1,
  "overlap_ratio_height": 0.1
}
```