Image Slicer¶
v2¶
Class: ImageSlicerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images from this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data similar to what they encountered during training. The default slice size is 640, but this may not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice would be artificially up-scaled. The best setup should be determined experimentally for the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
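The slice → detect → stitch pipeline described above could be wired together roughly as follows. This is a hypothetical outline only: the type identifiers and property names of the detector and Detections Stitch steps are assumptions here, so confirm them against those blocks' own documentation pages before use.

```json
{
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/image_slicer@v2",
            "name": "slicer",
            "image": "$inputs.image",
            "slice_width": 640,
            "slice_height": 640
        },
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "detector",
            "images": "$steps.slicer.slices",
            "model_id": "<your_model_id_here>"
        },
        {
            "type": "roboflow_core/detections_stitch@v1",
            "name": "stitch",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detector.predictions"
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "predictions", "selector": "$steps.stitch.predictions"}
    ]
}
```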
Changes compared to v1¶
- All crops generated by the slicer are of equal size
- No duplicated crops are created
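The two v2 guarantees above can be illustrated with a short sketch. This is an approximation of the documented behaviour, not the block's actual implementation: slices that would overrun the image border are shifted back inside it so every crop keeps the full slice size (assuming the image is at least one slice in each dimension), and identical edge crops are dropped.

```python
def generate_slices(image_w, image_h, slice_w=640, slice_h=640,
                    overlap_ratio_w=0.2, overlap_ratio_h=0.2):
    """Return (x_min, y_min, x_max, y_max) boxes covering the image.

    Boxes that would cross the image border are shifted back inside it,
    so every box measures exactly slice_w x slice_h, and duplicated
    edge crops are removed.
    """
    stride_x = max(1, int(slice_w * (1 - overlap_ratio_w)))
    stride_y = max(1, int(slice_h * (1 - overlap_ratio_h)))
    boxes = []
    for y in range(0, image_h, stride_y):
        for x in range(0, image_w, stride_x):
            # Shift the origin back so the slice stays inside the image.
            x0 = min(x, max(0, image_w - slice_w))
            y0 = min(y, max(0, image_h - slice_h))
            box = (x0, y0, x0 + slice_w, y0 + slice_h)
            if box not in boxes:  # drop duplicated crops
                boxes.append(box)
            if x + slice_w >= image_w:
                break
        if y + slice_h >= image_h:
            break
    return boxes
```

For a 1000×1000 image with the defaults, this yields four 640×640 slices whose last row and column are pulled back to end exactly at the image border.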
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs:
Corner Visualization,Color Visualization,Image Slicer,Camera Calibration,Image Blur,QR Code Generator,Line Counter,Image Convert Grayscale,Dot Visualization,Image Threshold,Dynamic Crop,SIFT Comparison,Blur Visualization,Morphological Transformation,Label Visualization,Background Color Visualization,Bounding Box Visualization,SIFT,Classification Label Visualization,Camera Focus,Stability AI Outpainting,Identify Changes,Keypoint Visualization,Trace Visualization,Polygon Visualization,Mask Visualization,Pixelate Visualization,Absolute Static Crop,Ellipse Visualization,Model Comparison Visualization,Pixel Color Count,Line Counter,Detections Consensus,Triangle Visualization,Grid Visualization,Contrast Equalization,Image Preprocessing,Polygon Zone Visualization,Relative Static Crop,Stability AI Inpainting,Halo Visualization,Image Contours,Line Counter Visualization,Stitch Images,Crop Visualization,Stability AI Image Generation,Distance Measurement,Template Matching,Icon Visualization,Circle Visualization,SIFT Comparison,Identify Outliers,Depth Estimation,Clip Comparison,Background Subtraction,Camera Focus,Image Slicer,Perspective Correction,Reference Path Visualization
- outputs:
Detections Stitch,Seg Preview,Byte Tracker,Qwen3-VL,YOLO-World Model,Google Gemini,Image Convert Grayscale,QR Code Detection,Dynamic Crop,Blur Visualization,SIFT,Stability AI Outpainting,Bounding Box Visualization,Camera Focus,Keypoint Visualization,Trace Visualization,Instance Segmentation Model,Polygon Visualization,Dominant Color,Pixel Color Count,Ellipse Visualization,OpenAI,Model Comparison Visualization,Anthropic Claude,Triangle Visualization,Qwen2.5-VL,SAM 3,Polygon Zone Visualization,Halo Visualization,LMM,Stability AI Image Generation,Time in Zone,VLM as Detector,Florence-2 Model,CLIP Embedding Model,Single-Label Classification Model,Detections Stabilizer,Email Notification,Circle Visualization,Google Vision OCR,Google Gemini,Motion Detection,Clip Comparison,Camera Focus,Anthropic Claude,Object Detection Model,Instance Segmentation Model,Perspective Correction,Perception Encoder Embedding Model,VLM as Classifier,Reference Path Visualization,Corner Visualization,Color Visualization,Twilio SMS/MMS Notification,EasyOCR,Multi-Label Classification Model,Image Slicer,OpenAI,Image Blur,Buffer,Camera Calibration,SmolVLM2,VLM as Detector,SAM 3,Dot Visualization,Image Threshold,Morphological Transformation,Label Visualization,Background Color Visualization,OCR Model,Classification Label Visualization,Roboflow Dataset Upload,Keypoint Detection Model,Mask Visualization,Pixelate Visualization,Absolute Static Crop,Keypoint Detection Model,Moondream2,Image Preprocessing,Contrast Equalization,Google Gemini,Stability AI Inpainting,Barcode Detection,Template Matching,Line Counter Visualization,OpenAI,Image Contours,Crop Visualization,Relative Static Crop,Stitch Images,OpenAI,Llama 3.2 Vision,Icon Visualization,Clip Comparison,SIFT Comparison,Gaze Detection,Depth Estimation,Single-Label Classification Model,VLM as Classifier,Florence-2 Model,Background Subtraction,LMM For Classification,SAM 3,Object Detection Model,Multi-Label Classification Model,Segment Anything 2 Model,CogVLM,Image Slicer,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v2 has.
Bindings
- input
    - image (image): The input image for this step.
    - slice_width (integer): Width of each slice, in pixels.
    - slice_height (integer): Height of each slice, in pixels.
    - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
    - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
    - slices (image): The image slices produced from the input image.
Example JSON definition of step Image Slicer in version v2
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v2",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```
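As a quick sanity check on the example values above: with a slice size of 320 and an overlap ratio of 0.2, neighbouring slices share 64 px and slice origins land 256 px apart. This arithmetic simply follows from the parameter definitions; it is not taken from the block's source.

```python
slice_size = 320      # slice_width / slice_height from the example
overlap_ratio = 0.2   # overlap_ratio_width / overlap_ratio_height

overlap_px = int(slice_size * overlap_ratio)  # pixels shared by neighbouring slices
stride = slice_size - overlap_px              # distance between slice origins

print(overlap_px, stride)  # 64 256
```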
v1¶
Class: ImageSlicerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images from this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data similar to what they encountered during training. The default slice size is 640, but this may not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice would be artificially up-scaled. The best setup should be determined experimentally for the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs:
Corner Visualization,Color Visualization,Image Slicer,Camera Calibration,Image Blur,QR Code Generator,Line Counter,Image Convert Grayscale,Dot Visualization,Image Threshold,Dynamic Crop,SIFT Comparison,Blur Visualization,Morphological Transformation,Label Visualization,Background Color Visualization,Bounding Box Visualization,SIFT,Classification Label Visualization,Camera Focus,Stability AI Outpainting,Identify Changes,Keypoint Visualization,Trace Visualization,Polygon Visualization,Mask Visualization,Pixelate Visualization,Absolute Static Crop,Ellipse Visualization,Model Comparison Visualization,Pixel Color Count,Line Counter,Detections Consensus,Triangle Visualization,Grid Visualization,Contrast Equalization,Image Preprocessing,Polygon Zone Visualization,Relative Static Crop,Stability AI Inpainting,Halo Visualization,Image Contours,Line Counter Visualization,Stitch Images,Crop Visualization,Stability AI Image Generation,Distance Measurement,Template Matching,Icon Visualization,Circle Visualization,SIFT Comparison,Identify Outliers,Depth Estimation,Clip Comparison,Background Subtraction,Camera Focus,Image Slicer,Perspective Correction,Reference Path Visualization
- outputs:
Detections Stitch,Seg Preview,Byte Tracker,Qwen3-VL,YOLO-World Model,Google Gemini,Image Convert Grayscale,QR Code Detection,Dynamic Crop,Blur Visualization,SIFT,Stability AI Outpainting,Bounding Box Visualization,Camera Focus,Keypoint Visualization,Trace Visualization,Instance Segmentation Model,Polygon Visualization,Dominant Color,Pixel Color Count,Ellipse Visualization,OpenAI,Model Comparison Visualization,Anthropic Claude,Triangle Visualization,Qwen2.5-VL,SAM 3,Polygon Zone Visualization,Halo Visualization,LMM,Stability AI Image Generation,Time in Zone,VLM as Detector,Florence-2 Model,CLIP Embedding Model,Single-Label Classification Model,Detections Stabilizer,Email Notification,Circle Visualization,Google Vision OCR,Google Gemini,Motion Detection,Clip Comparison,Camera Focus,Anthropic Claude,Object Detection Model,Instance Segmentation Model,Perspective Correction,Perception Encoder Embedding Model,VLM as Classifier,Reference Path Visualization,Corner Visualization,Color Visualization,Twilio SMS/MMS Notification,EasyOCR,Multi-Label Classification Model,Image Slicer,OpenAI,Image Blur,Buffer,Camera Calibration,SmolVLM2,VLM as Detector,SAM 3,Dot Visualization,Image Threshold,Morphological Transformation,Label Visualization,Background Color Visualization,OCR Model,Classification Label Visualization,Roboflow Dataset Upload,Keypoint Detection Model,Mask Visualization,Pixelate Visualization,Absolute Static Crop,Keypoint Detection Model,Moondream2,Image Preprocessing,Contrast Equalization,Google Gemini,Stability AI Inpainting,Barcode Detection,Template Matching,Line Counter Visualization,OpenAI,Image Contours,Crop Visualization,Relative Static Crop,Stitch Images,OpenAI,Llama 3.2 Vision,Icon Visualization,Clip Comparison,SIFT Comparison,Gaze Detection,Depth Estimation,Single-Label Classification Model,VLM as Classifier,Florence-2 Model,Background Subtraction,LMM For Classification,SAM 3,Object Detection Model,Multi-Label Classification Model,Segment Anything 2 Model,CogVLM,Image Slicer,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input
    - image (image): The input image for this step.
    - slice_width (integer): Width of each slice, in pixels.
    - slice_height (integer): Height of each slice, in pixels.
    - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
    - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
    - slices (image): The image slices produced from the input image.
Example JSON definition of step Image Slicer in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v1",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```