Image Slicer¶
v2¶
Class: ImageSlicerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the images this block outputs. At the end, apply the Detections Stitch block on top of the predictions to merge them as if the prediction had been made against the original input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
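Below is a minimal sketch of how the three blocks are typically chained in a single Workflow definition: the slicer produces a batch of slices, a detection model runs on every slice, and Detections Stitch projects the per-slice predictions back onto the original image. The workflow envelope and the type identifiers and fields of the detection and stitching steps (roboflow_core/roboflow_object_detection_model@v2, roboflow_core/detections_stitch@v1, images, model_id, reference_image, predictions) are assumptions based on typical Workflows definitions, not part of this block's schema; check those blocks' own pages for the authoritative syntax.
```json
{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"}
    ],
    "steps": [
        {
            "name": "slicer",
            "type": "roboflow_core/image_slicer@v2",
            "image": "$inputs.image",
            "slice_width": 640,
            "slice_height": 640,
            "overlap_ratio_width": 0.2,
            "overlap_ratio_height": 0.2
        },
        {
            "name": "detection",
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "images": "$steps.slicer.slices",
            "model_id": "<your_model_id>"
        },
        {
            "name": "stitch",
            "type": "roboflow_core/detections_stitch@v1",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detection.predictions"
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "predictions", "selector": "$steps.stitch.predictions"}
    ]
}
```
Note that slice_width and slice_height are set to 640 here to match a 640-pixel model input, as recommended above.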
Changes compared to v1¶
- All crops generated by the slicer will be of equal size (see the tiling sketch below)
- No duplicated crops will be created
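As a rough illustration of how the overlap parameters drive the tiling, assume the common SAHI-style scheme in which the step between consecutive slices is the slice size scaled by one minus the overlap ratio (the block's exact boundary handling may differ):
```latex
% Illustrative tiling arithmetic (assumed SAHI-style scheme, not necessarily
% the block's exact algorithm):
\mathrm{stride}_x = \mathrm{slice\_width}\,(1 - \mathrm{overlap\_ratio\_width})
                  = 640 \times (1 - 0.2) = 512 \text{ px},
\qquad
n_x \approx \left\lceil \frac{W - \mathrm{slice\_width}}{\mathrm{stride}_x} \right\rceil + 1 .
```
For an image of width W = 1920 px this gives n_x = ceil(1280 / 512) + 1 = 4 slice columns, and in v2 each of them is guaranteed to be exactly slice_width pixels wide.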
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
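For example, any property marked ✅ can be wired to a workflow parameter instead of a hard-coded value, as in the minimal sketch below. The WorkflowParameter / WorkflowImage input declarations are the conventional Workflows syntax and are an assumption here; only the selectors on the slicer step itself come from this block's schema.
```json
{
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "slice_width", "default_value": 640}
    ],
    "steps": [
        {
            "name": "image_slicer",
            "type": "roboflow_core/image_slicer@v2",
            "image": "$inputs.image",
            "slice_width": "$inputs.slice_width",
            "slice_height": 640
        }
    ]
}
```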
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs:
Blur Visualization,Classification Label Visualization,Circle Visualization,SIFT Comparison,Crop Visualization,Image Contours,Relative Static Crop,Grid Visualization,Image Preprocessing,Perspective Correction,Ellipse Visualization,Absolute Static Crop,Stitch Images,Triangle Visualization,Contrast Equalization,Stability AI Inpainting,QR Code Generator,Image Slicer,Background Color Visualization,Polygon Zone Visualization,Stability AI Image Generation,Template Matching,Depth Estimation,Distance Measurement,Dot Visualization,Bounding Box Visualization,Camera Focus,Line Counter Visualization,Morphological Transformation,SIFT,Reference Path Visualization,Halo Visualization,SIFT Comparison,Icon Visualization,Image Blur,Image Slicer,Polygon Visualization,Pixelate Visualization,Image Threshold,Image Convert Grayscale,Clip Comparison,Color Visualization,Line Counter,Label Visualization,Trace Visualization,Identify Outliers,Pixel Color Count,Dynamic Crop,Line Counter,Detections Consensus,Model Comparison Visualization,Corner Visualization,Camera Calibration,Mask Visualization,Keypoint Visualization,Stability AI Outpainting,Identify Changes
- outputs:
VLM as Detector,Google Vision OCR,SAM 3,Classification Label Visualization,Detections Stabilizer,Circle Visualization,Image Contours,Relative Static Crop,Image Preprocessing,LMM For Classification,VLM as Classifier,Ellipse Visualization,Stitch Images,Triangle Visualization,Stability AI Inpainting,Image Slicer,VLM as Classifier,Background Color Visualization,Segment Anything 2 Model,Template Matching,Moondream2,OCR Model,Dot Visualization,Florence-2 Model,SIFT,Morphological Transformation,EasyOCR,Gaze Detection,Halo Visualization,Reference Path Visualization,SIFT Comparison,Buffer,Polygon Visualization,Image Slicer,Florence-2 Model,Clip Comparison,Perception Encoder Embedding Model,Instance Segmentation Model,OpenAI,Byte Tracker,Color Visualization,Image Convert Grayscale,Object Detection Model,Keypoint Detection Model,Google Gemini,Label Visualization,Email Notification,Llama 3.2 Vision,Trace Visualization,QR Code Detection,YOLO-World Model,Corner Visualization,Mask Visualization,Time in Zone,CogVLM,Stability AI Outpainting,OpenAI,Detections Stitch,Barcode Detection,Blur Visualization,Dominant Color,Crop Visualization,VLM as Detector,Single-Label Classification Model,OpenAI,Perspective Correction,Clip Comparison,Single-Label Classification Model,Absolute Static Crop,Seg Preview,Contrast Equalization,Roboflow Dataset Upload,Roboflow Dataset Upload,Polygon Zone Visualization,CLIP Embedding Model,Stability AI Image Generation,Depth Estimation,Bounding Box Visualization,Camera Focus,Line Counter Visualization,Instance Segmentation Model,Multi-Label Classification Model,Icon Visualization,Image Blur,Pixelate Visualization,Image Threshold,Keypoint Detection Model,Anthropic Claude,LMM,Google Gemini,Multi-Label Classification Model,Pixel Color Count,SmolVLM2,Dynamic Crop,Qwen2.5-VL,Model Comparison Visualization,Camera Calibration,Keypoint Visualization,Object Detection Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Image Slicer in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v2",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```
v1¶
Class: ImageSlicerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the images this block outputs. At the end, apply the Detections Stitch block on top of the predictions to merge them as if the prediction had been made against the original input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs:
Blur Visualization,Classification Label Visualization,Circle Visualization,SIFT Comparison,Crop Visualization,Image Contours,Relative Static Crop,Grid Visualization,Image Preprocessing,Perspective Correction,Ellipse Visualization,Absolute Static Crop,Stitch Images,Triangle Visualization,Contrast Equalization,Stability AI Inpainting,QR Code Generator,Image Slicer,Background Color Visualization,Polygon Zone Visualization,Stability AI Image Generation,Template Matching,Depth Estimation,Distance Measurement,Dot Visualization,Bounding Box Visualization,Camera Focus,Line Counter Visualization,Morphological Transformation,SIFT,Reference Path Visualization,Halo Visualization,SIFT Comparison,Icon Visualization,Image Blur,Image Slicer,Polygon Visualization,Pixelate Visualization,Image Threshold,Image Convert Grayscale,Clip Comparison,Color Visualization,Line Counter,Label Visualization,Trace Visualization,Identify Outliers,Pixel Color Count,Dynamic Crop,Line Counter,Detections Consensus,Model Comparison Visualization,Corner Visualization,Camera Calibration,Mask Visualization,Keypoint Visualization,Stability AI Outpainting,Identify Changes
- outputs:
VLM as Detector,Google Vision OCR,SAM 3,Classification Label Visualization,Detections Stabilizer,Circle Visualization,Image Contours,Relative Static Crop,Image Preprocessing,LMM For Classification,VLM as Classifier,Ellipse Visualization,Stitch Images,Triangle Visualization,Stability AI Inpainting,Image Slicer,VLM as Classifier,Background Color Visualization,Segment Anything 2 Model,Template Matching,Moondream2,OCR Model,Dot Visualization,Florence-2 Model,SIFT,Morphological Transformation,EasyOCR,Gaze Detection,Halo Visualization,Reference Path Visualization,SIFT Comparison,Buffer,Polygon Visualization,Image Slicer,Florence-2 Model,Clip Comparison,Perception Encoder Embedding Model,Instance Segmentation Model,OpenAI,Byte Tracker,Color Visualization,Image Convert Grayscale,Object Detection Model,Keypoint Detection Model,Google Gemini,Label Visualization,Email Notification,Llama 3.2 Vision,Trace Visualization,QR Code Detection,YOLO-World Model,Corner Visualization,Mask Visualization,Time in Zone,CogVLM,Stability AI Outpainting,OpenAI,Detections Stitch,Barcode Detection,Blur Visualization,Dominant Color,Crop Visualization,VLM as Detector,Single-Label Classification Model,OpenAI,Perspective Correction,Clip Comparison,Single-Label Classification Model,Absolute Static Crop,Seg Preview,Contrast Equalization,Roboflow Dataset Upload,Roboflow Dataset Upload,Polygon Zone Visualization,CLIP Embedding Model,Stability AI Image Generation,Depth Estimation,Bounding Box Visualization,Camera Focus,Line Counter Visualization,Instance Segmentation Model,Multi-Label Classification Model,Icon Visualization,Image Blur,Pixelate Visualization,Image Threshold,Keypoint Detection Model,Anthropic Claude,LMM,Google Gemini,Multi-Label Classification Model,Pixel Color Count,SmolVLM2,Dynamic Crop,Qwen2.5-VL,Model Comparison Visualization,Camera Calibration,Keypoint Visualization,Object Detection Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Image Slicer in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v1",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```