Image Slicer¶
v2¶
Class: ImageSlicerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images of this block. Finally, the Detections Stitch block must be applied on top of the resulting predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
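The overlap ratios translate into an effective stride between slice origins. A quick sketch of the arithmetic (illustrative only, not the block's actual implementation):

```python
# Stride between consecutive slice origins, given slice size and overlap ratio.
# With the defaults (640 px slices, 0.2 overlap), neighbouring slices share
# 128 px of image content in each dimension.
slice_w, overlap_ratio_w = 640, 0.2

stride_x = int(slice_w * (1 - overlap_ratio_w))  # origin-to-origin distance
overlap_px = slice_w - stride_x                  # pixels shared by neighbours

print(stride_x, overlap_px)  # 512 128
```

Larger overlap ratios mean more slices (and more inference calls), but reduce the chance that an object is cut in half at every slice boundary.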
Changes compared to v1¶
- All crops generated by the slicer will be of equal size
- No duplicated crops will be created
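The two changes can be illustrated with a small, hypothetical Python sketch of an equal-size slicer: slice origins advance by the overlap-derived stride, and the final origin in each dimension is shifted back to the image border instead of producing a shrunken or duplicated crop. This is an illustration of the documented behaviour, not the block's actual source code.

```python
def slice_offsets(length: int, slice_size: int, overlap_ratio: float) -> list[int]:
    """Slice origins along one dimension; the last origin is shifted back so
    the final slice stays full-size yet still touches the image border."""
    stride = max(1, int(slice_size * (1 - overlap_ratio)))
    offsets = list(range(0, max(length - slice_size, 0) + 1, stride))
    if offsets[-1] + slice_size < length:  # far edge not yet covered
        offsets.append(length - slice_size)
    return offsets

def compute_slices(image_w, image_h, slice_w, slice_h, overlap_w, overlap_h):
    """All (x0, y0, x1, y1) slice boxes; every box is exactly slice_w x slice_h
    (assuming the image is at least as large as one slice)."""
    return [
        (x0, y0, x0 + slice_w, y0 + slice_h)
        for y0 in slice_offsets(image_h, slice_h, overlap_h)
        for x0 in slice_offsets(image_w, slice_w, overlap_w)
    ]

# A 1920x1080 image with 320x320 slices and 0.2 overlap in both dimensions.
boxes = compute_slices(1920, 1080, 320, 320, 0.2, 0.2)
```

In this sketch every emitted box has the full slice size, and shifting (rather than appending past-the-edge origins) is what prevents duplicate crops when the stride divides the remaining span evenly.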
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `slice_width` | `int` | Width of each slice, in pixels. | ✅ |
| `slice_height` | `int` | Height of each slice, in pixels. | ✅ |
| `overlap_ratio_width` | `float` | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| `overlap_ratio_height` | `float` | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs:
Label Visualization,SIFT Comparison,Line Counter,Blur Visualization,Background Color Visualization,Contrast Equalization,Bounding Box Visualization,Camera Calibration,Polygon Visualization,Stability AI Outpainting,Image Slicer,Keypoint Visualization,Reference Path Visualization,Pixelate Visualization,Icon Visualization,Identify Changes,Triangle Visualization,Template Matching,Model Comparison Visualization,Corner Visualization,Distance Measurement,Image Preprocessing,Color Visualization,SIFT Comparison,Line Counter Visualization,Grid Visualization,Stitch Images,Halo Visualization,Stability AI Image Generation,Identify Outliers,QR Code Generator,Circle Visualization,Image Contours,Relative Static Crop,Dot Visualization,Polygon Zone Visualization,Ellipse Visualization,Line Counter,Image Blur,Clip Comparison,Absolute Static Crop,Depth Estimation,Image Slicer,Morphological Transformation,Stability AI Inpainting,Dynamic Crop,Camera Focus,Pixel Color Count,Detections Consensus,Crop Visualization,Image Threshold,Perspective Correction,Image Convert Grayscale,Mask Visualization,Trace Visualization,Classification Label Visualization,SIFT - outputs:
Google Vision OCR,Label Visualization,LMM For Classification,Blur Visualization,Background Color Visualization,Contrast Equalization,Reference Path Visualization,Keypoint Visualization,Stability AI Outpainting,Bounding Box Visualization,Image Slicer,Pixelate Visualization,Single-Label Classification Model,Clip Comparison,SAM 3,Perception Encoder Embedding Model,Seg Preview,Byte Tracker,Image Preprocessing,SAM 3,Color Visualization,SIFT Comparison,Qwen2.5-VL,Object Detection Model,Dominant Color,Anthropic Claude,Circle Visualization,Image Contours,Object Detection Model,QR Code Detection,Polygon Zone Visualization,Ellipse Visualization,Email Notification,Clip Comparison,Moondream2,VLM as Classifier,OCR Model,Absolute Static Crop,Depth Estimation,LMM,Time in Zone,Morphological Transformation,Roboflow Dataset Upload,Gaze Detection,Crop Visualization,OpenAI,Florence-2 Model,Barcode Detection,Image Convert Grayscale,SAM 3,CogVLM,VLM as Detector,Multi-Label Classification Model,Classification Label Visualization,Buffer,Keypoint Detection Model,Segment Anything 2 Model,Keypoint Detection Model,YOLO-World Model,Polygon Visualization,CLIP Embedding Model,Camera Calibration,Icon Visualization,Triangle Visualization,Template Matching,Roboflow Dataset Upload,Anthropic Claude,Model Comparison Visualization,Corner Visualization,Florence-2 Model,Google Gemini,Google Gemini,EasyOCR,VLM as Detector,Line Counter Visualization,SmolVLM2,Halo Visualization,Stability AI Image Generation,Relative Static Crop,Dot Visualization,Detections Stitch,Llama 3.2 Vision,Image Blur,OpenAI,Instance Segmentation Model,Multi-Label Classification Model,Image Slicer,OpenAI,Stability AI Inpainting,Dynamic Crop,Single-Label Classification Model,Camera Focus,Pixel Color Count,Detections Stabilizer,Instance Segmentation Model,VLM as Classifier,Mask Visualization,Perspective Correction,Image Threshold,OpenAI,Trace Visualization,Stitch Images,SIFT
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Image Slicer in version v2 has.
Bindings
- input
  - `image` (image): The input image for this step.
  - `slice_width` (integer): Width of each slice, in pixels.
  - `slice_height` (integer): Height of each slice, in pixels.
  - `overlap_ratio_width` (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - `overlap_ratio_height` (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - `slices` (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v2",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}
v1¶
Class: ImageSlicerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images of this block. Finally, the Detections Stitch block must be applied on top of the resulting predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `slice_width` | `int` | Width of each slice, in pixels. | ✅ |
| `slice_height` | `int` | Height of each slice, in pixels. | ✅ |
| `overlap_ratio_width` | `float` | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| `overlap_ratio_height` | `float` | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs:
Label Visualization,SIFT Comparison,Line Counter,Blur Visualization,Background Color Visualization,Contrast Equalization,Bounding Box Visualization,Camera Calibration,Polygon Visualization,Stability AI Outpainting,Image Slicer,Keypoint Visualization,Reference Path Visualization,Pixelate Visualization,Icon Visualization,Identify Changes,Triangle Visualization,Template Matching,Model Comparison Visualization,Corner Visualization,Distance Measurement,Image Preprocessing,Color Visualization,SIFT Comparison,Line Counter Visualization,Grid Visualization,Stitch Images,Halo Visualization,Stability AI Image Generation,Identify Outliers,QR Code Generator,Circle Visualization,Image Contours,Relative Static Crop,Dot Visualization,Polygon Zone Visualization,Ellipse Visualization,Line Counter,Image Blur,Clip Comparison,Absolute Static Crop,Depth Estimation,Image Slicer,Morphological Transformation,Stability AI Inpainting,Dynamic Crop,Camera Focus,Pixel Color Count,Detections Consensus,Crop Visualization,Image Threshold,Perspective Correction,Image Convert Grayscale,Mask Visualization,Trace Visualization,Classification Label Visualization,SIFT - outputs:
Google Vision OCR,Label Visualization,LMM For Classification,Blur Visualization,Background Color Visualization,Contrast Equalization,Reference Path Visualization,Keypoint Visualization,Stability AI Outpainting,Bounding Box Visualization,Image Slicer,Pixelate Visualization,Single-Label Classification Model,Clip Comparison,SAM 3,Perception Encoder Embedding Model,Seg Preview,Byte Tracker,Image Preprocessing,SAM 3,Color Visualization,SIFT Comparison,Qwen2.5-VL,Object Detection Model,Dominant Color,Anthropic Claude,Circle Visualization,Image Contours,Object Detection Model,QR Code Detection,Polygon Zone Visualization,Ellipse Visualization,Email Notification,Clip Comparison,Moondream2,VLM as Classifier,OCR Model,Absolute Static Crop,Depth Estimation,LMM,Time in Zone,Morphological Transformation,Roboflow Dataset Upload,Gaze Detection,Crop Visualization,OpenAI,Florence-2 Model,Barcode Detection,Image Convert Grayscale,SAM 3,CogVLM,VLM as Detector,Multi-Label Classification Model,Classification Label Visualization,Buffer,Keypoint Detection Model,Segment Anything 2 Model,Keypoint Detection Model,YOLO-World Model,Polygon Visualization,CLIP Embedding Model,Camera Calibration,Icon Visualization,Triangle Visualization,Template Matching,Roboflow Dataset Upload,Anthropic Claude,Model Comparison Visualization,Corner Visualization,Florence-2 Model,Google Gemini,Google Gemini,EasyOCR,VLM as Detector,Line Counter Visualization,SmolVLM2,Halo Visualization,Stability AI Image Generation,Relative Static Crop,Dot Visualization,Detections Stitch,Llama 3.2 Vision,Image Blur,OpenAI,Instance Segmentation Model,Multi-Label Classification Model,Image Slicer,OpenAI,Stability AI Inpainting,Dynamic Crop,Single-Label Classification Model,Camera Focus,Pixel Color Count,Detections Stabilizer,Instance Segmentation Model,VLM as Classifier,Mask Visualization,Perspective Correction,Image Threshold,OpenAI,Trace Visualization,Stitch Images,SIFT
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Image Slicer in version v1 has.
Bindings
- input
  - `image` (image): The input image for this step.
  - `slice_width` (integer): Width of each slice, in pixels.
  - `slice_height` (integer): Height of each slice, in pixels.
  - `overlap_ratio_width` (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - `overlap_ratio_height` (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - `slices` (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v1",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}