Image Slicer¶
v2¶
Class: ImageSlicerBlockV2
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Adaptive Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, it must be paired with a detection model (object detection or instance segmentation) running against the output images from this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image, not its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, please visit the Roboflow blog, which describes the technique in detail, although not in the context of Roboflow Workflows.
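Below is a minimal sketch of a complete SAHI pipeline expressed as a workflow specification, wiring this block into a detection model and Detections Stitch. The detection-model and Detections Stitch identifiers, their field names, and the model ID are assumptions based on typical Workflows usage rather than values taken from this page, so verify them against the documentation of those blocks.

```python
# A sketch of a SAHI workflow specification (identifiers and field names for
# the detection and stitching steps are assumptions -- verify before use).
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            # Step 1: cut the input image into overlapping slices.
            "type": "roboflow_core/image_slicer@v2",
            "name": "slicer",
            "image": "$inputs.image",
            "slice_width": 640,
            "slice_height": 640,
            "overlap_ratio_width": 0.2,
            "overlap_ratio_height": 0.2,
        },
        {
            # Step 2: run an object detection model on every slice
            # (block identifier and model ID are assumptions).
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "detection",
            "images": "$steps.slicer.slices",
            "model_id": "yolov8n-640",
        },
        {
            # Step 3: merge per-slice predictions back into the coordinate
            # space of the original image (assumed field names).
            "type": "roboflow_core/detections_stitch@v1",
            "name": "stitch",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detection.predictions",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.stitch.predictions",
        },
    ],
}
```

A specification like this can then be executed through the usual Workflows entry points (the Roboflow UI, a hosted endpoint, or a self-hosted inference server).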
Changes compared to v1¶
- All crops generated by the slicer will be of equal size
- No duplicated crops will be created
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
slice_width | int | Width of each slice, in pixels. | ✅ |
slice_height | int | Height of each slice, in pixels. | ✅ |
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
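For example, every property marked ✅ in the Refs column can be bound to a workflow input selector instead of a literal value. The snippet below is a hedged sketch; the WorkflowParameter input type and the default_value field are assumptions based on general Workflows conventions (see Bindings).

```python
# Sketch: binding slice dimensions to a runtime parameter instead of
# hard-coded literals ("WorkflowParameter" and "default_value" are assumed
# to follow general Workflows conventions).
inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "slice_size", "default_value": 640},
]

slicer_step = {
    "type": "roboflow_core/image_slicer@v2",
    "name": "slicer",
    "image": "$inputs.image",
    # Both dimensions bound to the same runtime parameter.
    "slice_width": "$inputs.slice_size",
    "slice_height": "$inputs.slice_size",
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2,
}
```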
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs: Keypoint Visualization, Image Contours, Circle Visualization, Image Threshold, Absolute Static Crop, Image Slicer, Perspective Correction, Color Visualization, Mask Visualization, Reference Path Visualization, Stitch Images, Image Blur, Blur Visualization, Pixelate Visualization, Relative Static Crop, Clip Comparison, Dot Visualization, Image Slicer, Stability AI Inpainting, SIFT Comparison, Classification Label Visualization, Icon Visualization, Polygon Zone Visualization, Depth Estimation, Identify Outliers, Template Matching, Polygon Visualization, Stability AI Image Generation, Dynamic Crop, Grid Visualization, Crop Visualization, Ellipse Visualization, Stability AI Outpainting, Trace Visualization, Line Counter, Bounding Box Visualization, Distance Measurement, Camera Calibration, Image Preprocessing, Image Convert Grayscale, Detections Consensus, Label Visualization, Line Counter, Pixel Color Count, Corner Visualization, SIFT, QR Code Generator, Background Color Visualization, SIFT Comparison, Camera Focus, Model Comparison Visualization, Triangle Visualization, Identify Changes, Halo Visualization, Line Counter Visualization
- outputs: Google Gemini, Keypoint Visualization, Keypoint Detection Model, Detections Stabilizer, Image Contours, Circle Visualization, Image Threshold, Absolute Static Crop, Perspective Correction, Color Visualization, Gaze Detection, QR Code Detection, Instance Segmentation Model, Reference Path Visualization, Stitch Images, Image Blur, Florence-2 Model, Barcode Detection, Blur Visualization, Keypoint Detection Model, Relative Static Crop, Clip Comparison, Stability AI Inpainting, SIFT Comparison, Icon Visualization, SmolVLM2, Polygon Zone Visualization, Depth Estimation, Instance Segmentation Model, Template Matching, Stability AI Image Generation, Dynamic Crop, Crop Visualization, Time in Zone, VLM as Classifier, Single-Label Classification Model, Camera Calibration, Segment Anything 2 Model, Perception Encoder Embedding Model, VLM as Classifier, Pixel Color Count, Dominant Color, SIFT, Camera Focus, Model Comparison Visualization, Object Detection Model, CLIP Embedding Model, Llama 3.2 Vision, Line Counter Visualization, Triangle Visualization, Clip Comparison, Multi-Label Classification Model, LMM, Roboflow Dataset Upload, Image Slicer, Mask Visualization, Byte Tracker, Single-Label Classification Model, OCR Model, Pixelate Visualization, Qwen2.5-VL, Object Detection Model, Dot Visualization, Image Slicer, Roboflow Dataset Upload, YOLO-World Model, OpenAI, Classification Label Visualization, VLM as Detector, Polygon Visualization, OpenAI, Buffer, Ellipse Visualization, LMM For Classification, Stability AI Outpainting, Trace Visualization, Moondream2, Bounding Box Visualization, Image Preprocessing, Multi-Label Classification Model, Image Convert Grayscale, Google Vision OCR, Label Visualization, CogVLM, Corner Visualization, Detections Stitch, Background Color Visualization, Florence-2 Model, VLM as Detector, Halo Visualization, OpenAI, Anthropic Claude
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v2 has.
Bindings
- input:
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output:
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v2",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```
v1¶
Class: ImageSlicerBlockV1
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Adaptive Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, it must be paired with a detection model (object detection or instance segmentation) running against the output images from this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image, not its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, please visit the Roboflow blog, which describes the technique in detail, although not in the context of Roboflow Workflows.
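The pipeline sketched in the v2 section applies to v1 unchanged apart from the version tag. The short sketch below only shows how a downstream detection step would consume this block's slices output; the detection block identifier and model ID are assumptions.

```python
# Sketch: a v1 slicer step and a downstream detection step consuming its
# "slices" output (detection block identifier and model ID are assumptions).
slicer_step = {
    "type": "roboflow_core/image_slicer@v1",
    "name": "slicer",
    "image": "$inputs.image",
    "slice_width": 640,
    "slice_height": 640,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2,
}

detection_step = {
    "type": "roboflow_core/roboflow_object_detection_model@v2",
    "name": "detection",
    "images": "$steps.slicer.slices",  # the batch of slices produced above
    "model_id": "yolov8n-640",
}
```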
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
slice_width | int | Width of each slice, in pixels. | ✅ |
slice_height | int | Height of each slice, in pixels. | ✅ |
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Keypoint Visualization, Image Contours, Circle Visualization, Image Threshold, Absolute Static Crop, Image Slicer, Perspective Correction, Color Visualization, Mask Visualization, Reference Path Visualization, Stitch Images, Image Blur, Blur Visualization, Pixelate Visualization, Relative Static Crop, Clip Comparison, Dot Visualization, Image Slicer, Stability AI Inpainting, SIFT Comparison, Classification Label Visualization, Icon Visualization, Polygon Zone Visualization, Depth Estimation, Identify Outliers, Template Matching, Polygon Visualization, Stability AI Image Generation, Dynamic Crop, Grid Visualization, Crop Visualization, Ellipse Visualization, Stability AI Outpainting, Trace Visualization, Line Counter, Bounding Box Visualization, Distance Measurement, Camera Calibration, Image Preprocessing, Image Convert Grayscale, Detections Consensus, Label Visualization, Line Counter, Pixel Color Count, Corner Visualization, SIFT, QR Code Generator, Background Color Visualization, SIFT Comparison, Camera Focus, Model Comparison Visualization, Triangle Visualization, Identify Changes, Halo Visualization, Line Counter Visualization
- outputs: Google Gemini, Keypoint Visualization, Keypoint Detection Model, Detections Stabilizer, Image Contours, Circle Visualization, Image Threshold, Absolute Static Crop, Perspective Correction, Color Visualization, Gaze Detection, QR Code Detection, Instance Segmentation Model, Reference Path Visualization, Stitch Images, Image Blur, Florence-2 Model, Barcode Detection, Blur Visualization, Keypoint Detection Model, Relative Static Crop, Clip Comparison, Stability AI Inpainting, SIFT Comparison, Icon Visualization, SmolVLM2, Polygon Zone Visualization, Depth Estimation, Instance Segmentation Model, Template Matching, Stability AI Image Generation, Dynamic Crop, Crop Visualization, Time in Zone, VLM as Classifier, Single-Label Classification Model, Camera Calibration, Segment Anything 2 Model, Perception Encoder Embedding Model, VLM as Classifier, Pixel Color Count, Dominant Color, SIFT, Camera Focus, Model Comparison Visualization, Object Detection Model, CLIP Embedding Model, Llama 3.2 Vision, Line Counter Visualization, Triangle Visualization, Clip Comparison, Multi-Label Classification Model, LMM, Roboflow Dataset Upload, Image Slicer, Mask Visualization, Byte Tracker, Single-Label Classification Model, OCR Model, Pixelate Visualization, Qwen2.5-VL, Object Detection Model, Dot Visualization, Image Slicer, Roboflow Dataset Upload, YOLO-World Model, OpenAI, Classification Label Visualization, VLM as Detector, Polygon Visualization, OpenAI, Buffer, Ellipse Visualization, LMM For Classification, Stability AI Outpainting, Trace Visualization, Moondream2, Bounding Box Visualization, Image Preprocessing, Multi-Label Classification Model, Image Convert Grayscale, Google Vision OCR, Label Visualization, CogVLM, Corner Visualization, Detections Stitch, Background Color Visualization, Florence-2 Model, VLM as Detector, Halo Visualization, OpenAI, Anthropic Claude
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input:
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output:
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v1",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```