Image Slicer¶
v2¶
Class: ImageSlicerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images from this block. Finally, the Detections Stitch block must be applied to the resulting predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, please visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
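For orientation, below is a minimal sketch of how the block typically sits in a SAHI pipeline: the slices feed a detection model, and Detections Stitch merges the per-slice predictions back onto the original image. The step names, the model_id placeholder, the overall workflow scaffolding, and the exact type identifiers, versions, and field names of the detection and Detections Stitch steps are assumptions made for illustration only; verify them against your Workflows environment before use.
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "name": "slicer",
      "type": "roboflow_core/image_slicer@v2",
      "image": "$inputs.image",
      "slice_width": 640,
      "slice_height": 640,
      "overlap_ratio_width": 0.2,
      "overlap_ratio_height": 0.2
    },
    {
      "name": "detection",
      "type": "roboflow_core/roboflow_object_detection_model@v2",
      "images": "$steps.slicer.slices",
      "model_id": "<your_model_id>/<version>"
    },
    {
      "name": "stitch",
      "type": "roboflow_core/detections_stitch@v1",
      "reference_image": "$inputs.image",
      "predictions": "$steps.detection.predictions"
    }
  ],
  "outputs": [
    { "type": "JsonField", "name": "predictions", "selector": "$steps.stitch.predictions" }
  ]
}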
Changes compared to v1¶
- All crops generated by the slicer will be of equal size
- No duplicated crops will be created
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
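Because the ✅-marked properties accept dynamic selectors, slice dimensions and overlap ratios can be supplied at runtime instead of being hard-coded. A minimal sketch of such a step definition, assuming the workflow defines a numeric WorkflowParameter input named slice_size (the input name is illustrative):
{
  "name": "slicer",
  "type": "roboflow_core/image_slicer@v2",
  "image": "$inputs.image",
  "slice_width": "$inputs.slice_size",
  "slice_height": "$inputs.slice_size"
}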
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs: Crop Visualization, SIFT, Line Counter, Stability AI Image Generation, Triangle Visualization, Line Counter, Blur Visualization, Background Color Visualization, Relative Static Crop, Color Visualization, Image Contours, Camera Focus, Corner Visualization, Line Counter Visualization, Icon Visualization, Mask Visualization, Image Convert Grayscale, Circle Visualization, Image Blur, Pixelate Visualization, SIFT Comparison, Absolute Static Crop, Model Comparison Visualization, Pixel Color Count, Image Threshold, Reference Path Visualization, Detections Consensus, Image Slicer, Stitch Images, Identify Outliers, Depth Estimation, Trace Visualization, Image Preprocessing, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, Grid Visualization, Bounding Box Visualization, Camera Calibration, Polygon Zone Visualization, Ellipse Visualization, QR Code Generator, Halo Visualization, Perspective Correction, Stability AI Inpainting, Image Slicer, Template Matching, SIFT Comparison, Label Visualization, Identify Changes, Clip Comparison, Distance Measurement, Dynamic Crop
- outputs: QR Code Detection, Anthropic Claude, Crop Visualization, SIFT, LMM For Classification, Blur Visualization, Line Counter Visualization, Color Visualization, Image Contours, Camera Focus, Mask Visualization, Image Convert Grayscale, Google Gemini, Circle Visualization, Absolute Static Crop, VLM as Classifier, Object Detection Model, Multi-Label Classification Model, Keypoint Detection Model, Stitch Images, Trace Visualization, Image Preprocessing, Qwen2.5-VL, OCR Model, Object Detection Model, Clip Comparison, SmolVLM2, Polygon Zone Visualization, LMM, YOLO-World Model, Halo Visualization, CLIP Embedding Model, Florence-2 Model, Moondream2, Perspective Correction, Stability AI Inpainting, Buffer, Template Matching, Label Visualization, VLM as Detector, Pixel Color Count, Segment Anything 2 Model, Perception Encoder Embedding Model, Stability AI Image Generation, Keypoint Detection Model, Triangle Visualization, Background Color Visualization, Relative Static Crop, Detections Stabilizer, Corner Visualization, Multi-Label Classification Model, Icon Visualization, Pixelate Visualization, Image Blur, Gaze Detection, Model Comparison Visualization, VLM as Detector, Llama 3.2 Vision, Time in Zone, Instance Segmentation Model, Image Threshold, VLM as Classifier, Google Vision OCR, Reference Path Visualization, Image Slicer, Roboflow Dataset Upload, CogVLM, Byte Tracker, Barcode Detection, Depth Estimation, Roboflow Dataset Upload, Single-Label Classification Model, OpenAI, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, OpenAI, Single-Label Classification Model, Bounding Box Visualization, Camera Calibration, Ellipse Visualization, OpenAI, Florence-2 Model, Image Slicer, Instance Segmentation Model, SIFT Comparison, Detections Stitch, Dominant Color, Clip Comparison, Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v2",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}
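Downstream steps consume the block's output through the standard step selector syntax; for example, a detection model step could point its image input at $steps.<your_step_name_here>.slices to run inference on every slice (the step name here mirrors the placeholder used in the example above).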
v1¶
Class: ImageSlicerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images from this block. Finally, the Detections Stitch block must be applied to the resulting predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, please visit the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| slice_width | int | Width of each slice, in pixels. | ✅ |
| slice_height | int | Height of each slice, in pixels. | ✅ |
| overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
| overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Crop Visualization, SIFT, Line Counter, Stability AI Image Generation, Triangle Visualization, Line Counter, Blur Visualization, Background Color Visualization, Relative Static Crop, Color Visualization, Image Contours, Camera Focus, Corner Visualization, Line Counter Visualization, Icon Visualization, Mask Visualization, Image Convert Grayscale, Circle Visualization, Image Blur, Pixelate Visualization, SIFT Comparison, Absolute Static Crop, Model Comparison Visualization, Pixel Color Count, Image Threshold, Reference Path Visualization, Detections Consensus, Image Slicer, Stitch Images, Identify Outliers, Depth Estimation, Trace Visualization, Image Preprocessing, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, Grid Visualization, Bounding Box Visualization, Camera Calibration, Polygon Zone Visualization, Ellipse Visualization, QR Code Generator, Halo Visualization, Perspective Correction, Stability AI Inpainting, Image Slicer, Template Matching, SIFT Comparison, Label Visualization, Identify Changes, Clip Comparison, Distance Measurement, Dynamic Crop
- outputs: QR Code Detection, Anthropic Claude, Crop Visualization, SIFT, LMM For Classification, Blur Visualization, Line Counter Visualization, Color Visualization, Image Contours, Camera Focus, Mask Visualization, Image Convert Grayscale, Google Gemini, Circle Visualization, Absolute Static Crop, VLM as Classifier, Object Detection Model, Multi-Label Classification Model, Keypoint Detection Model, Stitch Images, Trace Visualization, Image Preprocessing, Qwen2.5-VL, OCR Model, Object Detection Model, Clip Comparison, SmolVLM2, Polygon Zone Visualization, LMM, YOLO-World Model, Halo Visualization, CLIP Embedding Model, Florence-2 Model, Moondream2, Perspective Correction, Stability AI Inpainting, Buffer, Template Matching, Label Visualization, VLM as Detector, Pixel Color Count, Segment Anything 2 Model, Perception Encoder Embedding Model, Stability AI Image Generation, Keypoint Detection Model, Triangle Visualization, Background Color Visualization, Relative Static Crop, Detections Stabilizer, Corner Visualization, Multi-Label Classification Model, Icon Visualization, Pixelate Visualization, Image Blur, Gaze Detection, Model Comparison Visualization, VLM as Detector, Llama 3.2 Vision, Time in Zone, Instance Segmentation Model, Image Threshold, VLM as Classifier, Google Vision OCR, Reference Path Visualization, Image Slicer, Roboflow Dataset Upload, CogVLM, Byte Tracker, Barcode Detection, Depth Estimation, Roboflow Dataset Upload, Single-Label Classification Model, OpenAI, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, OpenAI, Single-Label Classification Model, Bounding Box Visualization, Camera Calibration, Ellipse Visualization, OpenAI, Florence-2 Model, Image Slicer, Instance Segmentation Model, SIFT Comparison, Detections Stitch, Dominant Color, Clip Comparison, Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v1",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}