Image Slicer¶
v2¶
Class: ImageSlicerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images of this block. At the end, a Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data similar to what they encountered during training. The default slice size is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference; similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally for your specific data and model.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, although not in the context of Roboflow Workflows.
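For context, a minimal sketch of a full SAHI-style workflow specification pairing this block with a detection model and Detections Stitch is shown below. The Image Slicer step mirrors the example later on this page; the detection and Detections Stitch step type identifiers, their property names (images, model_id, reference_image, predictions), and the model ID are assumptions to verify against those blocks' own documentation pages.

```python
# Hedged sketch of a full SAHI-style workflow specification expressed as a
# Python dict. Only the Image Slicer step is taken directly from this page;
# the object detection and Detections Stitch step types and their property
# names are assumptions - check the documentation of those blocks before use.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "slicer",
            "type": "roboflow_core/image_slicer@v2",
            "image": "$inputs.image",
            "slice_width": 640,
            "slice_height": 640,
            "overlap_ratio_width": 0.2,
            "overlap_ratio_height": 0.2,
        },
        {
            # The detection model runs once per slice produced by the slicer.
            "name": "detection",
            "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed identifier
            "images": "$steps.slicer.slices",
            "model_id": "your-project/1",  # placeholder model ID
        },
        {
            # Detections Stitch merges per-slice predictions back into the
            # coordinate space of the original input image.
            "name": "stitch",
            "type": "roboflow_core/detections_stitch@v1",  # assumed identifier
            "reference_image": "$inputs.image",
            "predictions": "$steps.detection.predictions",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.stitch.predictions",
        }
    ],
}
```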
Changes compared to v1¶
- All crops generated by the slicer are of equal size (see the sketch after this list).
- No duplicated crops are created.
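To make the two changes above concrete, the sketch below reproduces the slicing geometry in plain Python: slice origins advance by a fixed stride derived from the overlap ratio, and the last origin in each dimension is clamped to the image border, so every crop has exactly the requested size and no origin is repeated. This is an illustration of the geometry only, not the block's actual implementation.

```python
# Illustrative sketch of the v2 slicing geometry (not the block's source code).
def slice_origins(image_size: int, slice_size: int, overlap_ratio: float) -> list[int]:
    stride = max(1, int(slice_size * (1.0 - overlap_ratio)))
    last = max(image_size - slice_size, 0)
    origins = list(range(0, last + 1, stride))
    if origins[-1] != last:
        origins.append(last)  # clamp the final slice to the image border
    return origins


def slice_boxes(width, height, slice_width, slice_height, overlap_w, overlap_h):
    xs = slice_origins(width, slice_width, overlap_w)
    ys = slice_origins(height, slice_height, overlap_h)
    # Each box is (x_min, y_min, x_max, y_max); all boxes have identical size.
    return [(x, y, x + slice_width, y + slice_height) for y in ys for x in xs]


if __name__ == "__main__":
    boxes = slice_boxes(1920, 1080, 640, 640, 0.2, 0.2)
    print(len(boxes), boxes[:3])
```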
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
slice_width | int | Width of each slice, in pixels. | ✅ |
slice_height | int | Height of each slice, in pixels. | ✅ |
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether the property can be parametrized with dynamic values available at workflow runtime. See Bindings for more info.
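For example, slice_width and slice_height can be bound to a workflow parameter instead of literal values. A hedged sketch follows; the input names are arbitrary and the WorkflowParameter input type and default_value field are assumptions to verify against the Workflows documentation.

```python
# Hedged sketch of parametrising the slice size at runtime: slice_width and
# slice_height reference a workflow input rather than literal integers.
PARAMETRISED_STEP = {
    "name": "slicer",
    "type": "roboflow_core/image_slicer@v2",
    "image": "$inputs.image",
    "slice_width": "$inputs.slice_size",
    "slice_height": "$inputs.slice_size",
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2,
}

# Corresponding entries in the workflow's "inputs" section (names and the
# WorkflowParameter type are assumptions for illustration):
WORKFLOW_INPUTS = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "slice_size", "default_value": 640},
]
```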
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs: Grid Visualization, Ellipse Visualization, Image Blur, Image Preprocessing, Image Slicer, SIFT Comparison, Dynamic Crop, Absolute Static Crop, Color Visualization, Line Counter Visualization, Corner Visualization, Line Counter, Depth Estimation, SIFT Comparison, Stability AI Outpainting, Keypoint Visualization, Distance Measurement, Image Convert Grayscale, Trace Visualization, Clip Comparison, Background Color Visualization, QR Code Generator, Model Comparison Visualization, Pixel Color Count, Identify Changes, Mask Visualization, Image Slicer, Polygon Zone Visualization, Detections Consensus, Image Threshold, Camera Focus, Contrast Equalization, Polygon Visualization, Stability AI Inpainting, Line Counter, Dot Visualization, Template Matching, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Identify Outliers, Circle Visualization, Bounding Box Visualization, Camera Calibration, Blur Visualization, Image Contours, Stitch Images, Halo Visualization, Reference Path Visualization, Triangle Visualization, Pixelate Visualization, Perspective Correction, SIFT, Icon Visualization, Label Visualization, Stability AI Image Generation, Crop Visualization
- outputs: Image Blur, Image Preprocessing, Image Slicer, OpenAI, Instance Segmentation Model, Dynamic Crop, Multi-Label Classification Model, Roboflow Dataset Upload, LMM, Moondream2, Absolute Static Crop, Corner Visualization, Color Visualization, Google Gemini, Depth Estimation, Keypoint Detection Model, Stability AI Outpainting, Keypoint Visualization, Trace Visualization, Clip Comparison, Google Vision OCR, Keypoint Detection Model, Single-Label Classification Model, Time in Zone, Model Comparison Visualization, YOLO-World Model, Mask Visualization, Image Slicer, Clip Comparison, Multi-Label Classification Model, Buffer, Image Threshold, Contrast Equalization, OpenAI, Barcode Detection, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Camera Calibration, Florence-2 Model, Stitch Images, Blur Visualization, Roboflow Dataset Upload, Qwen2.5-VL, Triangle Visualization, Perspective Correction, SIFT, Icon Visualization, QR Code Detection, Pixel Color Count, Stability AI Image Generation, Label Visualization, Object Detection Model, Llama 3.2 Vision, Ellipse Visualization, CogVLM, Detections Stabilizer, VLM as Detector, SmolVLM2, Single-Label Classification Model, Line Counter Visualization, Florence-2 Model, SIFT Comparison, Image Convert Grayscale, Gaze Detection, Perception Encoder Embedding Model, Background Color Visualization, VLM as Classifier, Segment Anything 2 Model, Polygon Zone Visualization, Anthropic Claude, VLM as Detector, Detections Stitch, Byte Tracker, Polygon Visualization, Camera Focus, Dot Visualization, LMM For Classification, Template Matching, CLIP Embedding Model, Instance Segmentation Model, Circle Visualization, Bounding Box Visualization, Image Contours, OpenAI, Object Detection Model, OCR Model, Dominant Color, Halo Visualization, Reference Path Visualization, VLM as Classifier, Pixelate Visualization, EasyOCR, Stability AI Inpainting, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v2",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}
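Assuming an inference server is running locally, a workflow containing this step can be executed through the inference_sdk HTTP client. This is a minimal sketch: the server URL, API key placeholder, and the exact run_workflow keyword arguments are assumptions to verify against the inference_sdk documentation.

```python
# Minimal sketch of executing a workflow that contains the Image Slicer step
# against a locally running inference server. The URL, API key, and the
# run_workflow keyword arguments are assumptions - verify them in the
# inference_sdk documentation.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # assumed local inference server
    api_key="<YOUR_API_KEY>",
)

result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,  # e.g. a spec like the one sketched earlier on this page
    images={"image": "path/to/your/image.jpg"},
)
print(result)
```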
v1¶
Class: ImageSlicerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) that runs against the output images of this block. At the end, a Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data similar to what they encountered during training. The default slice size is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference; similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally for your specific data and model.
To learn more about SAHI, visit the Roboflow blog, which describes the technique in detail, although not in the context of Roboflow Workflows.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
slice_width | int | Width of each slice, in pixels. | ✅ |
slice_height | int | Height of each slice, in pixels. | ✅ |
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column indicates whether the property can be parametrized with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Grid Visualization, Ellipse Visualization, Image Blur, Image Preprocessing, Image Slicer, SIFT Comparison, Dynamic Crop, Absolute Static Crop, Color Visualization, Line Counter Visualization, Corner Visualization, Line Counter, Depth Estimation, SIFT Comparison, Stability AI Outpainting, Keypoint Visualization, Distance Measurement, Image Convert Grayscale, Trace Visualization, Clip Comparison, Background Color Visualization, QR Code Generator, Model Comparison Visualization, Pixel Color Count, Identify Changes, Mask Visualization, Image Slicer, Polygon Zone Visualization, Detections Consensus, Image Threshold, Camera Focus, Contrast Equalization, Polygon Visualization, Stability AI Inpainting, Line Counter, Dot Visualization, Template Matching, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Identify Outliers, Circle Visualization, Bounding Box Visualization, Camera Calibration, Blur Visualization, Image Contours, Stitch Images, Halo Visualization, Reference Path Visualization, Triangle Visualization, Pixelate Visualization, Perspective Correction, SIFT, Icon Visualization, Label Visualization, Stability AI Image Generation, Crop Visualization
- outputs: Image Blur, Image Preprocessing, Image Slicer, OpenAI, Instance Segmentation Model, Dynamic Crop, Multi-Label Classification Model, Roboflow Dataset Upload, LMM, Moondream2, Absolute Static Crop, Corner Visualization, Color Visualization, Google Gemini, Depth Estimation, Keypoint Detection Model, Stability AI Outpainting, Keypoint Visualization, Trace Visualization, Clip Comparison, Google Vision OCR, Keypoint Detection Model, Single-Label Classification Model, Time in Zone, Model Comparison Visualization, YOLO-World Model, Mask Visualization, Image Slicer, Clip Comparison, Multi-Label Classification Model, Buffer, Image Threshold, Contrast Equalization, OpenAI, Barcode Detection, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Camera Calibration, Florence-2 Model, Stitch Images, Blur Visualization, Roboflow Dataset Upload, Qwen2.5-VL, Triangle Visualization, Perspective Correction, SIFT, Icon Visualization, QR Code Detection, Pixel Color Count, Stability AI Image Generation, Label Visualization, Object Detection Model, Llama 3.2 Vision, Ellipse Visualization, CogVLM, Detections Stabilizer, VLM as Detector, SmolVLM2, Single-Label Classification Model, Line Counter Visualization, Florence-2 Model, SIFT Comparison, Image Convert Grayscale, Gaze Detection, Perception Encoder Embedding Model, Background Color Visualization, VLM as Classifier, Segment Anything 2 Model, Polygon Zone Visualization, Anthropic Claude, VLM as Detector, Detections Stitch, Byte Tracker, Polygon Visualization, Camera Focus, Dot Visualization, LMM For Classification, Template Matching, CLIP Embedding Model, Instance Segmentation Model, Circle Visualization, Bounding Box Visualization, Image Contours, OpenAI, Object Detection Model, OCR Model, Dominant Color, Halo Visualization, Reference Path Visualization, VLM as Classifier, Pixelate Visualization, EasyOCR, Stability AI Inpainting, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v1",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}