Image Slicer¶
v2¶
Class: ImageSlicerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v2.ImageSlicerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) running against the output images of this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, see the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
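The slicing geometry itself is simple arithmetic. Below is a hypothetical sketch (not the block's actual source) of how a slice grid can be derived from the slice size and overlap ratios, reflecting the v2 guarantees that every slice has exactly the requested size and no slice is emitted twice; all function names are illustrative:

```python
def axis_offsets(length, slice_len, overlap):
    """Slice origins along one axis, clamped so every slice fits the image."""
    stride = max(1, round(slice_len * (1 - overlap)))
    offsets = []
    x = 0
    while True:
        # clamp the last slice so it ends exactly at the image border,
        # keeping every slice the same size
        offsets.append(min(x, max(0, length - slice_len)))
        if x + slice_len >= length:
            break
        x += stride
    return sorted(set(offsets))  # drop duplicates produced by clamping


def slice_boxes(img_w, img_h, slice_w=640, slice_h=640, overlap_w=0.2, overlap_h=0.2):
    """All slice boxes as (x1, y1, x2, y2) tuples in pixel coordinates."""
    return [
        (x, y, x + slice_w, y + slice_h)
        for y in axis_offsets(img_h, slice_h, overlap_h)
        for x in axis_offsets(img_w, slice_w, overlap_w)
    ]
```

For a 1280×720 image with 640×640 slices and 0.2 overlap, this yields a 3×2 grid of equal-size, in-bounds slices, with the edge slices shifted inward rather than clipped.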
Changes compared to v1¶
- All crops generated by the slicer are of equal size
- No duplicate crops are created
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v2
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
slice_width | int | Width of each slice, in pixels. | ✅
slice_height | int | Height of each slice, in pixels. | ✅
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v2.
- inputs: Corner Visualization, Camera Focus, Polygon Zone Visualization, Circle Visualization, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Bounding Box Visualization, Depth Estimation, SIFT, Distance Measurement, Template Matching, Image Convert Grayscale, Halo Visualization, Grid Visualization, Polygon Visualization, Absolute Static Crop, Dot Visualization, Color Visualization, Label Visualization, Stability AI Outpainting, Detections Consensus, Crop Visualization, Identify Outliers, Perspective Correction, Stability AI Image Generation, Image Slicer, Image Threshold, Pixel Color Count, Image Preprocessing, Model Comparison Visualization, Identify Changes, SIFT Comparison, Dynamic Crop, Stitch Images, Image Contours, Mask Visualization, Clip Comparison, Pixelate Visualization, Camera Calibration, Line Counter, Reference Path Visualization, Image Blur, Line Counter Visualization, Background Color Visualization, Keypoint Visualization, Blur Visualization, Ellipse Visualization, Trace Visualization, Relative Static Crop
- outputs: YOLO-World Model, OpenAI, VLM as Detector, Keypoint Detection Model, Circle Visualization, Gaze Detection, Roboflow Dataset Upload, Perception Encoder Embedding Model, Depth Estimation, SIFT, Florence-2 Model, Buffer, Single-Label Classification Model, Template Matching, Detections Stitch, Instance Segmentation Model, Color Visualization, Object Detection Model, Perspective Correction, Image Slicer, Model Comparison Visualization, Clip Comparison, Stitch Images, Dynamic Crop, Moondream2, Image Contours, Pixelate Visualization, Llama 3.2 Vision, Byte Tracker, Camera Calibration, Reference Path Visualization, Time in Zone, Image Blur, CLIP Embedding Model, Blur Visualization, OCR Model, Ellipse Visualization, Trace Visualization, SmolVLM2, Polygon Zone Visualization, Corner Visualization, Google Gemini, Camera Focus, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Barcode Detection, Bounding Box Visualization, Detections Stabilizer, CogVLM, Image Convert Grayscale, Halo Visualization, LMM, Polygon Visualization, Absolute Static Crop, Dot Visualization, Label Visualization, Stability AI Outpainting, Crop Visualization, Google Vision OCR, Stability AI Image Generation, Pixel Color Count, Image Threshold, Image Preprocessing, VLM as Classifier, SIFT Comparison, Mask Visualization, Dominant Color, Segment Anything 2 Model, QR Code Detection, Line Counter Visualization, Background Color Visualization, Anthropic Claude, LMM For Classification, Multi-Label Classification Model, Keypoint Visualization, Qwen2.5-VL, Relative Static Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v2",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```
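The step above only produces slices; as described in the introduction, a full SAHI pipeline wires it into a detection model and then Detections Stitch. The following is a hypothetical minimal workflow definition sketched as a Python dict; the detection-model and stitch type identifiers and the model_id are illustrative assumptions, so verify them against each block's own documentation:

```python
# Hypothetical minimal SAHI workflow: slicer -> detector -> stitch.
# Every block identifier except the slicer's, and the model_id,
# are illustrative assumptions, not confirmed values.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/image_slicer@v2",
            "name": "slicer",
            "image": "$inputs.image",
            "slice_width": 640,
            "slice_height": 640,
            "overlap_ratio_width": 0.2,
            "overlap_ratio_height": 0.2,
        },
        {
            # assumed identifier for an object-detection block
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "detector",
            "image": "$steps.slicer.slices",  # run on each slice
            "model_id": "your-model/1",  # placeholder
        },
        {
            # assumed identifier for the Detections Stitch block
            "type": "roboflow_core/detections_stitch@v1",
            "name": "stitch",
            "reference_image": "$inputs.image",
            "predictions": "$steps.detector.predictions",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.stitch.predictions",
        }
    ],
}
```

The key wiring is that the detector consumes `$steps.slicer.slices` (one prediction per slice) while the stitch step merges those per-slice predictions back into the coordinate space of the original input image.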
v1¶
Class: ImageSlicerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.image_slicer.v1.ImageSlicerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows by implementing the first step of the procedure: cutting the input image into slices.
To use the block effectively, pair it with a detection model (object detection or instance segmentation) running against the output images of this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, see the Roboflow blog, which describes the technique in detail, though not in the context of Roboflow Workflows.
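To see why the v2 changes matter, here is a hypothetical sketch of naive grid slicing (not v1's actual source): without clamping slice origins to the image border, the last row or column of crops comes out smaller than the requested slice size, which is exactly what v2's equal-size guarantee removes. All names are illustrative:

```python
def naive_axis_offsets(length, slice_len, overlap):
    """Slice origins with a fixed stride; the final crop may be clipped."""
    stride = max(1, round(slice_len * (1 - overlap)))
    return list(range(0, length, stride))


# 1280 px wide image, 640 px slices, 0.2 overlap -> stride of 512 px
offsets = naive_axis_offsets(1280, 640, 0.2)
widths = [min(640, 1280 - x) for x in offsets]
# the crop starting at x=1024 only has 256 px left before the border,
# so the naive grid yields crops of unequal width
```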
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/image_slicer@v1
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
slice_width | int | Width of each slice, in pixels. | ✅
slice_height | int | Height of each slice, in pixels. | ✅
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Corner Visualization, Camera Focus, Polygon Zone Visualization, Circle Visualization, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Bounding Box Visualization, Depth Estimation, SIFT, Distance Measurement, Template Matching, Image Convert Grayscale, Halo Visualization, Grid Visualization, Polygon Visualization, Absolute Static Crop, Dot Visualization, Color Visualization, Label Visualization, Stability AI Outpainting, Detections Consensus, Crop Visualization, Identify Outliers, Perspective Correction, Stability AI Image Generation, Image Slicer, Image Threshold, Pixel Color Count, Image Preprocessing, Model Comparison Visualization, Identify Changes, SIFT Comparison, Dynamic Crop, Stitch Images, Image Contours, Mask Visualization, Clip Comparison, Pixelate Visualization, Camera Calibration, Line Counter, Reference Path Visualization, Image Blur, Line Counter Visualization, Background Color Visualization, Keypoint Visualization, Blur Visualization, Ellipse Visualization, Trace Visualization, Relative Static Crop
- outputs: YOLO-World Model, OpenAI, VLM as Detector, Keypoint Detection Model, Circle Visualization, Gaze Detection, Roboflow Dataset Upload, Perception Encoder Embedding Model, Depth Estimation, SIFT, Florence-2 Model, Buffer, Single-Label Classification Model, Template Matching, Detections Stitch, Instance Segmentation Model, Color Visualization, Object Detection Model, Perspective Correction, Image Slicer, Model Comparison Visualization, Clip Comparison, Stitch Images, Dynamic Crop, Moondream2, Image Contours, Pixelate Visualization, Llama 3.2 Vision, Byte Tracker, Camera Calibration, Reference Path Visualization, Time in Zone, Image Blur, CLIP Embedding Model, Blur Visualization, OCR Model, Ellipse Visualization, Trace Visualization, SmolVLM2, Polygon Zone Visualization, Corner Visualization, Google Gemini, Camera Focus, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Barcode Detection, Bounding Box Visualization, Detections Stabilizer, CogVLM, Image Convert Grayscale, Halo Visualization, LMM, Polygon Visualization, Absolute Static Crop, Dot Visualization, Label Visualization, Stability AI Outpainting, Crop Visualization, Google Vision OCR, Stability AI Image Generation, Pixel Color Count, Image Threshold, Image Preprocessing, VLM as Classifier, SIFT Comparison, Mask Visualization, Dominant Color, Segment Anything 2 Model, QR Code Detection, Line Counter Visualization, Background Color Visualization, Anthropic Claude, LMM For Classification, Multi-Label Classification Model, Keypoint Visualization, Qwen2.5-VL, Relative Static Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/image_slicer@v1",
    "image": "$inputs.image",
    "slice_width": 320,
    "slice_height": 320,
    "overlap_ratio_width": 0.2,
    "overlap_ratio_height": 0.2
}
```