Image Slicer¶
Version v1
This block enables the Slicing Aided Hyper Inference (SAHI) technique in Workflows, providing an implementation of the first step of the procedure - making slices out of the input image.
To use the block effectively, it must be paired with a detection model (object detection or instance segmentation) running against the output images from this block. At the end, the Detections Stitch block must be applied on top of the predictions to merge them as if the prediction had been made against the input image rather than its slices. A minimal end-to-end sketch of such a pipeline is shown at the end of this introduction.
We recommend adjusting the size of slices to match the model's input size and the scale of objects in the dataset the model was trained on. Models generally perform best on data that is similar to what they encountered during training. The default size of slices is 640, but this might not be optimal if the model's input size is 320, as each slice would be downsized by a factor of two during inference. Similarly, if the model's input size is 1280, each slice will be artificially up-scaled. The best setup should be determined experimentally based on the specific data and model you are using.
To learn more about SAHI, please visit the Roboflow blog, which describes the technique in detail, although not in the context of Roboflow Workflows.
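Below is a minimal sketch of a full workflow definition chaining the three steps - slicing, detection, stitching. The type identifiers and parameter names of the detection and stitching steps (roboflow_core/roboflow_object_detection_model@v1, roboflow_core/detections_stitch@v1, images, reference_image, predictions), as well as the model_id value, are illustrative assumptions rather than values taken from this page; verify them against the documentation of those blocks before use.
{
  "version": "1.0",
  "inputs": [
    {"type": "WorkflowImage", "name": "image"}
  ],
  "steps": [
    {
      "type": "roboflow_core/image_slicer@v1",
      "name": "slicer",
      "image": "$inputs.image",
      "slice_width": 640,
      "slice_height": 640,
      "overlap_ratio_width": 0.2,
      "overlap_ratio_height": 0.2
    },
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "detection",
      "images": "$steps.slicer.slices",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/detections_stitch@v1",
      "name": "stitch",
      "reference_image": "$inputs.image",
      "predictions": "$steps.detection.predictions"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "predictions",
      "selector": "$steps.stitch.predictions"
    }
  ]
}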
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/image_slicer@v1 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | The unique name of this step. | ❌ |
slice_width | int | Width of each slice, in pixels. | ✅ |
slice_height | int | Height of each slice, in pixels. | ✅ |
overlap_ratio_width | float | Overlap ratio between consecutive slices in the width dimension. | ✅ |
overlap_ratio_height | float | Overlap ratio between consecutive slices in the height dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
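For illustration, the snippet below parametrises the slice dimensions with a workflow input instead of literal values. The WorkflowParameter input type and the slice_size parameter name are assumptions used only for this sketch; the essential part is the $inputs.slice_size selector standing in for a number.
{
  "inputs": [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "slice_size", "default_value": 640}
  ],
  "steps": [
    {
      "type": "roboflow_core/image_slicer@v1",
      "name": "slicer",
      "image": "$inputs.image",
      "slice_width": "$inputs.slice_size",
      "slice_height": "$inputs.slice_size",
      "overlap_ratio_width": 0.2,
      "overlap_ratio_height": 0.2
    }
  ]
}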
Available Connections¶
Check what blocks you can connect to Image Slicer in version v1.
- inputs: Triangle Visualization, Image Blur, Image Preprocessing, Line Counter Visualization, Keypoint Visualization, Blur Visualization, Mask Visualization, Background Color Visualization, Model Comparison Visualization, Ellipse Visualization, Pixelate Visualization, Corner Visualization, Camera Focus, Reference Path Visualization, Relative Static Crop, Image Contours, Image Convert Grayscale, Crop Visualization, Stitch Images, Label Visualization, Polygon Visualization, Stability AI Inpainting, Dynamic Crop, Trace Visualization, Polygon Zone Visualization, SIFT, Image Threshold, Color Visualization, Image Slicer, Perspective Correction, Absolute Static Crop, Bounding Box Visualization, Halo Visualization, SIFT Comparison, Circle Visualization, Dot Visualization
- outputs: OpenAI, Clip Comparison, Keypoint Detection Model, Florence-2 Model, Line Counter Visualization, Barcode Detection, Model Comparison Visualization, Background Color Visualization, Ellipse Visualization, Corner Visualization, QR Code Detection, Anthropic Claude, Object Detection Model, Time in zone, Template Matching, YOLO-World Model, Google Vision OCR, Pixel Color Count, Stability AI Inpainting, Single-Label Classification Model, SIFT, Dominant Color, Color Visualization, Image Slicer, Bounding Box Visualization, Perspective Correction, Halo Visualization, LMM For Classification, Roboflow Dataset Upload, OCR Model, Mask Visualization, Dot Visualization, Triangle Visualization, Image Blur, Image Preprocessing, Keypoint Visualization, Blur Visualization, Pixelate Visualization, Camera Focus, VLM as Detector, Multi-Label Classification Model, Detections Stitch, Reference Path Visualization, VLM as Classifier, Relative Static Crop, Image Contours, OpenAI, Crop Visualization, Stitch Images, Image Convert Grayscale, Google Gemini, Clip Comparison, Label Visualization, Segment Anything 2 Model, Polygon Visualization, Dynamic Crop, Trace Visualization, Polygon Zone Visualization, CogVLM, Image Threshold, LMM, Absolute Static Crop, SIFT Comparison, Roboflow Dataset Upload, Circle Visualization, Instance Segmentation Model
The available connections depend on the block's binding kinds. Check what binding kinds Image Slicer in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - slice_width (integer): Width of each slice, in pixels.
  - slice_height (integer): Height of each slice, in pixels.
  - overlap_ratio_width (float_zero_to_one): Overlap ratio between consecutive slices in the width dimension.
  - overlap_ratio_height (float_zero_to_one): Overlap ratio between consecutive slices in the height dimension.
- output
  - slices (image): Image in workflows.
Example JSON definition of step Image Slicer in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/image_slicer@v1",
"image": "$inputs.image",
"slice_width": 320,
"slice_height": 320,
"overlap_ratio_width": 0.2,
"overlap_ratio_height": 0.2
}