Grid Visualization¶
Class: GridVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.grid.v1.GridVisualizationBlockV1
The GridVisualization
block displays an array of images in a grid.
It automatically resizes each image to fit the specified width and
height. The first image is placed in the top-left corner, and each
subsequent image is added to the right of the previous one until the row
is full, after which the layout continues on the next row.
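The row-major tiling described above can be sketched in a few lines of NumPy. This is a minimal illustration of the behaviour, not the block's actual implementation: the near-square column count and nearest-neighbour resizing are assumptions made for the sketch.

```python
import numpy as np

def make_grid(images, width=2560, height=1440):
    """Tile images row-major onto a width x height canvas.

    A minimal sketch of the behaviour described above, not the block's
    actual implementation. Each image is resized (nearest-neighbour)
    into an equal-sized cell; cells fill rows left to right, starting
    from the top-left corner.
    """
    n = len(images)
    cols = int(np.ceil(np.sqrt(n)))   # near-square layout (assumption)
    rows = int(np.ceil(n / cols))
    cell_w, cell_h = width // cols, height // rows
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for i, img in enumerate(images):
        r, c = divmod(i, cols)        # row-major: left to right, top down
        # nearest-neighbour resize via integer index sampling
        ys = np.arange(cell_h) * img.shape[0] // cell_h
        xs = np.arange(cell_w) * img.shape[1] // cell_w
        canvas[r * cell_h:(r + 1) * cell_h,
               c * cell_w:(c + 1) * cell_w] = img[ys][:, xs]
    return canvas
```

For example, four 640×480 frames passed to `make_grid(frames, 2560, 1440)` come back as a single 1440×2560 canvas with two images per row; unused cells stay black.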
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/grid_visualization@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
width | int | Width of the output image. | ✅ |
height | int | Height of the output image. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values
available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Grid Visualization in version v1.
- inputs: Florence-2 Model, OpenAI, Clip Comparison, Google Gemini, SIFT Comparison, Line Counter, Template Matching, Size Measurement, Anthropic Claude, Dimension Collapse, Dynamic Zone, Distance Measurement, Buffer, Pixel Color Count, Llama 3.2 Vision, Image Contours, Perspective Correction
- outputs: Crop Visualization, Keypoint Detection Model, Ellipse Visualization, VLM as Classifier, Stability AI Inpainting, Stability AI Image Generation, Blur Visualization, Pixelate Visualization, Circle Visualization, YOLO-World Model, Model Comparison Visualization, Stability AI Outpainting, Single-Label Classification Model, Bounding Box Visualization, Background Color Visualization, Dominant Color, Llama 3.2 Vision, Image Contours, LMM, Object Detection Model, Label Visualization, Florence-2 Model, Triangle Visualization, Dynamic Crop, VLM as Detector, Detections Stitch, Google Vision OCR, Instance Segmentation Model, Keypoint Visualization, QR Code Detection, OCR Model, Halo Visualization, Multi-Label Classification Model, Corner Visualization, Line Counter Visualization, SmolVLM2, LMM For Classification, CLIP Embedding Model, Time in Zone, Perception Encoder Embedding Model, Template Matching, Dot Visualization, Clip Comparison, Anthropic Claude, Roboflow Dataset Upload, Icon Visualization, Depth Estimation, CogVLM, Polygon Zone Visualization, Stitch Images, Image Slicer, OpenAI, Relative Static Crop, Segment Anything 2 Model, Google Gemini, SIFT Comparison, Buffer, Pixel Color Count, Gaze Detection, Image Threshold, Perspective Correction, Byte Tracker, Camera Focus, Qwen2.5-VL, Classification Label Visualization, Reference Path Visualization, Color Visualization, Barcode Detection, Camera Calibration, Mask Visualization, SIFT, Polygon Visualization, Detections Stabilizer, Image Convert Grayscale, Moondream2, Image Blur, Trace Visualization, Absolute Static Crop, Image Preprocessing
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Grid Visualization
in version v1
has.
Bindings

- input
  - images (list_of_values): Images to visualize.
  - width (integer): Width of the output image.
  - height (integer): Height of the output image.
- output
  - image (image): Image in workflows.
Example JSON definition of step Grid Visualization in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/grid_visualization@v1",
    "images": "$steps.buffer.output",
    "width": 2560,
    "height": 1440
}
```
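A step definition like the one above lives inside a full workflow specification. The sketch below shows one plausible way to embed it; the surrounding structure (the "buffer" upstream step, input and output names) is illustrative only and taken from the selector in the example, not from this page.

```python
# A minimal, hypothetical workflow specification embedding the step above.
# The upstream step producing "$steps.buffer.output" is omitted; the input
# and output names here are illustrative assumptions, not part of the block.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        # ... upstream steps producing "$steps.buffer.output" go here ...
        {
            "name": "grid",
            "type": "roboflow_core/grid_visualization@v1",
            "images": "$steps.buffer.output",
            "width": 2560,
            "height": 1440,
        },
    ],
    "outputs": [
        # Expose the block's "image" output under a result field.
        {"type": "JsonField", "name": "grid", "selector": "$steps.grid.image"},
    ],
}
```

Downstream steps (or workflow outputs, as shown) reference the block's single output via the `$steps.<step_name>.image` selector.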