Grid Visualization¶
Class: GridVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.grid.v1.GridVisualizationBlockV1
The GridVisualization
block displays an array of images in a grid.
It automatically resizes the images to fit the specified width and
height. The first image is placed in the top-left corner, and each subsequent
image is added to the right of the previous one until the row is full, at which point a new row begins.
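The row-major placement described above can be sketched as follows. This is an illustrative reconstruction, not the block's actual source: the function name and the near-square grid heuristic (`ceil(sqrt(n))` columns) are assumptions.

```python
import math

def grid_cells(n_images, width, height):
    """Compute (x, y, cell_w, cell_h) for each image in a row-major grid.

    Illustrative sketch only: assumes a near-square grid with
    ceil(sqrt(n)) columns; the real block may partition differently.
    """
    if n_images == 0:
        return []
    cols = math.ceil(math.sqrt(n_images))
    rows = math.ceil(n_images / cols)
    cell_w, cell_h = width // cols, height // rows
    cells = []
    for i in range(n_images):
        row, col = divmod(i, cols)  # fill left-to-right, then wrap to next row
        cells.append((col * cell_w, row * cell_h, cell_w, cell_h))
    return cells
```

With four images and a 2560×1440 canvas, each cell is 1280×720 and the first image occupies the top-left cell.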
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/grid_visualization@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
width | int | Width of the output image. | ✅ |
height | int | Height of the output image. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values
available at workflow runtime. See Bindings for more info.
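For example, a parametrisable property such as `width` can be bound to a workflow input instead of a literal value. The input name `grid_width` below is illustrative:

```python
# Step definition with `width` bound to a runtime input rather than a literal.
# The input name "grid_width" is illustrative, not prescribed by the block.
step = {
    "name": "grid",
    "type": "roboflow_core/grid_visualization@v1",
    "images": "$steps.buffer.output",
    "width": "$inputs.grid_width",  # resolved at runtime from workflow inputs
    "height": 1440,
}
```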
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Grid Visualization in version v1.
- inputs: Anthropic Claude, OpenAI, Llama 3.2 Vision, SIFT Comparison, Florence-2 Model, Distance Measurement, Clip Comparison, Dimension Collapse, Line Counter, Google Gemini, Image Contours, Size Measurement, Clip Comparison, Line Counter, Buffer, SIFT Comparison, Perspective Correction, Pixel Color Count, Dynamic Zone, Florence-2 Model, OpenAI, Template Matching
- outputs: Keypoint Detection Model, CogVLM, OpenAI, Anthropic Claude, Google Vision OCR, Image Convert Grayscale, Gaze Detection, Florence-2 Model, SmolVLM2, Pixelate Visualization, Single-Label Classification Model, SIFT, Qwen2.5-VL, VLM as Detector, OpenAI, Moondream2, YOLO-World Model, Stability AI Image Generation, Object Detection Model, Mask Visualization, Image Slicer, Barcode Detection, Triangle Visualization, VLM as Detector, Polygon Zone Visualization, Model Comparison Visualization, Crop Visualization, LMM, Classification Label Visualization, Segment Anything 2 Model, Keypoint Detection Model, Reference Path Visualization, Multi-Label Classification Model, Google Gemini, Bounding Box Visualization, Image Contours, Circle Visualization, Perspective Correction, Pixel Color Count, Polygon Visualization, VLM as Classifier, Instance Segmentation Model, Trace Visualization, Color Visualization, LMM For Classification, Image Threshold, Detections Stitch, SIFT Comparison, Absolute Static Crop, Clip Comparison, Line Counter Visualization, Stitch Images, Multi-Label Classification Model, QR Code Detection, Dot Visualization, Instance Segmentation Model, Background Color Visualization, Florence-2 Model, Detections Stabilizer, Image Slicer, Template Matching, Roboflow Dataset Upload, Roboflow Dataset Upload, Image Blur, Camera Focus, CLIP Embedding Model, Object Detection Model, Stability AI Inpainting, Blur Visualization, Label Visualization, Depth Estimation, Image Preprocessing, Llama 3.2 Vision, Byte Tracker, Ellipse Visualization, VLM as Classifier, Halo Visualization, Corner Visualization, Camera Calibration, Clip Comparison, Time in Zone, Buffer, Dynamic Crop, Single-Label Classification Model, Relative Static Crop, OpenAI, Keypoint Visualization, Dominant Color, OCR Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Grid Visualization in version v1 has.
Bindings
- input:
  - images (list_of_values): Images to visualize.
  - width (integer): Width of the output image.
  - height (integer): Height of the output image.
- output:
  - image (image): Image in workflows.
Example JSON definition of step Grid Visualization in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/grid_visualization@v1",
    "images": "$steps.buffer.output",
    "width": 2560,
    "height": 1440
}
```
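A complete workflow definition embedding this step might look like the following sketch, built as a Python dict. The upstream Buffer step's parameter names (`data`, `length`) and the input/output names are assumptions for illustration only; consult the Buffer block's own documentation for its actual properties.

```python
# Hypothetical end-to-end workflow: a Buffer step accumulates recent frames,
# and the Grid Visualization step tiles them into one image.
workflow = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        # Assumed Buffer configuration -- parameter names are illustrative.
        {
            "name": "buffer",
            "type": "roboflow_core/buffer@v1",
            "data": "$inputs.image",
            "length": 9,
        },
        # The grid step from the example above.
        {
            "name": "grid",
            "type": "roboflow_core/grid_visualization@v1",
            "images": "$steps.buffer.output",
            "width": 2560,
            "height": 1440,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "grid_image", "selector": "$steps.grid.image"},
    ],
}
```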