Grid Visualization¶
Class: GridVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.grid.v1.GridVisualizationBlockV1
The GridVisualization block displays an array of images in a grid.
It automatically resizes the images to fit within the specified width and
height. The first image is placed in the top-left corner, and each subsequent
image is added to the right of the previous one until the row is full.
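The actual layout logic lives in the source module listed above; the snippet below is only a rough sketch of the idea (the `tile_images` helper and its near-square grid sizing are illustrative assumptions, not the block's real code):

```python
# Rough sketch of grid tiling -- illustrative only, not the block's implementation.
import math

import cv2
import numpy as np


def tile_images(images: list[np.ndarray], width: int, height: int) -> np.ndarray:
    """Resize each image to one grid cell and paste cells left to right, row by row."""
    cols = math.ceil(math.sqrt(len(images)))  # assume a near-square grid
    rows = math.ceil(len(images) / cols)
    cell_w, cell_h = width // cols, height // rows
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for i, image in enumerate(images):
        row, col = divmod(i, cols)  # fill each row before moving down
        cell = cv2.resize(image, (cell_w, cell_h))
        canvas[row * cell_h:(row + 1) * cell_h, col * cell_w:(col + 1) * cell_w] = cell
    return canvas
```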
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/grid_visualization@v1 to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| width | int | Width of the output image. | ✅ |
| height | int | Height of the output image. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
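For example, width and height (both marked ✅) could be bound to workflow parameters instead of hard-coded values. The sketch below assumes a workflow specification written as a Python dict, with input parameters named grid_width and grid_height declared in the workflow's inputs section (both names are illustrative):

```python
# Sketch of a parametrised step definition; grid_width / grid_height are
# hypothetical workflow inputs that must be declared elsewhere in the spec.
grid_step = {
    "name": "grid_visualization",
    "type": "roboflow_core/grid_visualization@v1",
    "images": "$steps.buffer.output",
    "width": "$inputs.grid_width",    # resolved at runtime
    "height": "$inputs.grid_height",  # resolved at runtime
}
```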
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Grid Visualization in version v1.
- inputs:
Buffer,Florence-2 Model,Clip Comparison,OpenAI,SIFT Comparison,Line Counter,Image Contours,OpenAI,Google Gemini,Perspective Correction,Anthropic Claude,Llama 3.2 Vision,Clip Comparison,Google Gemini,Dynamic Zone,Pixel Color Count,Line Counter,Template Matching,Size Measurement,Distance Measurement,Dimension Collapse,Florence-2 Model,SIFT Comparison
- outputs:
VLM as Detector,Google Vision OCR,SAM 3,Classification Label Visualization,Detections Stabilizer,Circle Visualization,Image Contours,Relative Static Crop,Image Preprocessing,LMM For Classification,VLM as Classifier,Ellipse Visualization,Stitch Images,Triangle Visualization,Stability AI Inpainting,Image Slicer,VLM as Classifier,Background Color Visualization,Segment Anything 2 Model,Template Matching,Moondream2,OCR Model,Dot Visualization,Florence-2 Model,SIFT,Morphological Transformation,EasyOCR,Gaze Detection,Halo Visualization,Reference Path Visualization,SIFT Comparison,Buffer,Polygon Visualization,Image Slicer,Florence-2 Model,Clip Comparison,Perception Encoder Embedding Model,Instance Segmentation Model,OpenAI,Byte Tracker,Color Visualization,Image Convert Grayscale,Object Detection Model,Keypoint Detection Model,Google Gemini,Label Visualization,Email Notification,Llama 3.2 Vision,Trace Visualization,QR Code Detection,YOLO-World Model,Corner Visualization,Mask Visualization,Time in Zone,CogVLM,Stability AI Outpainting,OpenAI,Detections Stitch,Barcode Detection,Blur Visualization,Dominant Color,Crop Visualization,VLM as Detector,Single-Label Classification Model,OpenAI,Perspective Correction,Clip Comparison,Single-Label Classification Model,Absolute Static Crop,Seg Preview,Contrast Equalization,Roboflow Dataset Upload,Roboflow Dataset Upload,Polygon Zone Visualization,CLIP Embedding Model,Stability AI Image Generation,Depth Estimation,Bounding Box Visualization,Camera Focus,Line Counter Visualization,Instance Segmentation Model,Multi-Label Classification Model,Icon Visualization,Image Blur,Pixelate Visualization,Image Threshold,Keypoint Detection Model,Anthropic Claude,LMM,Google Gemini,Multi-Label Classification Model,Pixel Color Count,SmolVLM2,Dynamic Crop,Qwen2.5-VL,Model Comparison Visualization,Camera Calibration,Keypoint Visualization,Object Detection Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Grid Visualization in version v1 has.
Bindings
- input
    - images (list_of_values): Images to visualize.
    - width (integer): Width of the output image.
    - height (integer): Height of the output image.
- output
    - image (image): Image in workflows.
Example JSON definition of step Grid Visualization in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/grid_visualization@v1",
    "images": "$steps.buffer.output",
    "width": 2560,
    "height": 1440
}
```
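Downstream blocks, or the workflow's outputs section, can reference this step's single image output through a $steps selector. A minimal sketch, assuming the step above was named grid_visualization and that the workflow exposes outputs as JsonField entries:

```python
# Sketch: exposing the grid image as a workflow output. The selector format
# $steps.<step_name>.<output_name> follows Workflows conventions; verify the
# JsonField output entry against your inference version.
workflow_outputs = [
    {
        "type": "JsonField",
        "name": "grid_image",
        "selector": "$steps.grid_visualization.image",
    }
]
```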