Label Visualization¶
Class: LabelVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.label.v1.LabelVisualizationBlockV1
Draw text labels on detected objects with customizable content, position, styling, and background colors to display information like class names, confidence scores, tracking IDs, or other detection metadata.
How This Block Works¶
This block takes an image and detection predictions and draws text labels on each detected object. The block:
- Takes an image and predictions as input
- Extracts label text for each detection based on the selected text option (class name, confidence, tracker ID, dimensions, area, time in zone, or index)
- Determines label position based on the selected anchor point (center, corners, edges, or center of mass)
- Applies background color styling based on the selected color palette, with colors assigned by class, index, or track ID
- Renders text labels with customizable text color, scale, thickness, padding, and border radius using Supervision's LabelAnnotator
- Returns an annotated image with text labels overlaid on the original image
The block supports various text content options, including class names, confidence scores, a combination of class and confidence, tracker IDs (for tracked objects), time in zone (for zone analysis), object dimensions (center coordinates and width/height), area, or detection index. Labels are rendered with colored backgrounds that match each object's assigned color from the palette, and text styling (color, size, thickness) can be customized for optimal visibility. Labels can be positioned at any anchor point relative to each detection, allowing flexible placement for different visualization needs.
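As a rough mental model, the sketch below approximates the rendering path using Supervision's LabelAnnotator directly, outside of a workflow. It is a hedged illustration rather than the block's actual implementation: the model ID, image path, and the "class + confidence" label formatting are assumptions.

```python
# Minimal sketch approximating what this block does with Supervision's
# LabelAnnotator (referenced in the description above).  The model ID,
# image path, and label formatting are assumptions for illustration.
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread("frame.jpg")                       # hypothetical input image
model = get_model("yolov8n-640")                      # hypothetical detection model
detections = sv.Detections.from_inference(model.infer(image)[0])

annotator = sv.LabelAnnotator(
    color=sv.ColorPalette.DEFAULT,                    # ~ color_palette="DEFAULT"
    color_lookup=sv.ColorLookup.CLASS,                # ~ color_axis="CLASS"
    text_color=sv.Color.WHITE,                        # ~ text_color="WHITE"
    text_scale=1.0,
    text_thickness=1,
    text_padding=10,
    text_position=sv.Position.CENTER,                 # ~ text_position="CENTER"
    border_radius=0,
)

# "Class and Confidence"-style labels built from the detections themselves.
labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(detections.data["class_name"], detections.confidence)
]

# Annotating a copy mirrors copy_image=True, preserving the original image.
annotated = annotator.annotate(scene=image.copy(), detections=detections, labels=labels)
cv2.imwrite("labeled.jpg", annotated)
```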
Common Use Cases¶
- Information Display on Detections: Add informative text labels showing class names, confidence scores, or other metadata directly on detected objects for quick identification and validation
- Model Performance Visualization: Display confidence scores or class predictions on detected objects to visualize model certainty, identify low-confidence detections, and validate model performance
- Object Tracking Visualization: Show tracker IDs on tracked objects to visualize object tracking across frames, monitor persistent object identities, or debug tracking algorithms
- Zone Analysis and Monitoring: Display "Time In Zone" labels on objects to visualize how long objects have been in specific zones for occupancy monitoring, dwell time analysis, or compliance tracking
- Spatial Information Display: Show object dimensions (center coordinates, width, height) or area measurements directly on detections for spatial analysis, measurement workflows, or quality control
- Professional Presentation and Reporting: Create clean, informative visualizations with labeled detections for reports, dashboards, or presentations that combine visual results with textual information
Connecting to Other Blocks¶
The annotated image from this block can be connected to (a chaining example is sketched after this list):
- Other visualization blocks (e.g., Bounding Box Visualization, Polygon Visualization, Dot Visualization) to combine text labels with geometric annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save annotated images with labels for documentation, reporting, or analysis
- Webhook blocks to send visualized results with labels to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with labels as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with labels for live monitoring, tracking visualization, or post-processing analysis
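For example, a typical pattern is to layer labels on top of boxes by feeding one visualization block's output image into the next. The sketch below expresses this as workflow step definitions in Python dict form; the step names, the model step, and the Bounding Box Visualization type identifier are assumptions for illustration.

```python
# Hedged sketch of chaining visualization steps: boxes first, then labels
# drawn on the already-annotated image.  Step names and the non-label block
# type identifiers are assumptions for illustration.
steps = [
    {
        "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
        "name": "model",
        "image": "$inputs.image",
        "model_id": "$inputs.model_id",
    },
    {
        "type": "roboflow_core/bounding_box_visualization@v1",       # assumed identifier
        "name": "bbox_vis",
        "image": "$inputs.image",
        "predictions": "$steps.model.predictions",
    },
    {
        "type": "roboflow_core/label_visualization@v1",
        "name": "label_vis",
        "image": "$steps.bbox_vis.image",        # stack labels on top of the drawn boxes
        "predictions": "$steps.model.predictions",
        "copy_image": True,
    },
]
```

Downstream sink, webhook, or notification blocks would then reference the final annotated image via a selector such as $steps.label_vis.image.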
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/label_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `copy_image` | `bool` | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| `color_palette` | `str` | Select a color palette for the visualised elements. | ✅ |
| `palette_size` | `int` | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| `custom_colors` | `List[str]` | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| `color_axis` | `str` | Choose how bounding box colors are assigned. | ✅ |
| `text` | `str` | Content to display in text labels. Options: 'Class' (class name), 'Confidence' (confidence score), 'Class and Confidence' (both), 'Tracker Id' (tracking ID for tracked objects), 'Time In Zone' (time spent in zone), 'Dimensions' (center coordinates and width x height), 'Area' (object area in pixels), or 'Index' (detection index). | ✅ |
| `text_position` | `str` | Anchor position for placing labels relative to each detection's bounding box. Options include: CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object). | ✅ |
| `text_color` | `str` | Color of the label text. Can be a color name (e.g., 'WHITE', 'BLACK') or color code in HEX format (e.g., '#FFFFFF') or RGB format (e.g., 'rgb(255, 255, 255)'). | ✅ |
| `text_scale` | `float` | Scale factor for text size. Higher values create larger text. Default is 1.0. | ✅ |
| `text_thickness` | `int` | Thickness of text characters in pixels. Higher values create bolder, thicker text for better visibility. | ✅ |
| `text_padding` | `int` | Padding around the text in pixels. Controls the spacing between the text and the label background border. | ✅ |
| `border_radius` | `int` | Border radius of the label background in pixels. Set to 0 for square corners. Higher values create more rounded corners for a softer appearance. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info. An illustrative combination of several of these properties is sketched below.
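The hedged fragment below configures tracker-ID labels anchored at the top-left of each box with track-based coloring, following the option names described in the table above. The step name, the upstream tracker selector, and the specific values are illustrative assumptions, not defaults.

```python
# Illustrative property combination for labelling tracked objects.
# The step name and the "$steps.byte_tracker.tracked_detections" selector
# are assumptions; option strings follow the property table above.
tracked_label_step = {
    "type": "roboflow_core/label_visualization@v1",
    "name": "tracking_labels",
    "image": "$inputs.image",
    "predictions": "$steps.byte_tracker.tracked_detections",
    "text": "Tracker Id",          # show tracking IDs instead of class names
    "text_position": "TOP_LEFT",   # anchor the label at the box's top-left corner
    "color_axis": "TRACK",         # color backgrounds per track rather than per class
    "text_color": "BLACK",
    "text_scale": 0.8,
    "text_padding": 6,
    "border_radius": 4,
}
```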
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Label Visualization in version v1.
- inputs:
Mask Visualization,Classification Label Visualization,Detections Consensus,Detections Merge,Instance Segmentation Model,Webhook Sink,Multi-Label Classification Model,Email Notification,QR Code Generator,VLM As Detector,LMM,SAM 3,Detection Offset,Corner Visualization,Image Convert Grayscale,Stability AI Outpainting,Segment Anything 2 Model,Halo Visualization,JSON Parser,Object Detection Model,Trace Visualization,Google Vision OCR,Instance Segmentation Model,Clip Comparison,CSV Formatter,Text Display,Stitch Images,Google Gemini,Local File Sink,Slack Notification,VLM As Classifier,Roboflow Dataset Upload,PTZ Tracking (ONVIF),Color Visualization,Dot Visualization,Polygon Visualization,Object Detection Model,Anthropic Claude,Buffer,Byte Tracker,Contrast Equalization,Identify Changes,Detections Classes Replacement,Dimension Collapse,Velocity,Moondream2,SIFT Comparison,Halo Visualization,Florence-2 Model,Blur Visualization,Label Visualization,Twilio SMS/MMS Notification,Ellipse Visualization,OpenAI,SIFT,Model Monitoring Inference Aggregator,Single-Label Classification Model,Detections List Roll-Up,OpenAI,Image Threshold,Background Color Visualization,Model Comparison Visualization,Size Measurement,OpenAI,Keypoint Detection Model,Gaze Detection,Polygon Visualization,Twilio SMS Notification,SAM 3,Bounding Box Visualization,OCR Model,Overlap Filter,Icon Visualization,Time in Zone,Google Gemini,Florence-2 Model,Roboflow Dataset Upload,Anthropic Claude,Dynamic Zone,Dynamic Crop,VLM As Detector,Google Gemini,Path Deviation,Image Blur,Line Counter,Byte Tracker,Stability AI Inpainting,Template Matching,Image Contours,Path Deviation,Morphological Transformation,Triangle Visualization,Bounding Rectangle,Detections Stitch,Relative Static Crop,Detections Filter,Camera Calibration,Grid Visualization,Detections Stabilizer,Camera Focus,Image Slicer,Detections Combine,LMM For Classification,Line Counter Visualization,Keypoint Detection Model,Llama 3.2 Vision,Distance Measurement,SIFT Comparison,Camera Focus,Time in Zone,Background Subtraction,Image Slicer,Circle Visualization,Seg Preview,Identify Outliers,Clip Comparison,Email Notification,Byte Tracker,Image Preprocessing,SAM 3,Depth Estimation,Cosine Similarity,Time in Zone,Line Counter,CogVLM,Absolute Static Crop,Roboflow Custom Metadata,EasyOCR,Stitch OCR Detections,Perspective Correction,Anthropic Claude,Pixelate Visualization,Stability AI Image Generation,Reference Path Visualization,Keypoint Visualization,VLM As Classifier,Detection Event Log,Polygon Zone Visualization,YOLO-World Model,Stitch OCR Detections,Crop Visualization,Pixel Color Count,Motion Detection,OpenAI,Detections Transformation
- outputs:
Anthropic Claude,Mask Visualization,Classification Label Visualization,Instance Segmentation Model,Multi-Label Classification Model,Email Notification,Dynamic Crop,CLIP Embedding Model,VLM As Detector,VLM As Detector,Google Gemini,Multi-Label Classification Model,LMM,SAM 3,Image Blur,Corner Visualization,Image Convert Grayscale,Byte Tracker,Stability AI Outpainting,SmolVLM2,Segment Anything 2 Model,Halo Visualization,Stability AI Inpainting,Object Detection Model,Template Matching,Single-Label Classification Model,Image Contours,Trace Visualization,Google Vision OCR,Morphological Transformation,Triangle Visualization,Instance Segmentation Model,Clip Comparison,Detections Stitch,Relative Static Crop,Text Display,Stitch Images,Google Gemini,Camera Calibration,Detections Stabilizer,VLM As Classifier,Roboflow Dataset Upload,Camera Focus,Color Visualization,Dot Visualization,Image Slicer,Polygon Visualization,Object Detection Model,Anthropic Claude,LMM For Classification,Line Counter Visualization,Keypoint Detection Model,Buffer,Llama 3.2 Vision,Contrast Equalization,SIFT Comparison,Camera Focus,Perception Encoder Embedding Model,Dominant Color,Time in Zone,Background Subtraction,Image Slicer,Circle Visualization,Moondream2,Seg Preview,Halo Visualization,Florence-2 Model,Blur Visualization,Qwen3-VL,Twilio SMS/MMS Notification,Label Visualization,Barcode Detection,Clip Comparison,Ellipse Visualization,OpenAI,QR Code Detection,SIFT,Image Preprocessing,SAM 3,Single-Label Classification Model,OpenAI,Image Threshold,Background Color Visualization,Model Comparison Visualization,Depth Estimation,OpenAI,Motion Detection,Keypoint Detection Model,CogVLM,Absolute Static Crop,Gaze Detection,EasyOCR,Perspective Correction,Qwen2.5-VL,Anthropic Claude,Pixelate Visualization,Reference Path Visualization,Stability AI Image Generation,Keypoint Visualization,SAM 3,Polygon Visualization,VLM As Classifier,Bounding Box Visualization,Polygon Zone Visualization,OCR Model,YOLO-World Model,Icon Visualization,Crop Visualization,Pixel Color Count,Google Gemini,OpenAI,Florence-2 Model,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Label Visualization in version v1 has.
Bindings
- input
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[instance_segmentation_prediction, object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction]): Model predictions to visualize.
    - color_palette (string): Select a color palette for the visualised elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - text (string): Content to display in text labels. Options: 'Class' (class name), 'Confidence' (confidence score), 'Class and Confidence' (both), 'Tracker Id' (tracking ID for tracked objects), 'Time In Zone' (time spent in zone), 'Dimensions' (center coordinates and width x height), 'Area' (object area in pixels), or 'Index' (detection index).
    - text_position (string): Anchor position for placing labels relative to each detection's bounding box. Options include: CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object).
    - text_color (string): Color of the label text. Can be a color name (e.g., 'WHITE', 'BLACK') or color code in HEX format (e.g., '#FFFFFF') or RGB format (e.g., 'rgb(255, 255, 255)').
    - text_scale (float): Scale factor for text size. Higher values create larger text. Default is 1.0.
    - text_thickness (integer): Thickness of text characters in pixels. Higher values create bolder, thicker text for better visibility.
    - text_padding (integer): Padding around the text in pixels. Controls the spacing between the text and the label background border.
    - border_radius (integer): Border radius of the label background in pixels. Set to 0 for square corners. Higher values create more rounded corners for a softer appearance.
- output
    - image (image): The annotated image with text labels overlaid on the input image.
Example JSON definition of step Label Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/label_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.object_detection_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"text": "LABEL",
"text_position": "CENTER",
"text_color": "WHITE",
"text_scale": 1.0,
"text_thickness": 1,
"text_padding": 10,
"border_radius": 0
}
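If the step is embedded in a full workflow specification, it could be executed remotely with the inference SDK along the lines below. The API URL, API key placeholder, model step, model ID, and output wiring are illustrative assumptions rather than a canonical recipe.

```python
# Hedged sketch: running a workflow specification that includes the label
# visualization step through the inference SDK.  The URL, API key, model
# step, model ID, and output wiring are illustrative assumptions.
from inference_sdk import InferenceHTTPClient

specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",                                   # assumed model ID
        },
        {
            "name": "label_vis",
            "type": "roboflow_core/label_visualization@v1",
            "image": "$inputs.image",
            "predictions": "$steps.object_detection_model.predictions",
            "text": "Class and Confidence",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "annotated_image", "selector": "$steps.label_vis.image"},
    ],
}

client = InferenceHTTPClient(api_url="https://detect.roboflow.com", api_key="<YOUR_API_KEY>")
result = client.run_workflow(specification=specification, images={"image": "frame.jpg"})
```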