Dot Visualization¶
Class: DotVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.dot.v1.DotVisualizationBlockV1
Draw circular dots on an image to mark specific points on detected objects, with customizable position, size, color, and outline styling.
How This Block Works¶
This block takes an image and detection predictions and draws circular dot markers at specified anchor positions on each detected object. The block:
- Takes an image and predictions as input
- Determines the dot position for each detection based on the selected anchor point (center, corners, edges, or center of mass)
- Applies color styling based on the selected color palette, with colors assigned by class, index, or track ID
- Draws circular dots with the specified radius and optional outline thickness using Supervision's DotAnnotator
- Returns an annotated image with dots overlaid on the original image
The block supports various position options including the center of the bounding box, any of the four corners, edge midpoints, or the center of mass (useful for objects with irregular shapes). Dots can be customized with different sizes (radius), optional outlines for better visibility, and various color palettes. This provides a minimal, clean visualization style that marks detection locations without the visual clutter of full bounding boxes, making it ideal for dense scenes or when you need to highlight specific points of interest.
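The anchor-position options above can be sketched as a small helper that maps a position name and a bounding box to pixel coordinates. This is an illustrative sketch only, not the block's actual implementation: the real block delegates to Supervision's position handling, and CENTER_OF_MASS uses the instance mask when one is available, which is approximated here by the box center.

```python
def resolve_anchor(position, box):
    """Map a position option to an (x, y) point for a box (x_min, y_min, x_max, y_max).

    Sketch for illustration; CENTER_OF_MASS is mask-based in practice and is
    approximated by the box center here.
    """
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    anchors = {
        "CENTER": (cx, cy),
        "CENTER_OF_MASS": (cx, cy),  # approximation; real block uses the mask
        "TOP_LEFT": (x_min, y_min),
        "TOP_RIGHT": (x_max, y_min),
        "BOTTOM_LEFT": (x_min, y_max),
        "BOTTOM_RIGHT": (x_max, y_max),
        "TOP_CENTER": (cx, y_min),
        "BOTTOM_CENTER": (cx, y_max),
        "CENTER_LEFT": (x_min, cy),
        "CENTER_RIGHT": (x_max, cy),
    }
    return anchors[position]
```

For example, a detection with box (0, 0, 10, 20) and position CENTER yields a dot at (5, 10), while TOP_RIGHT yields (10, 0).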
Common Use Cases¶
- Minimal Object Marking: Mark detected objects with small dots instead of bounding boxes for cleaner, less cluttered visualizations when working with dense scenes or many detections
- Point of Interest Highlighting: Mark specific anchor points (corners, center, center of mass) on detected objects for applications like object tracking, pose estimation, or spatial analysis
- Tracking Visualization: Use dots to visualize object trajectories or tracking IDs over time, creating a cleaner alternative to bounding boxes for tracking workflows
- Crowd Counting and Density Analysis: Mark people or objects with dots to visualize density patterns, crowd distribution, or object counts without overlapping bounding boxes
- Keypoint and Landmark Marking: Mark specific points on objects (such as the center of mass for irregular shapes) for physics simulations, measurement workflows, or spatial relationship analysis
- Minimal UI Overlays: Create clean, unobtrusive visual overlays for user interfaces, dashboards, or mobile applications where full bounding boxes would be too visually intrusive
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Bounding Box Visualization, Label Visualization, Trace Visualization) to combine dot markers with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save annotated images with dot markers for documentation, reporting, or analysis
- Webhook blocks to send visualized results with dot markers to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with dot markers as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with dot markers for live monitoring, tracking visualization, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/dot_visualization@v1 to add the block
as a step in your workflow.
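A minimal workflow that wires this block between a detection model and a workflow output might look as follows. This is a hedged sketch: the object detection step type and the placeholder model_id are assumptions for illustration, and only the required fields of the dot visualization step are set, leaving the styling properties at their defaults.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "roboflow_core/object_detection_model@v1",
      "name": "model",
      "image": "$inputs.image",
      "model_id": "<your_model_id_here>"
    },
    {
      "type": "roboflow_core/dot_visualization@v1",
      "name": "dot_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.model.predictions"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "annotated_image",
      "selector": "$steps.dot_visualization.image"
    }
  ]
}
```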
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors, in HEX format, for the dots. | ✅ |
| color_axis | str | Choose how dot colors are assigned. | ✅ |
| position | str | Anchor position for placing the dot relative to each detection's bounding box: CENTER (center of box), a corner (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), an edge midpoint (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object, useful for irregular shapes). | ✅ |
| radius | int | Radius of the dot in pixels. Higher values create larger, more visible dots. | ✅ |
| outline_thickness | int | Thickness of the dot outline in pixels. Set to 0 for no outline (filled dots only). Higher values create thicker outlines around the dot for better visibility against varying backgrounds. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
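The color-related properties interact in a simple way: color_axis chooses the lookup key for each detection, and that key indexes into a palette of palette_size colors, wrapping around when the key exceeds the palette. A rough pure-Python sketch of this lookup (the helper name and dict-based detection structure are hypothetical; the block actually relies on Supervision's color lookup):

```python
def pick_color_index(detection, detection_index, color_axis, palette_size):
    """Return a palette index for one detection.

    Sketch of the lookup logic; `detection` is assumed to be a dict with
    optional "class_id" and "tracker_id" keys (hypothetical structure).
    """
    if color_axis == "CLASS":
        key = detection["class_id"]       # same class -> same color
    elif color_axis == "TRACK":
        key = detection["tracker_id"]     # same track -> same color across frames
    else:  # "INDEX": color by position in the detection list
        key = detection_index
    return key % palette_size  # wrap around the fixed-size palette
```

So with a 5-color palette and coloring by class, class IDs 2 and 7 share a color, which is why palette_size matters when a dataset has many classes.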
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Dot Visualization in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Instance Segmentation Model,Distance Measurement,Color Visualization,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Byte Tracker,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Object Detection Model,Camera Focus,Stitch OCR Detections,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Detections Merge,Perspective Correction,Overlap Filter,Anthropic Claude,Bounding Box Visualization,Velocity,Depth Estimation,Line Counter,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Detections Combine,Object Detection Model,SAM2 Video Tracker,Detection Event Log,Byte Tracker,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,YOLO-World Model,Detection Offset,Instance Segmentation 
Model,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Template Matching,Mask Area Measurement,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON Parser,Keypoint Detection Model,Time in Zone,Reference Path Visualization,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Byte Tracker,Multi-Label Classification Model,Image Convert Grayscale,SAM 3,OpenAI,Corner Visualization,Dynamic Crop,Moondream2,Keypoint Visualization,Keypoint Detection Model,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon 
Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for
Dot Visualization in version v1 are listed below.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Model predictions to visualize.
  - color_palette (string): Select a color palette for the visualised elements.
  - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
  - custom_colors (list_of_values): Define a list of custom colors, in HEX format, for the dots.
  - color_axis (string): Choose how dot colors are assigned.
  - position (string): Anchor position for placing the dot relative to each detection's bounding box: CENTER (center of box), a corner (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), an edge midpoint (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object, useful for irregular shapes).
  - radius (integer): Radius of the dot in pixels. Higher values create larger, more visible dots.
  - outline_thickness (integer): Thickness of the dot outline in pixels. Set to 0 for no outline (filled dots only). Higher values create thicker outlines around the dot for better visibility against varying backgrounds.
- output
  - image (image): The annotated image with dots overlaid on the original image.
Example JSON definition of step Dot Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/dot_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.object_detection_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"position": "CENTER",
"radius": 4,
"outline_thickness": 2
}
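To see how radius and outline_thickness combine geometrically, here is a toy classifier that decides whether a pixel falls in the dot's fill, in its outline ring, or in the background. This is an illustration of the geometry only; the real block draws dots via Supervision rather than testing pixels one by one.

```python
import math

def classify_pixel(px, py, cx, cy, radius, outline_thickness):
    """Classify a pixel relative to a dot centered at (cx, cy).

    Returns "fill" inside the dot, "outline" within the surrounding ring of
    width outline_thickness, and "background" otherwise.
    """
    dist = math.hypot(px - cx, py - cy)
    if dist <= radius:
        return "fill"
    if outline_thickness > 0 and dist <= radius + outline_thickness:
        return "outline"
    return "background"
```

With the example values above (radius 4, outline_thickness 2), a pixel 5 px from the dot center lands in the outline ring, which is what makes outlined dots readable against backgrounds that match the fill color.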