Trace Visualization¶
Class: TraceVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.trace.v1.TraceVisualizationBlockV1
Draws trajectory paths for tracked objects by connecting their recent positions with colored lines, visualizing movement patterns, paths, and tracking behavior over time.
How This Block Works¶
This block takes an image and tracked predictions (with tracker IDs) and draws trajectory paths showing the recent movement history of each tracked object. The block:
- Takes an image and tracked predictions as input (predictions must include tracker_id data from a tracking block)
- Extracts tracking IDs and position history for each tracked object
- Determines the reference point for drawing traces based on the selected position anchor (center, corners, edges, or center of mass)
- Applies color styling based on the selected color palette, with colors assigned by class, index, or track ID
- Draws trajectory lines connecting the recent positions (up to trace_length positions) for each tracked object using Supervision's TraceAnnotator
- Connects historical positions sequentially, creating path traces that show object movement direction and patterns
- Returns an annotated image with trajectory paths overlaid on the original image
The block visualizes object tracking by drawing the path each tracked object has taken over recent frames. Each object gets a trace line (colored by track ID, class, or index) connecting its recent positions, creating a visual trail that shows movement direction, speed, and trajectory. The trace_length parameter controls how many historical positions are included in each trace: longer traces show more movement history, shorter traces show only recent movement. Because the block needs tracking information to connect positions across frames, it requires predictions with tracker IDs from a tracking block such as Byte Tracker. The resulting traces help visualize object movement, validate tracking consistency, and understand object behavior over time.
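The drawing itself is delegated to Supervision's TraceAnnotator, but the core position-buffering idea can be sketched in plain Python. The class and method names below are illustrative, not the actual Supervision API: each tracker ID keeps a bounded history of anchor points, and the trace is the polyline through that history.

```python
from collections import defaultdict, deque


class TraceBuffer:
    """Keeps the last `trace_length` anchor points per tracker ID.

    Illustrative sketch only -- the real block uses supervision's
    TraceAnnotator, which also handles color lookup and line drawing.
    """

    def __init__(self, trace_length: int = 30):
        # A bounded deque per tracker ID: old positions fall off
        # automatically once `trace_length` is exceeded.
        self.history = defaultdict(lambda: deque(maxlen=trace_length))

    def update(self, detections):
        """detections: iterable of (tracker_id, (x, y)) anchor points
        for one frame, e.g. bounding-box centers."""
        for tracker_id, point in detections:
            self.history[tracker_id].append(point)

    def trace(self, tracker_id):
        """Return the polyline of recent positions for one object."""
        return list(self.history[tracker_id])


# One tracked object observed over four frames with trace_length=3:
buffer = TraceBuffer(trace_length=3)
buffer.update([(1, (10, 10))])
buffer.update([(1, (12, 11))])
buffer.update([(1, (15, 13))])
buffer.update([(1, (18, 16))])  # oldest point (10, 10) drops off
print(buffer.trace(1))  # [(12, 11), (15, 13), (18, 16)]
```

This is why a larger trace_length produces a longer trail: it simply widens the per-object position buffer that the polyline is drawn through.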
Common Use Cases¶
- Object Trajectory Visualization: Visualize movement paths and trajectories of tracked objects to understand object behavior, movement patterns, or navigation routes for applications like vehicle tracking, pedestrian flow analysis, or object movement monitoring
- Tracking Performance Validation: Validate tracking performance by visualizing object paths to ensure tracking consistency, identify tracking errors or ID switches, or verify that objects maintain consistent trajectories
- Movement Pattern Analysis: Analyze movement patterns, speeds, or direction changes by visualizing trajectory traces to understand object behavior, detect anomalies, or identify movement trends in surveillance, security, or traffic monitoring workflows
- Path Deviation Detection: Visualize object paths to detect deviations from expected routes, identify unusual movement patterns, or monitor object trajectories for safety, security, or compliance workflows
- Real-Time Tracking Monitoring: Display trajectory traces in real-time monitoring interfaces, dashboards, or live video feeds to visualize object movement and tracking behavior as it happens
- Video Analysis and Post-Processing: Create trajectory visualizations for video analysis, post-processing workflows, or forensic analysis where understanding object movement paths and patterns is critical
Connecting to Other Blocks¶
This block connects to:
- Tracking blocks (e.g., Byte Tracker) upstream, which provide the tracked predictions with tracker IDs required for trace visualization
- Other visualization blocks (e.g., Bounding Box Visualization, Label Visualization, Dot Visualization) to combine trajectory traces with additional annotations for comprehensive tracking visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with trajectory traces for documentation, reporting, or analysis
- Webhook blocks to send visualized results with trajectory traces to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with trajectory traces as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with trajectory traces for live monitoring, tracking visualization, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/trace_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `copy_image` | `bool` | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| `color_palette` | `str` | Select a color palette for the visualized elements. | ✅ |
| `palette_size` | `int` | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| `custom_colors` | `List[str]` | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| `color_axis` | `str` | Choose how bounding box colors are assigned. | ✅ |
| `position` | `str` | Anchor position for drawing trajectory traces relative to each detection's bounding box: CENTER, corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS. The trace connects positions at this anchor point across recent frames. | ✅ |
| `trace_length` | `int` | Maximum number of historical positions to include in each trace. Higher values create longer trails showing more movement history; lower values show only recent movement. Must be at least 1; typical values range from 10 to 50 frames depending on desired trail length and frame rate. | ✅ |
| `thickness` | `int` | Thickness of the trace lines in pixels. Must be at least 1; typical values range from 1 to 5. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
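For example, a property marked ✅ can be bound to a workflow input selector instead of a literal value, so it can be set at runtime. A minimal fragment (the input name `trace_length` and the step name `byte_tracker` with its `tracked_detections` output are illustrative):

```json
{
    "name": "trace",
    "type": "roboflow_core/trace_visualization@v1",
    "image": "$inputs.image",
    "predictions": "$steps.byte_tracker.tracked_detections",
    "trace_length": "$inputs.trace_length"
}
```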
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Trace Visualization in version v1.
- inputs:
Dynamic Crop,OCR Model,Image Blur,Background Subtraction,Google Vision OCR,Google Gemini,Image Preprocessing,Local File Sink,Object Detection Model,Single-Label Classification Model,Bounding Box Visualization,Model Monitoring Inference Aggregator,Keypoint Detection Model,Camera Focus,Identify Outliers,Dot Visualization,Gaze Detection,Florence-2 Model,Roboflow Dataset Upload,CSV Formatter,Depth Estimation,Polygon Visualization,OpenAI,Line Counter,Image Slicer,Detections List Roll-Up,Line Counter Visualization,Heatmap Visualization,Morphological Transformation,Stability AI Image Generation,Google Gemini,Distance Measurement,Keypoint Visualization,Keypoint Detection Model,Background Color Visualization,Label Visualization,Polygon Visualization,LMM,CogVLM,Time in Zone,Triangle Visualization,Stability AI Outpainting,Mask Visualization,Color Visualization,Detections Combine,Text Display,Bounding Rectangle,Reference Path Visualization,Llama 3.2 Vision,OpenAI,Image Threshold,Clip Comparison,Classification Label Visualization,Clip Comparison,Polygon Zone Visualization,Image Contours,VLM As Classifier,Roboflow Custom Metadata,Dynamic Zone,LMM For Classification,Velocity,Halo Visualization,Blur Visualization,Path Deviation,Absolute Static Crop,Anthropic Claude,SAM 3,Detections Transformation,Ellipse Visualization,Identify Changes,Crop Visualization,SIFT Comparison,Path Deviation,Trace Visualization,Twilio SMS Notification,Stitch Images,Detections Stabilizer,Size Measurement,Detections Merge,Time in Zone,Motion Detection,Email Notification,SIFT Comparison,OpenAI,Seg Preview,Time in Zone,Instance Segmentation Model,Anthropic Claude,Multi-Label Classification Model,Email Notification,Slack Notification,Twilio SMS/MMS Notification,Detections Stitch,VLM As Detector,Camera Focus,SAM 3,Stitch OCR Detections,Perspective Correction,PTZ Tracking (ONVIF),Moondream2,Camera Calibration,Corner Visualization,Icon Visualization,Overlap Filter,Qwen3.5-VL,Byte Tracker,VLM As Detector,Halo Visualization,JSON Parser,Detection Event Log,Pixelate Visualization,Contrast Equalization,Dimension Collapse,VLM As Classifier,Instance Segmentation Model,Detections Classes Replacement,Relative Static Crop,Line Counter,Stitch OCR Detections,Webhook Sink,Circle Visualization,Image Convert Grayscale,Grid Visualization,Mask Area Measurement,Byte Tracker,Florence-2 Model,Buffer,SAM 3,SIFT,YOLO-World Model,Object Detection Model,Byte Tracker,Detections Consensus,Template Matching,Anthropic Claude,Google Gemini,Model Comparison Visualization,Detection Offset,QR Code Generator,EasyOCR,Image Slicer,S3 Sink,Stability AI Inpainting,Segment Anything 2 Model,Detections Filter,OpenAI,Pixel Color Count,Roboflow Dataset Upload
- outputs:
Dynamic Crop,OCR Model,Barcode Detection,Motion Detection,Email Notification,Image Blur,Background Subtraction,Google Vision OCR,SIFT Comparison,Google Gemini,OpenAI,Image Preprocessing,Qwen2.5-VL,Seg Preview,Object Detection Model,Instance Segmentation Model,Single-Label Classification Model,Bounding Box Visualization,Multi-Label Classification Model,Anthropic Claude,Multi-Label Classification Model,Keypoint Detection Model,Detections Stitch,Twilio SMS/MMS Notification,Camera Focus,VLM As Detector,Gaze Detection,Florence-2 Model,Dot Visualization,Roboflow Dataset Upload,Camera Focus,SAM 3,Depth Estimation,Polygon Visualization,Moondream2,OpenAI,Perspective Correction,Image Slicer,Icon Visualization,Corner Visualization,Camera Calibration,Qwen3.5-VL,Line Counter Visualization,Heatmap Visualization,Google Gemini,Morphological Transformation,Stability AI Image Generation,Keypoint Visualization,VLM As Detector,Keypoint Detection Model,Halo Visualization,Background Color Visualization,Label Visualization,QR Code Detection,Polygon Visualization,Pixelate Visualization,LMM,CogVLM,Time in Zone,Single-Label Classification Model,Qwen3-VL,Contrast Equalization,Triangle Visualization,Stability AI Outpainting,Mask Visualization,VLM As Classifier,Color Visualization,Instance Segmentation Model,Dominant Color,Text Display,Relative Static Crop,Reference Path Visualization,OpenAI,Llama 3.2 Vision,Clip Comparison,Clip Comparison,Classification Label Visualization,Image Threshold,Circle Visualization,Polygon Zone Visualization,Image Contours,Image Convert Grayscale,VLM As Classifier,Byte Tracker,Buffer,Florence-2 Model,SmolVLM2,SAM 3,Perception Encoder Embedding Model,LMM For Classification,SIFT,YOLO-World Model,Halo Visualization,Template Matching,Object Detection Model,Semantic Segmentation Model,Anthropic Claude,Google Gemini,Model Comparison Visualization,Blur Visualization,EasyOCR,Absolute Static Crop,Image Slicer,Anthropic Claude,SAM 3,CLIP Embedding Model,Stability AI Inpainting,Ellipse Visualization,Crop Visualization,Trace Visualization,Segment Anything 2 Model,Stitch Images,Detections Stabilizer,OpenAI,Pixel Color Count,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Trace Visualization in version v1 has.
Bindings
- input:
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Model predictions to visualize.
    - color_palette (string): Select a color palette for the visualized elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - position (string): Anchor position for drawing trajectory traces relative to each detection's bounding box: CENTER, corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS. The trace connects positions at this anchor point across recent frames.
    - trace_length (integer): Maximum number of historical positions to include in each trace. Higher values create longer trails showing more movement history; lower values show only recent movement. Must be at least 1; typical values range from 10 to 50 frames.
    - thickness (integer): Thickness of the trace lines in pixels. Must be at least 1; typical values range from 1 to 5.
- output:
    - image (image): The annotated image with trajectory traces overlaid.
Example JSON definition of step Trace Visualization in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/trace_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions": "$steps.object_detection_model.predictions",
    "color_palette": "DEFAULT",
    "palette_size": 10,
    "custom_colors": [
        "#FF0000",
        "#00FF00",
        "#0000FF"
    ],
    "color_axis": "CLASS",
    "position": "CENTER",
    "trace_length": 30,
    "thickness": 1
}
```