Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Measure how closely tracked objects follow a reference path by calculating the Fréchet distance between the object's actual trajectory and the expected reference path, enabling path compliance monitoring, route deviation detection, quality control in automated systems, and behavioral analysis workflows.
How This Block Works¶
This block compares the actual movement path of tracked objects against a predefined reference path to measure deviation. The block:
- Receives tracked detection predictions with unique tracker IDs, an image with embedded video metadata, and a reference path definition
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts video_identifier to maintain separate path tracking state for different videos
- Uses video metadata to initialize and manage path tracking state per video
- Validates that detections have tracker IDs (required for tracking object movement across frames)
- Initializes or retrieves path tracking state for the video:
- Maintains a history of positions for each tracked object per video
- Stores object paths using video_identifier to separate state for different videos
- Creates new path tracking entries for objects appearing for the first time
- Extracts anchor point coordinates for each detection:
- Uses the triggering_anchor to determine which point on the bounding box to track (default: CENTER)
- Gets the (x, y) coordinates of the anchor point for each detection in the current frame
- The anchor point represents the position of the object used for path comparison
- Accumulates object paths over time:
- Appends each object's anchor point to its path history as frames are processed
- Maintains separate path histories for each unique tracker_id
- Builds complete trajectory paths by accumulating positions across all processed frames
- Calculates Fréchet distance for each tracked object:
- Fréchet Distance: Measures the similarity between two curves (paths) considering both location and ordering of points
- Compares the object's accumulated path (actual trajectory) against the reference path (expected trajectory)
- Uses dynamic programming to compute the minimum "leash length" required to traverse both paths simultaneously
- Accounts for the order of points along each path, not just point-to-point distances
- Lower values indicate the object follows the reference path closely, higher values indicate greater deviation
- Stores path deviation in detection metadata:
- Adds the Fréchet distance value to each detection's metadata
- Each detection includes path_deviation representing how much it deviates from the reference path
- Distance is measured in pixels (same units as image coordinates)
- Maintains persistent path tracking:
- Path histories accumulate across frames for the entire video
- Each object's deviation is calculated based on its complete path from the start of tracking
- Separate tracking state maintained for each video_identifier
- Returns detections enhanced with path deviation information:
- Outputs detection objects with added path_deviation metadata
- Each detection now includes the Fréchet distance measuring its deviation from the reference path
The Fréchet distance is a metric that measures the similarity between two curves by finding the minimum length of a "leash" that connects a point moving along one curve to a point moving along the other curve, where both points move forward along their respective curves. Unlike simple Euclidean distance, Fréchet distance considers the ordering and continuity of points along paths, making it ideal for comparing trajectories where the sequence of movement matters. An object that follows the reference path exactly will have a Fréchet distance of 0, while objects that deviate significantly will have larger distances.
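The discrete Fréchet distance described above can be sketched with a small dynamic-programming routine. This is an illustrative sketch only, not the block's actual implementation; the function name and memoization structure are our own, and the recursive form is suited to short paths:

```python
import math


def discrete_frechet(path_a, path_b):
    """Discrete Fréchet distance between two polylines given as lists of (x, y) points.

    ca[i][j] holds the minimum "leash length" needed to traverse
    path_a[:i+1] and path_b[:j+1] with both walkers moving only forward.
    """
    n, m = len(path_a), len(path_b)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    ca = [[-1.0] * m for _ in range(n)]

    def c(i, j):
        if ca[i][j] >= 0:
            return ca[i][j]
        d = dist(path_a[i], path_b[j])
        if i == 0 and j == 0:
            ca[i][j] = d
        elif i == 0:
            ca[i][j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i][j] = max(c(i - 1, 0), d)
        else:
            # Either walker (or both) advances; take the cheapest predecessor.
            ca[i][j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i][j]

    return c(n - 1, m - 1)
```

An object tracing the reference path exactly yields 0, while a path running parallel to the reference at a constant pixel offset yields roughly that offset.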
Common Use Cases¶
- Path Compliance Monitoring: Monitor whether vehicles, robots, or objects follow predefined routes (e.g., verify vehicles stay in lanes, check robots follow programmed paths, ensure objects follow expected routes), enabling compliance monitoring workflows
- Quality Control: Detect deviations in manufacturing or assembly processes where objects should follow specific paths (e.g., detect conveyor belt deviations, monitor assembly line paths, check product movement patterns), enabling quality control workflows
- Traffic Analysis: Analyze vehicle movement patterns and detect lane departures or route deviations (e.g., detect vehicles leaving lanes, monitor route adherence, analyze traffic pattern compliance), enabling traffic analysis workflows
- Security Monitoring: Detect suspicious movement patterns or deviations from expected paths in security scenarios (e.g., detect unauthorized route deviations, monitor perimeter breach attempts, track movement compliance), enabling security monitoring workflows
- Automated Systems: Monitor and validate that automated systems (robots, AGVs, drones) follow expected paths correctly (e.g., verify robot navigation accuracy, check automated vehicle paths, validate drone flight paths), enabling automated system validation workflows
- Behavioral Analysis: Study movement patterns and path adherence in behavioral research (e.g., analyze animal movement patterns, study path following behavior, measure route preference deviations), enabling behavioral research workflows
Connecting to Other Blocks¶
This block receives tracked detections, an image with embedded video metadata, and a reference path, and produces detections enhanced with path_deviation metadata:
- After Byte Tracker blocks to measure path deviation for tracked objects (e.g., measure tracked vehicle path compliance, analyze tracked person route adherence, monitor tracked object path deviations), enabling tracking-to-path-analysis workflows
- After object detection or instance segmentation blocks with tracking enabled to analyze movement paths (e.g., analyze vehicle paths, track object route compliance, measure path deviations), enabling detection-to-path-analysis workflows
- Before visualization blocks to display path deviation information (e.g., visualize paths and deviations, display reference and actual paths, show deviation metrics), enabling path deviation visualization workflows
- Before logic blocks like Continue If to make decisions based on path deviation thresholds (e.g., continue if deviation exceeds limit, filter based on path compliance, trigger actions on route violations), enabling path-based decision workflows
- Before notification blocks to alert on path deviations or compliance violations (e.g., alert on route deviations, notify on path compliance issues, trigger deviation-based alerts), enabling path-based notification workflows
- Before data storage blocks to record path deviation measurements (e.g., log path compliance data, store deviation statistics, record route adherence metrics), enabling path deviation data logging workflows
Version Differences¶
Enhanced from v1:
- Simplified Input: Uses an image input that contains embedded video metadata instead of requiring a separate metadata field, simplifying workflow connections and reducing input complexity
- Improved Integration: Better integration with image-based workflows, since video metadata is accessed directly from the image object rather than requiring a separate metadata input
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The reference path must be defined as a list of at least 2 points, where each point is a tuple or list of exactly 2 coordinates (x, y). The image's video_metadata should include video_identifier to maintain separate path tracking state for different videos. The block maintains persistent path tracking across frames for each video, accumulating complete trajectories, so it should be used in video workflows where frames are processed sequentially. For accurate path deviation measurement, detections should be provided consistently across frames with valid tracker IDs. The Fréchet distance is calculated in pixels (same units as image coordinates).
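The reference-path requirements above could be checked with a small validation helper before wiring the block into a workflow. This is an illustrative sketch; `validate_reference_path` is a hypothetical name, not part of the block's API:

```python
def validate_reference_path(reference_path):
    """Check the documented requirements for reference_path:
    at least 2 points, each a tuple or list of exactly 2 coordinates.
    Returns the path normalized to a list of (x, y) tuples."""
    if not isinstance(reference_path, (list, tuple)) or len(reference_path) < 2:
        raise ValueError("reference_path must be a list of at least 2 points")
    normalized = []
    for point in reference_path:
        if not isinstance(point, (list, tuple)) or len(point) != 2:
            raise ValueError(
                f"each point must have exactly 2 coordinates (x, y), got {point!r}"
            )
        normalized.append((point[0], point[1]))
    return normalized
```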
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/path_deviation_analytics@v2 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
triggering_anchor |
str |
Point on the bounding box used to track object position for path calculation. Options include CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance. | ✅ |
reference_path |
List[Any] |
Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs:
Moondream2,Stitch OCR Detections,OpenAI,Byte Tracker,Size Measurement,Time in Zone,Instance Segmentation Model,EasyOCR,Path Deviation,Detections Consensus,Seg Preview,Multi-Label Classification Model,Detections Transformation,SAM 3,Anthropic Claude,Path Deviation,Detections Stabilizer,Clip Comparison,OpenAI,Detections Combine,Segment Anything 2 Model,Local File Sink,Google Gemini,Keypoint Detection Model,VLM As Detector,VLM As Classifier,Google Gemini,Overlap Filter,Bounding Rectangle,Qwen3.5-VL,Object Detection Model,Slack Notification,Florence-2 Model,Byte Tracker,Anthropic Claude,OpenAI,Motion Detection,Buffer,Email Notification,Detections List Roll-Up,Instance Segmentation Model,Roboflow Dataset Upload,Detections Stitch,Camera Focus,Stitch OCR Detections,Llama 3.2 Vision,Time in Zone,Detections Filter,SAM 3,OpenAI,CogVLM,Dimension Collapse,Template Matching,Line Counter,Florence-2 Model,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Roboflow Custom Metadata,Detections Classes Replacement,Webhook Sink,Dynamic Crop,LMM,Detection Offset,Object Detection Model,Detection Event Log,Clip Comparison,PTZ Tracking (ONVIF),Twilio SMS Notification,CSV Formatter,Google Vision OCR,Google Gemini,OCR Model,Anthropic Claude,Email Notification,Twilio SMS/MMS Notification,Mask Area Measurement,Byte Tracker,Time in Zone,SAM 3,Detections Merge,YOLO-World Model,Dynamic Zone,Single-Label Classification Model,Velocity,LMM For Classification,VLM As Detector,Perspective Correction
- outputs:
Detections Stitch,Camera Focus,Stitch OCR Detections,Label Visualization,Stitch OCR Detections,Byte Tracker,Time in Zone,Size Measurement,Detections Filter,Color Visualization,Time in Zone,Circle Visualization,Mask Visualization,Heatmap Visualization,Path Deviation,Detections Consensus,Crop Visualization,Line Counter,Florence-2 Model,Detections Transformation,Model Monitoring Inference Aggregator,Bounding Box Visualization,Roboflow Dataset Upload,Polygon Visualization,Pixelate Visualization,Roboflow Custom Metadata,Line Counter,Path Deviation,Detections Stabilizer,Detections Classes Replacement,Detections Combine,Segment Anything 2 Model,Dynamic Crop,Stability AI Inpainting,Background Color Visualization,Detection Offset,Overlap Filter,Bounding Rectangle,Detection Event Log,Icon Visualization,PTZ Tracking (ONVIF),Distance Measurement,Ellipse Visualization,Dot Visualization,Byte Tracker,Halo Visualization,Blur Visualization,Triangle Visualization,Model Comparison Visualization,Polygon Visualization,Trace Visualization,Corner Visualization,Mask Area Measurement,Byte Tracker,Time in Zone,Detections Merge,Detections List Roll-Up,Roboflow Dataset Upload,Dynamic Zone,Halo Visualization,Velocity,Florence-2 Model,Perspective Correction
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Path Deviation in version v2 has.
Bindings
- input
  - image (image): Image with embedded video metadata. The video_metadata contains video_identifier to maintain separate path tracking state for different videos. Required for persistent path accumulation across frames.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked object detection or instance segmentation predictions. Must include tracker_id information from a tracking block. The block tracks anchor point positions across frames to build object trajectories and compares them against the reference path. Output detections include path_deviation metadata containing the Fréchet distance from the reference path.
  - triggering_anchor (string): Point on the bounding box used to track object position for path calculation. Options include CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance.
  - reference_path (list_of_values): Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory.
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": [
[
100,
200
],
[
200,
300
],
[
300,
400
]
]
}
v1¶
Class: PathDeviationAnalyticsBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Measure how closely tracked objects follow a reference path by calculating the Fréchet distance between the object's actual trajectory and the expected reference path, enabling path compliance monitoring, route deviation detection, quality control in automated systems, and behavioral analysis workflows.
How This Block Works¶
This block compares the actual movement path of tracked objects against a predefined reference path to measure deviation. The block:
- Receives tracked detection predictions with unique tracker IDs, video metadata, and a reference path definition
- Validates that detections have tracker IDs (required for tracking object movement across frames)
- Initializes or retrieves path tracking state for the video:
- Maintains a history of positions for each tracked object per video
- Stores object paths using video_identifier to separate state for different videos
- Creates new path tracking entries for objects appearing for the first time
- Extracts anchor point coordinates for each detection:
- Uses the triggering_anchor to determine which point on the bounding box to track (default: CENTER)
- Gets the (x, y) coordinates of the anchor point for each detection in the current frame
- The anchor point represents the position of the object used for path comparison
- Accumulates object paths over time:
- Appends each object's anchor point to its path history as frames are processed
- Maintains separate path histories for each unique tracker_id
- Builds complete trajectory paths by accumulating positions across all processed frames
- Calculates Fréchet distance for each tracked object:
- Fréchet Distance: Measures the similarity between two curves (paths) considering both location and ordering of points
- Compares the object's accumulated path (actual trajectory) against the reference path (expected trajectory)
- Uses dynamic programming to compute the minimum "leash length" required to traverse both paths simultaneously
- Accounts for the order of points along each path, not just point-to-point distances
- Lower values indicate the object follows the reference path closely, higher values indicate greater deviation
- Stores path deviation in detection metadata:
- Adds the Fréchet distance value to each detection's metadata
- Each detection includes path_deviation representing how much it deviates from the reference path
- Distance is measured in pixels (same units as image coordinates)
- Maintains persistent path tracking:
- Path histories accumulate across frames for the entire video
- Each object's deviation is calculated based on its complete path from the start of tracking
- Separate tracking state maintained for each video_identifier
- Returns detections enhanced with path deviation information:
- Outputs detection objects with added path_deviation metadata
- Each detection now includes the Fréchet distance measuring its deviation from the reference path
The Fréchet distance is a metric that measures the similarity between two curves by finding the minimum length of a "leash" that connects a point moving along one curve to a point moving along the other curve, where both points move forward along their respective curves. Unlike simple Euclidean distance, Fréchet distance considers the ordering and continuity of points along paths, making it ideal for comparing trajectories where the sequence of movement matters. An object that follows the reference path exactly will have a Fréchet distance of 0, while objects that deviate significantly will have larger distances.
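The per-video, per-tracker path accumulation described in the steps above can be sketched as a nested mapping keyed first by video_identifier and then by tracker_id. This is a simplified illustration of the state layout, with names of our own choosing, not the block's internals:

```python
from collections import defaultdict

# State layout: video_identifier -> tracker_id -> accumulated (x, y) anchor points.
paths = defaultdict(lambda: defaultdict(list))


def update_paths(video_id, tracker_ids, anchor_points):
    """Append this frame's anchor point to each tracked object's trajectory.

    Returns a snapshot of the accumulated trajectory for every tracker seen
    in this frame, ready to be compared against the reference path with the
    Fréchet distance.
    """
    for tracker_id, point in zip(tracker_ids, anchor_points):
        paths[video_id][tracker_id].append(tuple(point))
    return {tid: list(paths[video_id][tid]) for tid in tracker_ids}
```

Keying the outer dict on video_identifier is what keeps trajectories from different videos from being mixed together, which is why the metadata input is required.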
Common Use Cases¶
- Path Compliance Monitoring: Monitor whether vehicles, robots, or objects follow predefined routes (e.g., verify vehicles stay in lanes, check robots follow programmed paths, ensure objects follow expected routes), enabling compliance monitoring workflows
- Quality Control: Detect deviations in manufacturing or assembly processes where objects should follow specific paths (e.g., detect conveyor belt deviations, monitor assembly line paths, check product movement patterns), enabling quality control workflows
- Traffic Analysis: Analyze vehicle movement patterns and detect lane departures or route deviations (e.g., detect vehicles leaving lanes, monitor route adherence, analyze traffic pattern compliance), enabling traffic analysis workflows
- Security Monitoring: Detect suspicious movement patterns or deviations from expected paths in security scenarios (e.g., detect unauthorized route deviations, monitor perimeter breach attempts, track movement compliance), enabling security monitoring workflows
- Automated Systems: Monitor and validate that automated systems (robots, AGVs, drones) follow expected paths correctly (e.g., verify robot navigation accuracy, check automated vehicle paths, validate drone flight paths), enabling automated system validation workflows
- Behavioral Analysis: Study movement patterns and path adherence in behavioral research (e.g., analyze animal movement patterns, study path following behavior, measure route preference deviations), enabling behavioral research workflows
Connecting to Other Blocks¶
This block receives tracked detections, video metadata, and a reference path, and produces detections enhanced with path_deviation metadata:
- After Byte Tracker blocks to measure path deviation for tracked objects (e.g., measure tracked vehicle path compliance, analyze tracked person route adherence, monitor tracked object path deviations), enabling tracking-to-path-analysis workflows
- After object detection or instance segmentation blocks with tracking enabled to analyze movement paths (e.g., analyze vehicle paths, track object route compliance, measure path deviations), enabling detection-to-path-analysis workflows
- Before visualization blocks to display path deviation information (e.g., visualize paths and deviations, display reference and actual paths, show deviation metrics), enabling path deviation visualization workflows
- Before logic blocks like Continue If to make decisions based on path deviation thresholds (e.g., continue if deviation exceeds limit, filter based on path compliance, trigger actions on route violations), enabling path-based decision workflows
- Before notification blocks to alert on path deviations or compliance violations (e.g., alert on route deviations, notify on path compliance issues, trigger deviation-based alerts), enabling path-based notification workflows
- Before data storage blocks to record path deviation measurements (e.g., log path compliance data, store deviation statistics, record route adherence metrics), enabling path deviation data logging workflows
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The reference path must be defined as a list of at least 2 points, where each point is a tuple or list of exactly 2 coordinates (x, y). The block requires video metadata with video_identifier to maintain separate path tracking state for different videos. The block maintains persistent path tracking across frames for each video, accumulating complete trajectories, so it should be used in video workflows where frames are processed sequentially. For accurate path deviation measurement, detections should be provided consistently across frames with valid tracker IDs. The Fréchet distance is calculated in pixels (same units as image coordinates).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/path_deviation_analytics@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
triggering_anchor |
str |
Point on the bounding box used to track object position for path calculation. Options: CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance. | ✅ |
reference_path |
List[Any] |
Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs:
Moondream2,Stitch OCR Detections,OpenAI,Byte Tracker,Size Measurement,Time in Zone,Instance Segmentation Model,EasyOCR,Path Deviation,Detections Consensus,Seg Preview,Multi-Label Classification Model,Detections Transformation,SAM 3,Anthropic Claude,Path Deviation,Detections Stabilizer,Clip Comparison,OpenAI,Detections Combine,Segment Anything 2 Model,Local File Sink,Google Gemini,Keypoint Detection Model,VLM As Detector,VLM As Classifier,Google Gemini,Overlap Filter,Bounding Rectangle,Qwen3.5-VL,Object Detection Model,Slack Notification,Florence-2 Model,Byte Tracker,Anthropic Claude,OpenAI,Motion Detection,Buffer,Email Notification,Detections List Roll-Up,Instance Segmentation Model,Roboflow Dataset Upload,Detections Stitch,Camera Focus,Stitch OCR Detections,Llama 3.2 Vision,Time in Zone,Detections Filter,SAM 3,OpenAI,CogVLM,Dimension Collapse,Template Matching,Line Counter,Florence-2 Model,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Roboflow Custom Metadata,Detections Classes Replacement,Webhook Sink,Dynamic Crop,LMM,Detection Offset,Object Detection Model,Detection Event Log,Clip Comparison,PTZ Tracking (ONVIF),Twilio SMS Notification,CSV Formatter,Google Vision OCR,Google Gemini,OCR Model,Anthropic Claude,Email Notification,Twilio SMS/MMS Notification,Mask Area Measurement,Byte Tracker,Time in Zone,SAM 3,Detections Merge,YOLO-World Model,Dynamic Zone,Single-Label Classification Model,Velocity,LMM For Classification,VLM As Detector,Perspective Correction
- outputs:
Detections Stitch,Camera Focus,Stitch OCR Detections,Label Visualization,Stitch OCR Detections,Byte Tracker,Time in Zone,Size Measurement,Detections Filter,Color Visualization,Time in Zone,Circle Visualization,Mask Visualization,Heatmap Visualization,Path Deviation,Detections Consensus,Crop Visualization,Line Counter,Florence-2 Model,Detections Transformation,Model Monitoring Inference Aggregator,Bounding Box Visualization,Roboflow Dataset Upload,Polygon Visualization,Pixelate Visualization,Roboflow Custom Metadata,Line Counter,Path Deviation,Detections Stabilizer,Detections Classes Replacement,Detections Combine,Segment Anything 2 Model,Dynamic Crop,Stability AI Inpainting,Background Color Visualization,Detection Offset,Overlap Filter,Bounding Rectangle,Detection Event Log,Icon Visualization,PTZ Tracking (ONVIF),Distance Measurement,Ellipse Visualization,Dot Visualization,Byte Tracker,Halo Visualization,Blur Visualization,Triangle Visualization,Model Comparison Visualization,Polygon Visualization,Trace Visualization,Corner Visualization,Mask Area Measurement,Byte Tracker,Time in Zone,Detections Merge,Detections List Roll-Up,Roboflow Dataset Upload,Dynamic Zone,Halo Visualization,Velocity,Florence-2 Model,Perspective Correction
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Path Deviation in version v1 has.
Bindings
- input
  - metadata (video_metadata): Video metadata containing video_identifier to maintain separate path tracking state for different videos. Required for persistent path accumulation across frames.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked object detection or instance segmentation predictions. Must include tracker_id information from a tracking block. The block tracks anchor point positions across frames to build object trajectories and compares them against the reference path. Output detections include path_deviation metadata containing the Fréchet distance from the reference path.
  - triggering_anchor (string): Point on the bounding box used to track object position for path calculation. Options: CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance.
  - reference_path (list_of_values): Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory.
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": [
[
100,
200
],
[
200,
300
],
[
300,
400
]
]
}