# Path Deviation

## v2
Class: PathDeviationAnalyticsBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Measure how closely tracked objects follow a reference path by calculating the Fréchet distance between the object's actual trajectory and the expected reference path, enabling path compliance monitoring, route deviation detection, quality control in automated systems, and behavioral analysis workflows.
### How This Block Works
This block compares the actual movement path of tracked objects against a predefined reference path to measure deviation. The block:
- Receives tracked detection predictions with unique tracker IDs, an image with embedded video metadata, and a reference path definition
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts video_identifier to maintain separate path tracking state for different videos
- Uses video metadata to initialize and manage path tracking state per video
- Validates that detections have tracker IDs (required for tracking object movement across frames)
- Initializes or retrieves path tracking state for the video:
- Maintains a history of positions for each tracked object per video
- Stores object paths using video_identifier to separate state for different videos
- Creates new path tracking entries for objects appearing for the first time
- Extracts anchor point coordinates for each detection:
- Uses the triggering_anchor to determine which point on the bounding box to track (default: CENTER)
- Gets the (x, y) coordinates of the anchor point for each detection in the current frame
- The anchor point represents the position of the object used for path comparison
- Accumulates object paths over time:
- Appends each object's anchor point to its path history as frames are processed
- Maintains separate path histories for each unique tracker_id
- Builds complete trajectory paths by accumulating positions across all processed frames
- Calculates Fréchet distance for each tracked object:
- Fréchet Distance: Measures the similarity between two curves (paths) considering both location and ordering of points
- Compares the object's accumulated path (actual trajectory) against the reference path (expected trajectory)
- Uses dynamic programming to compute the minimum "leash length" required to traverse both paths simultaneously (a sketch of this computation follows this section)
- Accounts for the order of points along each path, not just point-to-point distances
- Lower values indicate the object follows the reference path closely, higher values indicate greater deviation
- Stores path deviation in detection metadata:
- Adds the Fréchet distance value to each detection's metadata
- Each detection includes path_deviation representing how much it deviates from the reference path
- Distance is measured in pixels (same units as image coordinates)
- Maintains persistent path tracking:
- Path histories accumulate across frames for the entire video
- Each object's deviation is calculated based on its complete path from the start of tracking
- Separate tracking state maintained for each video_identifier
- Returns detections enhanced with path deviation information:
- Outputs detection objects with added path_deviation metadata
- Each detection now includes the Fréchet distance measuring its deviation from the reference path
The Fréchet distance is a metric that measures the similarity between two curves by finding the minimum length of a "leash" that connects a point moving along one curve to a point moving along the other curve, where both points move forward along their respective curves. Unlike simple Euclidean distance, Fréchet distance considers the ordering and continuity of points along paths, making it ideal for comparing trajectories where the sequence of movement matters. An object that follows the reference path exactly will have a Fréchet distance of 0, while objects that deviate significantly will have larger distances.
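For readers who want to see the computation concretely, below is a minimal, self-contained sketch of the discrete Fréchet distance between two point sequences, using the dynamic-programming recurrence described above. It is an illustrative reimplementation, not the block's actual source code; all names are ours.

```python
import numpy as np

def discrete_frechet_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Discrete Fréchet distance between two polylines given as (N, 2) arrays.

    ca[i, j] is the minimum "leash length" needed to walk the first i+1
    points of path_a and the first j+1 points of path_b, in order.
    """
    n, m = len(path_a), len(path_b)
    # Pairwise Euclidean distances between every point of the two paths.
    dist = np.linalg.norm(path_a[:, None, :] - path_b[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = dist[0, 0]
    for i in range(1, n):  # walk path_a while path_b stays at its start
        ca[i, 0] = max(ca[i - 1, 0], dist[i, 0])
    for j in range(1, m):  # walk path_b while path_a stays at its start
        ca[0, j] = max(ca[0, j - 1], dist[0, j])
    for i in range(1, n):
        for j in range(1, m):
            # Advance along path_a, path_b, or both; the leash must
            # always cover the current pair of points.
            step = min(ca[i - 1, j], ca[i, j - 1], ca[i - 1, j - 1])
            ca[i, j] = max(step, dist[i, j])
    return float(ca[-1, -1])

reference = np.array([(100, 200), (200, 300), (300, 400)], dtype=float)
print(discrete_frechet_distance(reference, reference))                 # 0.0 - exact match
print(discrete_frechet_distance(reference + [0.0, 50.0], reference))   # 50.0 - constant offset
```

Because both indices may only move forward, the ordering of points matters: a trajectory that visits the right locations in the wrong order still scores a large distance.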
### Common Use Cases
- Path Compliance Monitoring: Monitor whether vehicles, robots, or objects follow predefined routes (e.g., verify vehicles stay in lanes, check robots follow programmed paths, ensure objects follow expected routes), enabling compliance monitoring workflows
- Quality Control: Detect deviations in manufacturing or assembly processes where objects should follow specific paths (e.g., detect conveyor belt deviations, monitor assembly line paths, check product movement patterns), enabling quality control workflows
- Traffic Analysis: Analyze vehicle movement patterns and detect lane departures or route deviations (e.g., detect vehicles leaving lanes, monitor route adherence, analyze traffic pattern compliance), enabling traffic analysis workflows
- Security Monitoring: Detect suspicious movement patterns or deviations from expected paths in security scenarios (e.g., detect unauthorized route deviations, monitor perimeter breach attempts, track movement compliance), enabling security monitoring workflows
- Automated Systems: Monitor and validate that automated systems (robots, AGVs, drones) follow expected paths correctly (e.g., verify robot navigation accuracy, check automated vehicle paths, validate drone flight paths), enabling automated system validation workflows
- Behavioral Analysis: Study movement patterns and path adherence in behavioral research (e.g., analyze animal movement patterns, study path following behavior, measure route preference deviations), enabling behavioral research workflows
### Connecting to Other Blocks

This block receives tracked detections, an image with embedded video metadata, and a reference path, and produces detections enhanced with path_deviation metadata (a sketch of a typical chain follows this list):
- After Byte Tracker blocks to measure path deviation for tracked objects (e.g., measure tracked vehicle path compliance, analyze tracked person route adherence, monitor tracked object path deviations), enabling tracking-to-path-analysis workflows
- After object detection or instance segmentation blocks with tracking enabled to analyze movement paths (e.g., analyze vehicle paths, track object route compliance, measure path deviations), enabling detection-to-path-analysis workflows
- Before visualization blocks to display path deviation information (e.g., visualize paths and deviations, display reference and actual paths, show deviation metrics), enabling path deviation visualization workflows
- Before logic blocks like Continue If to make decisions based on path deviation thresholds (e.g., continue if deviation exceeds limit, filter based on path compliance, trigger actions on route violations), enabling path-based decision workflows
- Before notification blocks to alert on path deviations or compliance violations (e.g., alert on route deviations, notify on path compliance issues, trigger deviation-based alerts), enabling path-based notification workflows
- Before data storage blocks to record path deviation measurements (e.g., log path compliance data, store deviation statistics, record route adherence metrics), enabling path deviation data logging workflows
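The fragment below sketches the middle of such a chain as a Python workflow specification: a Byte Tracker step feeding this block, whose output a logic or notification step can then consume. Treat it as a shape illustration only; the Byte Tracker version, its remaining required inputs, and its `tracked_detections` output name are assumptions to verify against that block's documentation.

```python
# Illustrative workflow fragment: tracker -> path deviation.
steps = [
    {
        "name": "tracker",
        "type": "roboflow_core/byte_tracker@v1",  # version is an assumption
        "detections": "$steps.object_detection_model.predictions",
        # ...plus the tracker's other required inputs, per its own docs
    },
    {
        "name": "path_deviation",
        "type": "roboflow_core/path_deviation_analytics@v2",
        "image": "$inputs.image",
        "detections": "$steps.tracker.tracked_detections",  # output name assumed
        "triggering_anchor": "CENTER",
        "reference_path": [[100, 200], [200, 300], [300, 400]],
    },
    # A Continue If, notification, or data storage step can then reference
    # "$steps.path_deviation.path_deviation_detections".
]
```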
### Version Differences

Enhanced from v1:

- Simplified Input: Uses an `image` input that contains embedded video metadata instead of requiring a separate `metadata` field, simplifying workflow connections and reducing input complexity
- Improved Integration: Better integration with image-based workflows, since video metadata is accessed directly from the image object rather than requiring a separate metadata input
### Requirements
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The reference path must be defined as a list of at least 2 points, where each point is a tuple or list of exactly 2 coordinates (x, y). The image's video_metadata should include video_identifier to maintain separate path tracking state for different videos. The block maintains persistent path tracking across frames for each video, accumulating complete trajectories, so it should be used in video workflows where frames are processed sequentially. For accurate path deviation measurement, detections should be provided consistently across frames with valid tracker IDs. The Fréchet distance is calculated in pixels (same units as image coordinates).
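These shape constraints are easy to check before wiring the value into a workflow. A minimal sketch (a hypothetical helper of our own, not part of the block):

```python
def validate_reference_path(reference_path: list) -> None:
    """Enforce the constraints stated above: >= 2 points, each exactly (x, y)."""
    if len(reference_path) < 2:
        raise ValueError("reference_path must contain at least 2 points")
    for i, point in enumerate(reference_path):
        if len(point) != 2:
            raise ValueError(
                f"point {i} must have exactly 2 coordinates (x, y), got {point!r}"
            )

validate_reference_path([(100, 200), (200, 300), (300, 400)])  # passes silently
```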
### Type identifier

Use the following identifier in the step "type" field: `roboflow_core/path_deviation_analytics@v2` to add the block as a step in your workflow.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `triggering_anchor` | `str` | Point on the bounding box used to track object position for path calculation. Options include CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance. | ✅ |
| `reference_path` | `List[Any]` | Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
### Available Connections
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs: Detections Stitch, Roboflow Dataset Upload, Detections Classes Replacement, LMM, Florence-2 Model, Anthropic Claude, Google Gemini, Llama 3.2 Vision, Motion Detection, VLM As Detector, Size Measurement, Line Counter, Roboflow Dataset Upload, Stitch OCR Detections, Dynamic Zone, Email Notification, Clip Comparison, Buffer, SAM 3, Detections Stabilizer, Slack Notification, Object Detection Model, Path Deviation, EasyOCR, Florence-2 Model, SAM 3, CSV Formatter, Object Detection Model, Anthropic Claude, Detections Combine, OpenAI, Google Gemini, Keypoint Detection Model, Anthropic Claude, VLM As Classifier, Time in Zone, Clip Comparison, Detections Merge, CogVLM, Byte Tracker, Twilio SMS Notification, VLM As Detector, PTZ Tracking (ONVIF), Instance Segmentation Model, OpenAI, Detections Transformation, OCR Model, Moondream2, Stitch OCR Detections, Dynamic Crop, Model Monitoring Inference Aggregator, Detection Offset, Byte Tracker, OpenAI, Dimension Collapse, YOLO-World Model, Time in Zone, Velocity, Email Notification, Instance Segmentation Model, Path Deviation, Camera Focus, Twilio SMS/MMS Notification, LMM For Classification, Segment Anything 2 Model, Bounding Rectangle, Detections Filter, Roboflow Custom Metadata, Perspective Correction, Multi-Label Classification Model, Detection Event Log, Local File Sink, Byte Tracker, Google Vision OCR, Time in Zone, SAM 3, Detections Consensus, Template Matching, Detections List Roll-Up, Google Gemini, Overlap Filter, Webhook Sink, Seg Preview, OpenAI, Single-Label Classification Model
- outputs: Detections Stitch, Roboflow Dataset Upload, Triangle Visualization, Detections Classes Replacement, Ellipse Visualization, PTZ Tracking (ONVIF), Detections Transformation, Florence-2 Model, Blur Visualization, Halo Visualization, Stitch OCR Detections, Model Comparison Visualization, Dynamic Crop, Model Monitoring Inference Aggregator, Circle Visualization, Detection Offset, Pixelate Visualization, Byte Tracker, Size Measurement, Line Counter, Stitch OCR Detections, Roboflow Dataset Upload, Trace Visualization, Label Visualization, Color Visualization, Icon Visualization, Dot Visualization, Dynamic Zone, Time in Zone, Distance Measurement, Velocity, Detections Stabilizer, Path Deviation, Corner Visualization, Florence-2 Model, Path Deviation, Camera Focus, Roboflow Custom Metadata, Segment Anything 2 Model, Bounding Rectangle, Detections Filter, Detections Combine, Perspective Correction, Bounding Box Visualization, Detection Event Log, Polygon Visualization, Background Color Visualization, Byte Tracker, Halo Visualization, Time in Zone, Stability AI Inpainting, Crop Visualization, Polygon Visualization, Detections Consensus, Time in Zone, Detections List Roll-Up, Detections Merge, Heatmap Visualization, Overlap Filter, Line Counter, Mask Visualization, Byte Tracker
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v2 has.
Bindings
- input
  - `image` (image): Image with embedded video metadata. The video_metadata contains video_identifier to maintain separate path tracking state for different videos. Required for persistent path accumulation across frames.
  - `detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked object detection or instance segmentation predictions. Must include tracker_id information from a tracking block. The block tracks anchor point positions across frames to build object trajectories and compares them against the reference path. Output detections include path_deviation metadata containing the Fréchet distance from the reference path.
  - `triggering_anchor` (string): Point on the bounding box used to track object position for path calculation. Options include CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance.
  - `reference_path` (list_of_values): Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory.
- output
  - `path_deviation_detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2:

```json
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": [
[
100,
200
],
[
200,
300
],
[
300,
400
]
]
}
```
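Downstream steps (or custom Python consuming workflow results) see the measured deviation on each detection. A hedged sketch of reading it, assuming the value is exposed under a "path_deviation" key of the sv.Detections data dictionary; the key name follows the metadata name used above and should be verified against your inference version:

```python
import supervision as sv

def report_deviations(detections: sv.Detections, limit_px: float = 75.0) -> None:
    # "path_deviation" as the data key is an assumption based on the
    # metadata name documented above; verify against your version.
    deviations = detections.data["path_deviation"]
    for tracker_id, deviation in zip(detections.tracker_id, deviations):
        status = "OFF PATH" if deviation > limit_px else "on path"
        print(f"tracker {int(tracker_id)}: {deviation:.1f} px from reference ({status})")
```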
## v1
Class: PathDeviationAnalyticsBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Measure how closely tracked objects follow a reference path by calculating the Fréchet distance between the object's actual trajectory and the expected reference path, enabling path compliance monitoring, route deviation detection, quality control in automated systems, and behavioral analysis workflows.
### How This Block Works
This block compares the actual movement path of tracked objects against a predefined reference path to measure deviation. The block:
- Receives tracked detection predictions with unique tracker IDs, video metadata, and a reference path definition
- Validates that detections have tracker IDs (required for tracking object movement across frames)
- Initializes or retrieves path tracking state for the video:
- Maintains a history of positions for each tracked object per video
- Stores object paths using video_identifier to separate state for different videos
- Creates new path tracking entries for objects appearing for the first time
- Extracts anchor point coordinates for each detection:
- Uses the triggering_anchor to determine which point on the bounding box to track (default: CENTER)
- Gets the (x, y) coordinates of the anchor point for each detection in the current frame
- The anchor point represents the position of the object used for path comparison
- Accumulates object paths over time (a sketch of this accumulation appears after this section):
- Appends each object's anchor point to its path history as frames are processed
- Maintains separate path histories for each unique tracker_id
- Builds complete trajectory paths by accumulating positions across all processed frames
- Calculates Fréchet distance for each tracked object:
- Fréchet Distance: Measures the similarity between two curves (paths) considering both location and ordering of points
- Compares the object's accumulated path (actual trajectory) against the reference path (expected trajectory)
- Uses dynamic programming to compute the minimum "leash length" required to traverse both paths simultaneously (illustrated by the dynamic-programming sketch in the v2 section above)
- Accounts for the order of points along each path, not just point-to-point distances
- Lower values indicate the object follows the reference path closely, higher values indicate greater deviation
- Stores path deviation in detection metadata:
- Adds the Fréchet distance value to each detection's metadata
- Each detection includes path_deviation representing how much it deviates from the reference path
- Distance is measured in pixels (same units as image coordinates)
- Maintains persistent path tracking:
- Path histories accumulate across frames for the entire video
- Each object's deviation is calculated based on its complete path from the start of tracking
- Separate tracking state maintained for each video_identifier
- Returns detections enhanced with path deviation information:
- Outputs detection objects with added path_deviation metadata
- Each detection now includes the Fréchet distance measuring its deviation from the reference path
The Fréchet distance is a metric that measures the similarity between two curves by finding the minimum length of a "leash" that connects a point moving along one curve to a point moving along the other curve, where both points move forward along their respective curves. Unlike simple Euclidean distance, Fréchet distance considers the ordering and continuity of points along paths, making it ideal for comparing trajectories where the sequence of movement matters. An object that follows the reference path exactly will have a Fréchet distance of 0, while objects that deviate significantly will have larger distances.
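The trajectory side of that comparison is simply the per-object history of anchor points. The sketch below illustrates that accumulation using supervision's anchor helper; it is our own reimplementation of the idea, not the block's source, and the state layout is an assumption.

```python
from collections import defaultdict

import supervision as sv

# One path per (video_identifier, tracker_id), mirroring the per-video
# state described above. The layout is illustrative, not the block's own.
paths: defaultdict = defaultdict(list)

def accumulate_paths(video_identifier: str, detections: sv.Detections) -> None:
    # One (x, y) anchor per detection; CENTER matches the block's default.
    anchors = detections.get_anchors_coordinates(anchor=sv.Position.CENTER)
    for tracker_id, anchor in zip(detections.tracker_id, anchors):
        paths[(video_identifier, int(tracker_id))].append(anchor)
    # Each accumulated paths[...] list is the trajectory later compared
    # against the reference path with the Fréchet distance.
```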
### Common Use Cases
- Path Compliance Monitoring: Monitor whether vehicles, robots, or objects follow predefined routes (e.g., verify vehicles stay in lanes, check robots follow programmed paths, ensure objects follow expected routes), enabling compliance monitoring workflows
- Quality Control: Detect deviations in manufacturing or assembly processes where objects should follow specific paths (e.g., detect conveyor belt deviations, monitor assembly line paths, check product movement patterns), enabling quality control workflows
- Traffic Analysis: Analyze vehicle movement patterns and detect lane departures or route deviations (e.g., detect vehicles leaving lanes, monitor route adherence, analyze traffic pattern compliance), enabling traffic analysis workflows
- Security Monitoring: Detect suspicious movement patterns or deviations from expected paths in security scenarios (e.g., detect unauthorized route deviations, monitor perimeter breach attempts, track movement compliance), enabling security monitoring workflows
- Automated Systems: Monitor and validate that automated systems (robots, AGVs, drones) follow expected paths correctly (e.g., verify robot navigation accuracy, check automated vehicle paths, validate drone flight paths), enabling automated system validation workflows
- Behavioral Analysis: Study movement patterns and path adherence in behavioral research (e.g., analyze animal movement patterns, study path following behavior, measure route preference deviations), enabling behavioral research workflows
### Connecting to Other Blocks
This block receives tracked detections, video metadata, and a reference path, and produces detections enhanced with path_deviation metadata:
- After Byte Tracker blocks to measure path deviation for tracked objects (e.g., measure tracked vehicle path compliance, analyze tracked person route adherence, monitor tracked object path deviations), enabling tracking-to-path-analysis workflows
- After object detection or instance segmentation blocks with tracking enabled to analyze movement paths (e.g., analyze vehicle paths, track object route compliance, measure path deviations), enabling detection-to-path-analysis workflows
- Before visualization blocks to display path deviation information (e.g., visualize paths and deviations, display reference and actual paths, show deviation metrics), enabling path deviation visualization workflows
- Before logic blocks like Continue If to make decisions based on path deviation thresholds (e.g., continue if deviation exceeds limit, filter based on path compliance, trigger actions on route violations), enabling path-based decision workflows
- Before notification blocks to alert on path deviations or compliance violations (e.g., alert on route deviations, notify on path compliance issues, trigger deviation-based alerts), enabling path-based notification workflows
- Before data storage blocks to record path deviation measurements (e.g., log path compliance data, store deviation statistics, record route adherence metrics), enabling path deviation data logging workflows
### Requirements
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The reference path must be defined as a list of at least 2 points, where each point is a tuple or list of exactly 2 coordinates (x, y). The block requires video metadata with video_identifier to maintain separate path tracking state for different videos. The block maintains persistent path tracking across frames for each video, accumulating complete trajectories, so it should be used in video workflows where frames are processed sequentially. For accurate path deviation measurement, detections should be provided consistently across frames with valid tracker IDs. The Fréchet distance is calculated in pixels (same units as image coordinates).
### Type identifier

Use the following identifier in the step "type" field: `roboflow_core/path_deviation_analytics@v1` to add the block as a step in your workflow.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `triggering_anchor` | `str` | Point on the bounding box used to track object position for path calculation. Options: CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance. | ✅ |
| `reference_path` | `List[Any]` | Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
### Available Connections
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs: Detections Stitch, Roboflow Dataset Upload, Detections Classes Replacement, LMM, Florence-2 Model, Anthropic Claude, Google Gemini, Llama 3.2 Vision, Motion Detection, VLM As Detector, Size Measurement, Line Counter, Roboflow Dataset Upload, Stitch OCR Detections, Dynamic Zone, Email Notification, Clip Comparison, Buffer, SAM 3, Detections Stabilizer, Slack Notification, Object Detection Model, Path Deviation, EasyOCR, Florence-2 Model, SAM 3, CSV Formatter, Object Detection Model, Anthropic Claude, Detections Combine, OpenAI, Google Gemini, Keypoint Detection Model, Anthropic Claude, VLM As Classifier, Time in Zone, Clip Comparison, Detections Merge, CogVLM, Byte Tracker, Twilio SMS Notification, VLM As Detector, PTZ Tracking (ONVIF), Instance Segmentation Model, OpenAI, Detections Transformation, OCR Model, Moondream2, Stitch OCR Detections, Dynamic Crop, Model Monitoring Inference Aggregator, Detection Offset, Byte Tracker, OpenAI, Dimension Collapse, YOLO-World Model, Time in Zone, Velocity, Email Notification, Instance Segmentation Model, Path Deviation, Camera Focus, Twilio SMS/MMS Notification, LMM For Classification, Segment Anything 2 Model, Bounding Rectangle, Detections Filter, Roboflow Custom Metadata, Perspective Correction, Multi-Label Classification Model, Detection Event Log, Local File Sink, Byte Tracker, Google Vision OCR, Time in Zone, SAM 3, Detections Consensus, Template Matching, Detections List Roll-Up, Google Gemini, Overlap Filter, Webhook Sink, Seg Preview, OpenAI, Single-Label Classification Model
- outputs: Detections Stitch, Roboflow Dataset Upload, Triangle Visualization, Detections Classes Replacement, Ellipse Visualization, PTZ Tracking (ONVIF), Detections Transformation, Florence-2 Model, Blur Visualization, Halo Visualization, Stitch OCR Detections, Model Comparison Visualization, Dynamic Crop, Model Monitoring Inference Aggregator, Circle Visualization, Detection Offset, Pixelate Visualization, Byte Tracker, Size Measurement, Line Counter, Stitch OCR Detections, Roboflow Dataset Upload, Trace Visualization, Label Visualization, Color Visualization, Icon Visualization, Dot Visualization, Dynamic Zone, Time in Zone, Distance Measurement, Velocity, Detections Stabilizer, Path Deviation, Corner Visualization, Florence-2 Model, Path Deviation, Camera Focus, Roboflow Custom Metadata, Segment Anything 2 Model, Bounding Rectangle, Detections Filter, Detections Combine, Perspective Correction, Bounding Box Visualization, Detection Event Log, Polygon Visualization, Background Color Visualization, Byte Tracker, Halo Visualization, Time in Zone, Stability AI Inpainting, Crop Visualization, Polygon Visualization, Detections Consensus, Time in Zone, Detections List Roll-Up, Detections Merge, Heatmap Visualization, Overlap Filter, Line Counter, Mask Visualization, Byte Tracker
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v1 has.
Bindings
- input
  - `metadata` (video_metadata): Video metadata containing video_identifier to maintain separate path tracking state for different videos. Required for persistent path accumulation across frames.
  - `detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked object detection or instance segmentation predictions. Must include tracker_id information from a tracking block. The block tracks anchor point positions across frames to build object trajectories and compares them against the reference path. Output detections include path_deviation metadata containing the Fréchet distance from the reference path.
  - `triggering_anchor` (string): Point on the bounding box used to track object position for path calculation. Options: CENTER (default), BOTTOM_CENTER, TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, etc. This anchor point's coordinates are accumulated over frames to build the object's trajectory path, which is compared against the reference path using Fréchet distance.
  - `reference_path` (list_of_values): Expected reference path as a list of at least 2 points, where each point is a tuple or list of [x, y] coordinates. Example: [(100, 200), (200, 300), (300, 400)] defines a path with 3 points. The Fréchet distance measures how closely tracked objects follow this reference path. Points should be ordered along the expected trajectory.
- output
  - `path_deviation_detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1:

```json
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": [
[
100,
200
],
[
200,
300
],
[
300,
400
]
]
}
```