Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
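Intuitively, the Fréchet distance is the shortest "leash" that lets one point traverse the object's trajectory while another traverses the reference path, with neither allowed to move backwards; it therefore penalises how far the whole trajectory strays from the path, not just its endpoints. The sketch below computes the discrete Fréchet distance between two point sequences purely for intuition; it is a stand-alone illustration with hypothetical names, not the code the block runs internally.

```python
# Illustrative sketch only - not the block's internal implementation.
# Discrete Frechet distance between two polylines given as lists of (x, y) points,
# using the classic Eiter-Mannila dynamic-programming recurrence.
from math import dist


def discrete_frechet_distance(path_a, path_b):
    n, m = len(path_a), len(path_b)
    # ca[i][j] caches the coupling distance of prefixes path_a[:i+1] and path_b[:j+1]
    ca = [[-1.0] * m for _ in range(n)]

    def c(i, j):
        if ca[i][j] >= 0:
            return ca[i][j]
        d = dist(path_a[i], path_b[j])
        if i == 0 and j == 0:
            ca[i][j] = d
        elif i == 0:
            ca[i][j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i][j] = max(c(i - 1, 0), d)
        else:
            ca[i][j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i][j]

    return c(n - 1, m - 1)


# Example: how far a tracked object's trajectory strays from a reference path.
reference_path = [(0, 0), (100, 0), (200, 0)]
observed_track = [(0, 10), (100, 25), (200, 5)]
print(discrete_frechet_distance(observed_track, reference_path))  # 25.0
```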
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/path_deviation_analytics@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
| reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
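Concretely, a ✅-marked property may hold either a literal value or a selector that is resolved at runtime. A minimal sketch of the two selector forms follows; the names expected_path, tracker, and tracked_detections are illustrative assumptions, not values mandated by this block.

```python
# Sketch only: selector forms usable for ✅-marked (parametrisable) properties.
# "expected_path", "tracker" and "tracked_detections" are assumed names for illustration.
reference_path_binding = "$inputs.expected_path"          # resolved from a workflow input at runtime
detections_binding = "$steps.tracker.tracked_detections"  # resolved from an upstream step's output
triggering_anchor_value = "CENTER"                        # a literal value is always allowed as well
```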
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs: Google Vision OCR, LMM For Classification, Detections Filter, Single-Label Classification Model, Clip Comparison, CSV Formatter, SAM 3, Seg Preview, Byte Tracker, Overlap Filter, SAM 3, Object Detection Model, Path Deviation, Detections Combine, Email Notification, Anthropic Claude, Object Detection Model, Line Counter, Clip Comparison, Email Notification, Moondream2, VLM as Classifier, Model Monitoring Inference Aggregator, OCR Model, Path Deviation, LMM, Time in Zone, Roboflow Dataset Upload, Detections Consensus, OpenAI, SAM 3, VLM as Detector, Florence-2 Model, CogVLM, Roboflow Custom Metadata, Byte Tracker, Stitch OCR Detections, Buffer, Bounding Rectangle, Segment Anything 2 Model, Keypoint Detection Model, Time in Zone, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Detections Transformation, Template Matching, Roboflow Dataset Upload, Anthropic Claude, Florence-2 Model, Google Gemini, Google Gemini, EasyOCR, VLM as Detector, Size Measurement, Dynamic Zone, Time in Zone, Twilio SMS Notification, Detections Stitch, Llama 3.2 Vision, Dimension Collapse, Velocity, Slack Notification, Byte Tracker, OpenAI, Local File Sink, Instance Segmentation Model, Multi-Label Classification Model, OpenAI, Dynamic Crop, Detections Stabilizer, Webhook Sink, Instance Segmentation Model, Perspective Correction, Detections Merge, OpenAI
- outputs: Label Visualization, Time in Zone, Line Counter, Blur Visualization, Background Color Visualization, Bounding Box Visualization, Detections Filter, Polygon Visualization, PTZ Tracking (ONVIF), Detection Offset, Pixelate Visualization, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Byte Tracker, Overlap Filter, Corner Visualization, Distance Measurement, Florence-2 Model, Color Visualization, Path Deviation, Detections Combine, Halo Visualization, Size Measurement, Dynamic Zone, Circle Visualization, Time in Zone, Dot Visualization, Detections Stitch, Line Counter, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Byte Tracker, Path Deviation, Time in Zone, Roboflow Dataset Upload, Stability AI Inpainting, Dynamic Crop, Detections Consensus, Detections Stabilizer, Crop Visualization, Detections Merge, Florence-2 Model, Perspective Correction, Roboflow Custom Metadata, Mask Visualization, Trace Visualization, Byte Tracker, Stitch OCR Detections, Bounding Rectangle, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation in version v2 has.
Bindings

- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
    - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
    - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
    - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes, as an sv.Detections(...) object, if object_detection_prediction; or prediction with detected bounding boxes and segmentation masks, as an sv.Detections(...) object, if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}
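Because the block expects each detection to carry a persistent tracker_id, it is normally placed downstream of a tracking step. Below is a hedged sketch of a complete workflow specification, written as a Python dict, that wires an object detection model and a Byte Tracker into Path Deviation v2; apart from path_deviation_analytics@v2, the block identifiers, versions, model ID, and upstream output names are assumptions to verify against the blocks available in your inference installation.

```python
# Sketch of a full workflow specification using Path Deviation v2.
# Everything except "roboflow_core/path_deviation_analytics@v2" (other identifiers,
# versions, model ID, upstream output names) is an assumption - verify locally.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "expected_path"},
    ],
    "steps": [
        {
            "name": "object_detection_model",
            "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed identifier/version
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed model ID
        },
        {
            "name": "tracker",
            "type": "roboflow_core/byte_tracker@v3",  # assumed identifier/version; assigns tracker_id
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
        },
        {
            "name": "path_deviation",
            "type": "roboflow_core/path_deviation_analytics@v2",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",  # assumed tracker output name
            "triggering_anchor": "CENTER",
            "reference_path": "$inputs.expected_path",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "path_deviation_detections",
            "selector": "$steps.path_deviation.path_deviation_detections",
        }
    ],
}
```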
v1¶
Class: PathDeviationAnalyticsBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/path_deviation_analytics@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| triggering_anchor | str | Point on the detection that will be used to calculate the Frechet distance. | ✅ |
| reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs: Google Vision OCR, LMM For Classification, Detections Filter, Single-Label Classification Model, Clip Comparison, CSV Formatter, SAM 3, Seg Preview, Byte Tracker, Overlap Filter, SAM 3, Object Detection Model, Path Deviation, Detections Combine, Email Notification, Anthropic Claude, Object Detection Model, Line Counter, Clip Comparison, Email Notification, Moondream2, VLM as Classifier, Model Monitoring Inference Aggregator, OCR Model, Path Deviation, LMM, Time in Zone, Roboflow Dataset Upload, Detections Consensus, OpenAI, SAM 3, VLM as Detector, Florence-2 Model, CogVLM, Roboflow Custom Metadata, Byte Tracker, Stitch OCR Detections, Buffer, Bounding Rectangle, Segment Anything 2 Model, Keypoint Detection Model, Time in Zone, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Detections Transformation, Template Matching, Roboflow Dataset Upload, Anthropic Claude, Florence-2 Model, Google Gemini, Google Gemini, EasyOCR, VLM as Detector, Size Measurement, Dynamic Zone, Time in Zone, Twilio SMS Notification, Detections Stitch, Llama 3.2 Vision, Dimension Collapse, Velocity, Slack Notification, Byte Tracker, OpenAI, Local File Sink, Instance Segmentation Model, Multi-Label Classification Model, OpenAI, Dynamic Crop, Detections Stabilizer, Webhook Sink, Instance Segmentation Model, Perspective Correction, Detections Merge, OpenAI
- outputs: Label Visualization, Time in Zone, Line Counter, Blur Visualization, Background Color Visualization, Bounding Box Visualization, Detections Filter, Polygon Visualization, PTZ Tracking (ONVIF), Detection Offset, Pixelate Visualization, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Byte Tracker, Overlap Filter, Corner Visualization, Distance Measurement, Florence-2 Model, Color Visualization, Path Deviation, Detections Combine, Halo Visualization, Size Measurement, Dynamic Zone, Circle Visualization, Time in Zone, Dot Visualization, Detections Stitch, Line Counter, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Byte Tracker, Path Deviation, Time in Zone, Roboflow Dataset Upload, Stability AI Inpainting, Dynamic Crop, Detections Consensus, Detections Stabilizer, Crop Visualization, Detections Merge, Florence-2 Model, Perspective Correction, Roboflow Custom Metadata, Mask Visualization, Trace Visualization, Byte Tracker, Stitch OCR Detections, Bounding Rectangle, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation in version v1 has.
Bindings

- input
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
    - triggering_anchor (string): Point on the detection that will be used to calculate the Frechet distance.
    - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
    - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes, as an sv.Detections(...) object, if object_detection_prediction; or prediction with detected bounding boxes and segmentation masks, as an sv.Detections(...) object, if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}
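Because tracker_id values must persist between frames, both versions of this block are typically run against a video source rather than single images; for v1, the metadata property is usually bound to a video-metadata workflow input that the video runtime fills in (an assumption to verify for your setup). The sketch below drives such a workflow with InferencePipeline; the init_with_workflow arguments, the callback signature, and the workflow_specification variable (for example, the spec sketched after the v2 example above) are assumptions to check against your inference version.

```python
# Sketch only: running a workflow that contains Path Deviation over a video file.
# Argument names and the callback signature are assumptions - check your inference version.
from inference import InferencePipeline


def on_prediction(result, video_frame):
    # Workflow outputs arrive keyed by the names declared in the "outputs" section.
    detections = result["path_deviation_detections"]  # an sv.Detections(...) object
    print(f"frame {video_frame.frame_id}: {len(detections)} tracked objects")


pipeline = InferencePipeline.init_with_workflow(
    video_reference="traffic.mp4",                  # path, RTSP URL, or device index
    workflow_specification=workflow_specification,  # e.g. the spec sketched earlier
    workflows_parameters={"expected_path": [(0, 300), (640, 300), (1280, 300)]},
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```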