Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
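The block's exact implementation is not reproduced here, but the discrete Fréchet distance it reports can be sketched with a standard dynamic-programming formulation. The following is an illustrative implementation, not the block's source code:

```python
from math import hypot


def discrete_frechet(path_a, path_b):
    """Discrete Frechet distance between two polylines given as (x, y) tuples."""
    n, m = len(path_a), len(path_b)
    # memo[i][j] holds the Frechet distance between the prefixes
    # path_a[:i+1] and path_b[:j+1]
    memo = [[0.0] * m for _ in range(n)]

    def dist(i, j):
        (x1, y1), (x2, y2) = path_a[i], path_b[j]
        return hypot(x1 - x2, y1 - y2)

    for i in range(n):
        for j in range(m):
            d = dist(i, j)
            if i == 0 and j == 0:
                memo[i][j] = d
            elif i == 0:
                memo[i][j] = max(memo[i][j - 1], d)
            elif j == 0:
                memo[i][j] = max(memo[i - 1][j], d)
            else:
                memo[i][j] = max(
                    min(memo[i - 1][j], memo[i][j - 1], memo[i - 1][j - 1]), d
                )
    return memo[n - 1][m - 1]


# A tracked object whose path drifts 3 px above the reference line y = 0
reference = [(0, 0), (5, 0), (10, 0)]
observed = [(0, 3), (5, 3), (10, 3)]
print(discrete_frechet(reference, observed))  # 3.0
```

In the block, the observed path is built per `tracker_id` from the chosen `triggering_anchor` point of each detection across frames, which is why tracking is a prerequisite.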
Type identifier¶
Use the following identifier in the step `"type"` field: `roboflow_core/path_deviation_analytics@v2` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `triggering_anchor` | `str` | Triggering anchor. Allowed values: `CENTER`, `CENTER_LEFT`, `CENTER_RIGHT`, `TOP_CENTER`, `TOP_LEFT`, `TOP_RIGHT`, `BOTTOM_LEFT`, `BOTTOM_CENTER`, `BOTTOM_RIGHT`, `CENTER_OF_MASS`. | ✅ |
| `reference_path` | `List[Any]` | Reference path in a format `[(x1, y1), (x2, y2), (x3, y3), ...]`. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs:
VLM as Detector, Byte Tracker, Google Vision OCR, Overlap Filter, SAM 3, Detections Stabilizer, Detections Filter, LMM For Classification, VLM as Classifier, Detections Combine, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Template Matching, Moondream2, Velocity, OCR Model, Florence-2 Model, Detections Transformation, EasyOCR, Buffer, Florence-2 Model, Detection Offset, Slack Notification, Clip Comparison, Instance Segmentation Model, OpenAI, Byte Tracker, Line Counter, PTZ Tracking (ONVIF), Object Detection Model, Keypoint Detection Model, Google Gemini, Email Notification, Llama 3.2 Vision, Byte Tracker, Dynamic Zone, YOLO-World Model, Size Measurement, Email Notification, Time in Zone, OpenAI, CogVLM, Roboflow Custom Metadata, Stitch OCR Detections, Detections Stitch, Time in Zone, CSV Formatter, VLM as Detector, OpenAI, Detections Classes Replacement, Perspective Correction, Twilio SMS Notification, Clip Comparison, Single-Label Classification Model, Seg Preview, Roboflow Dataset Upload, Roboflow Dataset Upload, Webhook Sink, Dimension Collapse, Instance Segmentation Model, Multi-Label Classification Model, Time in Zone, Detections Merge, Path Deviation, Anthropic Claude, LMM, Google Gemini, Dynamic Crop, Bounding Rectangle, Path Deviation, Detections Consensus, Local File Sink, Object Detection Model
- outputs:
Byte Tracker, Overlap Filter, Blur Visualization, Detections Stabilizer, Circle Visualization, Time in Zone, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Stability AI Inpainting, Detections Stitch, Roboflow Dataset Upload, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Halo Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Zone, Dynamic Crop, Bounding Rectangle, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Mask Visualization, Time in Zone, Roboflow Custom Metadata, Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation in version v2 has.
Bindings
- input
  - `image` (`image`): not available.
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Predictions.
  - `triggering_anchor` (`string`): Triggering anchor. Allowed values: `CENTER`, `CENTER_LEFT`, `CENTER_RIGHT`, `TOP_CENTER`, `TOP_LEFT`, `TOP_RIGHT`, `BOTTOM_LEFT`, `BOTTOM_CENTER`, `BOTTOM_RIGHT`, `CENTER_OF_MASS`.
  - `reference_path` (`list_of_values`): Reference path in a format `[(x1, y1), (x2, y2), (x3, y3), ...]`.
- output
  - `path_deviation_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes as an `sv.Detections(...)` object if `object_detection_prediction`, or with detected bounding boxes and segmentation masks as an `sv.Detections(...)` object if `instance_segmentation_prediction`.
Example JSON definition of step Path Deviation in version v2
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/path_deviation_analytics@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "triggering_anchor": "CENTER",
    "reference_path": "$inputs.expected_path"
}
```
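In a full workflow specification, the `$inputs.expected_path` selector above resolves to a workflow input supplied at run time. A minimal sketch of how the step, its inputs, and the runtime parameters could fit together is shown below; the model ID, step names, and input names are illustrative, not prescribed by the block:

```python
# Workflow specification embedding the Path Deviation step. "expected_path"
# is a workflow parameter the caller provides with every run. Note that in
# practice the detections fed to the block should first pass through a
# tracking step (e.g. Byte Tracker) so each object carries a persistent
# tracker_id, as the block requires.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "expected_path"},
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/path_deviation_analytics@v2",
            "name": "path_deviation",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            "triggering_anchor": "CENTER",
            "reference_path": "$inputs.expected_path",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "deviation",
            "selector": "$steps.path_deviation.path_deviation_detections",
        }
    ],
}

# Runtime value matching the documented format [(x1, y1), (x2, y2), ...]
runtime_parameters = {"expected_path": [(100, 100), (500, 100), (500, 400)]}
```

The specification would then be passed to a workflow runner together with `runtime_parameters`; consult the Workflows execution documentation for the exact runner API.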
v1¶
Class: PathDeviationAnalyticsBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step `"type"` field: `roboflow_core/path_deviation_analytics@v1` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `triggering_anchor` | `str` | Point on the detection that will be used to calculate the Fréchet distance. | ✅ |
| `reference_path` | `List[Any]` | Reference path in a format `[(x1, y1), (x2, y2), (x3, y3), ...]`. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs:
VLM as Detector, Byte Tracker, Google Vision OCR, Overlap Filter, SAM 3, Detections Stabilizer, Detections Filter, LMM For Classification, VLM as Classifier, Detections Combine, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Template Matching, Moondream2, Velocity, OCR Model, Florence-2 Model, Detections Transformation, EasyOCR, Buffer, Florence-2 Model, Detection Offset, Slack Notification, Clip Comparison, Instance Segmentation Model, OpenAI, Byte Tracker, Line Counter, PTZ Tracking (ONVIF), Object Detection Model, Keypoint Detection Model, Google Gemini, Email Notification, Llama 3.2 Vision, Byte Tracker, Dynamic Zone, YOLO-World Model, Size Measurement, Email Notification, Time in Zone, OpenAI, CogVLM, Roboflow Custom Metadata, Stitch OCR Detections, Detections Stitch, Time in Zone, CSV Formatter, VLM as Detector, OpenAI, Detections Classes Replacement, Perspective Correction, Twilio SMS Notification, Clip Comparison, Single-Label Classification Model, Seg Preview, Roboflow Dataset Upload, Roboflow Dataset Upload, Webhook Sink, Dimension Collapse, Instance Segmentation Model, Multi-Label Classification Model, Time in Zone, Detections Merge, Path Deviation, Anthropic Claude, LMM, Google Gemini, Dynamic Crop, Bounding Rectangle, Path Deviation, Detections Consensus, Local File Sink, Object Detection Model
- outputs:
Byte Tracker, Overlap Filter, Blur Visualization, Detections Stabilizer, Circle Visualization, Time in Zone, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Stability AI Inpainting, Detections Stitch, Roboflow Dataset Upload, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Halo Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Zone, Dynamic Crop, Bounding Rectangle, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Mask Visualization, Time in Zone, Roboflow Custom Metadata, Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation in version v1 has.
Bindings
- input
  - `metadata` (`video_metadata`): not available.
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Predictions.
  - `triggering_anchor` (`string`): Point on the detection that will be used to calculate the Fréchet distance.
  - `reference_path` (`list_of_values`): Reference path in a format `[(x1, y1), (x2, y2), (x3, y3), ...]`.
- output
  - `path_deviation_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes as an `sv.Detections(...)` object if `object_detection_prediction`, or with detected bounding boxes and segmentation masks as an `sv.Detections(...)` object if `instance_segmentation_prediction`.
Example JSON definition of step Path Deviation in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/path_deviation_analytics@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "triggering_anchor": "CENTER",
    "reference_path": "$inputs.expected_path"
}
```