Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
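The (discrete) Frechet distance can be pictured as the length of the shortest leash that lets a walker follow the reference path while their dog follows the object's trajectory, with neither allowed to backtrack. Below is a minimal, self-contained sketch of that metric, assuming the trajectory is the sequence of a tracked object's anchor points (e.g. CENTER) collected over frames; it illustrates the quantity being reported, not the block's internal implementation, and the function and variable names are hypothetical.

```python
# Illustrative sketch of the discrete Frechet distance (Eiter-Mannila DP),
# not the block's actual code.
from math import dist


def discrete_frechet_distance(path_a, path_b):
    """Discrete Frechet distance between two polylines given as lists of (x, y) points."""
    n, m = len(path_a), len(path_b)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(path_a[i], path_b[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]


reference_path = [(100, 100), (200, 100), (300, 150)]
tracked_anchor_points = [(105, 110), (210, 95), (290, 170)]  # one anchor point per frame
print(discrete_frechet_distance(tracked_anchor_points, reference_path))
```

In the workflow, the distance is tracked per tracker_id, with the configured triggering_anchor determining which point of each detection contributes to the trajectory.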
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/path_deviation_analytics@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation
in version v2
.
- inputs: Keypoint Detection Model, OpenAI, Detections Filter, Email Notification, VLM as Classifier, Segment Anything 2 Model, YOLO-World Model, Detections Classes Replacement, Webhook Sink, Detections Transformation, Detections Consensus, Google Gemini, Line Counter, Time in Zone, Path Deviation, Time in Zone, Dimension Collapse, Clip Comparison, Byte Tracker, Buffer, Llama 3.2 Vision, LMM, Object Detection Model, Perspective Correction, Twilio SMS Notification, Byte Tracker, Florence-2 Model, VLM as Detector, Dynamic Crop, Detections Stitch, Overlap Filter, Path Deviation, Instance Segmentation Model, Instance Segmentation Model, Model Monitoring Inference Aggregator, Florence-2 Model, Local File Sink, OpenAI, OCR Model, PTZ Tracking (ONVIF), Clip Comparison, Velocity, Bounding Rectangle, Roboflow Dataset Upload, OpenAI, LMM For Classification, CSV Formatter, Time in Zone, Stitch OCR Detections, Template Matching, Size Measurement, VLM as Detector, Detections Stabilizer, Anthropic Claude, Slack Notification, Dynamic Zone, Moondream2, Roboflow Dataset Upload, Detection Offset, Roboflow Custom Metadata, CogVLM, Multi-Label Classification Model, Detections Merge, Google Vision OCR, Byte Tracker, Single-Label Classification Model, Object Detection Model
- outputs: Crop Visualization, Detections Filter, Ellipse Visualization, Stability AI Inpainting, Blur Visualization, Segment Anything 2 Model, Circle Visualization, Pixelate Visualization, Detections Classes Replacement, Model Comparison Visualization, Detections Transformation, Detections Consensus, Bounding Box Visualization, Time in Zone, Time in Zone, Path Deviation, Line Counter, Background Color Visualization, Byte Tracker, Perspective Correction, Label Visualization, Byte Tracker, Triangle Visualization, Detections Stitch, Florence-2 Model, Dynamic Crop, Overlap Filter, Florence-2 Model, Path Deviation, Model Monitoring Inference Aggregator, Color Visualization, Halo Visualization, PTZ Tracking (ONVIF), Corner Visualization, Roboflow Dataset Upload, Velocity, Mask Visualization, Bounding Rectangle, Time in Zone, Stitch OCR Detections, Polygon Visualization, Dot Visualization, Size Measurement, Detections Stabilizer, Roboflow Dataset Upload, Icon Visualization, Roboflow Custom Metadata, Dynamic Zone, Distance Measurement, Detection Offset, Trace Visualization, Line Counter, Detections Merge, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation
in version v2
has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
  - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes, in the form of an sv.Detections(...) object, if object_detection_prediction; or prediction with detected bounding boxes and segmentation masks, in the form of an sv.Detections(...) object, if instance_segmentation_prediction.
Example JSON definition of step Path Deviation
in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}
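For context, the sketch below embeds the step from the example above in a complete, hypothetical workflow specification and runs it over a video, since the block needs detections tracked across frames. The object detection and Byte Tracker step types, the tracked_detections output name, the yolov8n-640 model alias, and the InferencePipeline.init_with_workflow call are assumptions based on other parts of the Inference ecosystem, not on this page; adjust them to match your own workflow.

```python
# Hedged end-to-end sketch: detect -> track -> measure path deviation.
# Only roboflow_core/path_deviation_analytics@v2 is taken from this page;
# the other step types and names are assumptions.
from inference import InferencePipeline  # assumes the `inference` package is installed

WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        # Hypothetical parameter carrying the reference path, referenced as $inputs.expected_path.
        {
            "type": "WorkflowParameter",
            "name": "expected_path",
            "default_value": [[100, 100], [400, 100], [400, 300]],
        },
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed step type
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed public model alias
        },
        {
            "type": "roboflow_core/byte_tracker@v3",  # assumed step type; assigns tracker_id
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
        },
        {
            "type": "roboflow_core/path_deviation_analytics@v2",
            "name": "path_deviation",
            "image": "$inputs.image",
            "detections": "$steps.byte_tracker.tracked_detections",  # assumed output name
            "triggering_anchor": "CENTER",
            "reference_path": "$inputs.expected_path",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "path_deviation",
            "selector": "$steps.path_deviation.path_deviation_detections",
        }
    ],
}


def on_prediction(result, video_frame):
    # Each result is a dict keyed by the workflow's output names.
    print(result["path_deviation"])


# Assumes init_with_workflow accepts an in-line specification in current inference releases.
pipeline = InferencePipeline.init_with_workflow(
    api_key="<YOUR_ROBOFLOW_API_KEY>",
    workflow_specification=WORKFLOW,
    video_reference="path/to/video.mp4",
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```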
v1¶
Class: PathDeviationAnalyticsBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/path_deviation_analytics@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Point on the detection that will be used to calculate the Frechet distance. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation
in version v1
.
- inputs: Keypoint Detection Model, OpenAI, Detections Filter, Email Notification, VLM as Classifier, Segment Anything 2 Model, YOLO-World Model, Detections Classes Replacement, Webhook Sink, Detections Transformation, Detections Consensus, Google Gemini, Line Counter, Time in Zone, Path Deviation, Time in Zone, Dimension Collapse, Clip Comparison, Byte Tracker, Buffer, Llama 3.2 Vision, LMM, Object Detection Model, Perspective Correction, Twilio SMS Notification, Byte Tracker, Florence-2 Model, VLM as Detector, Dynamic Crop, Detections Stitch, Overlap Filter, Path Deviation, Instance Segmentation Model, Instance Segmentation Model, Model Monitoring Inference Aggregator, Florence-2 Model, Local File Sink, OpenAI, OCR Model, PTZ Tracking (ONVIF), Clip Comparison, Velocity, Bounding Rectangle, Roboflow Dataset Upload, OpenAI, LMM For Classification, CSV Formatter, Time in Zone, Stitch OCR Detections, Template Matching, Size Measurement, VLM as Detector, Detections Stabilizer, Anthropic Claude, Slack Notification, Dynamic Zone, Moondream2, Roboflow Dataset Upload, Detection Offset, Roboflow Custom Metadata, CogVLM, Multi-Label Classification Model, Detections Merge, Google Vision OCR, Byte Tracker, Single-Label Classification Model, Object Detection Model
- outputs: Crop Visualization, Detections Filter, Ellipse Visualization, Stability AI Inpainting, Blur Visualization, Segment Anything 2 Model, Circle Visualization, Pixelate Visualization, Detections Classes Replacement, Model Comparison Visualization, Detections Transformation, Detections Consensus, Bounding Box Visualization, Time in Zone, Time in Zone, Path Deviation, Line Counter, Background Color Visualization, Byte Tracker, Perspective Correction, Label Visualization, Byte Tracker, Triangle Visualization, Detections Stitch, Florence-2 Model, Dynamic Crop, Overlap Filter, Florence-2 Model, Path Deviation, Model Monitoring Inference Aggregator, Color Visualization, Halo Visualization, PTZ Tracking (ONVIF), Corner Visualization, Roboflow Dataset Upload, Velocity, Mask Visualization, Bounding Rectangle, Time in Zone, Stitch OCR Detections, Polygon Visualization, Dot Visualization, Size Measurement, Detections Stabilizer, Roboflow Dataset Upload, Icon Visualization, Roboflow Custom Metadata, Dynamic Zone, Distance Measurement, Detection Offset, Trace Visualization, Line Counter, Detections Merge, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Path Deviation
in version v1
has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
  - triggering_anchor (string): Point on the detection that will be used to calculate the Frechet distance.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes, in the form of an sv.Detections(...) object, if object_detection_prediction; or prediction with detected bounding boxes and segmentation masks, in the form of an sv.Detections(...) object, if instance_segmentation_prediction.
Example JSON definition of step Path Deviation
in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}