Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
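For intuition, the Frechet distance between an object's trajectory and the reference path can be computed with a simple dynamic program over the two polylines. The sketch below is purely illustrative (it is not the block's internal implementation) and assumes both paths are given as arrays of (x, y) points:

```python
import numpy as np


def discrete_frechet_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Discrete Frechet distance between two polylines given as (N, 2) arrays."""
    n, m = len(path_a), len(path_b)
    # ca[i, j] is the coupling distance for prefixes path_a[: i + 1], path_b[: j + 1]
    ca = np.zeros((n, m))

    def dist(i: int, j: int) -> float:
        return float(np.linalg.norm(path_a[i] - path_b[j]))

    ca[0, 0] = dist(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], dist(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], dist(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), dist(i, j))
    return float(ca[-1, -1])


# Anchor positions of one tracked object across frames vs. a reference path
object_path = np.array([(0, 5), (40, 12), (95, 8)], dtype=float)
reference_path = np.array([(0, 0), (50, 0), (100, 0)], dtype=float)
print(discrete_frechet_distance(object_path, reference_path))
```

Because the distance is accumulated per object across frames, each detection needs a stable tracker_id (for example, from a Byte Tracker step placed upstream).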
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/path_deviation_analytics@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs: Keypoint Detection Model, Line Counter, Google Vision OCR, Florence-2 Model, Template Matching, Model Monitoring Inference Aggregator, Florence-2 Model, Overlap Filter, CogVLM, OCR Model, Byte Tracker, Detections Transformation, VLM as Detector, Perspective Correction, Detections Stitch, CSV Formatter, OpenAI, Byte Tracker, Clip Comparison, Dynamic Zone, Multi-Label Classification Model, Object Detection Model, Time in Zone, Path Deviation, Slack Notification, Clip Comparison, Dimension Collapse, Byte Tracker, Detection Offset, Detections Consensus, Detections Stabilizer, Velocity, YOLO-World Model, OpenAI, Llama 3.2 Vision, Bounding Rectangle, Detections Filter, Anthropic Claude, Size Measurement, Time in Zone, Detections Merge, Moondream2, Segment Anything 2 Model, Webhook Sink, Roboflow Dataset Upload, Roboflow Custom Metadata, Single-Label Classification Model, Buffer, Path Deviation, VLM as Classifier, Local File Sink, Twilio SMS Notification, Stitch OCR Detections, PTZ Tracking (ONVIF), Dynamic Crop, Detections Classes Replacement, Object Detection Model, Google Gemini, Email Notification, OpenAI, Instance Segmentation Model, LMM For Classification, VLM as Detector, Instance Segmentation Model, Roboflow Dataset Upload, LMM
- outputs: Stability AI Inpainting, Line Counter, Florence-2 Model, Model Monitoring Inference Aggregator, Label Visualization, Florence-2 Model, Corner Visualization, Triangle Visualization, Overlap Filter, Background Color Visualization, Model Comparison Visualization, Byte Tracker, Detections Transformation, Circle Visualization, Perspective Correction, Line Counter, Detections Stitch, Trace Visualization, Byte Tracker, Blur Visualization, Dynamic Zone, Time in Zone, Path Deviation, Byte Tracker, Detections Consensus, Velocity, Detection Offset, Detections Stabilizer, Bounding Rectangle, Detections Filter, Size Measurement, Time in Zone, Roboflow Dataset Upload, Segment Anything 2 Model, Detections Merge, Polygon Visualization, Roboflow Custom Metadata, Mask Visualization, Bounding Box Visualization, Path Deviation, Distance Measurement, Ellipse Visualization, Crop Visualization, Color Visualization, Pixelate Visualization, Stitch OCR Detections, PTZ Tracking (ONVIF), Dynamic Crop, Detections Classes Replacement, Halo Visualization, Dot Visualization, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v2 has.
Bindings
- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
    - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
    - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
    - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/path_deviation_analytics@v2",
  "image": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "triggering_anchor": "CENTER",
  "reference_path": "$inputs.expected_path"
}
```
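The example above defines only the Path Deviation step itself. A complete workflow also needs an image input, an upstream detector and tracker, and an output that exposes the block's result. The following is a minimal sketch under common assumptions: the detector and tracker block identifiers, the model_id, the expected_path parameter, and the tracked_detections selector are illustrative and should be adapted to your own workflow:

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" },
    { "type": "WorkflowParameter", "name": "expected_path" }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "object_detection_model",
      "image": "$inputs.image",
      "model_id": "yolov8n-640"
    },
    {
      "type": "roboflow_core/byte_tracker@v3",
      "name": "tracker",
      "image": "$inputs.image",
      "detections": "$steps.object_detection_model.predictions"
    },
    {
      "type": "roboflow_core/path_deviation_analytics@v2",
      "name": "path_deviation",
      "image": "$inputs.image",
      "detections": "$steps.tracker.tracked_detections",
      "triggering_anchor": "CENTER",
      "reference_path": "$inputs.expected_path"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "path_deviation_detections",
      "selector": "$steps.path_deviation.path_deviation_detections"
    }
  ]
}
```

Here reference_path is supplied at runtime through the expected_path workflow parameter, which is what the ✅ in the Refs column of the Properties table indicates.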
v1¶
Class: PathDeviationAnalyticsBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/path_deviation_analytics@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Point on the detection that will be used to calculate the Frechet distance. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs: Keypoint Detection Model, Line Counter, Google Vision OCR, Florence-2 Model, Template Matching, Model Monitoring Inference Aggregator, Florence-2 Model, Overlap Filter, CogVLM, OCR Model, Byte Tracker, Detections Transformation, VLM as Detector, Perspective Correction, Detections Stitch, CSV Formatter, OpenAI, Byte Tracker, Clip Comparison, Dynamic Zone, Multi-Label Classification Model, Object Detection Model, Time in Zone, Path Deviation, Slack Notification, Clip Comparison, Dimension Collapse, Byte Tracker, Detection Offset, Detections Consensus, Detections Stabilizer, Velocity, YOLO-World Model, OpenAI, Llama 3.2 Vision, Bounding Rectangle, Detections Filter, Anthropic Claude, Size Measurement, Time in Zone, Detections Merge, Moondream2, Segment Anything 2 Model, Webhook Sink, Roboflow Dataset Upload, Roboflow Custom Metadata, Single-Label Classification Model, Buffer, Path Deviation, VLM as Classifier, Local File Sink, Twilio SMS Notification, Stitch OCR Detections, PTZ Tracking (ONVIF), Dynamic Crop, Detections Classes Replacement, Object Detection Model, Google Gemini, Email Notification, OpenAI, Instance Segmentation Model, LMM For Classification, VLM as Detector, Instance Segmentation Model, Roboflow Dataset Upload, LMM
- outputs: Stability AI Inpainting, Line Counter, Florence-2 Model, Model Monitoring Inference Aggregator, Label Visualization, Florence-2 Model, Corner Visualization, Triangle Visualization, Overlap Filter, Background Color Visualization, Model Comparison Visualization, Byte Tracker, Detections Transformation, Circle Visualization, Perspective Correction, Line Counter, Detections Stitch, Trace Visualization, Byte Tracker, Blur Visualization, Dynamic Zone, Time in Zone, Path Deviation, Byte Tracker, Detections Consensus, Velocity, Detection Offset, Detections Stabilizer, Bounding Rectangle, Detections Filter, Size Measurement, Time in Zone, Roboflow Dataset Upload, Segment Anything 2 Model, Detections Merge, Polygon Visualization, Roboflow Custom Metadata, Mask Visualization, Bounding Box Visualization, Path Deviation, Distance Measurement, Ellipse Visualization, Crop Visualization, Color Visualization, Pixelate Visualization, Stitch OCR Detections, PTZ Tracking (ONVIF), Dynamic Crop, Detections Classes Replacement, Halo Visualization, Dot Visualization, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v1 has.
Bindings
- input
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
    - triggering_anchor (string): Point on the detection that will be used to calculate the Frechet distance.
    - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
    - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/path_deviation_analytics@v1",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "triggering_anchor": "CENTER",
  "reference_path": "$inputs.expected_path"
}
```
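Since v1 binds video_metadata, it is typically executed inside a video processing pipeline. The snippet below sketches one way this might be wired up with the inference SDK; InferencePipeline.init_with_workflow and its parameter names are assumptions about the installed inference package and should be verified against your version:

```python
# Hedged usage sketch - verify the API against your installed `inference` version.
import json

from inference import InferencePipeline  # assumed import path


# A specification analogous to the v2 example above, adapted to v1
# (the step binds "metadata" instead of "image").
with open("path_deviation_workflow.json") as f:
    workflow_specification = json.load(f)


def on_prediction(result, video_frame):
    # Per the output binding, this is an sv.Detections(...) object
    print(result["path_deviation_detections"])


pipeline = InferencePipeline.init_with_workflow(  # assumed constructor and parameters
    video_reference="path/to/video.mp4",  # or an RTSP URL / camera index
    workflow_specification=workflow_specification,
    workflows_parameters={"expected_path": [(100, 100), (800, 100), (800, 600)]},
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```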