Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Fréchet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
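For intuition, the discrete Fréchet distance between two polylines can be computed with a short dynamic-programming routine like the sketch below. This is only an illustration of the metric the block reports, not the block's internal implementation, and the example paths are made-up values.

```python
# Illustrative sketch of the discrete Fréchet distance (Eiter & Mannila DP).
# Not the block's implementation; paths below are arbitrary example data.
import numpy as np


def discrete_frechet_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Discrete Fréchet distance between two polylines given as (N, 2) arrays."""
    n, m = len(path_a), len(path_b)
    ca = np.zeros((n, m))

    def dist(i: int, j: int) -> float:
        return float(np.linalg.norm(path_a[i] - path_b[j]))

    ca[0, 0] = dist(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], dist(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], dist(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(
                min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                dist(i, j),
            )
    return float(ca[n - 1, m - 1])


# Deviation of a tracked object's path from a straight reference path.
reference_path = np.array([(0, 0), (100, 0), (200, 0)], dtype=float)
object_path = np.array([(0, 5), (100, 30), (200, 10)], dtype=float)
print(discrete_frechet_distance(object_path, reference_path))  # -> 30.0
```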
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/path_deviation_analytics@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
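For example, reference_path (and triggering_anchor) can be wired to workflow inputs instead of being hard-coded. The sketch below is illustrative only: the input names (image, expected_path), the step names, and the WorkflowImage/WorkflowParameter input types are assumptions about how your workflow is declared, not values taken from this block's definition.

```json
{
  "inputs": [
    { "type": "WorkflowImage", "name": "image" },
    { "type": "WorkflowParameter", "name": "expected_path" }
  ],
  "steps": [
    {
      "name": "path_deviation",
      "type": "roboflow_core/path_deviation_analytics@v2",
      "image": "$inputs.image",
      "detections": "$steps.object_detection_model.predictions",
      "triggering_anchor": "CENTER",
      "reference_path": "$inputs.expected_path"
    }
  ]
}
```

As noted above, the detections fed into this step must already carry a tracker_id (for example by routing them through a Byte Tracker step first).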
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs: Time in Zone, Detections Stitch, Path Deviation, Florence-2 Model, Multi-Label Classification Model, LMM For Classification, Instance Segmentation Model, Keypoint Detection Model, Single-Label Classification Model, OCR Model, Object Detection Model, Perspective Correction, Local File Sink, Line Counter, Detections Filter, YOLO-World Model, Model Monitoring Inference Aggregator, VLM as Classifier, VLM as Detector, Dimension Collapse, Google Vision OCR, Email Notification, Detections Consensus, CogVLM, Webhook Sink, Byte Tracker, OpenAI, Twilio SMS Notification, Detections Classes Replacement, Instance Segmentation Model, Template Matching, Roboflow Custom Metadata, Detection Offset, Buffer, Roboflow Dataset Upload, Clip Comparison, Roboflow Dataset Upload, Stitch OCR Detections, Slack Notification, Anthropic Claude, Dynamic Zone, Google Gemini, Segment Anything 2 Model, Clip Comparison, Size Measurement, Byte Tracker, Time in Zone, Florence-2 Model, LMM, Detections Stabilizer, Path Deviation, VLM as Detector, OpenAI, Byte Tracker, CSV Formatter, Llama 3.2 Vision, Bounding Rectangle, Detections Transformation, Object Detection Model
- outputs: Time in Zone, Florence-2 Model, Path Deviation, Pixelate Visualization, Detections Stitch, Line Counter, Corner Visualization, Blur Visualization, Mask Visualization, Perspective Correction, Line Counter, Detections Filter, Model Monitoring Inference Aggregator, Polygon Visualization, Halo Visualization, Trace Visualization, Model Comparison Visualization, Size Measurement, Detections Consensus, Byte Tracker, Roboflow Custom Metadata, Detections Classes Replacement, Crop Visualization, Detection Offset, Roboflow Dataset Upload, Roboflow Dataset Upload, Stitch OCR Detections, Dynamic Zone, Dot Visualization, Circle Visualization, Background Color Visualization, Segment Anything 2 Model, Bounding Box Visualization, Ellipse Visualization, Label Visualization, Byte Tracker, Time in Zone, Florence-2 Model, Stability AI Inpainting, Detections Stabilizer, Path Deviation, Dynamic Crop, Byte Tracker, Triangle Visualization, Color Visualization, Bounding Rectangle, Detections Transformation, Distance Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v2 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
  - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output:
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/path_deviation_analytics@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "triggering_anchor": "CENTER",
    "reference_path": "$inputs.expected_path"
}
```
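Because path deviation is a video analytics block that needs tracked detections across frames, one way to exercise it is through InferencePipeline with a workflow that chains a detector, a tracker, and this step. The sketch below is assumption-laden: the workspace name, workflow id, video path, parameter name (expected_path), and the exposed output name (path_deviation_detections) are placeholders that depend on how your workflow is set up, and the exact init_with_workflow signature may vary between inference versions.

```python
# Hypothetical usage sketch: run a workflow containing the Path Deviation step on a video.
# Workspace, workflow id, output name, and file paths are placeholders, not real values.
from inference import InferencePipeline


def on_prediction(result: dict, video_frame) -> None:
    # Assumes the workflow exposes the block's "path_deviation_detections"
    # output under this name in its outputs definition.
    detections = result.get("path_deviation_detections")
    if detections is not None:
        print(detections)  # sv.Detections with a tracker_id per object


pipeline = InferencePipeline.init_with_workflow(
    api_key="<YOUR_API_KEY>",
    workspace_name="<your_workspace>",
    workflow_id="<your_workflow_id>",
    video_reference="path/to/video.mp4",
    on_prediction=on_prediction,
    # Reference path passed as a runtime parameter bound to $inputs.expected_path.
    workflows_parameters={"expected_path": [(100, 100), (500, 100), (900, 400)]},
)
pipeline.start()
pipeline.join()
```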
v1¶
Class: PathDeviationAnalyticsBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Fréchet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/path_deviation_analytics@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs: Time in Zone, Detections Stitch, Path Deviation, Florence-2 Model, Multi-Label Classification Model, LMM For Classification, Instance Segmentation Model, Keypoint Detection Model, Single-Label Classification Model, OCR Model, Object Detection Model, Perspective Correction, Local File Sink, Line Counter, Detections Filter, YOLO-World Model, Model Monitoring Inference Aggregator, VLM as Classifier, VLM as Detector, Dimension Collapse, Google Vision OCR, Email Notification, Detections Consensus, CogVLM, Webhook Sink, Byte Tracker, OpenAI, Twilio SMS Notification, Detections Classes Replacement, Instance Segmentation Model, Template Matching, Roboflow Custom Metadata, Detection Offset, Buffer, Roboflow Dataset Upload, Clip Comparison, Roboflow Dataset Upload, Stitch OCR Detections, Slack Notification, Anthropic Claude, Dynamic Zone, Google Gemini, Segment Anything 2 Model, Clip Comparison, Size Measurement, Byte Tracker, Time in Zone, Florence-2 Model, LMM, Detections Stabilizer, Path Deviation, VLM as Detector, OpenAI, Byte Tracker, CSV Formatter, Llama 3.2 Vision, Bounding Rectangle, Detections Transformation, Object Detection Model
- outputs: Time in Zone, Florence-2 Model, Path Deviation, Pixelate Visualization, Detections Stitch, Line Counter, Corner Visualization, Blur Visualization, Mask Visualization, Perspective Correction, Line Counter, Detections Filter, Model Monitoring Inference Aggregator, Polygon Visualization, Halo Visualization, Trace Visualization, Model Comparison Visualization, Size Measurement, Detections Consensus, Byte Tracker, Roboflow Custom Metadata, Detections Classes Replacement, Crop Visualization, Detection Offset, Roboflow Dataset Upload, Roboflow Dataset Upload, Stitch OCR Detections, Dynamic Zone, Dot Visualization, Circle Visualization, Background Color Visualization, Segment Anything 2 Model, Bounding Box Visualization, Ellipse Visualization, Label Visualization, Byte Tracker, Time in Zone, Florence-2 Model, Stability AI Inpainting, Detections Stabilizer, Path Deviation, Dynamic Crop, Byte Tracker, Triangle Visualization, Color Visualization, Bounding Rectangle, Detections Transformation, Distance Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v1 has.
Bindings
- input:
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
  - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output:
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/path_deviation_analytics@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "triggering_anchor": "CENTER",
    "reference_path": "$inputs.expected_path"
}
```