Path Deviation¶
v2¶
Class: PathDeviationAnalyticsBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v2.PathDeviationAnalyticsBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
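For intuition, the (discrete) Frechet distance between an object's travelled path and the reference path can be computed with a short dynamic-programming routine. The sketch below is purely illustrative and is not the block's internal implementation; the function and sample coordinates are made up for the example.

```python
import numpy as np

def discrete_frechet_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Discrete Frechet distance between two polylines given as (N, 2) point arrays."""
    n, m = len(path_a), len(path_b)
    # ca[i, j] = Frechet distance between path_a[: i + 1] and path_b[: j + 1]
    ca = np.zeros((n, m))

    def dist(i: int, j: int) -> float:
        return float(np.linalg.norm(path_a[i] - path_b[j]))

    ca[0, 0] = dist(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], dist(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], dist(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), dist(i, j))
    return float(ca[-1, -1])

# Example: how far a tracked object's anchor-point trace strays from a reference path
object_trace = np.array([(10, 12), (22, 14), (35, 18), (50, 25)], dtype=float)
reference_path = np.array([(10, 10), (30, 10), (50, 10)], dtype=float)
print(discrete_frechet_distance(object_trace, reference_path))  # 15.0
```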
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/path_deviation_analytics@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| triggering_anchor | str | Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
| reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
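For example, a property marked with ✅ can reference a workflow input instead of a literal value. The fragment below is illustrative only; the input name expected_path and the upstream step selector are assumptions, not part of this block's contract.

```python
# Illustrative step definition: "reference_path" is bound to a workflow input
# ("$inputs.expected_path") so it can change per run, while "triggering_anchor"
# is given as a literal value. Selector names here are examples only.
path_deviation_step = {
    "type": "roboflow_core/path_deviation_analytics@v2",
    "name": "path_deviation",
    "image": "$inputs.image",
    "detections": "$steps.tracker.tracked_detections",  # assumed tracker step output
    "triggering_anchor": "CENTER",                       # literal (also parametrisable)
    "reference_path": "$inputs.expected_path",           # dynamic value at runtime
}
```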
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v2.
- inputs:
Anthropic Claude
,Email Notification
,YOLO-World Model
,Keypoint Detection Model
,Size Measurement
,LMM
,Perspective Correction
,Detections Consensus
,Path Deviation
,Moondream2
,Bounding Rectangle
,Model Monitoring Inference Aggregator
,Time in Zone
,Roboflow Dataset Upload
,OpenAI
,Detections Filter
,Detections Merge
,Byte Tracker
,Buffer
,Segment Anything 2 Model
,Slack Notification
,Instance Segmentation Model
,Detections Stitch
,Instance Segmentation Model
,Roboflow Dataset Upload
,CogVLM
,Detection Offset
,OpenAI
,Google Vision OCR
,Dynamic Crop
,Detections Classes Replacement
,Twilio SMS Notification
,Dimension Collapse
,Byte Tracker
,Florence-2 Model
,Detections Stabilizer
,Detections Transformation
,Template Matching
,Time in Zone
,Webhook Sink
,CSV Formatter
,VLM as Detector
,VLM as Detector
,Florence-2 Model
,Object Detection Model
,LMM For Classification
,Google Gemini
,OpenAI
,VLM as Classifier
,PTZ Tracking (ONVIF)
,Path Deviation
,Dynamic Zone
,Llama 3.2 Vision
,Roboflow Custom Metadata
,Line Counter
,Stitch OCR Detections
,Clip Comparison
,Multi-Label Classification Model
,Single-Label Classification Model
,OCR Model
,Local File Sink
,Velocity
,Byte Tracker
,Object Detection Model
,Overlap Filter
,Clip Comparison
- outputs:
Blur Visualization
,Triangle Visualization
,Polygon Visualization
,Trace Visualization
,Size Measurement
,Label Visualization
,Distance Measurement
,Perspective Correction
,Model Monitoring Inference Aggregator
,Path Deviation
,Detections Consensus
,Roboflow Dataset Upload
,Bounding Rectangle
,Time in Zone
,Detections Filter
,Detections Merge
,Byte Tracker
,Bounding Box Visualization
,Segment Anything 2 Model
,Roboflow Dataset Upload
,Detections Stitch
,Detection Offset
,Detections Classes Replacement
,Dynamic Crop
,Halo Visualization
,Byte Tracker
,Background Color Visualization
,Stability AI Inpainting
,Detections Stabilizer
,Detections Transformation
,Florence-2 Model
,Dot Visualization
,Time in Zone
,Circle Visualization
,Florence-2 Model
,PTZ Tracking (ONVIF)
,Path Deviation
,Model Comparison Visualization
,Dynamic Zone
,Ellipse Visualization
,Roboflow Custom Metadata
,Line Counter
,Stitch OCR Detections
,Crop Visualization
,Corner Visualization
,Line Counter
,Pixelate Visualization
,Velocity
,Byte Tracker
,Mask Visualization
,Overlap Filter
,Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v2 has.
Bindings

- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
  - triggering_anchor (string): Triggering anchor. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v2:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}
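Because the block only works on tracked detections, a detector is typically chained through a tracker before this step. The minimal workflow specification below (written as a Python dict) is a sketch of that wiring; the type identifiers and output names of the other blocks are assumptions to be checked against their own documentation pages.

```python
# Illustrative workflow specification: detector -> Byte Tracker -> Path Deviation,
# so that detections carry the tracker_id this block requires. Identifiers and
# output names for the detector and tracker steps are assumptions.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "expected_path"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.detector.predictions",
        },
        {
            "type": "roboflow_core/path_deviation_analytics@v2",
            "name": "path_deviation",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "triggering_anchor": "CENTER",
            "reference_path": "$inputs.expected_path",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "deviation",
            "selector": "$steps.path_deviation.path_deviation_detections",
        }
    ],
}
```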
v1¶
Class: PathDeviationAnalyticsBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.path_deviation.v1.PathDeviationAnalyticsBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The PathDeviationAnalyticsBlock
is an analytics block designed to measure the Frechet distance
of tracked objects from a user-defined reference path. The block requires detections to be tracked
(i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/path_deviation_analytics@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| triggering_anchor | str | Point on the detection that will be used to calculate the Frechet distance. | ✅ |
| reference_path | List[Any] | Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...]. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Path Deviation in version v1.
- inputs:
Anthropic Claude
,Email Notification
,YOLO-World Model
,Keypoint Detection Model
,Size Measurement
,LMM
,Perspective Correction
,Detections Consensus
,Path Deviation
,Moondream2
,Bounding Rectangle
,Model Monitoring Inference Aggregator
,Time in Zone
,Roboflow Dataset Upload
,OpenAI
,Detections Filter
,Detections Merge
,Byte Tracker
,Buffer
,Segment Anything 2 Model
,Slack Notification
,Instance Segmentation Model
,Detections Stitch
,Instance Segmentation Model
,Roboflow Dataset Upload
,CogVLM
,Detection Offset
,OpenAI
,Google Vision OCR
,Dynamic Crop
,Detections Classes Replacement
,Twilio SMS Notification
,Dimension Collapse
,Byte Tracker
,Florence-2 Model
,Detections Stabilizer
,Detections Transformation
,Template Matching
,Time in Zone
,Webhook Sink
,CSV Formatter
,VLM as Detector
,VLM as Detector
,Florence-2 Model
,Object Detection Model
,LMM For Classification
,Google Gemini
,OpenAI
,VLM as Classifier
,PTZ Tracking (ONVIF)
,Path Deviation
,Dynamic Zone
,Llama 3.2 Vision
,Roboflow Custom Metadata
,Line Counter
,Stitch OCR Detections
,Clip Comparison
,Multi-Label Classification Model
,Single-Label Classification Model
,OCR Model
,Local File Sink
,Velocity
,Byte Tracker
,Object Detection Model
,Overlap Filter
,Clip Comparison
- outputs:
Blur Visualization
,Triangle Visualization
,Polygon Visualization
,Trace Visualization
,Size Measurement
,Label Visualization
,Distance Measurement
,Perspective Correction
,Model Monitoring Inference Aggregator
,Path Deviation
,Detections Consensus
,Roboflow Dataset Upload
,Bounding Rectangle
,Time in Zone
,Detections Filter
,Detections Merge
,Byte Tracker
,Bounding Box Visualization
,Segment Anything 2 Model
,Roboflow Dataset Upload
,Detections Stitch
,Detection Offset
,Detections Classes Replacement
,Dynamic Crop
,Halo Visualization
,Byte Tracker
,Background Color Visualization
,Stability AI Inpainting
,Detections Stabilizer
,Detections Transformation
,Florence-2 Model
,Dot Visualization
,Time in Zone
,Circle Visualization
,Florence-2 Model
,PTZ Tracking (ONVIF)
,Path Deviation
,Model Comparison Visualization
,Dynamic Zone
,Ellipse Visualization
,Roboflow Custom Metadata
,Line Counter
,Stitch OCR Detections
,Crop Visualization
,Corner Visualization
,Line Counter
,Pixelate Visualization
,Velocity
,Byte Tracker
,Mask Visualization
,Overlap Filter
,Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Path Deviation in version v1 has.
Bindings

- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
  - triggering_anchor (string): Point on the detection that will be used to calculate the Frechet distance.
  - reference_path (list_of_values): Reference path in a format [(x1, y1), (x2, y2), (x3, y3), ...].
- output
  - path_deviation_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Path Deviation in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/path_deviation_analytics@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"triggering_anchor": "CENTER",
"reference_path": "$inputs.expected_path"
}
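Analytics blocks such as this one are usually run on video, where the tracker_id persists across frames. The snippet below is a minimal sketch of running a workflow that contains the Path Deviation step with InferencePipeline, assuming InferencePipeline.init_with_workflow is available in your installed inference version; the API key, workspace, workflow id, and file path are placeholders.

```python
# Minimal sketch: run a workflow containing the Path Deviation step on a video.
# InferencePipeline feeds frames (and, for v1, the video metadata) into the
# workflow; all identifiers below are placeholders.
from inference import InferencePipeline

def handle_prediction(result, video_frame):
    # result is a dict of the workflow's declared outputs, e.g. the field
    # exposing "path_deviation_detections" from this step.
    print(result)

pipeline = InferencePipeline.init_with_workflow(
    api_key="<ROBOFLOW_API_KEY>",
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    video_reference="path/to/video.mp4",
    on_prediction=handle_prediction,
)
pipeline.start()
pipeline.join()
```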