Velocity¶
Class: VelocityBlockV1
Source: inference.core.workflows.core_steps.analytics.velocity.v1.VelocityBlockV1
The VelocityBlock computes the velocity and speed of objects tracked across video frames. It includes options to smooth the velocity and speed measurements over time and to convert units from pixels per second to meters per second.

It requires detections from Byte Track, with a unique tracker_id assigned to each object that persists between frames. The velocities are calculated from the displacement of object centers over time.

Note: due to perspective and camera distortions, the calculated velocity will differ depending on the object's position relative to the camera.
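To make the idea concrete, here is a minimal Python sketch (not the block's actual implementation) of deriving per-object velocity from the displacement of tracked centers, smoothing it with an exponential moving average controlled by smoothing_alpha, and converting pixels per second to meters per second via pixels_per_meter. All class, method, and variable names here are illustrative.

```python
import numpy as np

# Illustrative sketch only -- not the block's internal implementation. Assumes
# each detection exposes a persistent tracker_id and a pixel-space center (x, y).
class VelocityEstimator:
    def __init__(self, smoothing_alpha=0.5, pixels_per_meter=100.0):
        self.alpha = smoothing_alpha
        self.pixels_per_meter = pixels_per_meter
        self.prev_centers = {}     # tracker_id -> (x, y) from the previous frame
        self.prev_timestamp = None
        self.smoothed = {}         # tracker_id -> smoothed (vx, vy) in px/s

    def update(self, centers, timestamp):
        """centers maps tracker_id -> (x, y) box center in the current frame."""
        results = {}
        if self.prev_timestamp is not None:
            dt = max(timestamp - self.prev_timestamp, 1e-6)
            for tid, (x, y) in centers.items():
                if tid not in self.prev_centers:
                    continue  # object just appeared; no displacement to measure yet
                px, py = self.prev_centers[tid]
                raw = np.array([(x - px) / dt, (y - py) / dt])  # raw velocity in px/s
                # Exponential moving average: lower alpha -> stronger smoothing.
                smooth = self.alpha * raw + (1.0 - self.alpha) * self.smoothed.get(tid, raw)
                self.smoothed[tid] = smooth
                results[tid] = {
                    "velocity_m_s": smooth / self.pixels_per_meter,  # (vx, vy) in m/s
                    "speed_m_s": float(np.linalg.norm(smooth)) / self.pixels_per_meter,
                }
        self.prev_centers = dict(centers)
        self.prev_timestamp = timestamp
        return results
```

Calling update once per frame with the tracked centers and the frame timestamp yields per-object velocity vectors and scalar speeds, analogous to the quantities this block computes.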
Type identifier¶
Use the identifier roboflow_core/velocity@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
smoothing_alpha | float | Smoothing factor (alpha) for exponential moving average (0 < alpha <= 1). Lower alpha means more smoothing. | ✅ |
pixels_per_meter | float | Conversion factor from pixels to meters. Velocity will be converted to meters per second using this value. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info. For example, smoothing_alpha can be bound to a workflow input instead of a literal value, as shown in the sketch below.
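A minimal sketch of such parametrisation, assuming a workflow input named smoothing_alpha has been declared and that the upstream tracking step is called byte_tracker (both names, and the tracker's output field, are illustrative):

```python
# Hypothetical step definition: smoothing_alpha is resolved at runtime from a
# workflow input instead of being hard-coded. Step and field names are assumed.
velocity_step = {
    "name": "velocity",
    "type": "roboflow_core/velocity@v1",
    "detections": "$steps.byte_tracker.tracked_detections",  # assumed output field
    "smoothing_alpha": "$inputs.smoothing_alpha",  # dynamic value from workflow input
    "pixels_per_meter": 100.0,
}
```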
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Velocity in version v1.
- inputs: Detection Offset, Moondream2, Path Deviation, VLM as Detector, Detections Classes Replacement, Gaze Detection, Camera Focus, Bounding Rectangle, Detections Stitch, Template Matching, Instance Segmentation Model, Byte Tracker, Object Detection Model, Line Counter, Detections Stabilizer, Detections Transformation, Google Vision OCR, Dynamic Zone, Perspective Correction, Identify Changes, Detections Filter, YOLO-World Model, Detections Merge, Velocity, Segment Anything 2 Model, Time in Zone, Cosine Similarity, Overlap Filter, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Bounding Rectangle, Detections Stitch, Byte Tracker, Triangle Visualization, Line Counter, Detections Stabilizer, Detections Transformation, Stitch OCR Detections, Roboflow Dataset Upload, Dynamic Zone, Bounding Box Visualization, Perspective Correction, Halo Visualization, Circle Visualization, Ellipse Visualization, Crop Visualization, Color Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Segment Anything 2 Model, Polygon Visualization, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Stability AI Inpainting, Mask Visualization, Overlap Filter, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Velocity in version v1 has.
Bindings

- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the velocity for.
    - smoothing_alpha (float): Smoothing factor (alpha) for exponential moving average (0 < alpha <= 1). Lower alpha means more smoothing.
    - pixels_per_meter (float): Conversion factor from pixels to meters. Velocity will be converted to meters per second using this value.
- output
    - velocity_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction (a short sketch of reading such an object follows this list).
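As a rough illustration of what a downstream consumer receives, the sketch below reads only generic attributes of a supervision sv.Detections object; the keys under which the block stores per-object velocity and speed are not listed on this page, so detections.data is inspected rather than interpreted.

```python
import supervision as sv

def inspect_velocity_output(detections: sv.Detections) -> None:
    # Minimal sketch: iterate boxes and tracker ids of the sv.Detections object
    # produced downstream of the Velocity step. Only generic sv.Detections
    # attributes are used; velocity/speed key names are not assumed here.
    for i in range(len(detections)):
        tracker_id = detections.tracker_id[i] if detections.tracker_id is not None else None
        print("tracker_id:", tracker_id, "box (xyxy):", detections.xyxy[i])
    print("extra per-detection data keys:", list(detections.data.keys()))
```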
Example JSON definition of step Velocity in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/velocity@v1",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "smoothing_alpha": 0.5,
    "pixels_per_meter": 0.01
}
```
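For context, a sketch of how this step might sit inside a complete workflow specification is shown below, expressed as a Python dict. Everything other than the Velocity step itself is an assumption: the type identifiers and output field names of the upstream detection and tracking steps, the input declaration, the model id, and the output wiring should be verified against those blocks' own documentation pages.

```python
# Hypothetical end-to-end specification: an object detection model feeds a
# Byte Tracker, whose tracked detections feed the Velocity step. Upstream
# type identifiers and field names are illustrative, not authoritative.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # illustrative model id
        },
        {
            "type": "roboflow_core/byte_tracker@v1",  # assumed identifier
            "name": "tracker",
            "detections": "$steps.detector.predictions",
        },
        {
            "type": "roboflow_core/velocity@v1",
            "name": "velocity",
            "detections": "$steps.tracker.tracked_detections",  # assumed field name
            "smoothing_alpha": 0.5,
            "pixels_per_meter": 100.0,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "velocity_detections",
            "selector": "$steps.velocity.velocity_detections",
        },
    ],
}
```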