Time in Zone¶
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend inside a zone. The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
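The mechanics can be sketched in plain Python: keep a first-entry timestamp per tracker_id, test each detection's anchor point (e.g. the bounding-box center) against the zone polygon, and drop the timestamp when an object leaves the zone if reset-on-exit behaviour is enabled. This is a minimal illustration of the technique, assuming simple (x, y) anchor points and caller-supplied timestamps; it is not the block's actual implementation, which works on sv.Detections and video frame metadata:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [x, y] vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle on every polygon edge that a rightward ray from (x, y) crosses.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside


class TimeInZoneTracker:
    """Accumulates, per tracker_id, how long an anchor point has stayed in a zone."""

    def __init__(self, zone, reset_out_of_zone=True):
        self.zone = zone
        self.reset_out_of_zone = reset_out_of_zone
        self.entered_at = {}  # tracker_id -> timestamp of zone entry

    def update(self, detections, now):
        """detections: list of (tracker_id, (x, y)) anchor points.

        Returns a dict mapping tracker_id -> seconds spent in the zone so far.
        """
        times = {}
        for tracker_id, (x, y) in detections:
            if point_in_polygon(x, y, self.zone):
                # Record entry time on first sighting inside the zone.
                self.entered_at.setdefault(tracker_id, now)
                times[tracker_id] = now - self.entered_at[tracker_id]
            else:
                if self.reset_out_of_zone:
                    # Leaving the zone resets the timer, mirroring
                    # reset_out_of_zone_detections=True.
                    self.entered_at.pop(tracker_id, None)
                times[tracker_id] = 0.0
        return times
```

For example, an object whose anchor stays inside the zone between `now=0.0` and `now=2.5` reports 2.5 seconds; once it leaves (with reset enabled), its timer starts from zero on re-entry.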
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v2 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
LMM
,Stitch OCR Detections
,LMM For Classification
,Clip Comparison
,CSV Formatter
,Instance Segmentation Model
,CogVLM
,Byte Tracker
,Roboflow Dataset Upload
,Twilio SMS Notification
,Object Detection Model
,SIFT Comparison
,Florence-2 Model
,Byte Tracker
,OpenAI
,Path Deviation
,Single-Label Classification Model
,Roboflow Dataset Upload
,Dimension Collapse
,Detections Stitch
,Llama 3.2 Vision
,Slack Notification
,Object Detection Model
,VLM as Classifier
,Moondream2
,Bounding Rectangle
,Detections Consensus
,Detections Merge
,Dynamic Zone
,Detection Offset
,Buffer
,Overlap Filter
,Time in Zone
,Model Monitoring Inference Aggregator
,Keypoint Detection Model
,Perspective Correction
,Dynamic Crop
,Detections Transformation
,Detections Classes Replacement
,Time in Zone
,VLM as Detector
,Identify Changes
,Florence-2 Model
,Local File Sink
,Path Deviation
,VLM as Detector
,Google Vision OCR
,OpenAI
,Size Measurement
,Multi-Label Classification Model
,Byte Tracker
,Detections Filter
,Identify Outliers
,Clip Comparison
,Google Gemini
,VLM as Classifier
,Detections Stabilizer
,OCR Model
,Template Matching
,Velocity
,Roboflow Custom Metadata
,Anthropic Claude
,Webhook Sink
,SIFT Comparison
,YOLO-World Model
,JSON Parser
,Instance Segmentation Model
,Segment Anything 2 Model
,Email Notification
,Line Counter
- outputs:
Path Deviation
,Ellipse Visualization
,Size Measurement
,Stitch OCR Detections
,Blur Visualization
,Dot Visualization
,Circle Visualization
,Line Counter
,Background Color Visualization
,Byte Tracker
,Roboflow Dataset Upload
,Distance Measurement
,Color Visualization
,Detections Filter
,Byte Tracker
,Pixelate Visualization
,Florence-2 Model
,Byte Tracker
,Label Visualization
,Path Deviation
,Stability AI Inpainting
,Detections Stabilizer
,Roboflow Dataset Upload
,Detections Stitch
,Bounding Box Visualization
,Mask Visualization
,Model Comparison Visualization
,Velocity
,Bounding Rectangle
,Detections Consensus
,Roboflow Custom Metadata
,Detections Merge
,Dynamic Zone
,Detection Offset
,Halo Visualization
,Trace Visualization
,Overlap Filter
,Corner Visualization
,Polygon Visualization
,Segment Anything 2 Model
,Triangle Visualization
,Time in Zone
,Model Monitoring Inference Aggregator
,Perspective Correction
,Detections Classes Replacement
,Dynamic Crop
,Time in Zone
,Detections Transformation
,Crop Visualization
,Line Counter
,Florence-2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
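For context, a step like the one above typically sits inside a complete workflow specification. The sketch below assembles one in Python; the WorkflowImage input type, the upstream roboflow_core/roboflow_object_detection_model@v2 step, the model_id value, and the JsonField output type are illustrative assumptions, not taken from this page:

```python
import json

# Hypothetical workflow specification embedding the Time in Zone step.
# The input/output types ("WorkflowImage", "JsonField") and the upstream
# object-detection step are assumptions used only for illustration.
workflow_spec = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed model id, for illustration
        },
        {
            "type": "roboflow_core/time_in_zone@v2",
            "name": "time_in_zone",
            "image": "$inputs.image",
            # Wire the tracked predictions from the upstream step.
            "detections": "$steps.object_detection_model.predictions",
            "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        }
    ],
}

# The spec must serialize cleanly to the JSON consumed by the workflows engine.
serialized = json.dumps(workflow_spec, indent=2)
```

Note how the `$inputs.…` and `$steps.…` selectors tie the step's Refs-marked properties to runtime inputs and upstream outputs, which is exactly what the Bindings section above describes.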
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend inside a zone. The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
LMM
,Depth Estimation
,Classification Label Visualization
,Camera Calibration
,Stitch OCR Detections
,LMM For Classification
,Clip Comparison
,CSV Formatter
,Instance Segmentation Model
,Stitch Images
,CogVLM
,Byte Tracker
,Image Slicer
,Roboflow Dataset Upload
,Absolute Static Crop
,Twilio SMS Notification
,Object Detection Model
,SIFT Comparison
,Florence-2 Model
,Byte Tracker
,OpenAI
,Label Visualization
,Path Deviation
,Single-Label Classification Model
,Roboflow Dataset Upload
,Dimension Collapse
,Detections Stitch
,Bounding Box Visualization
,Llama 3.2 Vision
,Model Comparison Visualization
,Slack Notification
,Object Detection Model
,VLM as Classifier
,Moondream2
,Bounding Rectangle
,Detections Consensus
,Grid Visualization
,Detections Merge
,Dynamic Zone
,Image Convert Grayscale
,Halo Visualization
,Detection Offset
,Buffer
,Overlap Filter
,Triangle Visualization
,Time in Zone
,Model Monitoring Inference Aggregator
,Keypoint Detection Model
,Reference Path Visualization
,Perspective Correction
,Dynamic Crop
,Detections Transformation
,Camera Focus
,Detections Classes Replacement
,VLM as Detector
,Time in Zone
,Identify Changes
,Florence-2 Model
,Local File Sink
,Path Deviation
,VLM as Detector
,Google Vision OCR
,OpenAI
,Stability AI Image Generation
,Ellipse Visualization
,Size Measurement
,SIFT
,Blur Visualization
,Circle Visualization
,Dot Visualization
,Image Blur
,Background Color Visualization
,Multi-Label Classification Model
,Color Visualization
,Byte Tracker
,Detections Filter
,Pixelate Visualization
,Identify Outliers
,Clip Comparison
,Google Gemini
,Stability AI Inpainting
,VLM as Classifier
,Polygon Zone Visualization
,Relative Static Crop
,Detections Stabilizer
,OCR Model
,Keypoint Visualization
,Template Matching
,Mask Visualization
,Velocity
,Image Preprocessing
,Line Counter Visualization
,Roboflow Custom Metadata
,Anthropic Claude
,Webhook Sink
,SIFT Comparison
,YOLO-World Model
,Trace Visualization
,JSON Parser
,Instance Segmentation Model
,Corner Visualization
,Polygon Visualization
,Segment Anything 2 Model
,Crop Visualization
,Email Notification
,Image Contours
,Image Slicer
,Line Counter
,Image Threshold
- outputs:
Path Deviation
,Ellipse Visualization
,Size Measurement
,Stitch OCR Detections
,Blur Visualization
,Dot Visualization
,Circle Visualization
,Line Counter
,Background Color Visualization
,Byte Tracker
,Roboflow Dataset Upload
,Distance Measurement
,Color Visualization
,Detections Filter
,Byte Tracker
,Pixelate Visualization
,Florence-2 Model
,Byte Tracker
,Label Visualization
,Path Deviation
,Stability AI Inpainting
,Detections Stabilizer
,Roboflow Dataset Upload
,Detections Stitch
,Bounding Box Visualization
,Mask Visualization
,Model Comparison Visualization
,Velocity
,Bounding Rectangle
,Detections Consensus
,Roboflow Custom Metadata
,Detections Merge
,Dynamic Zone
,Detection Offset
,Halo Visualization
,Trace Visualization
,Overlap Filter
,Corner Visualization
,Polygon Visualization
,Segment Anything 2 Model
,Triangle Visualization
,Time in Zone
,Model Monitoring Inference Aggregator
,Perspective Correction
,Detections Classes Replacement
,Dynamic Crop
,Time in Zone
,Detections Transformation
,Crop Visualization
,Line Counter
,Florence-2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
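Because the v1 example binds zone to "$inputs.zones", the polygon can be supplied as a runtime input rather than hard-coded in the workflow. A minimal, hypothetical pre-flight check for such an input (the validate_zone helper and the runtime_inputs key names are illustrative, not part of the block's API) could look like:

```python
def validate_zone(zone):
    """A zone must be a polygon: at least 3 [x, y] vertex pairs of numbers."""
    if not isinstance(zone, list) or len(zone) < 3:
        raise ValueError("zone must be a list of at least 3 [x, y] points")
    for point in zone:
        if (not isinstance(point, (list, tuple)) or len(point) != 2
                or not all(isinstance(c, (int, float)) for c in point)):
            raise ValueError(f"invalid zone vertex: {point!r}")
    return zone


# Runtime inputs matching the "$inputs.…" selectors in the v1 example above;
# the "zones" key mirrors the "$inputs.zones" selector.
runtime_inputs = {
    "image": "path/or/url/to/frame.jpg",  # placeholder image reference
    "zones": validate_zone([[100, 100], [100, 200], [300, 200], [300, 100]]),
}
```

Validating the polygon client-side surfaces malformed zone coordinates before the workflow request is submitted, rather than as a step failure at execution time.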