Time in Zone¶
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
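For example, a property marked ✅, such as zone, can be bound to a workflow input instead of a literal. A minimal sketch, assuming a workflow input named zone has been declared (the input and step names here are illustrative):

```json
{
    "name": "time_in_zone",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zone",
    "triggering_anchor": "CENTER"
}
```

Binding the zone this way lets the same workflow be reused with different zone coordinates supplied per request.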
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v2
.
- inputs: Identify Changes, Detection Offset, Line Counter, Detections Filter, Slack Notification, Time in Zone, Local File Sink, YOLO-World Model, Instance Segmentation Model, Roboflow Custom Metadata, Perspective Correction, OpenAI, Dimension Collapse, Clip Comparison, OpenAI, Size Measurement, Byte Tracker, Email Notification, Detections Classes Replacement, Object Detection Model, Template Matching, LMM, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, VLM as Classifier, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Buffer, Segment Anything 2 Model, Object Detection Model, Llama 3.2 Vision, Anthropic Claude, SIFT Comparison, CogVLM, VLM as Detector, Path Deviation, Clip Comparison, Moondream2, Twilio SMS Notification, OCR Model, Multi-Label Classification Model, Google Vision OCR, Stitch OCR Detections, Time in Zone, Single-Label Classification Model, Webhook Sink, Google Gemini, Roboflow Dataset Upload, JSON Parser, Identify Outliers, Byte Tracker, Detections Transformation, Florence-2 Model, SIFT Comparison, Detections Stabilizer, LMM For Classification, Instance Segmentation Model, CSV Formatter, Florence-2 Model, Detections Merge, Keypoint Detection Model, Detections Stitch, Bounding Rectangle, Byte Tracker, VLM as Detector, Path Deviation, VLM as Classifier
- outputs: Detection Offset, Blur Visualization, Line Counter, Stability AI Inpainting, Detections Filter, Time in Zone, Dot Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Roboflow Custom Metadata, Perspective Correction, Color Visualization, Distance Measurement, Circle Visualization, Pixelate Visualization, Stitch OCR Detections, Triangle Visualization, Halo Visualization, Label Visualization, Time in Zone, Roboflow Dataset Upload, Byte Tracker, Line Counter, Size Measurement, Byte Tracker, Corner Visualization, Detections Transformation, Florence-2 Model, Detections Stabilizer, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Polygon Visualization, Florence-2 Model, Detections Merge, Ellipse Visualization, Mask Visualization, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Detections Stitch, Segment Anything 2 Model, Byte Tracker, Bounding Box Visualization, Bounding Rectangle, Model Comparison Visualization, Path Deviation, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [100, 100],
        [100, 200],
        [300, 200],
        [300, 100]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
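To make the tracking requirement concrete, the core logic such a block performs can be sketched in plain Python: a point-in-polygon test on each detection's anchor point, plus a per-tracker_id timer that starts when the object enters the zone. This is an illustrative approximation under stated assumptions, not the block's actual implementation; names like TimeInZoneTracker are hypothetical.

```python
# Illustrative sketch only: TimeInZoneTracker is a hypothetical name,
# not the real block implementation.

def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) strictly inside the polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges the horizontal ray from (x, y) crosses to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


class TimeInZoneTracker:
    """Accumulates time in zone per tracker_id across frames."""

    def __init__(self, zone, reset_out_of_zone=True):
        self.zone = zone
        self.reset_out_of_zone = reset_out_of_zone
        self.entered_at = {}  # tracker_id -> timestamp of zone entry

    def update(self, detections, timestamp):
        """detections: list of (tracker_id, anchor_point) pairs.

        Returns {tracker_id: seconds_in_zone} for detections currently
        in the zone; out-of-zone detections are dropped from the result
        (analogous to remove_out_of_zone_detections).
        """
        times = {}
        for tracker_id, anchor in detections:
            if point_in_polygon(anchor, self.zone):
                # Record entry time on first sighting inside the zone.
                self.entered_at.setdefault(tracker_id, timestamp)
                times[tracker_id] = timestamp - self.entered_at[tracker_id]
            elif self.reset_out_of_zone:
                # Forget the entry time so the timer restarts on re-entry.
                self.entered_at.pop(tracker_id, None)
        return times
```

With reset_out_of_zone set to False, the entry timestamp is kept while the object is outside, so time keeps accruing from the first entry; this mirrors the effect of the reset_out_of_zone_detections property. The sketch also shows why stable tracker_id values are required: without them, the per-object timers cannot be matched across frames.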
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v1
.
- inputs: Identify Changes, Detection Offset, Camera Focus, Line Counter, Polygon Zone Visualization, Detections Filter, Slack Notification, Time in Zone, Local File Sink, Grid Visualization, YOLO-World Model, Image Convert Grayscale, Trace Visualization, Instance Segmentation Model, Absolute Static Crop, Roboflow Custom Metadata, Perspective Correction, OpenAI, Circle Visualization, Dimension Collapse, Clip Comparison, Image Slicer, OpenAI, Triangle Visualization, Halo Visualization, Size Measurement, Byte Tracker, Corner Visualization, Email Notification, Detections Classes Replacement, Object Detection Model, Template Matching, LMM, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, VLM as Classifier, Depth Estimation, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Buffer, Stitch Images, Segment Anything 2 Model, Object Detection Model, Llama 3.2 Vision, Model Comparison Visualization, Anthropic Claude, Crop Visualization, Blur Visualization, SIFT Comparison, CogVLM, Image Threshold, Stability AI Inpainting, VLM as Detector, Relative Static Crop, Image Preprocessing, Dot Visualization, Keypoint Visualization, Background Color Visualization, Path Deviation, Clip Comparison, Color Visualization, Moondream2, Twilio SMS Notification, OCR Model, Multi-Label Classification Model, Classification Label Visualization, Google Vision OCR, Camera Calibration, Pixelate Visualization, Stitch OCR Detections, Label Visualization, Image Slicer, Time in Zone, Reference Path Visualization, Single-Label Classification Model, Webhook Sink, Google Gemini, Roboflow Dataset Upload, Line Counter Visualization, JSON Parser, Identify Outliers, Byte Tracker, Image Blur, Detections Transformation, Florence-2 Model, SIFT Comparison, Detections Stabilizer, LMM For Classification, Image Contours, Polygon Visualization, Instance Segmentation Model, CSV Formatter, SIFT, Florence-2 Model, Detections Merge, Ellipse Visualization, Mask Visualization, Keypoint Detection Model, Detections Stitch, Bounding Box Visualization, Byte Tracker, Bounding Rectangle, Stability AI Image Generation, VLM as Detector, Path Deviation, VLM as Classifier
- outputs: Detection Offset, Blur Visualization, Line Counter, Stability AI Inpainting, Detections Filter, Time in Zone, Dot Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Roboflow Custom Metadata, Perspective Correction, Color Visualization, Distance Measurement, Circle Visualization, Pixelate Visualization, Stitch OCR Detections, Triangle Visualization, Halo Visualization, Label Visualization, Time in Zone, Roboflow Dataset Upload, Byte Tracker, Line Counter, Size Measurement, Byte Tracker, Corner Visualization, Detections Transformation, Florence-2 Model, Detections Stabilizer, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Polygon Visualization, Florence-2 Model, Detections Merge, Ellipse Visualization, Mask Visualization, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Detections Stitch, Segment Anything 2 Model, Byte Tracker, Bounding Box Visualization, Bounding Rectangle, Model Comparison Visualization, Path Deviation, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```