Time in Zone¶
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires tracked detections (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v2 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | A unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone are filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone have their time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Time in Zone in version v2.
- inputs: Keypoint Detection Model, CogVLM, Anthropic Claude, OpenAI, Google Vision OCR, Detections Classes Replacement, Florence-2 Model, Detection Offset, Dimension Collapse, Single-Label Classification Model, VLM as Detector, OpenAI, Moondream2, YOLO-World Model, SIFT Comparison, Object Detection Model, Overlap Filter, VLM as Detector, Path Deviation, CSV Formatter, LMM, Segment Anything 2 Model, Multi-Label Classification Model, Twilio SMS Notification, Google Gemini, Byte Tracker, Roboflow Custom Metadata, Size Measurement, Perspective Correction, Slack Notification, Detections Transformation, Instance Segmentation Model, VLM as Classifier, Detections Merge, Webhook Sink, Identify Changes, Detections Consensus, LMM For Classification, Detections Stitch, SIFT Comparison, JSON Parser, Clip Comparison, Line Counter, Detections Filter, Identify Outliers, Dynamic Zone, Instance Segmentation Model, Florence-2 Model, Detections Stabilizer, Model Monitoring Inference Aggregator, Stitch OCR Detections, Template Matching, Bounding Rectangle, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Email Notification, Path Deviation, Object Detection Model, Llama 3.2 Vision, Byte Tracker, ONVIF Control, VLM as Classifier, Byte Tracker, Clip Comparison, Time in Zone, Buffer, Dynamic Crop, OpenAI, Velocity, OCR Model, Local File Sink
- outputs: Detections Stitch, Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Line Counter, Mask Visualization, Detections Filter, Overlap Filter, Dot Visualization, Dynamic Zone, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Triangle Visualization, Stitch OCR Detections, Bounding Rectangle, Path Deviation, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Path Deviation, Model Comparison Visualization, Crop Visualization, Blur Visualization, Label Visualization, Segment Anything 2 Model, Stability AI Inpainting, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Line Counter, Bounding Box Visualization, Halo Visualization, Roboflow Custom Metadata, Byte Tracker, Size Measurement, Corner Visualization, Time in Zone, Circle Visualization, Perspective Correction, Dynamic Crop, Polygon Visualization, Detections Transformation, Detections Merge, Trace Visualization, Velocity, Color Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of Time in Zone in version v2 are listed below.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone are filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
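As an illustration of what the triggering_anchor binding means, the sketch below checks whether a bounding box's anchor point falls inside a polygonal zone. The helper names (point_in_polygon, anchor_of) are hypothetical, and the ray-casting routine is a standard point-in-polygon technique, not the block's actual source code:

```python
def point_in_polygon(point, polygon):
    """Return True if (x, y) lies inside the polygon (ray-casting test)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges whose horizontal crossing lies to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def anchor_of(xyxy, anchor="CENTER"):
    """Derive an anchor point from a bounding box [x_min, y_min, x_max, y_max]."""
    x_min, y_min, x_max, y_max = xyxy
    if anchor == "CENTER":
        return ((x_min + x_max) / 2, (y_min + y_max) / 2)
    if anchor == "BOTTOM_CENTER":
        return ((x_min + x_max) / 2, y_max)
    raise ValueError(f"unsupported anchor: {anchor}")


zone = [[100, 100], [100, 200], [300, 200], [300, 100]]
print(point_in_polygon(anchor_of([120, 120, 180, 180]), zone))  # box centred in the zone
print(point_in_polygon(anchor_of([400, 400, 450, 450]), zone))  # box outside the zone
```

Choosing BOTTOM_CENTER instead of CENTER is a common convention for people or vehicles filmed at an angle, since the bottom of the box is where the object touches the ground.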
Example JSON definition of step Time in Zone in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [100, 100],
        [100, 200],
        [300, 200],
        [300, 100]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
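To make the semantics of reset_out_of_zone_detections concrete, here is a minimal, hypothetical sketch of the per-tracker bookkeeping such a block could perform. TimeInZoneTracker and its methods are illustrative names, not the block's implementation:

```python
class TimeInZoneTracker:
    """Illustrative per-tracker time accounting, keyed by tracker_id."""

    def __init__(self, reset_out_of_zone_detections=True):
        self.reset = reset_out_of_zone_detections
        self.entered_at = {}   # tracker_id -> timestamp of last zone entry
        self.accumulated = {}  # tracker_id -> seconds banked from earlier visits

    def update(self, tracker_id, in_zone, now):
        """Return the total time in zone for this tracker at timestamp `now`."""
        if in_zone:
            self.entered_at.setdefault(tracker_id, now)
            return self.accumulated.get(tracker_id, 0.0) + now - self.entered_at[tracker_id]
        entered = self.entered_at.pop(tracker_id, None)
        if entered is not None:
            if self.reset:
                # reset_out_of_zone_detections=True: leaving forfeits the time
                self.accumulated.pop(tracker_id, None)
            else:
                # otherwise the visit's duration is banked for the next entry
                self.accumulated[tracker_id] = (
                    self.accumulated.get(tracker_id, 0.0) + now - entered
                )
        return 0.0 if self.reset else self.accumulated.get(tracker_id, 0.0)


keep = TimeInZoneTracker(reset_out_of_zone_detections=False)
keep.update(1, True, now=0.0)    # object 1 enters the zone
keep.update(1, True, now=2.0)    # still inside
keep.update(1, False, now=3.0)   # leaves; the visit's duration is banked
keep.update(1, True, now=5.0)    # re-enters; the clock resumes from the banked total
```

This also shows why unique, persistent tracker_id values are required: without them there is no stable key to accumulate time against between frames.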
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires tracked detections (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | A unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone are filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone have their time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Time in Zone in version v1.
- inputs: Keypoint Detection Model, CogVLM, Image Convert Grayscale, Anthropic Claude, Google Vision OCR, OpenAI, Detections Classes Replacement, Florence-2 Model, Detection Offset, Pixelate Visualization, Dimension Collapse, Single-Label Classification Model, SIFT, VLM as Detector, OpenAI, Moondream2, Stability AI Image Generation, YOLO-World Model, SIFT Comparison, Mask Visualization, Object Detection Model, Image Slicer, Overlap Filter, Triangle Visualization, VLM as Detector, Path Deviation, CSV Formatter, Polygon Zone Visualization, Model Comparison Visualization, Crop Visualization, Classification Label Visualization, LMM, Segment Anything 2 Model, Reference Path Visualization, Multi-Label Classification Model, Twilio SMS Notification, Google Gemini, Bounding Box Visualization, Image Contours, Byte Tracker, Size Measurement, Roboflow Custom Metadata, Circle Visualization, Perspective Correction, Slack Notification, Polygon Visualization, Detections Transformation, Instance Segmentation Model, VLM as Classifier, Trace Visualization, Detections Merge, Webhook Sink, Color Visualization, Identify Changes, Detections Consensus, LMM For Classification, Image Threshold, Detections Stitch, SIFT Comparison, JSON Parser, Absolute Static Crop, Clip Comparison, Line Counter Visualization, Stitch Images, Line Counter, Dot Visualization, Detections Filter, Identify Outliers, Dynamic Zone, Instance Segmentation Model, Background Color Visualization, Florence-2 Model, Detections Stabilizer, Model Monitoring Inference Aggregator, Stitch OCR Detections, Image Slicer, Template Matching, Bounding Rectangle, Roboflow Dataset Upload, Roboflow Dataset Upload, Image Blur, Time in Zone, Email Notification, Camera Focus, Grid Visualization, Path Deviation, Object Detection Model, Blur Visualization, Label Visualization, Stability AI Inpainting, Depth Estimation, Image Preprocessing, Llama 3.2 Vision, Byte Tracker, Ellipse Visualization, ONVIF Control, VLM as Classifier, Byte Tracker, Halo Visualization, Corner Visualization, Camera Calibration, Clip Comparison, Time in Zone, Buffer, Dynamic Crop, Relative Static Crop, OpenAI, Keypoint Visualization, Velocity, OCR Model, Local File Sink
- outputs: Detections Stitch, Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Line Counter, Mask Visualization, Detections Filter, Overlap Filter, Dot Visualization, Dynamic Zone, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Triangle Visualization, Stitch OCR Detections, Bounding Rectangle, Path Deviation, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Path Deviation, Model Comparison Visualization, Crop Visualization, Blur Visualization, Label Visualization, Segment Anything 2 Model, Stability AI Inpainting, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Line Counter, Bounding Box Visualization, Halo Visualization, Roboflow Custom Metadata, Byte Tracker, Size Measurement, Corner Visualization, Time in Zone, Circle Visualization, Perspective Correction, Dynamic Crop, Polygon Visualization, Detections Transformation, Detections Merge, Trace Visualization, Velocity, Color Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of Time in Zone in version v1 are listed below.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone are filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
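Because zone in this example is bound to $inputs.zones, the polygon is supplied at workflow runtime rather than hard-coded in the step definition. The sketch below shows what such a runtime input payload could look like; the surrounding key names are assumptions, so adapt them to whatever execution client you use:

```python
# Hypothetical runtime inputs for a workflow whose Time in Zone step binds
# "zone" to "$inputs.zones". Only the "zones" entry is prescribed by the
# example above; the rest is illustrative.
workflow_inputs = {
    "image": "<your image reference here>",
    "zones": [        # consumed by the step through "$inputs.zones"
        [100, 100],   # polygon vertices in pixel coordinates,
        [100, 200],   # listed in drawing order
        [300, 200],
        [300, 100],
    ],
}

# A valid zone needs at least three vertices to enclose an area.
assert len(workflow_inputs["zones"]) >= 3
```

Feeding the zone in as an input like this lets the same workflow definition serve many cameras, each with its own polygon, without editing the step JSON.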