# Time in Zone

## v2
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone. The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
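The bookkeeping this implies can be sketched as follows. This is an illustrative sketch, not the block's actual implementation: keep the timestamp at which each tracker_id first appeared inside the zone, and report the elapsed time on every frame (optionally forgetting the entry time when an object leaves, mirroring the reset behaviour described below).

```python
# Illustrative sketch (not the block's actual implementation) of how
# per-object dwell time can be accumulated from tracked detections.
# Each object is identified by its tracker_id, which must persist
# across frames; without stable IDs, time cannot be attributed.

def update_time_in_zone(entry_times, in_zone_ids, now, reset_on_exit=True):
    """Return {tracker_id: seconds_in_zone} for objects currently in the zone.

    entry_times: dict mapping tracker_id -> timestamp of zone entry (mutated).
    in_zone_ids: tracker_ids whose triggering anchor is inside the zone now.
    now:         current frame timestamp in seconds.
    """
    # Objects that left the zone: forget their entry time if resetting.
    if reset_on_exit:
        for tid in list(entry_times):
            if tid not in in_zone_ids:
                del entry_times[tid]
    times = {}
    for tid in in_zone_ids:
        entry_times.setdefault(tid, now)   # first frame inside the zone
        times[tid] = now - entry_times[tid]
    return times

# Object 7 stays in the zone for two frames; object 9 appears on frame 2.
entry = {}
update_time_in_zone(entry, {7}, now=0.0)          # {7: 0.0}
result = update_time_in_zone(entry, {7, 9}, now=0.5)
# result == {7: 0.5, 9: 0.0}
```

With `reset_on_exit=True`, an object that leaves and re-enters the zone starts counting from zero again, which corresponds to the `reset_out_of_zone_detections` property below.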
### Type identifier

Use the following identifier in the step's "type" field to add the block as a step in your workflow: `roboflow_core/time_in_zone@v2`.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
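The geometric test behind `zone` and `triggering_anchor` can be sketched as follows: a detection counts as "in zone" when its anchor point (e.g. CENTER, the midpoint of its bounding box) falls inside the zone polygon. The ray-casting point-in-polygon test below is a minimal illustration, not the block's actual implementation.

```python
# Illustrative sketch of the zone-membership test: a detection is "in
# zone" when its triggering anchor lies inside the zone polygon.

def center_anchor(x_min, y_min, x_max, y_max):
    """CENTER anchor: the midpoint of the bounding box."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def point_in_polygon(point, polygon):
    """Ray casting: count edge crossings of a horizontal ray from `point`."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

zone = [(100, 100), (100, 200), (300, 200), (300, 100)]
anchor = center_anchor(150, 120, 250, 180)   # -> (200.0, 150.0)
point_in_polygon(anchor, zone)               # True: detection is in zone
```

Other `triggering_anchor` values (e.g. a bottom-centre point) change only which point of the box is tested, not the polygon test itself.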
### Available Connections

Compatible Blocks

Check what blocks you can connect to Time in Zone in version v2.
- inputs: Line Counter, Keypoint Detection Model, OpenAI, Roboflow Dataset Upload, Template Matching, Size Measurement, VLM as Classifier, Detections Classes Replacement, Instance Segmentation Model, JSON Parser, Email Notification, Bounding Rectangle, Dynamic Zone, Slack Notification, VLM as Detector, Twilio SMS Notification, Byte Tracker, Webhook Sink, Google Vision OCR, CSV Formatter, Single-Label Classification Model, Buffer, Velocity, Identify Changes, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, Florence-2 Model, Anthropic Claude, Detections Transformation, OCR Model, Dimension Collapse, LMM For Classification, Clip Comparison, Detections Merge, Time in Zone, Dynamic Crop, Local File Sink, Perspective Correction, Stitch OCR Detections, Detections Filter, SIFT Comparison, CogVLM, Path Deviation, Object Detection Model, Segment Anything 2 Model, LMM, Moondream2, Detection Offset, Detections Consensus, Llama 3.2 Vision, Detections Stabilizer, YOLO-World Model, Overlap Filter, Google Gemini, PTZ Tracking (ONVIF), Multi-Label Classification Model, Detections Stitch, Identify Outliers
- outputs: Line Counter, Florence-2 Model, Halo Visualization, Corner Visualization, Detections Merge, Model Comparison Visualization, Crop Visualization, Time in Zone, Background Color Visualization, Blur Visualization, Bounding Box Visualization, Distance Measurement, Dynamic Crop, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Byte Tracker, Dot Visualization, Roboflow Dataset Upload, Size Measurement, Detections Filter, Mask Visualization, Path Deviation, Detections Classes Replacement, Segment Anything 2 Model, Ellipse Visualization, Color Visualization, Stability AI Inpainting, Bounding Rectangle, Dynamic Zone, Detection Offset, Detections Consensus, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Label Visualization, Overlap Filter, Polygon Visualization, Velocity, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Detections Stitch, Roboflow Custom Metadata, Detections Transformation
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v2 has.
Bindings

- input
  - `image` (`image`): The input image for this step.
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Model predictions to calculate the time spent in zone for.
  - `zone` (`list_of_values`): Coordinates of the target zone.
  - `triggering_anchor` (`string`): The point on the detection that must be inside the zone.
  - `remove_out_of_zone_detections` (`boolean`): If true, detections found outside of the zone will be filtered out.
  - `reset_out_of_zone_detections` (`boolean`): If true, detections found outside of the zone will have their time reset.
- output
  - `timed_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes as an sv.Detections(...) object if `object_detection_prediction`, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if `instance_segmentation_prediction`.
Example JSON definition of step Time in Zone in version v2:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v2",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [
    [100, 100],
    [100, 200],
    [300, 200],
    [300, 100]
  ],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
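Because a step definition is plain JSON, it can also be assembled programmatically. The helper below is hypothetical (not part of the inference library); it builds the definition above and sanity-checks that the zone is a valid polygon with at least three vertices.

```python
import json

# Hypothetical helper (not part of the inference library): build the
# Time in Zone step definition programmatically and sanity-check the zone.

def time_in_zone_step(name, zone, anchor="CENTER",
                      remove_out=True, reset_out=True):
    if len(zone) < 3:
        raise ValueError("zone must be a polygon with at least 3 vertices")
    return {
        "name": name,
        "type": "roboflow_core/time_in_zone@v2",
        "image": "$inputs.image",
        "detections": "$steps.object_detection_model.predictions",
        "zone": [list(p) for p in zone],
        "triggering_anchor": anchor,
        "remove_out_of_zone_detections": remove_out,
        "reset_out_of_zone_detections": reset_out,
    }

step = time_in_zone_step("my_time_in_zone",
                         [(100, 100), (100, 200), (300, 200), (300, 100)])
print(json.dumps(step, indent=2))
```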
## v1
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone. The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
### Type identifier

Use the following identifier in the step's "type" field to add the block as a step in your workflow: `roboflow_core/time_in_zone@v1`.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
### Available Connections

Compatible Blocks

Check what blocks you can connect to Time in Zone in version v1.
- inputs: Line Counter, Keypoint Detection Model, Halo Visualization, OpenAI, Stability AI Image Generation, Model Comparison Visualization, Keypoint Visualization, Crop Visualization, Image Blur, Bounding Box Visualization, Roboflow Dataset Upload, Template Matching, Size Measurement, Mask Visualization, Image Slicer, Detections Classes Replacement, Instance Segmentation Model, VLM as Classifier, JSON Parser, Ellipse Visualization, Email Notification, Bounding Rectangle, Dynamic Zone, Slack Notification, VLM as Detector, Twilio SMS Notification, Byte Tracker, Webhook Sink, Label Visualization, Google Vision OCR, Stability AI Outpainting, CSV Formatter, Single-Label Classification Model, Polygon Visualization, Velocity, Buffer, Identify Changes, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, Reference Path Visualization, Florence-2 Model, Detections Transformation, Anthropic Claude, OCR Model, Camera Calibration, Image Preprocessing, Image Contours, Line Counter Visualization, Dimension Collapse, LMM For Classification, Corner Visualization, Detections Merge, Clip Comparison, Stitch Images, Depth Estimation, SIFT, Time in Zone, Blur Visualization, Image Convert Grayscale, Background Color Visualization, Dynamic Crop, Local File Sink, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Dot Visualization, Detections Filter, SIFT Comparison, CogVLM, Path Deviation, Object Detection Model, Segment Anything 2 Model, LMM, Color Visualization, Stability AI Inpainting, Classification Label Visualization, Moondream2, Detection Offset, Absolute Static Crop, Image Threshold, Detections Consensus, Llama 3.2 Vision, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Camera Focus, Grid Visualization, YOLO-World Model, Overlap Filter, Google Gemini, PTZ Tracking (ONVIF), Multi-Label Classification Model, Relative Static Crop, Detections Stitch, Polygon Zone Visualization, Identify Outliers
- outputs: Line Counter, Florence-2 Model, Halo Visualization, Corner Visualization, Detections Merge, Model Comparison Visualization, Crop Visualization, Time in Zone, Background Color Visualization, Blur Visualization, Bounding Box Visualization, Distance Measurement, Dynamic Crop, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Byte Tracker, Dot Visualization, Roboflow Dataset Upload, Size Measurement, Detections Filter, Mask Visualization, Path Deviation, Detections Classes Replacement, Segment Anything 2 Model, Ellipse Visualization, Color Visualization, Stability AI Inpainting, Bounding Rectangle, Dynamic Zone, Detection Offset, Detections Consensus, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Label Visualization, Overlap Filter, Polygon Visualization, Velocity, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Detections Stitch, Roboflow Custom Metadata, Detections Transformation
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v1 has.
Bindings

- input
  - `image` (`image`): The input image for this step.
  - `metadata` (`video_metadata`): not available.
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Model predictions to calculate the time spent in zone for.
  - `zone` (`list_of_values`): Coordinates of the target zone.
  - `triggering_anchor` (`string`): The point on the detection that must be inside the zone.
  - `remove_out_of_zone_detections` (`boolean`): If true, detections found outside of the zone will be filtered out.
  - `reset_out_of_zone_detections` (`boolean`): If true, detections found outside of the zone will have their time reset.
- output
  - `timed_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes as an sv.Detections(...) object if `object_detection_prediction`, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if `instance_segmentation_prediction`.
Example JSON definition of step Time in Zone in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v1",
  "image": "$inputs.image",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "zone": "$inputs.zones",
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```