# Time in Zone

## v3
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
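Because the block consumes tracked detections, a tracker step (e.g. Byte Tracker) typically sits between the detection model and this block. A minimal sketch of that chaining, assuming the Byte Tracker type identifier `roboflow_core/byte_tracker@v3` and its `tracked_detections` output name (verify both against the blocks installed in your environment):

```json
{
    "steps": [
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions"
        },
        {
            "type": "roboflow_core/time_in_zone@v3",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": true,
            "reset_out_of_zone_detections": true
        }
    ]
}
```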
### Type identifier
Use the following identifier in the step `"type"` field: `roboflow_core/time_in_zone@v3` to add the block as
a step in your workflow.
### Properties
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
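Properties marked ✅ can be bound to workflow inputs instead of being hard-coded, so, for example, the zone polygon can be supplied at runtime. A sketch, assuming the workflow declares inputs named `zone` and `anchor` (both input names are placeholders):

```json
{
    "name": "time_in_zone",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zone",
    "triggering_anchor": "$inputs.anchor",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```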
### Available Connections

Check what blocks you can connect to Time in Zone in version v3.
- inputs:
Detections Filter,Detections Stitch,Detections Classes Replacement,Path Deviation,Time in Zone,Template Matching,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Overlap Filter,Single-Label Classification Model,Slack Notification,Clip Comparison,Perspective Correction,Roboflow Dataset Upload,Anthropic Claude,OpenAI,Florence-2 Model,Object Detection Model,Llama 3.2 Vision,Bounding Rectangle,Dynamic Crop,OCR Model,EasyOCR,Velocity,Keypoint Detection Model,Dimension Collapse,Detection Offset,Identify Changes,Florence-2 Model,Moondream2,Size Measurement,SIFT Comparison,Byte Tracker,Dynamic Zone,Buffer,Stitch OCR Detections,Detections Transformation,Byte Tracker,Detections Combine,YOLO-World Model,Clip Comparison,VLM as Detector,Google Vision OCR,OpenAI,Line Counter,Webhook Sink,Instance Segmentation Model,CogVLM,Time in Zone,Twilio SMS Notification,Detections Consensus,SIFT Comparison,Multi-Label Classification Model,LMM,Time in Zone,JSON Parser,LMM For Classification,Detections Stabilizer,Instance Segmentation Model,Segment Anything 2 Model,VLM as Classifier,VLM as Detector,Email Notification,Local File Sink,Roboflow Custom Metadata,Google Gemini,Object Detection Model,VLM as Classifier,PTZ Tracking (ONVIF),CSV Formatter,Path Deviation,Detections Merge,OpenAI,Byte Tracker,Identify Outliers
- outputs:
Detections Filter,Detections Combine,Detections Stitch,Distance Measurement,Detections Classes Replacement,Icon Visualization,Stability AI Inpainting,Path Deviation,Model Monitoring Inference Aggregator,Time in Zone,Circle Visualization,Roboflow Dataset Upload,Dot Visualization,Overlap Filter,Line Counter,Blur Visualization,Perspective Correction,Time in Zone,Roboflow Dataset Upload,Background Color Visualization,Florence-2 Model,Mask Visualization,Bounding Rectangle,Detections Consensus,Dynamic Crop,Crop Visualization,Trace Visualization,Time in Zone,Color Visualization,Velocity,Triangle Visualization,Detections Stabilizer,Ellipse Visualization,Model Comparison Visualization,Segment Anything 2 Model,Detection Offset,Polygon Visualization,Corner Visualization,Florence-2 Model,Halo Visualization,Size Measurement,Roboflow Custom Metadata,Line Counter,Bounding Box Visualization,Byte Tracker,Dynamic Zone,Stitch OCR Detections,Pixelate Visualization,PTZ Tracking (ONVIF),Path Deviation,Label Visualization,Detections Merge,Byte Tracker,Detections Transformation,Byte Tracker
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v3 has.
- input:
    - `image` (`image`): The input image for this step.
    - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Model predictions to calculate the time spent in zone for.
    - `zone` (`list_of_values`): Coordinates of the target zone.
    - `triggering_anchor` (`string`): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will have time reset.
- output:
    - `timed_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Predictions in the form of an `sv.Detections(...)` object: detected bounding boxes if `object_detection_prediction`, or detected bounding boxes with segmentation masks if `instance_segmentation_prediction`.
Example JSON definition of step Time in Zone in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
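Conceptually, the block checks on each frame whether the chosen anchor point of a detection falls inside the zone polygon and accumulates time while it does. A minimal ray-casting sketch of that point-in-polygon test, for illustration only (it is not the block's actual implementation):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: count how many polygon edges a rightward ray from
    the point crosses; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # X coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Same rectangular zone as the example JSON definition above
zone = [(100, 100), (100, 200), (300, 200), (300, 100)]
print(point_in_polygon((200, 150), zone))  # → True  (anchor inside the zone)
print(point_in_polygon((50, 150), zone))   # → False (anchor outside the zone)
```

With `triggering_anchor` set to `CENTER`, the tested point would be the center of each bounding box; other anchors test a different point of the box.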
## v2
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
### Type identifier
Use the following identifier in the step `"type"` field: `roboflow_core/time_in_zone@v2` to add the block as
a step in your workflow.
### Properties
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
### Available Connections

Check what blocks you can connect to Time in Zone in version v2.
- inputs:
Detections Filter,Detections Stitch,Detections Classes Replacement,Path Deviation,Time in Zone,Template Matching,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Overlap Filter,Single-Label Classification Model,Slack Notification,Clip Comparison,Perspective Correction,Roboflow Dataset Upload,Anthropic Claude,OpenAI,Florence-2 Model,Object Detection Model,Llama 3.2 Vision,Bounding Rectangle,Dynamic Crop,OCR Model,EasyOCR,Velocity,Keypoint Detection Model,Dimension Collapse,Detection Offset,Identify Changes,Florence-2 Model,Moondream2,Size Measurement,SIFT Comparison,Byte Tracker,Dynamic Zone,Buffer,Stitch OCR Detections,Detections Transformation,Byte Tracker,Detections Combine,YOLO-World Model,Clip Comparison,VLM as Detector,Google Vision OCR,OpenAI,Line Counter,Webhook Sink,Instance Segmentation Model,CogVLM,Time in Zone,Twilio SMS Notification,Detections Consensus,SIFT Comparison,Multi-Label Classification Model,LMM,Time in Zone,JSON Parser,LMM For Classification,Detections Stabilizer,Instance Segmentation Model,Segment Anything 2 Model,VLM as Classifier,VLM as Detector,Email Notification,Local File Sink,Roboflow Custom Metadata,Google Gemini,Object Detection Model,VLM as Classifier,PTZ Tracking (ONVIF),CSV Formatter,Path Deviation,Detections Merge,OpenAI,Byte Tracker,Identify Outliers
- outputs:
Detections Filter,Detections Combine,Detections Stitch,Distance Measurement,Detections Classes Replacement,Icon Visualization,Stability AI Inpainting,Path Deviation,Model Monitoring Inference Aggregator,Time in Zone,Circle Visualization,Roboflow Dataset Upload,Dot Visualization,Overlap Filter,Line Counter,Blur Visualization,Perspective Correction,Time in Zone,Roboflow Dataset Upload,Background Color Visualization,Florence-2 Model,Mask Visualization,Bounding Rectangle,Detections Consensus,Dynamic Crop,Crop Visualization,Trace Visualization,Time in Zone,Color Visualization,Velocity,Triangle Visualization,Detections Stabilizer,Ellipse Visualization,Model Comparison Visualization,Segment Anything 2 Model,Detection Offset,Polygon Visualization,Corner Visualization,Florence-2 Model,Halo Visualization,Size Measurement,Roboflow Custom Metadata,Line Counter,Bounding Box Visualization,Byte Tracker,Dynamic Zone,Stitch OCR Detections,Pixelate Visualization,PTZ Tracking (ONVIF),Path Deviation,Label Visualization,Detections Merge,Byte Tracker,Detections Transformation,Byte Tracker
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v2 has.
- input:
    - `image` (`image`): The input image for this step.
    - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Model predictions to calculate the time spent in zone for.
    - `zone` (`list_of_values`): Coordinates of the target zone.
    - `triggering_anchor` (`string`): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will have time reset.
- output:
    - `timed_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Predictions in the form of an `sv.Detections(...)` object: detected bounding boxes if `object_detection_prediction`, or detected bounding boxes with segmentation masks if `instance_segmentation_prediction`.
Example JSON definition of step Time in Zone in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
## v1
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
### Type identifier
Use the following identifier in the step `"type"` field: `roboflow_core/time_in_zone@v1` to add the block as
a step in your workflow.
### Properties
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
### Available Connections

Check what blocks you can connect to Time in Zone in version v1.
- inputs:
Detections Filter,Grid Visualization,Detections Stitch,Detections Classes Replacement,Circle Visualization,Path Deviation,Time in Zone,Template Matching,Model Monitoring Inference Aggregator,QR Code Generator,Roboflow Dataset Upload,Image Slicer,Dot Visualization,Overlap Filter,Single-Label Classification Model,Blur Visualization,Slack Notification,Clip Comparison,Perspective Correction,Roboflow Dataset Upload,Anthropic Claude,Background Color Visualization,OpenAI,Florence-2 Model,Object Detection Model,Llama 3.2 Vision,Bounding Rectangle,Dynamic Crop,Crop Visualization,OCR Model,EasyOCR,Trace Visualization,Velocity,Keypoint Detection Model,Image Threshold,Triangle Visualization,Reference Path Visualization,Model Comparison Visualization,Dimension Collapse,Detection Offset,Polygon Visualization,Identify Changes,Corner Visualization,Image Slicer,Florence-2 Model,Image Blur,Moondream2,SIFT Comparison,Size Measurement,Bounding Box Visualization,Byte Tracker,Dynamic Zone,Buffer,Stitch OCR Detections,Keypoint Visualization,Detections Transformation,Image Convert Grayscale,Byte Tracker,Detections Combine,YOLO-World Model,Line Counter Visualization,Clip Comparison,SIFT,Icon Visualization,Stability AI Inpainting,VLM as Detector,Google Vision OCR,Polygon Zone Visualization,OpenAI,Line Counter,Webhook Sink,Camera Calibration,Instance Segmentation Model,CogVLM,Time in Zone,Mask Visualization,Camera Focus,Twilio SMS Notification,Detections Consensus,SIFT Comparison,Stability AI Outpainting,Classification Label Visualization,Multi-Label Classification Model,LMM,Image Preprocessing,Time in Zone,Color Visualization,Morphological Transformation,JSON Parser,Depth Estimation,LMM For Classification,Detections Stabilizer,Ellipse Visualization,Instance Segmentation Model,Stability AI Image Generation,Segment Anything 2 Model,VLM as Classifier,VLM as Detector,Email Notification,Halo Visualization,Stitch Images,Local File Sink,Roboflow Custom Metadata,Absolute Static Crop,Google Gemini,Object Detection Model,VLM as Classifier,Image Contours,Pixelate Visualization,PTZ Tracking (ONVIF),CSV Formatter,Path Deviation,Label Visualization,Detections Merge,OpenAI,Byte Tracker,Identify Outliers,Contrast Equalization,Relative Static Crop
- outputs:
Detections Filter,Detections Combine,Detections Stitch,Distance Measurement,Detections Classes Replacement,Icon Visualization,Stability AI Inpainting,Path Deviation,Model Monitoring Inference Aggregator,Time in Zone,Circle Visualization,Roboflow Dataset Upload,Dot Visualization,Overlap Filter,Line Counter,Blur Visualization,Perspective Correction,Time in Zone,Roboflow Dataset Upload,Background Color Visualization,Florence-2 Model,Mask Visualization,Bounding Rectangle,Detections Consensus,Dynamic Crop,Crop Visualization,Trace Visualization,Time in Zone,Color Visualization,Velocity,Triangle Visualization,Detections Stabilizer,Ellipse Visualization,Model Comparison Visualization,Segment Anything 2 Model,Detection Offset,Polygon Visualization,Corner Visualization,Florence-2 Model,Halo Visualization,Size Measurement,Roboflow Custom Metadata,Line Counter,Bounding Box Visualization,Byte Tracker,Dynamic Zone,Stitch OCR Detections,Pixelate Visualization,PTZ Tracking (ONVIF),Path Deviation,Label Visualization,Detections Merge,Byte Tracker,Detections Transformation,Byte Tracker
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v1 has.
- input:
    - `image` (`image`): The input image for this step.
    - `metadata` (`video_metadata`): not available.
    - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Model predictions to calculate the time spent in zone for.
    - `zone` (`list_of_values`): Coordinates of the target zone.
    - `triggering_anchor` (`string`): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (`boolean`): If true, detections found outside of zone will have time reset.
- output:
    - `timed_detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Predictions in the form of an `sv.Detections(...)` object: detected bounding boxes if `object_detection_prediction`, or detected bounding boxes with segmentation masks if `instance_segmentation_prediction`.
Example JSON definition of step Time in Zone in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```