# Time in Zone

## v3
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
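Because the block expects tracked detections, it is typically placed downstream of a tracker block such as Byte Tracker, so that every detection carries a persistent tracker_id. Below is a minimal sketch of that ordering; the detector block type, model ID, and the tracker's `tracked_detections` output name are assumptions used for illustration, so check the documentation of the blocks you actually use.

```python
# Sketch (not a complete workflow): detections pass through a tracker before
# reaching Time in Zone, so each object carries a persistent tracker_id.
# Block identifiers, the model_id, and the "tracked_detections" output name
# are assumptions for illustration; verify them against your inference version.
steps = [
    {
        "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed detector block
        "name": "object_detection_model",
        "image": "$inputs.image",
        "model_id": "yolov8n-640",  # placeholder model
    },
    {
        "type": "roboflow_core/byte_tracker@v3",  # any tracker that assigns tracker_id works
        "name": "byte_tracker",
        "image": "$inputs.image",
        "detections": "$steps.object_detection_model.predictions",
    },
    {
        "type": "roboflow_core/time_in_zone@v3",
        "name": "time_in_zone",
        "image": "$inputs.image",
        "detections": "$steps.byte_tracker.tracked_detections",
        "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
        "triggering_anchor": "CENTER",
        "remove_out_of_zone_detections": True,
        "reset_out_of_zone_detections": True,
    },
]
```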
### Type identifier

Use the following identifier in the step "type" field: `roboflow_core/time_in_zone@v3` to add the block
as a step in your workflow.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
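For instance, because `zone` is marked ✅, it can be bound to a workflow input instead of being hard-coded in the step. A minimal sketch, assuming the WorkflowImage / WorkflowParameter input types and JsonField outputs commonly used in Workflows specifications, plus an assumed upstream tracker step:

```python
# Sketch of binding `zone` to a runtime parameter instead of a literal polygon.
# Input type names and the upstream tracker selector are assumptions for
# illustration; adjust them to your inference version and workflow layout.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "zone"},  # polygon supplied at runtime
    ],
    "steps": [
        {
            "type": "roboflow_core/time_in_zone@v3",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.byte_tracker.tracked_detections",  # assumed tracker step
            "zone": "$inputs.zone",  # dynamic binding enabled by the ✅ in the Refs column
            "triggering_anchor": "CENTER",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        }
    ],
}
```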
### Available Connections
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs:
  VLM as Detector, Byte Tracker, Google Vision OCR, Overlap Filter, SAM 3, Detections Stabilizer, SIFT Comparison, Detections Filter, LMM For Classification, VLM as Classifier, Detections Combine, VLM as Classifier, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Template Matching, Moondream2, Velocity, OCR Model, Florence-2 Model, Detections Transformation, EasyOCR, SIFT Comparison, Buffer, Florence-2 Model, Detection Offset, Slack Notification, Clip Comparison, Instance Segmentation Model, OpenAI, Byte Tracker, Line Counter, PTZ Tracking (ONVIF), Object Detection Model, Keypoint Detection Model, Google Gemini, JSON Parser, Email Notification, Llama 3.2 Vision, Byte Tracker, Dynamic Zone, YOLO-World Model, Size Measurement, Email Notification, Time in Zone, CogVLM, OpenAI, Roboflow Custom Metadata, Detections Stitch, Stitch OCR Detections, Time in Zone, CSV Formatter, VLM as Detector, OpenAI, Detections Classes Replacement, Perspective Correction, Twilio SMS Notification, Clip Comparison, Single-Label Classification Model, Seg Preview, Roboflow Dataset Upload, Roboflow Dataset Upload, Webhook Sink, Dimension Collapse, Instance Segmentation Model, Multi-Label Classification Model, Time in Zone, Detections Merge, Path Deviation, Anthropic Claude, LMM, Google Gemini, Identify Outliers, Dynamic Crop, Bounding Rectangle, Path Deviation, Detections Consensus, Local File Sink, Identify Changes, Object Detection Model
- outputs:
  Byte Tracker, Overlap Filter, Blur Visualization, Detections Stabilizer, Circle Visualization, Time in Zone, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Stability AI Inpainting, Detections Stitch, Roboflow Dataset Upload, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Halo Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Zone, Dynamic Crop, Bounding Rectangle, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Mask Visualization, Time in Zone, Roboflow Custom Metadata, Stitch OCR Detections
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v3 has.
Bindings
- input
    - `image` (image): The input image for this step.
    - `detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - `zone` (list_of_values): Coordinates of the target zone.
    - `triggering_anchor` (string): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (boolean): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (boolean): If true, detections found outside of zone will have time reset.
- output
    - `timed_detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
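Since `timed_detections` is an sv.Detections object, downstream Python code (for example a custom block or a video pipeline callback) can read the accumulated dwell time per object. A hedged sketch, assuming the block stores the per-detection time under a "time_in_zone" key in sv.Detections.data:

```python
import supervision as sv


def summarize_time_in_zone(timed_detections: sv.Detections) -> dict:
    """Map each tracker_id to the time (in seconds) it has spent in the zone.

    Assumes the block exposes per-detection dwell time under the
    "time_in_zone" key of sv.Detections.data; verify the key name
    against your inference version.
    """
    if timed_detections.tracker_id is None:
        return {}
    times = timed_detections.data.get("time_in_zone", [])
    return {
        int(tracker_id): float(seconds)
        for tracker_id, seconds in zip(timed_detections.tracker_id, times)
    }
```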
Example JSON definition of step Time in Zone in version v3
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [100, 100],
        [100, 200],
        [300, 200],
        [300, 100]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
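To see the step in context, the whole workflow can be run against a video source. A hedged sketch using InferencePipeline.init_with_workflow from the inference package; the workspace and workflow identifiers are placeholders, and a workflow_specification dict like the example above could be passed instead:

```python
# Sketch: run a workflow containing the Time in Zone step on a video file.
# Identifiers below are placeholders; the callback simply prints whatever the
# workflow exposes under a "timed_detections" output (if one is defined).
from inference import InferencePipeline


def on_prediction(result: dict, video_frame) -> None:
    timed = result.get("timed_detections")
    if timed is not None:
        print(timed)


pipeline = InferencePipeline.init_with_workflow(
    api_key="<YOUR_API_KEY>",
    workspace_name="<your_workspace>",   # placeholder
    workflow_id="<your_workflow_id>",    # placeholder
    video_reference="path/to/video.mp4",
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```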
## v2
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
### Type identifier

Use the following identifier in the step "type" field: `roboflow_core/time_in_zone@v2` to add the block
as a step in your workflow.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
### Available Connections
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
  VLM as Detector, Byte Tracker, Google Vision OCR, Overlap Filter, SAM 3, Detections Stabilizer, SIFT Comparison, Detections Filter, LMM For Classification, VLM as Classifier, Detections Combine, VLM as Classifier, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Template Matching, Moondream2, Velocity, OCR Model, Florence-2 Model, Detections Transformation, EasyOCR, SIFT Comparison, Buffer, Florence-2 Model, Detection Offset, Slack Notification, Clip Comparison, Instance Segmentation Model, OpenAI, Byte Tracker, Line Counter, PTZ Tracking (ONVIF), Object Detection Model, Keypoint Detection Model, Google Gemini, JSON Parser, Email Notification, Llama 3.2 Vision, Byte Tracker, Dynamic Zone, YOLO-World Model, Size Measurement, Email Notification, Time in Zone, CogVLM, OpenAI, Roboflow Custom Metadata, Detections Stitch, Stitch OCR Detections, Time in Zone, CSV Formatter, VLM as Detector, OpenAI, Detections Classes Replacement, Perspective Correction, Twilio SMS Notification, Clip Comparison, Single-Label Classification Model, Seg Preview, Roboflow Dataset Upload, Roboflow Dataset Upload, Webhook Sink, Dimension Collapse, Instance Segmentation Model, Multi-Label Classification Model, Time in Zone, Detections Merge, Path Deviation, Anthropic Claude, LMM, Google Gemini, Identify Outliers, Dynamic Crop, Bounding Rectangle, Path Deviation, Detections Consensus, Local File Sink, Identify Changes, Object Detection Model
- outputs:
  Byte Tracker, Overlap Filter, Blur Visualization, Detections Stabilizer, Circle Visualization, Time in Zone, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Stability AI Inpainting, Detections Stitch, Roboflow Dataset Upload, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Halo Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Zone, Dynamic Crop, Bounding Rectangle, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Mask Visualization, Time in Zone, Roboflow Custom Metadata, Stitch OCR Detections
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v2 has.
Bindings
- input
    - `image` (image): The input image for this step.
    - `detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - `zone` (list_of_values): Coordinates of the target zone.
    - `triggering_anchor` (string): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (boolean): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (boolean): If true, detections found outside of zone will have time reset.
- output
    - `timed_detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
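The `triggering_anchor` input accepts the name of a point on each bounding box. A minimal sketch, assuming (as is common in zone logic built on supervision) that the accepted values mirror the anchor names of the `sv.Position` enum:

```python
import supervision as sv

# Assumption: triggering_anchor values correspond to sv.Position anchor names
# such as "CENTER", "BOTTOM_CENTER" or "TOP_LEFT". Listing them shows the
# candidate strings to try in the block configuration.
print([position.value for position in sv.Position])
```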
Example JSON definition of step Time in Zone in version v2
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [100, 100],
        [100, 200],
        [300, 200],
        [300, 100]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
## v1
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
### Type identifier

Use the following identifier in the step "type" field: `roboflow_core/time_in_zone@v1` to add the block
as a step in your workflow.
### Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `zone` | `List[Any]` | Coordinates of the target zone. | ✅ |
| `triggering_anchor` | `str` | The point on the detection that must be inside the zone. | ✅ |
| `remove_out_of_zone_detections` | `bool` | If true, detections found outside of zone will be filtered out. | ✅ |
| `reset_out_of_zone_detections` | `bool` | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
### Available Connections
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
  VLM as Detector, Byte Tracker, Google Vision OCR, Overlap Filter, SAM 3, Classification Label Visualization, Detections Stabilizer, Circle Visualization, SIFT Comparison, Image Contours, Relative Static Crop, Detections Filter, Image Preprocessing, LMM For Classification, VLM as Classifier, Ellipse Visualization, Stitch Images, Triangle Visualization, Stability AI Inpainting, Detections Combine, QR Code Generator, Image Slicer, VLM as Classifier, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Template Matching, Moondream2, Velocity, OCR Model, Dot Visualization, Florence-2 Model, SIFT, Morphological Transformation, Detections Transformation, EasyOCR, Reference Path Visualization, Halo Visualization, SIFT Comparison, Buffer, Polygon Visualization, Image Slicer, Florence-2 Model, Detection Offset, Slack Notification, Clip Comparison, Image Convert Grayscale, Instance Segmentation Model, OpenAI, Byte Tracker, Color Visualization, Line Counter, PTZ Tracking (ONVIF), Object Detection Model, Keypoint Detection Model, Google Gemini, JSON Parser, Label Visualization, Email Notification, Llama 3.2 Vision, Trace Visualization, Byte Tracker, Dynamic Zone, YOLO-World Model, Size Measurement, Email Notification, Corner Visualization, Mask Visualization, Time in Zone, CogVLM, Stability AI Outpainting, OpenAI, Roboflow Custom Metadata, Detections Stitch, Stitch OCR Detections, Blur Visualization, Time in Zone, CSV Formatter, Crop Visualization, VLM as Detector, OpenAI, Grid Visualization, Detections Classes Replacement, Perspective Correction, Twilio SMS Notification, Absolute Static Crop, Clip Comparison, Single-Label Classification Model, Seg Preview, Contrast Equalization, Roboflow Dataset Upload, Roboflow Dataset Upload, Polygon Zone Visualization, Stability AI Image Generation, Webhook Sink, Depth Estimation, Dimension Collapse, Bounding Box Visualization, Camera Focus, Line Counter Visualization, Instance Segmentation Model, Multi-Label Classification Model, Icon Visualization, Image Blur, Time in Zone, Pixelate Visualization, Image Threshold, Detections Merge, Path Deviation, Anthropic Claude, LMM, Google Gemini, Identify Outliers, Dynamic Crop, Bounding Rectangle, Path Deviation, Detections Consensus, Model Comparison Visualization, Camera Calibration, Local File Sink, Keypoint Visualization, Identify Changes, Object Detection Model
- outputs:
  Byte Tracker, Overlap Filter, Blur Visualization, Detections Stabilizer, Circle Visualization, Time in Zone, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Stability AI Inpainting, Detections Stitch, Roboflow Dataset Upload, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Halo Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Zone, Dynamic Crop, Bounding Rectangle, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Mask Visualization, Time in Zone, Roboflow Custom Metadata, Stitch OCR Detections
### Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v1 has.
Bindings
- input
    - `image` (image): The input image for this step.
    - `metadata` (video_metadata): not available.
    - `detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - `zone` (list_of_values): Coordinates of the target zone.
    - `triggering_anchor` (string): The point on the detection that must be inside the zone.
    - `remove_out_of_zone_detections` (boolean): If true, detections found outside of zone will be filtered out.
    - `reset_out_of_zone_detections` (boolean): If true, detections found outside of zone will have time reset.
- output
    - `timed_detections` (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
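In this v1 example the zone is bound to a workflow input ("$inputs.zones"), so the polygon has to be provided when the workflow runs. A hedged sketch of supplying it through InferencePipeline, assuming the workflows_parameters argument of init_with_workflow; credentials and identifiers are placeholders:

```python
# Sketch: supplying the "zones" input referenced by "$inputs.zones" at runtime.
# The workflows_parameters argument name is an assumption based on the
# InferencePipeline workflow API; verify it against your inference version.
from inference import InferencePipeline

pipeline = InferencePipeline.init_with_workflow(
    api_key="<YOUR_API_KEY>",
    workspace_name="<your_workspace>",   # placeholder
    workflow_id="<your_workflow_id>",    # placeholder
    video_reference="path/to/video.mp4",
    workflows_parameters={
        "zones": [[100, 100], [100, 200], [300, 200], [300, 100]],  # polygon for the zone input
    },
    on_prediction=lambda result, video_frame: print(result.get("timed_detections")),
)
pipeline.start()
pipeline.join()
```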