Time in Zone¶
v3¶
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
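Conceptually, the block combines a point-in-polygon test on each detection's anchor with per-tracker_id entry timestamps. The sketch below is an illustration of that technique only, not the block's actual implementation (which operates on sv.Detections); it models the reset_out_of_zone_detections behaviour and all names are illustrative:

```python
def point_in_polygon(point, polygon):
    # Ray casting: a point is inside if a horizontal ray drawn from it
    # crosses the polygon's edges an odd number of times.
    x, y = point
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


class TimeInZone:
    """Per-object dwell-time bookkeeping, keyed by tracker_id."""

    def __init__(self, zone, reset_out_of_zone_detections=True):
        self.zone = zone
        self.reset_out_of_zone_detections = reset_out_of_zone_detections
        self.entered_at = {}  # tracker_id -> timestamp of zone entry

    def update(self, anchors_by_id, timestamp):
        """anchors_by_id maps tracker_id -> anchor point for one frame;
        returns tracker_id -> seconds spent in the zone so far."""
        times = {}
        for tracker_id, anchor in anchors_by_id.items():
            if point_in_polygon(anchor, self.zone):
                # First frame inside the zone records the entry time.
                self.entered_at.setdefault(tracker_id, timestamp)
                times[tracker_id] = timestamp - self.entered_at[tracker_id]
            else:
                if self.reset_out_of_zone_detections:
                    # Leaving the zone discards the entry time, so the
                    # timer restarts from zero on re-entry.
                    self.entered_at.pop(tracker_id, None)
                times[tracker_id] = 0.0
        return times
```

With zone [[100, 100], [100, 200], [300, 200], [300, 100]], an object whose anchor stays at (150, 150) from t=0 to t=2 reports 2.0 seconds in zone; once it leaves, its timer resets, mirroring reset_out_of_zone_detections.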
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v3 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
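The triggering_anchor names a point on the bounding box (e.g. CENTER, BOTTOM_CENTER), following the anchor names used by supervision. As an illustration of what such an anchor resolves to, here is a hedged sketch (the helper name and the subset of anchors are mine, not the block's API), computing anchors from an (x_min, y_min, x_max, y_max) box:

```python
def anchor_point(box, anchor="CENTER"):
    """Resolve a named anchor to an (x, y) point on a bounding box.

    `box` is (x_min, y_min, x_max, y_max). The anchor names mirror
    common supervision Position values, but this helper is purely
    illustrative, not the block's actual implementation.
    """
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    points = {
        "CENTER": (cx, cy),
        "TOP_CENTER": (cx, y_min),
        "BOTTOM_CENTER": (cx, y_max),
        "TOP_LEFT": (x_min, y_min),
        "BOTTOM_RIGHT": (x_max, y_max),
    }
    return points[anchor]
```

For ground-plane zones (e.g. people standing in an area), BOTTOM_CENTER is often a better anchor than CENTER, since it tracks where the object touches the ground.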
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs: Size Measurement, VLM as Detector, JSON Parser, Byte Tracker, Object Detection Model, Clip Comparison, LMM For Classification, VLM as Classifier, Google Vision OCR, Slack Notification, Dimension Collapse, Seg Preview, Instance Segmentation Model, CSV Formatter, Identify Changes, OCR Model, Dynamic Zone, VLM as Classifier, Segment Anything 2 Model, Detections Classes Replacement, OpenAI, Single-Label Classification Model, Detection Offset, Time in Zone, Roboflow Custom Metadata, CogVLM, YOLO-World Model, OpenAI, SIFT Comparison, Florence-2 Model, Roboflow Dataset Upload, Stitch OCR Detections, Path Deviation, PTZ Tracking (ONVIF), Multi-Label Classification Model, Roboflow Dataset Upload, Template Matching, OpenAI, Line Counter, Path Deviation, Time in Zone, Velocity, Instance Segmentation Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Llama 3.2 Vision, Bounding Rectangle, Local File Sink, Time in Zone, Clip Comparison, Identify Outliers, VLM as Detector, Object Detection Model, Overlap Filter, Moondream2, Twilio SMS Notification, Webhook Sink, Dynamic Crop, Detections Consensus, Byte Tracker, Email Notification, Buffer, Detections Combine, Detections Filter, SIFT Comparison, Detections Merge, Detections Transformation, Google Gemini, Perspective Correction, Keypoint Detection Model, EasyOCR, Anthropic Claude, LMM, Detections Stitch, Florence-2 Model, Byte Tracker
- outputs: Size Measurement, Line Counter, Line Counter, Path Deviation, Velocity, Byte Tracker, Model Monitoring Inference Aggregator, Time in Zone, Polygon Visualization, Detections Stabilizer, Icon Visualization, Bounding Rectangle, Time in Zone, Blur Visualization, Trace Visualization, Color Visualization, Halo Visualization, Overlap Filter, Bounding Box Visualization, Dynamic Zone, Triangle Visualization, Background Color Visualization, Segment Anything 2 Model, Dynamic Crop, Dot Visualization, Pixelate Visualization, Stability AI Inpainting, Byte Tracker, Detections Consensus, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, Detections Combine, Time in Zone, Detection Offset, Crop Visualization, Roboflow Custom Metadata, Detections Filter, Mask Visualization, Detections Merge, Florence-2 Model, Detections Transformation, Roboflow Dataset Upload, Stitch OCR Detections, Perspective Correction, Path Deviation, Distance Measurement, Label Visualization, PTZ Tracking (ONVIF), Model Comparison Visualization, Circle Visualization, Roboflow Dataset Upload, Detections Stitch, Florence-2 Model, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone in version v3 has.
Bindings
- input:
    - image (image): The input image for this step.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
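A downstream step consumes the output through the usual $steps reference. For instance, a visualization step could bind to the block's timed_detections; the step name and the visualization block's exact fields below are illustrative, only the $steps.<step_name>.timed_detections reference is the point:

```json
{
  "name": "label_visualization",
  "type": "roboflow_core/label_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.time_in_zone.timed_detections"
}
```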
Example JSON definition of step Time in Zone in version v3
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v3",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v2 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs: Size Measurement, VLM as Detector, JSON Parser, Byte Tracker, Object Detection Model, Clip Comparison, LMM For Classification, VLM as Classifier, Google Vision OCR, Slack Notification, Dimension Collapse, Seg Preview, Instance Segmentation Model, CSV Formatter, Identify Changes, OCR Model, Dynamic Zone, VLM as Classifier, Segment Anything 2 Model, Detections Classes Replacement, OpenAI, Single-Label Classification Model, Detection Offset, Time in Zone, Roboflow Custom Metadata, CogVLM, YOLO-World Model, OpenAI, SIFT Comparison, Florence-2 Model, Roboflow Dataset Upload, Stitch OCR Detections, Path Deviation, PTZ Tracking (ONVIF), Multi-Label Classification Model, Roboflow Dataset Upload, Template Matching, OpenAI, Line Counter, Path Deviation, Time in Zone, Velocity, Instance Segmentation Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Llama 3.2 Vision, Bounding Rectangle, Local File Sink, Time in Zone, Clip Comparison, Identify Outliers, VLM as Detector, Object Detection Model, Overlap Filter, Moondream2, Twilio SMS Notification, Webhook Sink, Dynamic Crop, Detections Consensus, Byte Tracker, Email Notification, Buffer, Detections Combine, Detections Filter, SIFT Comparison, Detections Merge, Detections Transformation, Google Gemini, Perspective Correction, Keypoint Detection Model, EasyOCR, Anthropic Claude, LMM, Detections Stitch, Florence-2 Model, Byte Tracker
- outputs: Size Measurement, Line Counter, Line Counter, Path Deviation, Velocity, Byte Tracker, Model Monitoring Inference Aggregator, Time in Zone, Polygon Visualization, Detections Stabilizer, Icon Visualization, Bounding Rectangle, Time in Zone, Blur Visualization, Trace Visualization, Color Visualization, Halo Visualization, Overlap Filter, Bounding Box Visualization, Dynamic Zone, Triangle Visualization, Background Color Visualization, Segment Anything 2 Model, Dynamic Crop, Dot Visualization, Pixelate Visualization, Stability AI Inpainting, Byte Tracker, Detections Consensus, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, Detections Combine, Time in Zone, Detection Offset, Crop Visualization, Roboflow Custom Metadata, Detections Filter, Mask Visualization, Detections Merge, Florence-2 Model, Detections Transformation, Roboflow Dataset Upload, Stitch OCR Detections, Perspective Correction, Path Deviation, Distance Measurement, Label Visualization, PTZ Tracking (ONVIF), Model Comparison Visualization, Circle Visualization, Roboflow Dataset Upload, Detections Stitch, Florence-2 Model, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone in version v2 has.
Bindings
- input:
    - image (image): The input image for this step.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v2",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs: Size Measurement, Absolute Static Crop, VLM as Detector, Relative Static Crop, JSON Parser, Keypoint Visualization, Byte Tracker, Object Detection Model, Clip Comparison, LMM For Classification, VLM as Classifier, Google Vision OCR, Slack Notification, Dimension Collapse, Trace Visualization, Color Visualization, Seg Preview, Instance Segmentation Model, Polygon Zone Visualization, Camera Focus, Halo Visualization, CSV Formatter, OCR Model, Identify Changes, Camera Calibration, Dynamic Zone, VLM as Classifier, Triangle Visualization, Segment Anything 2 Model, Stability AI Inpainting, Image Threshold, Reference Path Visualization, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, OpenAI, Single-Label Classification Model, Detection Offset, Morphological Transformation, Time in Zone, Roboflow Custom Metadata, Grid Visualization, Image Preprocessing, CogVLM, Line Counter Visualization, YOLO-World Model, OpenAI, SIFT Comparison, Florence-2 Model, Roboflow Dataset Upload, Stitch OCR Detections, Path Deviation, Label Visualization, PTZ Tracking (ONVIF), Model Comparison Visualization, Multi-Label Classification Model, Roboflow Dataset Upload, Template Matching, OpenAI, Line Counter, Path Deviation, Polygon Visualization, Time in Zone, Velocity, Instance Segmentation Model, Detections Stabilizer, Llama 3.2 Vision, Model Monitoring Inference Aggregator, Icon Visualization, Bounding Rectangle, Local File Sink, Time in Zone, Blur Visualization, Image Contours, Clip Comparison, Identify Outliers, VLM as Detector, Object Detection Model, Overlap Filter, Moondream2, Bounding Box Visualization, SIFT, Twilio SMS Notification, Classification Label Visualization, Background Color Visualization, Webhook Sink, Dynamic Crop, Dot Visualization, Pixelate Visualization, Detections Consensus, Byte Tracker, Email Notification, Buffer, Detections Combine, Image Slicer, Stitch Images, Crop Visualization, Detections Filter, Mask Visualization, SIFT Comparison, Detections Merge, QR Code Generator, Detections Transformation, Depth Estimation, Image Slicer, Google Gemini, Perspective Correction, Image Convert Grayscale, Stability AI Image Generation, Keypoint Detection Model, EasyOCR, Contrast Equalization, Anthropic Claude, Image Blur, Circle Visualization, Stability AI Outpainting, LMM, Detections Stitch, Florence-2 Model, Byte Tracker
- outputs: Size Measurement, Line Counter, Line Counter, Path Deviation, Velocity, Byte Tracker, Model Monitoring Inference Aggregator, Time in Zone, Polygon Visualization, Detections Stabilizer, Icon Visualization, Bounding Rectangle, Time in Zone, Blur Visualization, Trace Visualization, Color Visualization, Halo Visualization, Overlap Filter, Bounding Box Visualization, Dynamic Zone, Triangle Visualization, Background Color Visualization, Segment Anything 2 Model, Dynamic Crop, Dot Visualization, Pixelate Visualization, Stability AI Inpainting, Byte Tracker, Detections Consensus, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, Detections Combine, Time in Zone, Detection Offset, Crop Visualization, Roboflow Custom Metadata, Detections Filter, Mask Visualization, Detections Merge, Florence-2 Model, Detections Transformation, Roboflow Dataset Upload, Stitch OCR Detections, Perspective Correction, Path Deviation, Distance Measurement, Label Visualization, PTZ Tracking (ONVIF), Model Comparison Visualization, Circle Visualization, Roboflow Dataset Upload, Detections Stitch, Florence-2 Model, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone in version v1 has.
Bindings
- input:
    - image (image): The input image for this step.
    - metadata (video_metadata): not available.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v1",
  "image": "$inputs.image",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "zone": "$inputs.zones",
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
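The v1 example above binds zone to a workflow input ($inputs.zones), so the polygon can be supplied at runtime. Such an input would be declared in the workflow specification's inputs list; a hedged sketch, assuming the standard WorkflowImage and WorkflowParameter input types (the default_value shown is illustrative):

```json
{
  "inputs": [
    { "type": "WorkflowImage", "name": "image" },
    {
      "type": "WorkflowParameter",
      "name": "zones",
      "default_value": [[100, 100], [100, 200], [300, 200], [300, 100]]
    }
  ]
}
```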