Time in Zone¶
v3¶
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v3 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
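For example, zone can be bound to a workflow input so the same workflow can be reused with different polygons. The sketch below is a minimal, illustrative workflow specification written as a Python dict: the detector and Byte Tracker steps, their type identifiers, the model_id, and all step/input names are assumptions chosen for illustration (only the time_in_zone@v3 step mirrors the properties documented on this page), and the tracker is included because this block requires tracked detections.

```python
# Minimal sketch of a workflow specification that parametrises the zone.
# Step names, the detector model_id, and the surrounding block identifiers
# are illustrative assumptions, not canonical values.
TIME_IN_ZONE_WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "zone"},  # list of [x, y] points
    ],
    "steps": [
        {
            # Any detector producing object_detection_prediction works here.
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            # Tracking is required so each object keeps a persistent tracker_id.
            "type": "roboflow_core/byte_tracker@v3",
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.detector.predictions",
        },
        {
            "type": "roboflow_core/time_in_zone@v3",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": "$inputs.zone",  # dynamic value supplied at runtime
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        },
    ],
}
```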
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs:
Detections Consensus, Path Deviation, Llama 3.2 Vision, SAM 3, Dimension Collapse, Perspective Correction, Velocity, Roboflow Custom Metadata, Detections Transformation, Segment Anything 2 Model, Dynamic Crop, Identify Outliers, Dynamic Zone, Clip Comparison, LMM, OpenAI, Size Measurement, Florence-2 Model, Single-Label Classification Model, SAM 3, Time in Zone, SIFT Comparison, Moondream2, Google Gemini, Florence-2 Model, LMM For Classification, Time in Zone, Object Detection Model, OCR Model, Anthropic Claude, Google Vision OCR, VLM as Detector, Local File Sink, EasyOCR, Line Counter, VLM as Detector, Email Notification, Detections Combine, Detections Filter, Byte Tracker, Overlap Filter, Roboflow Dataset Upload, Slack Notification, Keypoint Detection Model, Object Detection Model, Detections Stabilizer, Google Gemini, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Multi-Label Classification Model, Detections Merge, Twilio SMS Notification, Byte Tracker, Instance Segmentation Model, Seg Preview, VLM as Classifier, CSV Formatter, Motion Detection, OpenAI, Byte Tracker, Webhook Sink, PTZ Tracking (ONVIF), Detections Classes Replacement, Instance Segmentation Model, Detections Stitch, YOLO-World Model, Stitch OCR Detections, JSON Parser, Clip Comparison, CogVLM, Identify Changes, Template Matching, Path Deviation, Email Notification, VLM as Classifier, Buffer, OpenAI, Bounding Rectangle, SAM 3, Anthropic Claude, Time in Zone, Detection Offset, SIFT Comparison, OpenAI
- outputs:
Line Counter, Detections Consensus, Path Deviation, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, Detections Merge, Perspective Correction, Byte Tracker, Bounding Box Visualization, Pixelate Visualization, Distance Measurement, Trace Visualization, Roboflow Custom Metadata, Velocity, Byte Tracker, Detections Transformation, Segment Anything 2 Model, PTZ Tracking (ONVIF), Polygon Visualization, Dynamic Crop, Icon Visualization, Detections Classes Replacement, Model Comparison Visualization, Dynamic Zone, Detections Stitch, Size Measurement, Florence-2 Model, Mask Visualization, Stitch OCR Detections, Stability AI Inpainting, Time in Zone, Circle Visualization, Florence-2 Model, Ellipse Visualization, Time in Zone, Path Deviation, Crop Visualization, Color Visualization, Bounding Rectangle, Line Counter, Detections Combine, Label Visualization, Byte Tracker, Roboflow Dataset Upload, Overlap Filter, Triangle Visualization, Background Color Visualization, Detections Filter, Time in Zone, Detection Offset, Halo Visualization, Detections Stabilizer, Corner Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of
Time in Zone in version v3 are listed below.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v3",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
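Because elapsed time accumulates across frames, this block is typically run against a video source rather than a single image. Below is a hedged sketch of one way to drive such a workflow with the inference package's InferencePipeline; the exact init_with_workflow keyword arguments and the "time_in_zone" data key used to read back elapsed times are assumptions that should be verified against your installed version.

```python
# Sketch only: assumes the TIME_IN_ZONE_WORKFLOW dict from the earlier example
# and that the installed `inference` release exposes
# InferencePipeline.init_with_workflow with these keyword arguments.
from inference import InferencePipeline


def on_prediction(result: dict, video_frame) -> None:
    detections = result["timed_detections"]  # sv.Detections emitted by the block
    if detections.tracker_id is None:
        return
    # Where the elapsed time lives inside detections.data is an assumption
    # ("time_in_zone"); inspect detections.data on your version to confirm.
    times = detections.data.get("time_in_zone", [])
    for tracker_id, seconds in zip(detections.tracker_id, times):
        print(f"tracker {tracker_id}: {seconds:.1f}s in zone")


pipeline = InferencePipeline.init_with_workflow(
    video_reference="path/to/video.mp4",           # file path, RTSP URL, or camera id
    workflow_specification=TIME_IN_ZONE_WORKFLOW,  # spec sketched above
    workflows_parameters={"zone": [[100, 100], [100, 200], [300, 200], [300, 100]]},
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```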
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v2 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
Detections Consensus, Path Deviation, Llama 3.2 Vision, SAM 3, Dimension Collapse, Perspective Correction, Velocity, Roboflow Custom Metadata, Detections Transformation, Segment Anything 2 Model, Dynamic Crop, Identify Outliers, Dynamic Zone, Clip Comparison, LMM, OpenAI, Size Measurement, Florence-2 Model, Single-Label Classification Model, SAM 3, Time in Zone, SIFT Comparison, Moondream2, Google Gemini, Florence-2 Model, LMM For Classification, Time in Zone, Object Detection Model, OCR Model, Anthropic Claude, Google Vision OCR, VLM as Detector, Local File Sink, EasyOCR, Line Counter, VLM as Detector, Email Notification, Detections Combine, Detections Filter, Byte Tracker, Overlap Filter, Roboflow Dataset Upload, Slack Notification, Keypoint Detection Model, Object Detection Model, Detections Stabilizer, Google Gemini, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Multi-Label Classification Model, Detections Merge, Twilio SMS Notification, Byte Tracker, Instance Segmentation Model, Seg Preview, VLM as Classifier, CSV Formatter, Motion Detection, OpenAI, Byte Tracker, Webhook Sink, PTZ Tracking (ONVIF), Detections Classes Replacement, Instance Segmentation Model, Detections Stitch, YOLO-World Model, Stitch OCR Detections, JSON Parser, Clip Comparison, CogVLM, Identify Changes, Template Matching, Path Deviation, Email Notification, VLM as Classifier, Buffer, OpenAI, Bounding Rectangle, SAM 3, Anthropic Claude, Time in Zone, Detection Offset, SIFT Comparison, OpenAI
- outputs:
Line Counter, Detections Consensus, Path Deviation, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, Detections Merge, Perspective Correction, Byte Tracker, Bounding Box Visualization, Pixelate Visualization, Distance Measurement, Trace Visualization, Roboflow Custom Metadata, Velocity, Byte Tracker, Detections Transformation, Segment Anything 2 Model, PTZ Tracking (ONVIF), Polygon Visualization, Dynamic Crop, Icon Visualization, Detections Classes Replacement, Model Comparison Visualization, Dynamic Zone, Detections Stitch, Size Measurement, Florence-2 Model, Mask Visualization, Stitch OCR Detections, Stability AI Inpainting, Time in Zone, Circle Visualization, Florence-2 Model, Ellipse Visualization, Time in Zone, Path Deviation, Crop Visualization, Color Visualization, Bounding Rectangle, Line Counter, Detections Combine, Label Visualization, Byte Tracker, Roboflow Dataset Upload, Overlap Filter, Triangle Visualization, Background Color Visualization, Detections Filter, Time in Zone, Detection Offset, Halo Visualization, Detections Stabilizer, Corner Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of
Time in Zone in version v2 are listed below.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure time spent by objects in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
Detections Consensus, Path Deviation, Llama 3.2 Vision, Blur Visualization, SAM 3, Dimension Collapse, Perspective Correction, Polygon Zone Visualization, Bounding Box Visualization, QR Code Generator, Pixelate Visualization, Trace Visualization, Velocity, Roboflow Custom Metadata, Detections Transformation, Segment Anything 2 Model, Image Threshold, Polygon Visualization, Dynamic Crop, Icon Visualization, Image Slicer, Identify Outliers, Stability AI Outpainting, Model Comparison Visualization, Dynamic Zone, Clip Comparison, LMM, OpenAI, Classification Label Visualization, Stitch Images, Size Measurement, Mask Visualization, Florence-2 Model, Single-Label Classification Model, Relative Static Crop, Absolute Static Crop, SIFT Comparison, SAM 3, Time in Zone, Moondream2, Google Gemini, Circle Visualization, Florence-2 Model, LMM For Classification, Ellipse Visualization, Image Convert Grayscale, Time in Zone, Object Detection Model, OCR Model, Image Preprocessing, Color Visualization, Image Blur, Stability AI Image Generation, Google Vision OCR, Anthropic Claude, Keypoint Visualization, Camera Calibration, VLM as Detector, Local File Sink, EasyOCR, Image Slicer, Line Counter, VLM as Detector, Email Notification, Detections Combine, Detections Filter, Byte Tracker, Overlap Filter, Background Color Visualization, Triangle Visualization, Roboflow Dataset Upload, Slack Notification, Keypoint Detection Model, Halo Visualization, Object Detection Model, Corner Visualization, Detections Stabilizer, Google Gemini, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Dot Visualization, Image Contours, Detections Merge, Multi-Label Classification Model, Twilio SMS Notification, Byte Tracker, Instance Segmentation Model, Seg Preview, VLM as Classifier, CSV Formatter, Reference Path Visualization, Morphological Transformation, Motion Detection, OpenAI, Byte Tracker, Webhook Sink, PTZ Tracking (ONVIF), Detections Classes Replacement, Instance Segmentation Model, Detections Stitch, Contrast Equalization, Camera Focus, YOLO-World Model, Stitch OCR Detections, Stability AI Inpainting, JSON Parser, Clip Comparison, CogVLM, Line Counter Visualization, Identify Changes, Template Matching, Path Deviation, Email Notification, Crop Visualization, Grid Visualization, VLM as Classifier, Buffer, OpenAI, Bounding Rectangle, SIFT, Depth Estimation, Background Subtraction, Label Visualization, SAM 3, Anthropic Claude, Time in Zone, Detection Offset, SIFT Comparison, OpenAI
- outputs:
Line Counter, Detections Consensus, Path Deviation, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, Detections Merge, Perspective Correction, Byte Tracker, Bounding Box Visualization, Pixelate Visualization, Distance Measurement, Trace Visualization, Roboflow Custom Metadata, Velocity, Byte Tracker, Detections Transformation, Segment Anything 2 Model, PTZ Tracking (ONVIF), Polygon Visualization, Dynamic Crop, Icon Visualization, Detections Classes Replacement, Model Comparison Visualization, Dynamic Zone, Detections Stitch, Size Measurement, Florence-2 Model, Mask Visualization, Stitch OCR Detections, Stability AI Inpainting, Time in Zone, Circle Visualization, Florence-2 Model, Ellipse Visualization, Time in Zone, Path Deviation, Crop Visualization, Color Visualization, Bounding Rectangle, Line Counter, Detections Combine, Label Visualization, Byte Tracker, Roboflow Dataset Upload, Overlap Filter, Triangle Visualization, Background Color Visualization, Detections Filter, Time in Zone, Detection Offset, Halo Visualization, Detections Stabilizer, Corner Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of
Time in Zone in version v1 are listed below.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}