Time in Zone¶
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned
that persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v2
to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
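As an illustration of such dynamic bindings, the zone property can be bound to a runtime input instead of a hard-coded polygon. Below is a minimal sketch of a workflow definition in Python; the input name `zone_coordinates` and the step reference `$steps.tracker.tracked_detections` are hypothetical placeholders, not names mandated by the block:

```python
# Sketch: a workflow specification in which the "zone" property of the
# Time in Zone step is parametrised with a runtime input rather than a
# hard-coded polygon. The input name "zone_coordinates" and the upstream
# step reference are hypothetical.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "zone_coordinates"},
    ],
    "steps": [
        {
            "name": "time_in_zone",
            "type": "roboflow_core/time_in_zone@v2",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            # Dynamic binding: the zone polygon is supplied at runtime,
            # which is what the ✅ in the Refs column allows.
            "zone": "$inputs.zone_coordinates",
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        }
    ],
}
```

At execution time, the caller would then pass the polygon (e.g. `[[100, 100], [100, 200], [300, 200], [300, 100]]`) as the value of `zone_coordinates`.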
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs: Object Detection Model, Detections Merge, LMM For Classification, Google Gemini, Detections Classes Replacement, Byte Tracker, OpenAI, YOLO-World Model, Clip Comparison, Detections Stabilizer, VLM as Classifier, Florence-2 Model, Google Vision OCR, Detections Transformation, Dimension Collapse, Velocity, Keypoint Detection Model, Single-Label Classification Model, Roboflow Dataset Upload, Line Counter, Detections Consensus, Perspective Correction, Dynamic Crop, Local File Sink, Llama 3.2 Vision, Instance Segmentation Model, Detections Filter, Anthropic Claude, Roboflow Custom Metadata, Bounding Rectangle, Moondream2, Roboflow Dataset Upload, Identify Changes, Byte Tracker, Identify Outliers, SIFT Comparison, Object Detection Model, Multi-Label Classification Model, Stitch OCR Detections, VLM as Classifier, Webhook Sink, Buffer, Instance Segmentation Model, Path Deviation, Model Monitoring Inference Aggregator, Slack Notification, Time in Zone, Segment Anything 2 Model, OpenAI, Email Notification, SIFT Comparison, Dynamic Zone, CogVLM, Path Deviation, PTZ Tracking (ONVIF), Overlap Filter, Twilio SMS Notification, Byte Tracker, JSON Parser, Template Matching, OCR Model, CSV Formatter, Detections Stitch, Time in Zone, Florence-2 Model, VLM as Detector, OpenAI, Detection Offset, Size Measurement, VLM as Detector, LMM, Clip Comparison
- outputs: Byte Tracker, Crop Visualization, Stability AI Inpainting, Detections Merge, Stitch OCR Detections, Distance Measurement, Byte Tracker, Detections Classes Replacement, Line Counter, Detections Stabilizer, Path Deviation, Florence-2 Model, Model Monitoring Inference Aggregator, Time in Zone, Segment Anything 2 Model, Detections Transformation, Blur Visualization, Velocity, Dynamic Zone, Trace Visualization, Roboflow Dataset Upload, Line Counter, Path Deviation, PTZ Tracking (ONVIF), Mask Visualization, Overlap Filter, Byte Tracker, Color Visualization, Detections Consensus, Perspective Correction, Detections Stitch, Icon Visualization, Dot Visualization, Model Comparison Visualization, Time in Zone, Dynamic Crop, Pixelate Visualization, Florence-2 Model, Bounding Box Visualization, Bounding Rectangle, Halo Visualization, Detections Filter, Ellipse Visualization, Background Color Visualization, Detection Offset, Circle Visualization, Size Measurement, Corner Visualization, Triangle Visualization, Roboflow Custom Metadata, Roboflow Dataset Upload, Label Visualization, Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v2 has.
Bindings

- input
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v2
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v2",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [
    [100, 100],
    [100, 200],
    [300, 200],
    [300, 100]
  ],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
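The core behaviour can be approximated in plain Python: each frame, test whether a tracked detection's triggering anchor falls inside the zone polygon, and accumulate per-tracker time. The sketch below is a simplified illustration of the idea, not the block's actual implementation; the helper names are hypothetical:

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: is the anchor point inside the zone polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def update_time_in_zone(
    timers: Dict[int, float],
    anchors: Dict[int, Point],   # tracker_id -> triggering anchor point
    zone: List[Point],
    frame_duration: float,
    reset_out_of_zone: bool = True,
) -> Dict[int, float]:
    """Accumulate time for trackers whose anchor is inside the zone."""
    for tracker_id, anchor in anchors.items():
        if point_in_polygon(anchor, zone):
            timers[tracker_id] = timers.get(tracker_id, 0.0) + frame_duration
        elif reset_out_of_zone:
            # Mirrors reset_out_of_zone_detections: leaving the zone
            # resets the accumulated time for that tracker.
            timers[tracker_id] = 0.0
    return timers
```

For the rectangular zone from the JSON example above, a tracker anchored at (150, 150) accumulates time each frame, while one anchored at (400, 150) stays at zero (or is dropped entirely when remove_out_of_zone_detections is true).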
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned
that persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v1
to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Coordinates of the target zone. | ✅ |
| triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
| remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
| reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs: Crop Visualization, Grid Visualization, Stability AI Inpainting, Object Detection Model, Detections Merge, LMM For Classification, Google Gemini, Detections Classes Replacement, Byte Tracker, Image Blur, YOLO-World Model, Clip Comparison, Detections Stabilizer, OpenAI, VLM as Classifier, SIFT, Florence-2 Model, Google Vision OCR, Detections Transformation, Dimension Collapse, Velocity, Keypoint Detection Model, Single-Label Classification Model, Roboflow Dataset Upload, Line Counter, Detections Consensus, Stability AI Outpainting, Perspective Correction, Line Counter Visualization, Dot Visualization, Dynamic Crop, Local File Sink, Llama 3.2 Vision, Instance Segmentation Model, Bounding Box Visualization, Depth Estimation, Image Contours, Detections Filter, Background Color Visualization, Ellipse Visualization, Anthropic Claude, Camera Focus, Roboflow Custom Metadata, Color Visualization, Bounding Rectangle, Stitch Images, Moondream2, Roboflow Dataset Upload, Identify Changes, Label Visualization, Relative Static Crop, Byte Tracker, Identify Outliers, SIFT Comparison, Object Detection Model, Reference Path Visualization, Multi-Label Classification Model, Stitch OCR Detections, VLM as Classifier, Webhook Sink, Buffer, Instance Segmentation Model, Keypoint Visualization, Path Deviation, Model Monitoring Inference Aggregator, Slack Notification, Time in Zone, Segment Anything 2 Model, OpenAI, Email Notification, Blur Visualization, Image Threshold, Image Slicer, SIFT Comparison, Dynamic Zone, Image Preprocessing, Stability AI Image Generation, Trace Visualization, Classification Label Visualization, CogVLM, Mask Visualization, Image Convert Grayscale, Path Deviation, PTZ Tracking (ONVIF), Overlap Filter, Twilio SMS Notification, Byte Tracker, JSON Parser, Polygon Zone Visualization, Template Matching, OCR Model, CSV Formatter, Camera Calibration, Detections Stitch, Icon Visualization, Model Comparison Visualization, Pixelate Visualization, QR Code Generator, Time in Zone, Florence-2 Model, Image Slicer, Halo Visualization, Absolute Static Crop, VLM as Detector, OpenAI, Detection Offset, Circle Visualization, Size Measurement, Corner Visualization, Triangle Visualization, VLM as Detector, LMM, Clip Comparison, Polygon Visualization
- outputs: Byte Tracker, Crop Visualization, Stability AI Inpainting, Detections Merge, Stitch OCR Detections, Distance Measurement, Byte Tracker, Detections Classes Replacement, Line Counter, Detections Stabilizer, Path Deviation, Florence-2 Model, Model Monitoring Inference Aggregator, Time in Zone, Segment Anything 2 Model, Detections Transformation, Blur Visualization, Velocity, Dynamic Zone, Trace Visualization, Roboflow Dataset Upload, Line Counter, Path Deviation, PTZ Tracking (ONVIF), Mask Visualization, Overlap Filter, Byte Tracker, Color Visualization, Detections Consensus, Perspective Correction, Detections Stitch, Icon Visualization, Dot Visualization, Model Comparison Visualization, Time in Zone, Dynamic Crop, Pixelate Visualization, Florence-2 Model, Bounding Box Visualization, Bounding Rectangle, Halo Visualization, Detections Filter, Ellipse Visualization, Background Color Visualization, Detection Offset, Circle Visualization, Size Measurement, Corner Visualization, Triangle Visualization, Roboflow Custom Metadata, Roboflow Dataset Upload, Label Visualization, Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v1 has.
Bindings

- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v1",
  "image": "$inputs.image",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "zone": "$inputs.zones",
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
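For either version, the zone must be a polygon given as a list of [x, y] vertex pairs. A small pre-flight validator can catch malformed zones before the workflow runs; this helper is hypothetical and not part of the block's API (the block performs its own validation at runtime):

```python
def validate_zone(zone) -> None:
    """Raise ValueError if zone is not a polygon given as [x, y] pairs.

    Hypothetical pre-flight check, not part of the block's API.
    """
    if not isinstance(zone, list) or len(zone) < 3:
        raise ValueError("zone must be a list of at least 3 points")
    for point in zone:
        if not (isinstance(point, (list, tuple)) and len(point) == 2):
            raise ValueError(f"each zone point must be an [x, y] pair, got {point!r}")
        if not all(isinstance(coord, (int, float)) for coord in point):
            raise ValueError(f"zone coordinates must be numeric, got {point!r}")

# The polygon from the v2 example passes validation:
validate_zone([[100, 100], [100, 200], [300, 200], [300, 100]])
```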