Time in Zone¶
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked: each object must have a unique tracker_id assigned,
which persists between frames.
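To illustrate why a stable tracker_id matters, the per-object bookkeeping can be sketched as a dictionary keyed by tracker_id. This is a hypothetical minimal sketch of the idea, not the block's actual implementation:

```python
class TimeInZoneAccumulator:
    """Minimal sketch: accumulate time in zone per stable tracker_id."""

    def __init__(self):
        self._entered_at = {}  # tracker_id -> timestamp of zone entry

    def update(self, tracker_id, in_zone, now):
        """Return seconds the object has spent in the zone so far."""
        if in_zone:
            # the first frame inside the zone starts the clock
            self._entered_at.setdefault(tracker_id, now)
            return now - self._entered_at[tracker_id]
        # object is outside the zone: drop its entry, mirroring the
        # reset_out_of_zone_detections behaviour
        self._entered_at.pop(tracker_id, None)
        return 0.0

accumulator = TimeInZoneAccumulator()
accumulator.update(7, in_zone=True, now=0.0)            # object 7 enters the zone
elapsed = accumulator.update(7, in_zone=True, now=1.5)  # 1.5 s inside
```

If the tracker_id changed between frames, each frame would look like a new object entering the zone and the accumulated time would be lost, which is why the block requires tracked detections.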
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
SIFT Comparison
,Detections Classes Replacement
,Dimension Collapse
,Clip Comparison
,Local File Sink
,VLM as Classifier
,Slack Notification
,Twilio SMS Notification
,Detections Stabilizer
,Google Vision OCR
,SIFT Comparison
,OCR Model
,CSV Formatter
,OpenAI
,Email Notification
,Instance Segmentation Model
,Size Measurement
,Path Deviation
,Bounding Rectangle
,Detections Stitch
,Instance Segmentation Model
,Velocity
,Byte Tracker
,LMM For Classification
,Overlap Filter
,Roboflow Custom Metadata
,Keypoint Detection Model
,Object Detection Model
,OpenAI
,Model Monitoring Inference Aggregator
,Buffer
,Llama 3.2 Vision
,Perspective Correction
,Clip Comparison
,Detections Filter
,OpenAI
,Time in Zone
,Florence-2 Model
,CogVLM
,Google Gemini
,VLM as Detector
,Dynamic Crop
,YOLO-World Model
,Roboflow Dataset Upload
,Detections Merge
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Single-Label Classification Model
,JSON Parser
,Detections Transformation
,VLM as Detector
,Detections Consensus
,Anthropic Claude
,Template Matching
,Segment Anything 2 Model
,Line Counter
,Object Detection Model
,Webhook Sink
,Identify Outliers
,Byte Tracker
,Path Deviation
,Dynamic Zone
,Multi-Label Classification Model
,Florence-2 Model
,Byte Tracker
,Moondream2
,Identify Changes
,Time in Zone
,Roboflow Dataset Upload
,Detection Offset
,LMM
,VLM as Classifier
- outputs:
Triangle Visualization
,Time in Zone
,Florence-2 Model
,Detections Classes Replacement
,Background Color Visualization
,Roboflow Dataset Upload
,Dynamic Crop
,Halo Visualization
,Detections Merge
,Detections Stabilizer
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Crop Visualization
,Line Counter
,Detections Transformation
,Detections Filter
,Detections Consensus
,Size Measurement
,Blur Visualization
,Ellipse Visualization
,Color Visualization
,Path Deviation
,Segment Anything 2 Model
,Line Counter
,Bounding Rectangle
,Bounding Box Visualization
,Detections Stitch
,Model Comparison Visualization
,Corner Visualization
,Velocity
,Byte Tracker
,Byte Tracker
,Pixelate Visualization
,Overlap Filter
,Path Deviation
,Dynamic Zone
,Roboflow Custom Metadata
,Florence-2 Model
,Distance Measurement
,Mask Visualization
,Byte Tracker
,Model Monitoring Inference Aggregator
,Label Visualization
,Stability AI Inpainting
,Perspective Correction
,Trace Visualization
,Time in Zone
,Roboflow Dataset Upload
,Circle Visualization
,Dot Visualization
,Detection Offset
,Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
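The zone in the example above is a rectangle, and "triggering_anchor": "CENTER" means the centre of each bounding box must fall inside it. That geometric test can be sketched with a plain ray-casting point-in-polygon check; this is a self-contained illustration of the idea, not necessarily how the block implements it:

```python
def center_anchor(xyxy):
    """CENTER triggering anchor of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = xyxy
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def point_in_polygon(point, polygon):
    """Ray-casting test: is the point inside the polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does the edge cross the horizontal ray going right from the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# the zone from the example JSON definition above
zone = [[100, 100], [100, 200], [300, 200], [300, 100]]
print(point_in_polygon(center_anchor((150, 120, 250, 180)), zone))  # True
print(point_in_polygon(center_anchor((0, 0, 50, 50)), zone))        # False
```

A detection whose anchor falls outside the zone is then filtered out or has its timer reset, depending on remove_out_of_zone_detections and reset_out_of_zone_detections.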
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked: each object must have a unique tracker_id assigned,
which persists between frames.
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
Triangle Visualization
,SIFT Comparison
,Detections Classes Replacement
,Dimension Collapse
,Clip Comparison
,Local File Sink
,VLM as Classifier
,Slack Notification
,Twilio SMS Notification
,Detections Stabilizer
,SIFT Comparison
,Google Vision OCR
,OCR Model
,CSV Formatter
,OpenAI
,Email Notification
,Instance Segmentation Model
,Size Measurement
,Camera Calibration
,Ellipse Visualization
,Path Deviation
,Bounding Rectangle
,Detections Stitch
,Instance Segmentation Model
,Model Comparison Visualization
,Corner Visualization
,Velocity
,Byte Tracker
,Pixelate Visualization
,LMM For Classification
,Overlap Filter
,Reference Path Visualization
,Roboflow Custom Metadata
,Keypoint Detection Model
,Object Detection Model
,OpenAI
,Mask Visualization
,Label Visualization
,Model Monitoring Inference Aggregator
,SIFT
,Image Convert Grayscale
,Stability AI Outpainting
,Polygon Zone Visualization
,Stability AI Inpainting
,Buffer
,Llama 3.2 Vision
,Perspective Correction
,Image Preprocessing
,Clip Comparison
,Camera Focus
,Dot Visualization
,Depth Estimation
,Keypoint Visualization
,Polygon Visualization
,Detections Filter
,OpenAI
,Image Slicer
,Image Blur
,Image Contours
,Time in Zone
,Florence-2 Model
,CogVLM
,Google Gemini
,Background Color Visualization
,VLM as Detector
,Dynamic Crop
,YOLO-World Model
,Roboflow Dataset Upload
,Halo Visualization
,Classification Label Visualization
,Detections Merge
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Crop Visualization
,Single-Label Classification Model
,JSON Parser
,Detections Transformation
,VLM as Detector
,Detections Consensus
,Anthropic Claude
,Blur Visualization
,Color Visualization
,Template Matching
,Segment Anything 2 Model
,Line Counter
,Relative Static Crop
,Object Detection Model
,Bounding Box Visualization
,Webhook Sink
,Identify Outliers
,Grid Visualization
,Stitch Images
,Byte Tracker
,Path Deviation
,Dynamic Zone
,Image Slicer
,Line Counter Visualization
,Multi-Label Classification Model
,Florence-2 Model
,Byte Tracker
,Stability AI Image Generation
,Moondream2
,Image Threshold
,Identify Changes
,Trace Visualization
,Time in Zone
,Roboflow Dataset Upload
,Absolute Static Crop
,Circle Visualization
,Detection Offset
,LMM
,VLM as Classifier
- outputs:
Triangle Visualization
,Time in Zone
,Florence-2 Model
,Detections Classes Replacement
,Background Color Visualization
,Roboflow Dataset Upload
,Dynamic Crop
,Halo Visualization
,Detections Merge
,Detections Stabilizer
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Crop Visualization
,Line Counter
,Detections Transformation
,Detections Filter
,Detections Consensus
,Size Measurement
,Blur Visualization
,Ellipse Visualization
,Color Visualization
,Path Deviation
,Segment Anything 2 Model
,Line Counter
,Bounding Rectangle
,Bounding Box Visualization
,Detections Stitch
,Model Comparison Visualization
,Corner Visualization
,Velocity
,Byte Tracker
,Byte Tracker
,Pixelate Visualization
,Overlap Filter
,Path Deviation
,Dynamic Zone
,Roboflow Custom Metadata
,Florence-2 Model
,Distance Measurement
,Mask Visualization
,Byte Tracker
,Model Monitoring Inference Aggregator
,Label Visualization
,Stability AI Inpainting
,Perspective Correction
,Trace Visualization
,Time in Zone
,Roboflow Dataset Upload
,Circle Visualization
,Dot Visualization
,Detection Offset
,Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Time in Zone in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}