Time in Zone¶
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
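Because the block only works on tracked detections, it typically sits downstream of a tracker step such as Byte Tracker. Below is a minimal sketch of such a pipeline as a plain workflow-definition dict; the step names ("model", "tracker", "time_in_zone"), the model ID, and the detector/tracker type identifiers are illustrative assumptions, not taken from this page:

```python
# Sketch: detector -> tracker -> Time in Zone. Step names and the
# detector/tracker type identifiers are illustrative assumptions.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/byte_tracker@v2",
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.model.predictions",
        },
        {
            "type": "roboflow_core/time_in_zone@v2",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        },
    ],
}
```

The key point is the wiring: Time in Zone consumes the tracker's output, not the raw model predictions, so every detection it sees carries a persistent tracker_id.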
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
zone | List[Any] | Coordinates of the target zone. | ✅
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅
remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅
reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow
runtime. See Bindings for more info.
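In practice, a property marked ✅ accepts either a literal value or a selector string that is resolved at runtime. A small sketch of the two forms for the zone property (the workflow input name "zone" and the tracker step name are illustrative assumptions):

```python
# Two equivalent ways to supply `zone`: a literal polygon, or a selector
# bound to a workflow input that is resolved at runtime. The input name
# "zone" and the step name "tracker" are illustrative assumptions.
static_step = {
    "type": "roboflow_core/time_in_zone@v2",
    "name": "time_in_zone",
    "image": "$inputs.image",
    "detections": "$steps.tracker.tracked_detections",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
}

# Same step, but the zone polygon is supplied per-run via a workflow input.
dynamic_step = {**static_step, "zone": "$inputs.zone"}

# `name` (Refs: ❌) must stay a literal string; it cannot be a selector.
```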
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v2
.
- inputs: Twilio SMS Notification, Slack Notification, VLM as Detector, VLM as Classifier, LMM, Path Deviation, Detections Merge, Google Gemini, Detection Offset, Roboflow Dataset Upload, Line Counter, OpenAI, Detections Consensus, Webhook Sink, Detections Classes Replacement, VLM as Classifier, Dynamic Zone, Instance Segmentation Model, CogVLM, Email Notification, Object Detection Model, Single-Label Classification Model, Llama 3.2 Vision, Google Vision OCR, Roboflow Dataset Upload, Byte Tracker, Size Measurement, JSON Parser, Object Detection Model, Clip Comparison, Local File Sink, Detections Transformation, Anthropic Claude, Identify Outliers, Detections Stitch, OCR Model, YOLO-World Model, Stitch OCR Detections, Perspective Correction, Byte Tracker, OpenAI, Path Deviation, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, CSV Formatter, Byte Tracker, Florence-2 Model, Instance Segmentation Model, Keypoint Detection Model, Buffer, SIFT Comparison, Florence-2 Model, Model Monitoring Inference Aggregator, Bounding Rectangle, Velocity, SIFT Comparison, VLM as Detector, Identify Changes, Multi-Label Classification Model, LMM For Classification, Roboflow Custom Metadata, Segment Anything 2 Model, Detections Stabilizer, Dimension Collapse
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Roboflow Dataset Upload, Model Comparison Visualization, Stability AI Inpainting, Pixelate Visualization, Line Counter, Stitch OCR Detections, Perspective Correction, Detections Consensus, Byte Tracker, Path Deviation, Detections Stabilizer, Distance Measurement, Mask Visualization, Time in Zone, Detections Filter, Color Visualization, Time in Zone, Dynamic Crop, Halo Visualization, Polygon Visualization, Byte Tracker, Detections Classes Replacement, Florence-2 Model, Dynamic Zone, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Bounding Rectangle, Velocity, Roboflow Dataset Upload, Roboflow Custom Metadata, Byte Tracker, Line Counter, Segment Anything 2 Model, Ellipse Visualization, Size Measurement, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone
in version v2
has.
Bindings
- input:
    - image (image): The input image for this step.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction in the form of an sv.Detections(...) object: detected bounding boxes if object_detection_prediction, or detected bounding boxes with segmentation masks if instance_segmentation_prediction.
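Conceptually, the triggering_anchor resolves each detection to a single point (e.g. the box center) that is then tested against the zone polygon. A minimal ray-casting sketch of that geometric check, not the block's actual implementation:

```python
def point_in_polygon(point, polygon):
    """Ray-casting (even-odd rule) point-in-polygon test."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def center(box):
    """CENTER anchor of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

zone = [[100, 100], [100, 200], [300, 200], [300, 100]]

print(point_in_polygon(center((150, 120, 250, 180)), zone))  # True (inside)
print(point_in_polygon(center((0, 0, 50, 50)), zone))        # False (outside)
```

With remove_out_of_zone_detections enabled, detections whose anchor fails this test would be dropped from the output; with reset_out_of_zone_detections enabled, their accumulated time would be reset.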
Example JSON definition of step Time in Zone
in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
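The tracking requirement exists because time accumulation is keyed by tracker_id: the block must recognize the same object across frames to accumulate its dwell time. A simplified sketch of that per-object bookkeeping (an illustrative model, not the block's actual implementation):

```python
# Simplified per-object time bookkeeping keyed by tracker_id.
# Illustrative sketch only, not the block's actual implementation.
class ZoneTimer:
    def __init__(self):
        self._entered_at = {}  # tracker_id -> timestamp of zone entry

    def update(self, tracker_id, in_zone, now, reset_out_of_zone=True):
        """Return seconds the object has spent in the zone so far."""
        if in_zone:
            # Record entry time on first sighting inside the zone.
            self._entered_at.setdefault(tracker_id, now)
            return now - self._entered_at[tracker_id]
        if reset_out_of_zone:
            # Leaving the zone resets the accumulated time.
            self._entered_at.pop(tracker_id, None)
        return 0.0

timer = ZoneTimer()
timer.update(7, in_zone=True, now=0.0)            # object 7 enters the zone
elapsed = timer.update(7, in_zone=True, now=2.5)  # elapsed == 2.5
```

Without a persistent tracker_id, each frame's detection would look like a new object and the accumulated time could never exceed a single frame.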
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
zone | List[Any] | Coordinates of the target zone. | ✅
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅
remove_out_of_zone_detections | bool | If true, detections found outside of zone will be filtered out. | ✅
reset_out_of_zone_detections | bool | If true, detections found outside of zone will have time reset. | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow
runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v1
.
- inputs: Circle Visualization, Background Color Visualization, Corner Visualization, Twilio SMS Notification, Slack Notification, VLM as Detector, VLM as Classifier, LMM, Polygon Zone Visualization, Camera Focus, Image Slicer, Image Blur, Dot Visualization, Path Deviation, Detections Merge, Detection Offset, Google Gemini, Roboflow Dataset Upload, Stability AI Inpainting, Pixelate Visualization, Line Counter, OpenAI, Detections Consensus, Image Convert Grayscale, Absolute Static Crop, Stability AI Image Generation, Webhook Sink, Color Visualization, Image Threshold, Halo Visualization, Polygon Visualization, Detections Classes Replacement, VLM as Classifier, Dynamic Zone, Instance Segmentation Model, CogVLM, Camera Calibration, Email Notification, Object Detection Model, Classification Label Visualization, Single-Label Classification Model, Llama 3.2 Vision, Google Vision OCR, Roboflow Dataset Upload, Byte Tracker, Ellipse Visualization, Size Measurement, Bounding Box Visualization, JSON Parser, Object Detection Model, Line Counter Visualization, Image Preprocessing, Trace Visualization, Label Visualization, Clip Comparison, Local File Sink, Image Slicer, Detections Transformation, Anthropic Claude, Crop Visualization, Detections Stitch, OCR Model, Identify Outliers, YOLO-World Model, Relative Static Crop, Model Comparison Visualization, Stitch OCR Detections, Perspective Correction, Byte Tracker, OpenAI, Path Deviation, Mask Visualization, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, CSV Formatter, Byte Tracker, Florence-2 Model, Instance Segmentation Model, Keypoint Detection Model, Image Contours, Buffer, SIFT, SIFT Comparison, Reference Path Visualization, Florence-2 Model, Triangle Visualization, Bounding Rectangle, Model Monitoring Inference Aggregator, Velocity, SIFT Comparison, VLM as Detector, Keypoint Visualization, Identify Changes, Multi-Label Classification Model, LMM For Classification, Roboflow Custom Metadata, Grid Visualization, Segment Anything 2 Model, Detections Stabilizer, Stitch Images, Dimension Collapse, Blur Visualization
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Roboflow Dataset Upload, Model Comparison Visualization, Stability AI Inpainting, Pixelate Visualization, Line Counter, Stitch OCR Detections, Perspective Correction, Detections Consensus, Byte Tracker, Path Deviation, Detections Stabilizer, Distance Measurement, Mask Visualization, Time in Zone, Detections Filter, Color Visualization, Time in Zone, Dynamic Crop, Halo Visualization, Polygon Visualization, Byte Tracker, Detections Classes Replacement, Florence-2 Model, Dynamic Zone, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Bounding Rectangle, Velocity, Roboflow Dataset Upload, Roboflow Custom Metadata, Byte Tracker, Line Counter, Segment Anything 2 Model, Ellipse Visualization, Size Measurement, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone
in version v1
has.
Bindings
- input:
    - image (image): The input image for this step.
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of zone will have time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction in the form of an sv.Detections(...) object: detected bounding boxes if object_detection_prediction, or detected bounding boxes with segmentation masks if instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```