Time in Zone¶
v3¶
Class: TimeInZoneBlockV3
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v3 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone are filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v3
.
- inputs:
Keypoint Detection Model
,Detections Filter
,VLM as Classifier
,YOLO-World Model
,Webhook Sink
,Detections Transformation
,Detections Consensus
,Time in Zone
,Time in Zone
,Path Deviation
,VLM as Classifier
,Identify Outliers
,Llama 3.2 Vision
,LMM
,Object Detection Model
,Twilio SMS Notification
,Florence-2 Model
,VLM as Detector
,Dynamic Crop
,Detections Stitch
,Instance Segmentation Model
,Florence-2 Model
,OCR Model
,PTZ Tracking (ONVIF)
,LMM For Classification
,Time in Zone
,Template Matching
,Size Measurement
,Clip Comparison
,Anthropic Claude
,Slack Notification
,Roboflow Dataset Upload
,Roboflow Custom Metadata
,CogVLM
,Byte Tracker
,OpenAI
,Email Notification
,Segment Anything 2 Model
,Detections Classes Replacement
,Google Gemini
,SIFT Comparison
,SIFT Comparison
,Line Counter
,Dimension Collapse
,Byte Tracker
,Buffer
,Perspective Correction
,Byte Tracker
,JSON Parser
,Overlap Filter
,Path Deviation
,Instance Segmentation Model
,Model Monitoring Inference Aggregator
,Local File Sink
,OpenAI
,Clip Comparison
,Velocity
,Bounding Rectangle
,Roboflow Dataset Upload
,OpenAI
,CSV Formatter
,Stitch OCR Detections
,VLM as Detector
,Detections Stabilizer
,Dynamic Zone
,Moondream2
,Detection Offset
,Multi-Label Classification Model
,Detections Merge
,Google Vision OCR
,Identify Changes
,Single-Label Classification Model
,Object Detection Model
- outputs:
Crop Visualization
,Detections Filter
,Ellipse Visualization
,Stability AI Inpainting
,Blur Visualization
,Segment Anything 2 Model
,Circle Visualization
,Pixelate Visualization
,Detections Classes Replacement
,Model Comparison Visualization
,Detections Transformation
,Detections Consensus
,Bounding Box Visualization
,Time in Zone
,Time in Zone
,Path Deviation
,Line Counter
,Background Color Visualization
,Byte Tracker
,Perspective Correction
,Label Visualization
,Byte Tracker
,Triangle Visualization
,Detections Stitch
,Florence-2 Model
,Dynamic Crop
,Overlap Filter
,Florence-2 Model
,Path Deviation
,Model Monitoring Inference Aggregator
,Color Visualization
,Halo Visualization
,PTZ Tracking (ONVIF)
.md),Corner Visualization
,Roboflow Dataset Upload
,Velocity
,Mask Visualization
,Bounding Rectangle
,Time in Zone
,Stitch OCR Detections
,Polygon Visualization
,Dot Visualization
,Size Measurement
,Detections Stabilizer
,Roboflow Dataset Upload
,Icon Visualization
,Roboflow Custom Metadata
,Dynamic Zone
,Distance Measurement
,Detection Offset
,Trace Visualization
,Line Counter
,Detections Merge
,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Time in Zone
in version v3
has.
Bindings
- input:
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone are filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone have their time reset.
- output:
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions with detected bounding boxes, as an sv.Detections(...) object, for object_detection_prediction; or predictions with detected bounding boxes and segmentation masks, as an sv.Detections(...) object, for instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v3
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v3",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [
    [100, 100],
    [100, 200],
    [300, 200],
    [300, 100]
  ],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
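Conceptually, the block reduces each detection's bounding box to its configured triggering anchor and checks whether that point falls inside the zone polygon. The sketch below illustrates that geometry with a plain ray-casting point-in-polygon test; the function names are hypothetical and this is not the block's actual implementation, which operates on tracked sv.Detections.

```python
def anchor_point(xyxy, anchor="CENTER"):
    # Reduce a bounding box (x1, y1, x2, y2) to the single point that is
    # tested against the zone. Only two anchors shown for illustration.
    x1, y1, x2, y2 = xyxy
    if anchor == "CENTER":
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    if anchor == "BOTTOM_CENTER":
        return ((x1 + x2) / 2, y2)
    raise ValueError(f"unsupported anchor: {anchor}")

def point_in_zone(point, zone):
    # Classic ray-casting test: count how many polygon edges a horizontal
    # ray from `point` crosses; an odd count means the point is inside.
    x, y = point
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Zone from the example JSON above.
zone = [(100, 100), (100, 200), (300, 200), (300, 100)]
print(point_in_zone(anchor_point((120, 120, 180, 180)), zone))  # True
print(point_in_zone(anchor_point((0, 0, 50, 50)), zone))        # False
```

With triggering_anchor set to BOTTOM_CENTER instead of CENTER, the same box can produce a different in/out result, which is why the anchor choice matters for e.g. people standing at a zone boundary.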
v2¶
Class: TimeInZoneBlockV2
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v2 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone are filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v2
.
- inputs:
Keypoint Detection Model
,Detections Filter
,VLM as Classifier
,YOLO-World Model
,Webhook Sink
,Detections Transformation
,Detections Consensus
,Time in Zone
,Time in Zone
,Path Deviation
,VLM as Classifier
,Identify Outliers
,Llama 3.2 Vision
,LMM
,Object Detection Model
,Twilio SMS Notification
,Florence-2 Model
,VLM as Detector
,Dynamic Crop
,Detections Stitch
,Instance Segmentation Model
,Florence-2 Model
,OCR Model
,PTZ Tracking (ONVIF)
,LMM For Classification
,Time in Zone
,Template Matching
,Size Measurement
,Clip Comparison
,Anthropic Claude
,Slack Notification
,Roboflow Dataset Upload
,Roboflow Custom Metadata
,CogVLM
,Byte Tracker
,OpenAI
,Email Notification
,Segment Anything 2 Model
,Detections Classes Replacement
,Google Gemini
,SIFT Comparison
,SIFT Comparison
,Line Counter
,Dimension Collapse
,Byte Tracker
,Buffer
,Perspective Correction
,Byte Tracker
,JSON Parser
,Overlap Filter
,Path Deviation
,Instance Segmentation Model
,Model Monitoring Inference Aggregator
,Local File Sink
,OpenAI
,Clip Comparison
,Velocity
,Bounding Rectangle
,Roboflow Dataset Upload
,OpenAI
,CSV Formatter
,Stitch OCR Detections
,VLM as Detector
,Detections Stabilizer
,Dynamic Zone
,Moondream2
,Detection Offset
,Multi-Label Classification Model
,Detections Merge
,Google Vision OCR
,Identify Changes
,Single-Label Classification Model
,Object Detection Model
- outputs:
Crop Visualization
,Detections Filter
,Ellipse Visualization
,Stability AI Inpainting
,Blur Visualization
,Segment Anything 2 Model
,Circle Visualization
,Pixelate Visualization
,Detections Classes Replacement
,Model Comparison Visualization
,Detections Transformation
,Detections Consensus
,Bounding Box Visualization
,Time in Zone
,Time in Zone
,Path Deviation
,Line Counter
,Background Color Visualization
,Byte Tracker
,Perspective Correction
,Label Visualization
,Byte Tracker
,Triangle Visualization
,Detections Stitch
,Florence-2 Model
,Dynamic Crop
,Overlap Filter
,Florence-2 Model
,Path Deviation
,Model Monitoring Inference Aggregator
,Color Visualization
,Halo Visualization
,PTZ Tracking (ONVIF)
,Corner Visualization
,Roboflow Dataset Upload
,Velocity
,Mask Visualization
,Bounding Rectangle
,Time in Zone
,Stitch OCR Detections
,Polygon Visualization
,Dot Visualization
,Size Measurement
,Detections Stabilizer
,Roboflow Dataset Upload
,Icon Visualization
,Roboflow Custom Metadata
,Dynamic Zone
,Distance Measurement
,Detection Offset
,Trace Visualization
,Line Counter
,Detections Merge
,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Time in Zone
in version v2
has.
Bindings
- input:
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone are filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone have their time reset.
- output:
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions with detected bounding boxes, as an sv.Detections(...) object, for object_detection_prediction; or predictions with detected bounding boxes and segmentation masks, as an sv.Detections(...) object, for instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v2
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v2",
  "image": "$inputs.image",
  "detections": "$steps.object_detection_model.predictions",
  "zone": [
    [100, 100],
    [100, 200],
    [300, 200],
    [300, 100]
  ],
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
v1¶
Class: TimeInZoneBlockV1
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned, which persists between frames).
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone are filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone have their time reset. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone
in version v1
.
- inputs:
Crop Visualization
,Keypoint Detection Model
,Detections Filter
,Ellipse Visualization
,VLM as Classifier
,Stability AI Inpainting
,Stability AI Image Generation
,Blur Visualization
,Circle Visualization
,Pixelate Visualization
,YOLO-World Model
,Model Comparison Visualization
,Webhook Sink
,Stability AI Outpainting
,Detections Transformation
,Detections Consensus
,Bounding Box Visualization
,Time in Zone
,Time in Zone
,Path Deviation
,Identify Changes
,Grid Visualization
,VLM as Classifier
,Identify Outliers
,Background Color Visualization
,Llama 3.2 Vision
,Image Contours
,LMM
,Object Detection Model
,Twilio SMS Notification
,Label Visualization
,Triangle Visualization
,Dynamic Crop
,VLM as Detector
,Detections Stitch
,Google Vision OCR
,Florence-2 Model
,Instance Segmentation Model
,Florence-2 Model
,Keypoint Visualization
,OCR Model
,Halo Visualization
,PTZ Tracking (ONVIF)
,Corner Visualization
,Line Counter Visualization
,LMM For Classification
,Time in Zone
,Template Matching
,Dot Visualization
,Size Measurement
,Clip Comparison
,Anthropic Claude
,Slack Notification
,Roboflow Dataset Upload
,Icon Visualization
,Roboflow Custom Metadata
,Depth Estimation
,CogVLM
,Byte Tracker
,Polygon Zone Visualization
,Stitch Images
,Image Slicer
,OpenAI
,Email Notification
,Relative Static Crop
,Segment Anything 2 Model
,Detections Classes Replacement
,Google Gemini
,SIFT Comparison
,SIFT Comparison
,Line Counter
,Dimension Collapse
,Byte Tracker
,Buffer
,Image Slicer
,Image Threshold
,Perspective Correction
,Byte Tracker
,Camera Focus
,JSON Parser
,Camera Calibration
,Classification Label Visualization
,Reference Path Visualization
,Color Visualization
,Overlap Filter
,Path Deviation
,Instance Segmentation Model
,Model Monitoring Inference Aggregator
,Local File Sink
,OpenAI
,Clip Comparison
,Velocity
,Mask Visualization
,QR Code Generator
,Bounding Rectangle
,OpenAI
,Roboflow Dataset Upload
,CSV Formatter
,Stitch OCR Detections
,SIFT
,Polygon Visualization
,Image Convert Grayscale
,VLM as Detector
,Detections Stabilizer
,Dynamic Zone
,Moondream2
,Detection Offset
,Image Blur
,Multi-Label Classification Model
,Trace Visualization
,Detections Merge
,Absolute Static Crop
,Image Preprocessing
,Single-Label Classification Model
,Object Detection Model
- outputs:
Crop Visualization
,Detections Filter
,Ellipse Visualization
,Stability AI Inpainting
,Blur Visualization
,Segment Anything 2 Model
,Circle Visualization
,Pixelate Visualization
,Detections Classes Replacement
,Model Comparison Visualization
,Detections Transformation
,Detections Consensus
,Bounding Box Visualization
,Time in Zone
,Time in Zone
,Path Deviation
,Line Counter
,Background Color Visualization
,Byte Tracker
,Perspective Correction
,Label Visualization
,Byte Tracker
,Triangle Visualization
,Detections Stitch
,Florence-2 Model
,Dynamic Crop
,Overlap Filter
,Florence-2 Model
,Path Deviation
,Model Monitoring Inference Aggregator
,Color Visualization
,Halo Visualization
,PTZ Tracking (ONVIF)
,Corner Visualization
,Roboflow Dataset Upload
,Velocity
,Mask Visualization
,Bounding Rectangle
,Time in Zone
,Stitch OCR Detections
,Polygon Visualization
,Dot Visualization
,Size Measurement
,Detections Stabilizer
,Roboflow Dataset Upload
,Icon Visualization
,Roboflow Custom Metadata
,Dynamic Zone
,Distance Measurement
,Detection Offset
,Trace Visualization
,Line Counter
,Detections Merge
,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Time in Zone
in version v1
has.
Bindings
- input:
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone are filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone have their time reset.
- output:
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions with detected bounding boxes, as an sv.Detections(...) object, for object_detection_prediction; or predictions with detected bounding boxes and segmentation masks, as an sv.Detections(...) object, for instance_segmentation_prediction.
Example JSON definition of step Time in Zone
in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/time_in_zone@v1",
  "image": "$inputs.image",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "zone": "$inputs.zones",
  "triggering_anchor": "CENTER",
  "remove_out_of_zone_detections": true,
  "reset_out_of_zone_detections": true
}
```
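All three versions require tracked detections because dwell time is accumulated per tracker_id across frames: an object's clock starts when its track first appears inside the zone, and (with reset_out_of_zone_detections enabled) restarts if the track leaves and re-enters. A minimal sketch of that bookkeeping, using a hypothetical ZoneTimer helper rather than the block's actual code, could look like:

```python
class ZoneTimer:
    # Accumulates per-object dwell time, keyed by tracker_id. Illustrative
    # only; the real block operates on sv.Detections and frame metadata.
    def __init__(self, reset_out_of_zone_detections=True):
        self.reset_out_of_zone = reset_out_of_zone_detections
        self._entered = {}  # tracker_id -> timestamp when first seen in zone

    def update(self, tracker_ids_in_zone, now):
        # `tracker_ids_in_zone` are the ids whose triggering anchor is
        # currently inside the zone; `now` is the frame timestamp in seconds.
        in_zone = set(tracker_ids_in_zone)
        for tid in in_zone:
            self._entered.setdefault(tid, now)
        if self.reset_out_of_zone:
            # Forget objects that left, so their clock restarts on re-entry.
            for tid in list(self._entered):
                if tid not in in_zone:
                    del self._entered[tid]
        # Time in zone for every object currently inside.
        return {tid: now - self._entered[tid] for tid in in_zone}

timer = ZoneTimer()
print(timer.update([1, 2], now=0.0))  # {1: 0.0, 2: 0.0}
print(timer.update([1], now=1.5))     # {1: 1.5}  (track 2 left; its clock resets)
print(timer.update([1, 2], now=2.0))  # {1: 2.0, 2: 0.0}
```

This is also why unstable tracker ids inflate counts: if a track id changes mid-stay, the object is treated as a new arrival and its accumulated time is lost.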