Time in Zone¶
v3¶
Class: TimeInZoneBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
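To illustrate why a persistent tracker_id matters, here is a minimal sketch of per-tracker time accumulation. This is illustrative only, not the block's actual implementation; the function name, data shapes, and fixed-fps assumption are all ours:

```python
def accumulate_time_in_zone(frames, fps):
    """Accumulate seconds spent in zone per tracker.

    frames: list of per-frame lists of (tracker_id, in_zone) pairs,
            where in_zone is whether the detection's anchor fell
            inside the zone on that frame.
    fps:    assumed-constant frame rate of the video.
    """
    seconds_in_zone = {}  # tracker_id -> accumulated seconds
    for frame in frames:
        for tracker_id, in_zone in frame:
            if in_zone:
                # Without a stable tracker_id, time could not be
                # attributed to the same object across frames.
                seconds_in_zone[tracker_id] = (
                    seconds_in_zone.get(tracker_id, 0.0) + 1.0 / fps
                )
    return seconds_in_zone
```

At 10 fps, a tracker seen in-zone on two frames accumulates 0.2 s; one seen in-zone on a single frame accumulates 0.1 s.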
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v3
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs: Byte Tracker, LMM For Classification, Time in Zone, VLM as Classifier, Identify Outliers, Size Measurement, Perspective Correction, Clip Comparison, CSV Formatter, LMM, PTZ Tracking (ONVIF), Florence-2 Model, Florence-2 Model, OpenAI, Multi-Label Classification Model, Detections Combine, Detection Offset, CogVLM, EasyOCR, Byte Tracker, Stitch OCR Detections, VLM as Detector, Twilio SMS Notification, VLM as Classifier, Keypoint Detection Model, Google Vision OCR, Identify Changes, Roboflow Dataset Upload, Email Notification, Instance Segmentation Model, Clip Comparison, Template Matching, OCR Model, Detections Stabilizer, Llama 3.2 Vision, Instance Segmentation Model, Dynamic Crop, Roboflow Dataset Upload, Local File Sink, Webhook Sink, OpenAI, Detections Stitch, SIFT Comparison, Time in Zone, Velocity, Object Detection Model, Detections Transformation, Buffer, Byte Tracker, Overlap Filter, Dimension Collapse, SIFT Comparison, Roboflow Custom Metadata, Object Detection Model, Dynamic Zone, Model Monitoring Inference Aggregator, Line Counter, Anthropic Claude, Time in Zone, Path Deviation, OpenAI, Slack Notification, JSON Parser, Detections Filter, YOLO-World Model, Detections Classes Replacement, Bounding Rectangle, VLM as Detector, Google Gemini, Detections Merge, Path Deviation, Single-Label Classification Model, Detections Consensus, Moondream2, Segment Anything 2 Model
- outputs: Byte Tracker, Distance Measurement, Time in Zone, Dot Visualization, Detections Stitch, Size Measurement, Time in Zone, Blur Visualization, Perspective Correction, Velocity, Trace Visualization, Corner Visualization, Detections Transformation, Byte Tracker, Overlap Filter, PTZ Tracking (ONVIF), Florence-2 Model, Crop Visualization, Florence-2 Model, Halo Visualization, Roboflow Custom Metadata, Detections Combine, Detection Offset, Model Comparison Visualization, Pixelate Visualization, Dynamic Zone, Byte Tracker, Model Monitoring Inference Aggregator, Line Counter, Time in Zone, Stitch OCR Detections, Polygon Visualization, Path Deviation, Triangle Visualization, Roboflow Dataset Upload, Detections Filter, Detections Classes Replacement, Bounding Rectangle, Circle Visualization, Detections Stabilizer, Bounding Box Visualization, Line Counter, Label Visualization, Stability AI Inpainting, Icon Visualization, Ellipse Visualization, Dynamic Crop, Color Visualization, Roboflow Dataset Upload, Mask Visualization, Detections Merge, Segment Anything 2 Model, Path Deviation, Detections Consensus, Background Color Visualization
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v3 has.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v3",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
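The zone in the example above is a rectangle from (100, 100) to (300, 200), and a detection counts as inside when its triggering anchor (here CENTER) falls within that polygon. A standard ray-casting point-in-polygon test is one way to sketch that check; this is illustrative only and not the block's actual implementation:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` (x, y) inside `polygon`
    (a list of (x, y) vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point;
        # an odd number of crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(100, 100), (100, 200), (300, 200), (300, 100)]
# CENTER anchor of a hypothetical box with corners (150, 120) and (250, 180):
anchor = ((150 + 250) / 2, (120 + 180) / 2)  # (200.0, 150.0) -> inside
```

Other triggering_anchor values (e.g. a bottom-center point) simply change which point of the bounding box is tested against the zone.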
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs: Byte Tracker, LMM For Classification, Time in Zone, VLM as Classifier, Identify Outliers, Size Measurement, Perspective Correction, Clip Comparison, CSV Formatter, LMM, PTZ Tracking (ONVIF), Florence-2 Model, Florence-2 Model, OpenAI, Multi-Label Classification Model, Detections Combine, Detection Offset, CogVLM, EasyOCR, Byte Tracker, Stitch OCR Detections, VLM as Detector, Twilio SMS Notification, VLM as Classifier, Keypoint Detection Model, Google Vision OCR, Identify Changes, Roboflow Dataset Upload, Email Notification, Instance Segmentation Model, Clip Comparison, Template Matching, OCR Model, Detections Stabilizer, Llama 3.2 Vision, Instance Segmentation Model, Dynamic Crop, Roboflow Dataset Upload, Local File Sink, Webhook Sink, OpenAI, Detections Stitch, SIFT Comparison, Time in Zone, Velocity, Object Detection Model, Detections Transformation, Buffer, Byte Tracker, Overlap Filter, Dimension Collapse, SIFT Comparison, Roboflow Custom Metadata, Object Detection Model, Dynamic Zone, Model Monitoring Inference Aggregator, Line Counter, Anthropic Claude, Time in Zone, Path Deviation, OpenAI, Slack Notification, JSON Parser, Detections Filter, YOLO-World Model, Detections Classes Replacement, Bounding Rectangle, VLM as Detector, Google Gemini, Detections Merge, Path Deviation, Single-Label Classification Model, Detections Consensus, Moondream2, Segment Anything 2 Model
- outputs: Byte Tracker, Distance Measurement, Time in Zone, Dot Visualization, Detections Stitch, Size Measurement, Time in Zone, Blur Visualization, Perspective Correction, Velocity, Trace Visualization, Corner Visualization, Detections Transformation, Byte Tracker, Overlap Filter, PTZ Tracking (ONVIF), Florence-2 Model, Crop Visualization, Florence-2 Model, Halo Visualization, Roboflow Custom Metadata, Detections Combine, Detection Offset, Model Comparison Visualization, Pixelate Visualization, Dynamic Zone, Byte Tracker, Model Monitoring Inference Aggregator, Line Counter, Time in Zone, Stitch OCR Detections, Polygon Visualization, Path Deviation, Triangle Visualization, Roboflow Dataset Upload, Detections Filter, Detections Classes Replacement, Bounding Rectangle, Circle Visualization, Detections Stabilizer, Bounding Box Visualization, Line Counter, Label Visualization, Stability AI Inpainting, Icon Visualization, Ellipse Visualization, Dynamic Crop, Color Visualization, Roboflow Dataset Upload, Mask Visualization, Detections Merge, Segment Anything 2 Model, Path Deviation, Detections Consensus, Background Color Visualization
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v2 has.
Bindings
- input
  - image (image): The input image for this step.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock
is an analytics block designed to measure the time objects spend inside a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/time_in_zone@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs: Byte Tracker, Polygon Zone Visualization, Time in Zone, LMM For Classification, VLM as Classifier, Identify Outliers, Dot Visualization, Morphological Transformation, Size Measurement, Blur Visualization, Perspective Correction, Clip Comparison, CSV Formatter, Corner Visualization, LMM, PTZ Tracking (ONVIF), Florence-2 Model, Grid Visualization, Image Threshold, Florence-2 Model, Halo Visualization, OpenAI, Multi-Label Classification Model, Detections Combine, Detection Offset, CogVLM, EasyOCR, Byte Tracker, Line Counter Visualization, Stitch OCR Detections, VLM as Detector, Stability AI Outpainting, Twilio SMS Notification, VLM as Classifier, Keypoint Detection Model, Google Vision OCR, Identify Changes, Camera Focus, Roboflow Dataset Upload, SIFT, Instance Segmentation Model, Clip Comparison, Image Slicer, Image Convert Grayscale, Keypoint Visualization, Email Notification, Template Matching, OCR Model, Detections Stabilizer, Bounding Box Visualization, Instance Segmentation Model, Llama 3.2 Vision, Reference Path Visualization, Dynamic Crop, Roboflow Dataset Upload, Mask Visualization, Image Preprocessing, Background Color Visualization, Local File Sink, Webhook Sink, Camera Calibration, OpenAI, Depth Estimation, Image Slicer, QR Code Generator, Detections Stitch, SIFT Comparison, Trace Visualization, Time in Zone, Velocity, Object Detection Model, Detections Transformation, Contrast Equalization, Buffer, Byte Tracker, Overlap Filter, Crop Visualization, Stability AI Image Generation, Dimension Collapse, Moondream2, SIFT Comparison, Roboflow Custom Metadata, Object Detection Model, Model Comparison Visualization, Pixelate Visualization, Dynamic Zone, Model Monitoring Inference Aggregator, Line Counter, Anthropic Claude, Time in Zone, Relative Static Crop, Image Contours, Polygon Visualization, Path Deviation, OpenAI, Slack Notification, JSON Parser, Triangle Visualization, Detections Filter, YOLO-World Model, Classification Label Visualization, Detections Classes Replacement, Bounding Rectangle, Circle Visualization, Image Blur, Label Visualization, VLM as Detector, Google Gemini, Absolute Static Crop, Stability AI Inpainting, Icon Visualization, Ellipse Visualization, Color Visualization, Detections Merge, Path Deviation, Single-Label Classification Model, Detections Consensus, Stitch Images, Segment Anything 2 Model
- outputs: Byte Tracker, Distance Measurement, Time in Zone, Dot Visualization, Detections Stitch, Size Measurement, Time in Zone, Blur Visualization, Perspective Correction, Velocity, Trace Visualization, Corner Visualization, Detections Transformation, Byte Tracker, Overlap Filter, PTZ Tracking (ONVIF), Florence-2 Model, Crop Visualization, Florence-2 Model, Halo Visualization, Roboflow Custom Metadata, Detections Combine, Detection Offset, Model Comparison Visualization, Pixelate Visualization, Dynamic Zone, Byte Tracker, Model Monitoring Inference Aggregator, Line Counter, Time in Zone, Stitch OCR Detections, Polygon Visualization, Path Deviation, Triangle Visualization, Roboflow Dataset Upload, Detections Filter, Detections Classes Replacement, Bounding Rectangle, Circle Visualization, Detections Stabilizer, Bounding Box Visualization, Line Counter, Label Visualization, Stability AI Inpainting, Icon Visualization, Ellipse Visualization, Dynamic Crop, Color Visualization, Roboflow Dataset Upload, Mask Visualization, Detections Merge, Segment Anything 2 Model, Path Deviation, Detections Consensus, Background Color Visualization
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v1 has.
Bindings
- input
  - image (image): The input image for this step.
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to calculate the time spent in zone for.
  - zone (list_of_values): Coordinates of the target zone.
  - triggering_anchor (string): The point on the detection that must be inside the zone.
  - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
  - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output
  - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
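Note that the v1 example binds zone to a workflow input ($inputs.zones) rather than hard-coding the coordinates, which is what the ✅ in the Refs column allows. For that binding to resolve, the workflow definition must declare a matching input. A rough sketch (field names follow the general workflows schema as we understand it; treat this as an assumption, not a verbatim spec):

```json
{
  "version": "1.0",
  "inputs": [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "zones"}
  ],
  "steps": [
    {
      "name": "time_in_zone",
      "type": "roboflow_core/time_in_zone@v1",
      "image": "$inputs.image",
      "detections": "$steps.object_detection_model.predictions",
      "zone": "$inputs.zones",
      "triggering_anchor": "CENTER",
      "remove_out_of_zone_detections": true,
      "reset_out_of_zone_detections": true
    }
  ]
}
```

The zone polygon can then be supplied per run (e.g. a different region per camera) without editing the workflow definition.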