Time in Zone¶
v2¶
Class: TimeInZoneBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
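The core idea can be sketched in a few lines of plain Python. This is a minimal illustration under stated assumptions, not the block's actual implementation: record a per-tracker_id entry timestamp while the chosen anchor point stays inside the zone polygon, and report the elapsed time each frame.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) strictly inside the polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def update_times(entered_at, times, detections, zone, now):
    """Accumulate in-zone time per tracker.

    detections: list of (tracker_id, anchor_point) for the current frame.
    entered_at / times: dicts keyed by tracker_id, mutated in place.
    """
    for tracker_id, anchor in detections:
        if point_in_polygon(anchor, zone):
            entered_at.setdefault(tracker_id, now)
            times[tracker_id] = now - entered_at[tracker_id]
        else:
            # Mirrors reset_out_of_zone_detections: leaving the zone
            # resets the entry timestamp.
            entered_at.pop(tracker_id, None)
    return times
```

This is why tracking is a hard requirement: without a stable tracker_id there is no key under which to accumulate time across frames.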
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v2 in the step "type"
field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
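A parametrisable property is bound with a selector string instead of a literal value. The sketch below shows the zone supplied as a workflow input rather than hard-coded; the selector syntax follows the JSON examples in this document, while the input declaration shape (type names like WorkflowImage and WorkflowParameter) and the upstream selector are assumptions to be checked against the Workflows documentation.

```python
# Hedged sketch: binding the "zone" property to a workflow input so it
# can be set at runtime. "$steps.tracker.tracked_detections" is a
# hypothetical upstream tracker step, not defined here.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "zone"},
    ],
    "steps": [
        {
            "type": "roboflow_core/time_in_zone@v2",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": "$inputs.zone",  # resolved at runtime, per the ✅ in Refs
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        }
    ],
}
```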
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs: Detection Offset, Size Measurement, LMM, Buffer, Roboflow Custom Metadata, VLM as Detector, Multi-Label Classification Model, Detections Classes Replacement, OCR Model, Keypoint Detection Model, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Google Vision OCR, Llama 3.2 Vision, Dynamic Zone, Roboflow Dataset Upload, Clip Comparison, Perspective Correction, Object Detection Model, Webhook Sink, Identify Changes, Detections Filter, Email Notification, Instance Segmentation Model, Dimension Collapse, Slack Notification, Detections Merge, Time in Zone, Time in Zone, Local File Sink, CogVLM, Overlap Filter, JSON Parser, OpenAI, Moondream2, Path Deviation, Florence-2 Model, Twilio SMS Notification, Bounding Rectangle, Detections Stitch, Template Matching, Byte Tracker, LMM For Classification, Stitch OCR Detections, OpenAI, Google Gemini, SIFT Comparison, YOLO-World Model, VLM as Detector, Velocity, Roboflow Dataset Upload, Segment Anything 2 Model, Single-Label Classification Model, VLM as Classifier, CSV Formatter, Model Monitoring Inference Aggregator, Clip Comparison, Anthropic Claude, Florence-2 Model, VLM as Classifier, Identify Outliers, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Bounding Rectangle, Detections Stitch, Byte Tracker, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Stitch OCR Detections, Roboflow Dataset Upload, Dynamic Zone, Bounding Box Visualization, Perspective Correction, Halo Visualization, Circle Visualization, Ellipse Visualization, Crop Visualization, Color Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Roboflow Dataset Upload, Segment Anything 2 Model, Polygon Visualization, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Stability AI Inpainting, Mask Visualization, Florence-2 Model, Overlap Filter, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Time in Zone in version v2 has.
Bindings
- input:
    - image (image): The input image for this step.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): predictions in the form of an sv.Detections(...) object, containing detected bounding boxes if object_detection_prediction, or bounding boxes and segmentation masks if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
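Because the block requires tracked detections, in practice it sits behind a detection model and a tracker. The sketch below chains the three steps so detections carry the tracker_id the block needs; the model and Byte Tracker type identifiers, the model_id, and the output field names ("predictions", "tracked_detections") are assumptions here, so check the corresponding block docs for the exact values.

```python
# Hedged sketch: detection model -> Byte Tracker -> Time in Zone.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # hypothetical model
        },
        {
            "type": "roboflow_core/byte_tracker@v1",  # assumed identifier
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.model.predictions",
        },
        {
            "type": "roboflow_core/time_in_zone@v2",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        }
    ],
}
```

Byte Tracker appears in both the inputs and outputs connection lists above, which is what makes this chaining possible.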
v1¶
Class: TimeInZoneBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The TimeInZoneBlock is an analytics block designed to measure the time objects spend in a zone.
The block requires detections to be tracked (i.e. each object must have a unique tracker_id assigned,
which persists between frames).
Type identifier¶
Use the identifier roboflow_core/time_in_zone@v1 in the step "type"
field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
zone | List[Any] | Coordinates of the target zone. | ✅ |
triggering_anchor | str | The point on the detection that must be inside the zone. | ✅ |
remove_out_of_zone_detections | bool | If true, detections found outside of the zone will be filtered out. | ✅ |
reset_out_of_zone_detections | bool | If true, detections found outside of the zone will have their time reset. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs: Detection Offset, Size Measurement, LMM, Buffer, Image Convert Grayscale, Roboflow Custom Metadata, VLM as Detector, Absolute Static Crop, Multi-Label Classification Model, Relative Static Crop, Line Counter Visualization, Detections Classes Replacement, Background Color Visualization, OCR Model, Camera Focus, Image Contours, Image Slicer, Reference Path Visualization, Instance Segmentation Model, Keypoint Detection Model, SIFT Comparison, Object Detection Model, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Depth Estimation, Google Vision OCR, Llama 3.2 Vision, Dynamic Zone, Roboflow Dataset Upload, Clip Comparison, Perspective Correction, Object Detection Model, Crop Visualization, Webhook Sink, Identify Changes, Dot Visualization, Detections Filter, Model Comparison Visualization, Email Notification, Classification Label Visualization, Camera Calibration, Instance Segmentation Model, Dimension Collapse, Slack Notification, Stability AI Image Generation, Detections Merge, Time in Zone, Trace Visualization, Time in Zone, Corner Visualization, Image Threshold, Blur Visualization, Local File Sink, CogVLM, Stability AI Inpainting, SIFT, Overlap Filter, Circle Visualization, JSON Parser, OpenAI, Moondream2, Path Deviation, Florence-2 Model, Twilio SMS Notification, Label Visualization, Stitch Images, Image Preprocessing, Bounding Rectangle, Detections Stitch, Template Matching, Byte Tracker, Grid Visualization, Polygon Zone Visualization, Keypoint Visualization, LMM For Classification, Stitch OCR Detections, Bounding Box Visualization, Image Blur, OpenAI, Halo Visualization, Google Gemini, Ellipse Visualization, Color Visualization, Pixelate Visualization, SIFT Comparison, YOLO-World Model, VLM as Detector, Velocity, Roboflow Dataset Upload, Polygon Visualization, Segment Anything 2 Model, Single-Label Classification Model, VLM as Classifier, CSV Formatter, Image Slicer, Clip Comparison, Model Monitoring Inference Aggregator, Mask Visualization, Anthropic Claude, Florence-2 Model, VLM as Classifier, Identify Outliers, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Bounding Rectangle, Detections Stitch, Byte Tracker, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Stitch OCR Detections, Roboflow Dataset Upload, Dynamic Zone, Bounding Box Visualization, Perspective Correction, Halo Visualization, Circle Visualization, Ellipse Visualization, Crop Visualization, Color Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Roboflow Dataset Upload, Segment Anything 2 Model, Polygon Visualization, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Stability AI Inpainting, Mask Visualization, Florence-2 Model, Overlap Filter, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Time in Zone in version v1 has.
Bindings
- input:
    - image (image): The input image for this step.
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Model predictions to calculate the time spent in zone for.
    - zone (list_of_values): Coordinates of the target zone.
    - triggering_anchor (string): The point on the detection that must be inside the zone.
    - remove_out_of_zone_detections (boolean): If true, detections found outside of the zone will be filtered out.
    - reset_out_of_zone_detections (boolean): If true, detections found outside of the zone will have their time reset.
- output:
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): predictions in the form of an sv.Detections(...) object, containing detected bounding boxes if object_detection_prediction, or bounding boxes and segmentation masks if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}