Time in Zone¶
v3¶
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within one or more defined polygon zones, measure duration of object presence in specific areas (supporting multiple zones where objects are considered 'in zone' if present in any zone), filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside one or more defined polygon zones by tracking entry and exit times for each unique track ID. The block supports multiple zones, treating objects as 'in zone' if they are present in any of the defined zones. The block:
- Receives tracked detection predictions with track IDs, an image with embedded video metadata, and polygon zone definition(s) (single zone or list of zones)
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
- Uses video_identifier to maintain separate tracking state for different videos
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Normalizes zone input to a list of polygons:
- Accepts a single polygon zone or a list of polygon zones
- Automatically wraps single polygons in a list for consistent processing
- Validates nesting depth and coordinate format for all zones
- Enables flexible zone input formats (single zone or multiple zones)
- Initializes or retrieves polygon zones for the video:
- Creates a list of PolygonZone objects from zone coordinates for each unique zone combination
- Validates zone coordinates (each zone must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configurations in an OrderedDict with zone cache (max 100 zone combinations)
- Uses zone key combining video_identifier and zone coordinates for cache lookup
- Implements FIFO eviction when cache exceeds 100 zone combinations
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered any zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in any zone:
- Tests each detection against all polygon zones using polygon zone triggers
- Creates a matrix of zone membership (zones x detections)
- Uses logical OR operation: objects are considered 'in zone' if they're in ANY of the zones
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership in any zone
- Updates time tracking for each tracked object:
- For objects entering any zone: Records entry timestamp if not already tracked
- For objects in any zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving all zones:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside all zones from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in any zone: Time represents duration spent in any zone
- Objects outside all zones: Time is 0 (if not reset) or undefined (if removed)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in any zone over multiple frames. Time is measured from when an object first enters any zone (based on its track_id) until the current frame, providing real-time duration tracking. When multiple zones are provided, objects are considered 'in zone' if their anchor point is inside any of the zones, allowing tracking across multiple areas as a single combined zone. The zone cache efficiently manages multiple zone configurations per video using FIFO eviction to limit memory usage. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
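The multi-zone membership test and time bookkeeping described above can be sketched in plain Python. This is a minimal illustration only, not the block's actual implementation (which relies on supervision's PolygonZone); `point_in_polygon` and `update_time_in_zone` are hypothetical names:

```python
# Illustrative sketch of the multi-zone time-in-zone logic described above.
# The real block uses supervision's PolygonZone; here a simple ray-casting
# point-in-polygon test stands in for it.

def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def update_time_in_zone(anchors, track_ids, zones, entry_ts, now,
                        reset_out_of_zone=True):
    """Return {track_id: seconds in zone}; mutates the entry_ts state dict."""
    times = {}
    for anchor, tid in zip(anchors, track_ids):
        # Logical OR across zones: 'in zone' if inside ANY polygon.
        in_any = any(point_in_polygon(anchor, z) for z in zones)
        if in_any:
            entry_ts.setdefault(tid, now)      # record first entry
            times[tid] = now - entry_ts[tid]   # duration so far
        else:
            if reset_out_of_zone:
                entry_ts.pop(tid, None)        # reset on exit from all zones
            times[tid] = 0.0
    return times
```

Calling `update_time_in_zone` once per frame with the same `entry_ts` dict mimics the per-video persistent state the block keeps across frames.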
Common Use Cases¶
- Multi-Zone Dwell Time Analysis: Measure how long objects remain in any of multiple areas for behavior analysis (e.g., measure customer time in any store section, track time spent in multiple parking areas, analyze time in overlapping zones), enabling multi-zone dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence across multiple defined areas for security and safety (e.g., detect loitering in any restricted area, monitor time in multiple danger zones, track presence across secure zones), enabling multi-zone monitoring workflows
- Retail Analytics: Track customer time across multiple store sections for retail insights (e.g., measure time in any product aisle, analyze shopping patterns across departments, track engagement in multiple zones), enabling multi-zone retail analytics workflows
- Occupancy Management: Measure time objects spend in any of multiple spaces for space utilization (e.g., track vehicle parking duration in multiple lots, measure table occupancy across zones, analyze space usage in multiple areas), enabling multi-zone occupancy management workflows
- Safety Compliance: Monitor time violations across multiple restricted or time-limited zones (e.g., detect extended stays in any hazardous area, monitor time limit violations across zones, track safety compliance in multiple areas), enabling multi-zone safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in any of multiple traffic zones or intersections (e.g., track time at multiple intersections, measure queue waiting time across zones, analyze traffic flow in multiple areas), enabling multi-zone traffic analytics workflows
Connecting to Other Blocks¶
This block receives an image with embedded video metadata, tracked detections, and zone coordinates (single or multiple zones), and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects across multiple zones (e.g., track time in multiple zones for tracked objects, measure dwell time with consistent IDs across areas, analyze tracked object presence in multiple zones), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to multiple defined areas (e.g., measure time across multiple polygon zones, track duration in custom multi-zone configurations, analyze zone-based presence across areas), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in any zone (e.g., continue if time exceeds threshold in any zone, filter based on dwell time across zones, trigger actions on time violations in multiple areas), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics across multiple zones (e.g., analyze dwell time patterns across zones, process time-in-zone data for multiple areas, work with duration metrics across zones), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds in any zone (e.g., alert on extended stays in any zone, notify on time limit violations across areas, trigger time-based alerts for multiple zones), enabling time-based notification workflows
- Before data storage blocks to record time metrics across multiple zones (e.g., store dwell time data for multiple areas, log time-in-zone metrics across zones, record duration measurements for multiple zones), enabling time metrics logging workflows
Version Differences¶
Enhanced from v2:
- Multiple Zone Support: Supports tracking time across multiple polygon zones simultaneously, where objects are considered 'in zone' if they're present in any of the defined zones, enabling multi-zone time tracking and analysis
- Flexible Zone Input: Accepts either a single polygon zone or a list of polygon zones, automatically normalizing the input to handle both formats seamlessly
- Zone Cache Management: Implements a zone cache with FIFO eviction (max 100 zone combinations) to efficiently manage multiple zone configurations per video while limiting memory usage
- Combined Zone Logic: Uses logical OR operation across all zones, allowing tracking across multiple areas as a unified zone system for comprehensive presence monitoring
- Enhanced Zone Key System: Uses combined zone keys (video_identifier + zone coordinates) for cache lookup, enabling efficient storage and retrieval of zone configurations
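The cache behaviour listed above can be sketched with an OrderedDict. This is an illustrative sketch only; `get_or_create_zones` and `build` are hypothetical names, and the real block's key format may differ:

```python
from collections import OrderedDict

MAX_CACHE = 100  # limit matching the description above

def get_or_create_zones(cache, video_id, zone_coords, build):
    """FIFO-evicting cache keyed by video_identifier plus zone coordinates."""
    key = (video_id, str(zone_coords))
    if key not in cache:
        if len(cache) >= MAX_CACHE:
            cache.popitem(last=False)  # evict the oldest entry (FIFO)
        cache[key] = build(zone_coords)  # e.g. construct PolygonZone objects
    return cache[key]
```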
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone can be a single polygon or a list of polygons, where each polygon must be defined as a list of at least 3 points, with each point being a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs. When multiple zones are provided, objects are considered 'in zone' if they're present in any of the zones.
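The zone normalization and validation rules in this section can be sketched as follows (illustrative only; `normalize_zones` is a hypothetical helper, not the block's actual API):

```python
def normalize_zones(zone):
    """Wrap a single polygon into a list of polygons, then validate each."""
    # Single-zone input: the first element is a point (two numbers),
    # so wrap the whole polygon in a list for consistent processing.
    if zone and isinstance(zone[0][0], (int, float)):
        zone = [zone]
    for polygon in zone:
        if len(polygon) < 3:
            raise ValueError("each zone needs at least 3 points")
        for point in polygon:
            if len(point) != 2:
                raise ValueError("each point needs exactly 2 coordinates")
    return zone
```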
Type identifier¶
Use the following identifier in step "type" field: roboflow_core/time_in_zone@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs:
Anthropic Claude,Clip Comparison,Email Notification,CSV Formatter,Object Detection Model,Segment Anything 2 Model,Detections Stabilizer,VLM As Detector,Dimension Collapse,Dynamic Crop,Roboflow Dataset Upload,Multi-Label Classification Model,SAM 3,Camera Focus,Stitch OCR Detections,VLM As Classifier,Path Deviation,YOLO-World Model,Detections Combine,Bounding Rectangle,PTZ Tracking (ONVIF),Moondream2,Local File Sink,Florence-2 Model,Size Measurement,Stitch OCR Detections,SIFT Comparison,Roboflow Custom Metadata,Byte Tracker,Identify Changes,LMM,SAM 3,Detections List Roll-Up,OpenAI,Twilio SMS Notification,Identify Outliers,Byte Tracker,SIFT Comparison,Velocity,Template Matching,Overlap Filter,Single-Label Classification Model,Google Gemini,Dynamic Zone,Google Vision OCR,Llama 3.2 Vision,OpenAI,VLM As Classifier,LMM For Classification,Email Notification,VLM As Detector,Anthropic Claude,OpenAI,Detections Classes Replacement,EasyOCR,OCR Model,Instance Segmentation Model,Detections Consensus,Slack Notification,Time in Zone,Line Counter,Detection Event Log,Google Gemini,Florence-2 Model,JSON Parser,Mask Area Measurement,Seg Preview,Detections Filter,Google Gemini,Time in Zone,Detections Stitch,OpenAI,Time in Zone,Detections Merge,Object Detection Model,Buffer,CogVLM,Webhook Sink,Instance Segmentation Model,Byte Tracker,Anthropic Claude,Path Deviation,Perspective Correction,SAM 3,Detections Transformation,Motion Detection,Keypoint Detection Model,Detection Offset,Model Monitoring Inference Aggregator,Twilio SMS/MMS Notification,Roboflow Dataset Upload,Clip Comparison
- outputs:
Polygon Visualization,Halo Visualization,Icon Visualization,Color Visualization,Segment Anything 2 Model,Blur Visualization,Detections Stabilizer,Roboflow Dataset Upload,Corner Visualization,Dynamic Crop,Distance Measurement,Trace Visualization,Triangle Visualization,Detections Classes Replacement,Bounding Box Visualization,Model Comparison Visualization,Camera Focus,Mask Visualization,Detections Consensus,Stitch OCR Detections,Halo Visualization,Time in Zone,Stability AI Inpainting,Path Deviation,Line Counter,Detections Combine,Detection Event Log,Florence-2 Model,Background Color Visualization,Mask Area Measurement,Bounding Rectangle,PTZ Tracking (ONVIF),Heatmap Visualization,Polygon Visualization,Florence-2 Model,Stitch OCR Detections,Size Measurement,Line Counter,Ellipse Visualization,Detections Filter,Pixelate Visualization,Label Visualization,Time in Zone,Roboflow Custom Metadata,Byte Tracker,Detections Stitch,Time in Zone,Detections List Roll-Up,Detections Merge,Dot Visualization,Circle Visualization,Byte Tracker,Byte Tracker,Path Deviation,Velocity,Perspective Correction,Detections Transformation,Overlap Filter,Model Monitoring Inference Aggregator,Detection Offset,Crop Visualization,Roboflow Dataset Upload,Dynamic Zone
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Time in Zone has in version v3.
Bindings
- input
  - image(image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version supports multiple zones per video with efficient zone cache management.
  - detections(Union[object_detection_prediction,instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered any of the zones. Objects are considered 'in zone' if their anchor point is inside any of the provided zones. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside any zone are included in the output.
  - zone(list_of_values): Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction.
  - triggering_anchor(string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
  - remove_out_of_zone_detections(boolean): If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity.
  - reset_out_of_zone_detections(boolean): If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration.
- output
  - timed_detections(Union[object_detection_prediction,instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v3",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
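Since v3 also accepts a list of zones, the same step could be configured with multiple polygons, for example (coordinates are illustrative, mirroring the examples in the zone property description):

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [[100, 100], [100, 200], [300, 200], [300, 100]],
        [[400, 400], [400, 500], [600, 500], [600, 400]]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
```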
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:
- Receives tracked detection predictions with track IDs, an image with embedded video metadata, and a polygon zone definition
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
- Uses video_identifier to maintain separate tracking state for different videos
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Initializes or retrieves a polygon zone for the video:
- Creates a PolygonZone object from zone coordinates for each unique video
- Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configuration per video using video_identifier
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered the zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in the zone:
- Uses polygon zone trigger to test if each detection's anchor point is inside the zone
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership
- Updates time tracking for each tracked object:
- For objects entering the zone: Records entry timestamp if not already tracked
- For objects in the zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving the zone:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside the zone from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in zone: Time represents duration spent in zone
- Objects outside zone: Time is 0 (if not reset) or undefined (if removed)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
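The timestamp selection rule described above (frame_number / fps for video files, frame_timestamp for streams) can be sketched as follows; `current_timestamp` is a hypothetical helper name, not the block's actual API:

```python
def current_timestamp(comes_from_file, fps, frame_number, frame_timestamp):
    """Pick the timestamp source as described above (illustrative helper)."""
    if comes_from_file and fps:
        # Video file: derive time from the frame index and frame rate.
        return frame_number / fps
    # Streamed video: use the frame's own wall-clock timestamp.
    return frame_timestamp
```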
Common Use Cases¶
- Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
- Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
- Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
- Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows
Connecting to Other Blocks¶
This block receives an image with embedded video metadata, tracked detections, and zone coordinates, and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
- Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows
Version Differences¶
Enhanced from v1:
- Simplified Input: Uses the image input that contains embedded video metadata instead of requiring a separate metadata field, simplifying workflow connections and reducing input complexity
- Improved Integration: Better integration with image-based workflows since video metadata is accessed directly from the image object rather than requiring separate metadata input
- Streamlined Workflow: Reduces the number of inputs needed, making it easier to connect in workflows where image and metadata come from the same source
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.
Type identifier¶
Use the following identifier in step "type" field: roboflow_core/time_in_zone@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
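To make the zone-membership rule concrete, here is a minimal sketch of how a triggering anchor and a polygon zone could interact. This is an illustrative approximation only: `anchor_point` and `point_in_polygon` are hypothetical helper names, and the block itself builds a PolygonZone object rather than using this code.

```python
def anchor_point(xyxy, anchor="CENTER"):
    """Return the (x, y) anchor point of an [x1, y1, x2, y2] bounding box."""
    x1, y1, x2, y2 = xyxy
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return {
        "CENTER": (cx, cy),
        "BOTTOM_CENTER": (cx, y2),
        "TOP_CENTER": (cx, y1),
        "CENTER_LEFT": (x1, cy),
        "CENTER_RIGHT": (x2, cy),
    }[anchor]

def point_in_polygon(point, polygon):
    """Ray-casting test: True if the point lies inside the polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray through the point
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(100, 100), (100, 200), (300, 200), (300, 100)]
box = [150, 90, 250, 190]  # straddles the zone's top edge
print(point_in_polygon(anchor_point(box, "CENTER"), zone))      # True: center (200, 140) is inside
print(point_in_polygon(anchor_point(box, "TOP_CENTER"), zone))  # False: (200, 90) is above the zone
```

Note how the same detection can be in or out of the zone depending on the chosen anchor, which is why BOTTOM_CENTER is suggested for ground-level zones.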
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
Anthropic Claude,Clip Comparison,Email Notification,CSV Formatter,Object Detection Model,Segment Anything 2 Model,Detections Stabilizer,VLM As Detector,Dimension Collapse,Dynamic Crop,Roboflow Dataset Upload,Multi-Label Classification Model,SAM 3,Camera Focus,Stitch OCR Detections,VLM As Classifier,Path Deviation,YOLO-World Model,Detections Combine,Bounding Rectangle,PTZ Tracking (ONVIF),Moondream2,Local File Sink,Florence-2 Model,Size Measurement,Stitch OCR Detections,SIFT Comparison,Roboflow Custom Metadata,Byte Tracker,Identify Changes,LMM,SAM 3,Detections List Roll-Up,OpenAI,Twilio SMS Notification,Identify Outliers,Byte Tracker,SIFT Comparison,Velocity,Template Matching,Overlap Filter,Single-Label Classification Model,Google Gemini,Dynamic Zone,Google Vision OCR,Llama 3.2 Vision,OpenAI,VLM As Classifier,LMM For Classification,Email Notification,VLM As Detector,Anthropic Claude,OpenAI,Detections Classes Replacement,EasyOCR,OCR Model,Instance Segmentation Model,Detections Consensus,Slack Notification,Time in Zone,Line Counter,Detection Event Log,Google Gemini,Florence-2 Model,JSON Parser,Mask Area Measurement,Seg Preview,Detections Filter,Google Gemini,Time in Zone,Detections Stitch,OpenAI,Time in Zone,Detections Merge,Object Detection Model,Buffer,CogVLM,Webhook Sink,Instance Segmentation Model,Byte Tracker,Anthropic Claude,Path Deviation,Perspective Correction,SAM 3,Detections Transformation,Motion Detection,Keypoint Detection Model,Detection Offset,Model Monitoring Inference Aggregator,Twilio SMS/MMS Notification,Roboflow Dataset Upload,Clip Comparison
- outputs:
Polygon Visualization,Halo Visualization,Icon Visualization,Color Visualization,Segment Anything 2 Model,Blur Visualization,Detections Stabilizer,Roboflow Dataset Upload,Corner Visualization,Dynamic Crop,Distance Measurement,Trace Visualization,Triangle Visualization,Detections Classes Replacement,Bounding Box Visualization,Model Comparison Visualization,Camera Focus,Mask Visualization,Detections Consensus,Stitch OCR Detections,Halo Visualization,Time in Zone,Stability AI Inpainting,Path Deviation,Line Counter,Detections Combine,Detection Event Log,Florence-2 Model,Background Color Visualization,Mask Area Measurement,Bounding Rectangle,PTZ Tracking (ONVIF),Heatmap Visualization,Polygon Visualization,Florence-2 Model,Stitch OCR Detections,Size Measurement,Line Counter,Ellipse Visualization,Detections Filter,Pixelate Visualization,Label Visualization,Time in Zone,Roboflow Custom Metadata,Byte Tracker,Detections Stitch,Time in Zone,Detections List Roll-Up,Detections Merge,Dot Visualization,Circle Visualization,Byte Tracker,Byte Tracker,Path Deviation,Velocity,Perspective Correction,Detections Transformation,Overlap Filter,Model Monitoring Inference Aggregator,Detection Offset,Crop Visualization,Roboflow Dataset Upload,Dynamic Zone
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Time in Zone in version v2 has.
Bindings
-
input
- image(image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version simplifies input by embedding metadata in the image object rather than requiring a separate metadata field.
- detections(Union[object_detection_prediction,instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
- zone(list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
- triggering_anchor(string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
- remove_out_of_zone_detections(boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
- reset_out_of_zone_detections(boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
-
output
timed_detections(Union[object_detection_prediction,instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
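The reset_out_of_zone_detections setting above can be illustrated with a minimal state machine. This is a hedged sketch, not the block's internals: `update_time_in_zone` and its `state` dictionary of entry timestamps are hypothetical names chosen for the example.

```python
def update_time_in_zone(state, tracker_id, in_zone, now, reset_on_exit=True):
    """Return seconds spent in zone for one tracked object at timestamp `now`.

    `state` maps tracker_id -> entry timestamp; on exit the entry is either
    cleared (reset_on_exit=True) or retained (reset_on_exit=False).
    """
    if in_zone:
        entry = state.setdefault(tracker_id, now)  # record first entry
        return now - entry
    if reset_on_exit:
        state.pop(tracker_id, None)  # forget entry, so re-entry restarts at 0
    return 0.0

# Object #7 is in the zone for 2 s, leaves for 1 s, then re-enters.
frames = [(0.0, True), (1.0, True), (2.0, True), (3.0, False), (4.0, True)]

reset_state, keep_state = {}, {}
for ts, in_zone in frames:
    t_reset = update_time_in_zone(reset_state, 7, in_zone, ts, reset_on_exit=True)
    t_keep = update_time_in_zone(keep_state, 7, in_zone, ts, reset_on_exit=False)

print(t_reset)  # 0.0 -> timer restarted when the object re-entered at t=4.0
print(t_keep)   # 4.0 -> the entry timestamp from t=0.0 was retained
```

The difference only appears after an exit: with reset enabled the re-entry starts a fresh timer, while without it the original entry timestamp keeps accumulating time.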
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:
- Receives tracked detection predictions with track IDs, an image, video metadata, and a polygon zone definition
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Initializes or retrieves a polygon zone for the video:
- Creates a PolygonZone object from zone coordinates for each unique video
- Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configuration per video using video_identifier
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered the zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in the zone:
- Uses polygon zone trigger to test if each detection's anchor point is inside the zone
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership
- Updates time tracking for each tracked object:
- For objects entering the zone: Records entry timestamp if not already tracked
- For objects in the zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving the zone:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside the zone from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in zone: Time represents duration spent in zone
- Objects outside zone: Time is 0 (if not reset) or undefined (if removed)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
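The timestamp derivation in the steps above (frame_number / fps for video files, frame_timestamp for streams) can be sketched as follows. The function name and the `is_stream` flag are illustrative assumptions, not the block's API.

```python
def current_timestamp(fps, frame_number, frame_timestamp=None, is_stream=False):
    """Derive the timestamp used for time-in-zone measurement."""
    if is_stream:
        return frame_timestamp   # streamed video: use the frame's wall-clock timestamp
    return frame_number / fps    # video file: frame index divided by frame rate

print(current_timestamp(fps=25, frame_number=150))  # 6.0 -> 150 frames at 25 fps is 6 seconds
```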
Common Use Cases¶
- Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
- Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
- Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
- Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows
Connecting to Other Blocks¶
This block receives tracked detections, image, video metadata, and zone coordinates, and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
- Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The block requires video metadata with frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
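The zone-coordinate rule stated above (a list of at least 3 points, each with exactly 2 coordinates) can be sketched as a small check. `validate_zone` is a hypothetical helper for illustration, not the block's actual validator.

```python
def validate_zone(zone):
    """Raise ValueError unless zone is >= 3 points of exactly 2 coordinates each."""
    if not isinstance(zone, (list, tuple)) or len(zone) < 3:
        raise ValueError("zone must contain at least 3 points")
    for point in zone:
        if not isinstance(point, (list, tuple)) or len(point) != 2:
            raise ValueError(f"each point needs exactly 2 coordinates, got {point!r}")
    return True

print(validate_zone([[100, 100], [100, 200], [300, 200]]))  # True: a valid triangular zone
```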
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
Anthropic Claude,Camera Focus,Text Display,CSV Formatter,Segment Anything 2 Model,Dynamic Crop,Relative Static Crop,SAM 3,Camera Focus,Mask Visualization,SIFT,VLM As Classifier,Path Deviation,Detections Combine,Background Color Visualization,PTZ Tracking (ONVIF),Polygon Visualization,Moondream2,Size Measurement,Stitch OCR Detections,SIFT Comparison,Byte Tracker,Detections List Roll-Up,OpenAI,Identify Outliers,Crop Visualization,Single-Label Classification Model,Keypoint Visualization,Icon Visualization,Llama 3.2 Vision,OpenAI,Image Contours,Stitch Images,VLM As Classifier,Anthropic Claude,OpenAI,Triangle Visualization,Detections Classes Replacement,Background Subtraction,Reference Path Visualization,EasyOCR,Instance Segmentation Model,Line Counter Visualization,Slack Notification,Time in Zone,Line Counter,Detection Event Log,Google Gemini,JSON Parser,Mask Area Measurement,Image Convert Grayscale,Stability AI Image Generation,Seg Preview,Time in Zone,OpenAI,Detections Merge,Stability AI Outpainting,Object Detection Model,Buffer,Image Threshold,Instance Segmentation Model,Anthropic Claude,Webhook Sink,Perspective Correction,SAM 3,Detection Offset,Clip Comparison,Clip Comparison,Halo Visualization,Image Blur,Email Notification,Contrast Equalization,Object Detection Model,Blur Visualization,Detections Stabilizer,VLM As Detector,Dimension Collapse,Corner Visualization,Classification Label Visualization,Roboflow Dataset Upload,Trace Visualization,Multi-Label Classification Model,Stitch OCR Detections,YOLO-World Model,Bounding Rectangle,Local File Sink,Florence-2 Model,Ellipse Visualization,Pixelate Visualization,Roboflow Custom Metadata,Identify Changes,LMM,SAM 3,Circle Visualization,Dot Visualization,Twilio SMS Notification,Byte Tracker,SIFT Comparison,Velocity,Absolute Static Crop,Morphological Transformation,Template Matching,Overlap Filter,Dynamic Zone,Google Gemini,Polygon Visualization,Google Vision OCR,Color Visualization,LMM For Classification,Email Notification,VLM As Detector,Bounding Box Visualization,Model Comparison Visualization,Grid Visualization,OCR Model,Halo Visualization,Detections Consensus,Stability AI Inpainting,Image Slicer,Florence-2 Model,Heatmap Visualization,Polygon Zone Visualization,Image Slicer,Detections Filter,Label Visualization,Google Gemini,Depth Estimation,Image Preprocessing,Detections Stitch,Time in Zone,CogVLM,Byte Tracker,Camera Calibration,Path Deviation,Detections Transformation,QR Code Generator,Motion Detection,Keypoint Detection Model,Model Monitoring Inference Aggregator,Twilio SMS/MMS Notification,Roboflow Dataset Upload
- outputs:
Polygon Visualization,Halo Visualization,Icon Visualization,Color Visualization,Segment Anything 2 Model,Blur Visualization,Detections Stabilizer,Roboflow Dataset Upload,Corner Visualization,Dynamic Crop,Distance Measurement,Trace Visualization,Triangle Visualization,Detections Classes Replacement,Bounding Box Visualization,Model Comparison Visualization,Camera Focus,Mask Visualization,Detections Consensus,Stitch OCR Detections,Halo Visualization,Time in Zone,Stability AI Inpainting,Path Deviation,Line Counter,Detections Combine,Detection Event Log,Florence-2 Model,Background Color Visualization,Mask Area Measurement,Bounding Rectangle,PTZ Tracking (ONVIF),Heatmap Visualization,Polygon Visualization,Florence-2 Model,Stitch OCR Detections,Size Measurement,Line Counter,Ellipse Visualization,Detections Filter,Pixelate Visualization,Label Visualization,Time in Zone,Roboflow Custom Metadata,Byte Tracker,Detections Stitch,Time in Zone,Detections List Roll-Up,Detections Merge,Dot Visualization,Circle Visualization,Byte Tracker,Byte Tracker,Path Deviation,Velocity,Perspective Correction,Detections Transformation,Overlap Filter,Model Monitoring Inference Aggregator,Detection Offset,Crop Visualization,Roboflow Dataset Upload,Dynamic Zone
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Time in Zone in version v1 has.
Bindings
-
input
- image(image): Input image for the current video frame. Used for zone visualization and reference. The block uses the image dimensions to validate zone coordinates. The image metadata may be used for time calculation if frame timestamps are needed.
- metadata(video_metadata): Video metadata containing frame rate (fps), frame number, frame timestamp, video identifier, and video source information required for time calculation and state management. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. The metadata must include valid fps for video files or frame_timestamp for streams to enable accurate time measurement.
- detections(Union[object_detection_prediction,instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
- zone(list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
- triggering_anchor(string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
- remove_out_of_zone_detections(boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
- reset_out_of_zone_detections(boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
-
output
timed_detections(Union[object_detection_prediction,instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}