Time in Zone¶
v3¶
Class: TimeInZoneBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within one or more defined polygon zones, measure duration of object presence in specific areas (supporting multiple zones where objects are considered 'in zone' if present in any zone), filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside one or more defined polygon zones by tracking entry and exit times for each unique track ID. The block supports multiple zones, treating objects as 'in zone' if they are present in any of the defined zones. The block:
- Receives tracked detection predictions with track IDs, an image with embedded video metadata, and polygon zone definition(s) (single zone or list of zones)
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
- Uses video_identifier to maintain separate tracking state for different videos
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Normalizes zone input to a list of polygons:
- Accepts a single polygon zone or a list of polygon zones
- Automatically wraps single polygons in a list for consistent processing
- Validates nesting depth and coordinate format for all zones
- Enables flexible zone input formats (single zone or multiple zones)
- Initializes or retrieves polygon zones for the video:
- Creates a list of PolygonZone objects from zone coordinates for each unique zone combination
- Validates zone coordinates (each zone must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configurations in an OrderedDict-based zone cache (max 100 zone combinations)
- Uses zone key combining video_identifier and zone coordinates for cache lookup
- Implements FIFO eviction when cache exceeds 100 zone combinations
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered any zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in any zone:
- Tests each detection against all polygon zones using polygon zone triggers
- Creates a matrix of zone membership (zones x detections)
- Uses logical OR operation: objects are considered 'in zone' if they're in ANY of the zones
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership in any zone
- Updates time tracking for each tracked object:
- For objects entering any zone: Records entry timestamp if not already tracked
- For objects in any zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving all zones:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside all zones from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in any zone: Time represents duration spent in any zone
- Objects outside all zones: Time is 0 (if kept in the output) or the detection is removed entirely (if remove_out_of_zone_detections is True)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in any zone over multiple frames. Time is measured from when an object first enters any zone (based on its track_id) until the current frame, providing real-time duration tracking. When multiple zones are provided, objects are considered 'in zone' if their anchor point is inside any of the zones, allowing tracking across multiple areas as a single combined zone. The zone cache efficiently manages multiple zone configurations per video using FIFO eviction to limit memory usage. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
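The per-frame update can be summarized in a short sketch. The snippet below is a minimal illustration rather than the block's actual implementation: names such as update_time_in_zone and entry_timestamps are hypothetical, and it assumes supervision's PolygonZone.trigger API for the membership test.

```python
import numpy as np
import supervision as sv

def update_time_in_zone(
    detections: sv.Detections,            # tracked detections (tracker_id required)
    zones: list[sv.PolygonZone],          # one PolygonZone per configured polygon
    entry_timestamps: dict[int, float],   # track_id -> timestamp of first entry into any zone
    current_timestamp: float,             # frame_number / fps (files) or frame_timestamp (streams)
    remove_out_of_zone_detections: bool = True,
    reset_out_of_zone_detections: bool = True,
) -> sv.Detections:
    # Logical OR across all zones: an object is "in zone" if its anchor
    # point falls inside ANY of the configured polygons.
    in_any_zone = np.zeros(len(detections), dtype=bool)
    for zone in zones:
        in_any_zone |= zone.trigger(detections)

    times = np.zeros(len(detections))
    for i, (tracker_id, inside) in enumerate(zip(detections.tracker_id, in_any_zone)):
        if inside:
            # Record the entry timestamp on first sighting inside any zone, then accumulate.
            entry_timestamps.setdefault(int(tracker_id), current_timestamp)
            times[i] = current_timestamp - entry_timestamps[int(tracker_id)]
        elif reset_out_of_zone_detections:
            # Leaving all zones clears the entry timestamp, so re-entry starts from 0.
            entry_timestamps.pop(int(tracker_id), None)

    detections["time_in_zone"] = times
    if remove_out_of_zone_detections:
        # Keep only detections whose anchor point is inside at least one zone.
        detections = detections[in_any_zone]
    return detections
```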
Common Use Cases¶
- Multi-Zone Dwell Time Analysis: Measure how long objects remain in any of multiple areas for behavior analysis (e.g., measure customer time in any store section, track time spent in multiple parking areas, analyze time in overlapping zones), enabling multi-zone dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence across multiple defined areas for security and safety (e.g., detect loitering in any restricted area, monitor time in multiple danger zones, track presence across secure zones), enabling multi-zone monitoring workflows
- Retail Analytics: Track customer time across multiple store sections for retail insights (e.g., measure time in any product aisle, analyze shopping patterns across departments, track engagement in multiple zones), enabling multi-zone retail analytics workflows
- Occupancy Management: Measure time objects spend in any of multiple spaces for space utilization (e.g., track vehicle parking duration in multiple lots, measure table occupancy across zones, analyze space usage in multiple areas), enabling multi-zone occupancy management workflows
- Safety Compliance: Monitor time violations across multiple restricted or time-limited zones (e.g., detect extended stays in any hazardous area, monitor time limit violations across zones, track safety compliance in multiple areas), enabling multi-zone safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in any of multiple traffic zones or intersections (e.g., track time at multiple intersections, measure queue waiting time across zones, analyze traffic flow in multiple areas), enabling multi-zone traffic analytics workflows
Connecting to Other Blocks¶
This block receives an image with embedded video metadata, tracked detections, and zone coordinates (single or multiple zones), and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects across multiple zones (e.g., track time in multiple zones for tracked objects, measure dwell time with consistent IDs across areas, analyze tracked object presence in multiple zones), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to multiple defined areas (e.g., measure time across multiple polygon zones, track duration in custom multi-zone configurations, analyze zone-based presence across areas), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in any zone (e.g., continue if time exceeds threshold in any zone, filter based on dwell time across zones, trigger actions on time violations in multiple areas), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics across multiple zones (e.g., analyze dwell time patterns across zones, process time-in-zone data for multiple areas, work with duration metrics across zones), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds in any zone (e.g., alert on extended stays in any zone, notify on time limit violations across areas, trigger time-based alerts for multiple zones), enabling time-based notification workflows
- Before data storage blocks to record time metrics across multiple zones (e.g., store dwell time data for multiple areas, log time-in-zone metrics across zones, record duration measurements for multiple zones), enabling time metrics logging workflows
Version Differences¶
Enhanced from v2:
- Multiple Zone Support: Supports tracking time across multiple polygon zones simultaneously, where objects are considered 'in zone' if they're present in any of the defined zones, enabling multi-zone time tracking and analysis
- Flexible Zone Input: Accepts either a single polygon zone or a list of polygon zones, automatically normalizing the input to handle both formats seamlessly
- Zone Cache Management: Implements a zone cache with FIFO eviction (max 100 zone combinations) to efficiently manage multiple zone configurations per video while limiting memory usage
- Combined Zone Logic: Uses logical OR operation across all zones, allowing tracking across multiple areas as a unified zone system for comprehensive presence monitoring
- Enhanced Zone Key System: Uses combined zone keys (video_identifier + zone coordinates) for cache lookup, enabling efficient storage and retrieval of zone configurations
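A rough sketch of the cache behaviour described above (the real key format and eviction details may differ):

```python
from collections import OrderedDict

MAX_CACHED_ZONE_COMBINATIONS = 100  # limit stated in the block description

zone_cache = OrderedDict()  # key: video_identifier + zone coordinates -> list of PolygonZone objects

def get_or_create_zones(video_identifier, zone_coordinates, build_zones):
    # The cache key combines the video identifier with the zone coordinates.
    key = f"{video_identifier}:{zone_coordinates}"
    if key not in zone_cache:
        zone_cache[key] = build_zones(zone_coordinates)
        # FIFO eviction: drop the oldest entry once the limit is exceeded.
        if len(zone_cache) > MAX_CACHED_ZONE_COMBINATIONS:
            zone_cache.popitem(last=False)
    return zone_cache[key]
```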
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone can be a single polygon or a list of polygons, where each polygon must be defined as a list of at least 3 points, with each point being a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs. When multiple zones are provided, objects are considered 'in zone' if they're present in any of the zones.
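For reference, the two accepted zone formats look like this (coordinates match the examples in the properties table below):

```python
# Single zone: one polygon with at least 3 points, each point [x, y] or (x, y) in pixels.
single_zone = [(100, 100), (100, 200), (300, 200), (300, 100)]

# Multiple zones: a list of polygons; an object counts as 'in zone'
# if its triggering_anchor lies inside ANY of them.
multiple_zones = [
    [(100, 100), (100, 200), (300, 200), (300, 100)],
    [(400, 400), (400, 500), (600, 500), (600, 400)],
]
```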
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v3.
- inputs:
  SIFT Comparison, LMM, Twilio SMS Notification, Moondream2, Roboflow Custom Metadata, Detections Filter, Detections Classes Replacement, Detections Merge, Llama 3.2 Vision, Template Matching, Velocity, Detections List Roll-Up, Bounding Rectangle, Buffer, OpenAI, Email Notification, SAM 3, Detection Event Log, EasyOCR, Google Gemini, Roboflow Dataset Upload, Dimension Collapse, Twilio SMS/MMS Notification, Detections Transformation, Instance Segmentation Model, VLM as Detector, Single-Label Classification Model, Detection Offset, Clip Comparison, Google Vision OCR, CogVLM, Path Deviation, Stitch OCR Detections, Multi-Label Classification Model, VLM as Detector, Segment Anything 2 Model, LMM For Classification, Dynamic Zone, SAM 3, Seg Preview, Slack Notification, OpenAI, Local File Sink, Anthropic Claude, Google Gemini, Time in Zone, JSON Parser, Webhook Sink, Perspective Correction, Overlap Filter, Florence-2 Model, Detections Consensus, Identify Changes, Instance Segmentation Model, Object Detection Model, Anthropic Claude, VLM as Classifier, CSV Formatter, Byte Tracker, Size Measurement, VLM as Classifier, Path Deviation, Line Counter, Time in Zone, Google Gemini, Motion Detection, Florence-2 Model, Identify Outliers, Object Detection Model, SIFT Comparison, Detections Stitch, Byte Tracker, Keypoint Detection Model, Camera Focus, YOLO-World Model, Clip Comparison, OCR Model, Byte Tracker, OpenAI, SAM 3, Detections Stabilizer, Roboflow Dataset Upload, Dynamic Crop, Email Notification, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Time in Zone, Detections Combine, Anthropic Claude, OpenAI
- outputs:
  Icon Visualization, Perspective Correction, Overlap Filter, Blur Visualization, Florence-2 Model, Roboflow Custom Metadata, Detections Consensus, Pixelate Visualization, Detections Filter, Detections Classes Replacement, Color Visualization, Detections Merge, Circle Visualization, Velocity, Byte Tracker, Detections List Roll-Up, Size Measurement, Bounding Rectangle, Stability AI Inpainting, Path Deviation, Line Counter, Time in Zone, Halo Visualization, Detection Event Log, Florence-2 Model, Trace Visualization, Roboflow Dataset Upload, Detections Stitch, Detections Transformation, Line Counter, Byte Tracker, Detection Offset, Dot Visualization, Camera Focus, Crop Visualization, Path Deviation, Background Color Visualization, Bounding Box Visualization, Stitch OCR Detections, Byte Tracker, Detections Stabilizer, Segment Anything 2 Model, Roboflow Dataset Upload, Dynamic Crop, Triangle Visualization, Distance Measurement, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Dynamic Zone, Ellipse Visualization, Time in Zone, Detections Combine, Mask Visualization, Corner Visualization, Polygon Visualization, Time in Zone, Model Comparison Visualization, Label Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v3 has.
Bindings
- input
    - image (image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version supports multiple zones per video with efficient zone cache management.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered any of the zones. Objects are considered 'in zone' if their anchor point is inside any of the provided zones. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside any zone are included in the output.
    - zone (list_of_values): Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction.
    - triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    - remove_out_of_zone_detections (boolean): If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity.
    - reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration.
- output
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v3",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
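The example above defines a single zone. A multi-zone configuration nests each polygon one level deeper; the sketch below (a Python dict mirroring the JSON schema, with an illustrative detections selector pointing at a tracker step) shows the shape:

```python
multi_zone_step = {
    "name": "time_in_zone",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    # Detections must carry tracker_id values, e.g. from a Byte Tracker step (selector is illustrative).
    "detections": "$steps.byte_tracker.tracked_detections",
    # Two zones: objects count as 'in zone' when inside either polygon.
    "zone": [
        [[100, 100], [100, 200], [300, 200], [300, 100]],
        [[400, 400], [400, 500], [600, 500], [600, 400]],
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": True,
    "reset_out_of_zone_detections": True,
}
```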
v2¶
Class: TimeInZoneBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:
- Receives tracked detection predictions with track IDs, an image with embedded video metadata, and a polygon zone definition
- Extracts video metadata from the image:
- Accesses video_metadata from the WorkflowImageData object
- Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
- Uses video_identifier to maintain separate tracking state for different videos
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Initializes or retrieves a polygon zone for the video:
- Creates a PolygonZone object from zone coordinates for each unique video
- Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configuration per video using video_identifier
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered the zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in the zone:
- Uses polygon zone trigger to test if each detection's anchor point is inside the zone
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership
- Updates time tracking for each tracked object:
- For objects entering the zone: Records entry timestamp if not already tracked
- For objects in the zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving the zone:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside the zone from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in zone: Time represents duration spent in zone
- Objects outside the zone: Time is 0 (if kept in the output) or the detection is removed entirely (if remove_out_of_zone_detections is True)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
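A minimal sketch of the anchor-based membership test, assuming a recent version of the supervision library's PolygonZone API (the block's internal usage may differ):

```python
import numpy as np
import supervision as sv

# Quadrilateral zone in pixel coordinates (same shape as the JSON example below).
polygon = np.array([[100, 100], [100, 200], [300, 200], [300, 100]])

# BOTTOM_CENTER suits ground-level zones (feet / vehicle base);
# CENTER is the block's default triggering anchor.
zone = sv.PolygonZone(
    polygon=polygon,
    triggering_anchors=(sv.Position.BOTTOM_CENTER,),
)

def in_zone_mask(detections: sv.Detections) -> np.ndarray:
    # One boolean per detection: True if its anchor point lies inside the polygon.
    return zone.trigger(detections)
```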
Common Use Cases¶
- Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
- Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
- Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
- Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows
Connecting to Other Blocks¶
This block receives an image with embedded video metadata, tracked detections, and zone coordinates, and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
- Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows
Version Differences¶
Enhanced from v1:
- Simplified Input: Uses an image input that contains embedded video metadata instead of requiring a separate metadata field, simplifying workflow connections and reducing input complexity
- Improved Integration: Better integration with image-based workflows since video metadata is accessed directly from the image object rather than requiring separate metadata input
- Streamlined Workflow: Reduces the number of inputs needed, making it easier to connect in workflows where image and metadata come from the same source
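The practical difference shows up in the step definition: v1 takes a separate metadata input, while v2 reads the same information from the image. A hedged sketch, written as Python dicts mirroring the JSON schema (selector names are illustrative):

```python
# v1: video metadata is supplied as a separate input.
step_v1 = {
    "name": "time_in_zone",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "$inputs.video_metadata",  # illustrative selector
    "detections": "$steps.byte_tracker.tracked_detections",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
}

# v2: the same metadata travels inside the image, so the extra input disappears.
step_v2 = {
    "name": "time_in_zone",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.byte_tracker.tracked_detections",
    "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
}
```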
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v2.
- inputs:
  SIFT Comparison, LMM, Twilio SMS Notification, Moondream2, Roboflow Custom Metadata, Detections Filter, Detections Classes Replacement, Detections Merge, Llama 3.2 Vision, Template Matching, Velocity, Detections List Roll-Up, Bounding Rectangle, Buffer, OpenAI, Email Notification, SAM 3, Detection Event Log, EasyOCR, Google Gemini, Roboflow Dataset Upload, Dimension Collapse, Twilio SMS/MMS Notification, Detections Transformation, Instance Segmentation Model, VLM as Detector, Single-Label Classification Model, Detection Offset, Clip Comparison, Google Vision OCR, CogVLM, Path Deviation, Stitch OCR Detections, Multi-Label Classification Model, VLM as Detector, Segment Anything 2 Model, LMM For Classification, Dynamic Zone, SAM 3, Seg Preview, Slack Notification, OpenAI, Local File Sink, Anthropic Claude, Google Gemini, Time in Zone, JSON Parser, Webhook Sink, Perspective Correction, Overlap Filter, Florence-2 Model, Detections Consensus, Identify Changes, Instance Segmentation Model, Object Detection Model, Anthropic Claude, VLM as Classifier, CSV Formatter, Byte Tracker, Size Measurement, VLM as Classifier, Path Deviation, Line Counter, Time in Zone, Google Gemini, Motion Detection, Florence-2 Model, Identify Outliers, Object Detection Model, SIFT Comparison, Detections Stitch, Byte Tracker, Keypoint Detection Model, Camera Focus, YOLO-World Model, Clip Comparison, OCR Model, Byte Tracker, OpenAI, SAM 3, Detections Stabilizer, Roboflow Dataset Upload, Dynamic Crop, Email Notification, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Time in Zone, Detections Combine, Anthropic Claude, OpenAI
- outputs:
  Icon Visualization, Perspective Correction, Overlap Filter, Blur Visualization, Florence-2 Model, Roboflow Custom Metadata, Detections Consensus, Pixelate Visualization, Detections Filter, Detections Classes Replacement, Color Visualization, Detections Merge, Circle Visualization, Velocity, Byte Tracker, Detections List Roll-Up, Size Measurement, Bounding Rectangle, Stability AI Inpainting, Path Deviation, Line Counter, Time in Zone, Halo Visualization, Detection Event Log, Florence-2 Model, Trace Visualization, Roboflow Dataset Upload, Detections Stitch, Detections Transformation, Line Counter, Byte Tracker, Detection Offset, Dot Visualization, Camera Focus, Crop Visualization, Path Deviation, Background Color Visualization, Bounding Box Visualization, Stitch OCR Detections, Byte Tracker, Detections Stabilizer, Segment Anything 2 Model, Roboflow Dataset Upload, Dynamic Crop, Triangle Visualization, Distance Measurement, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Dynamic Zone, Ellipse Visualization, Time in Zone, Detections Combine, Mask Visualization, Corner Visualization, Polygon Visualization, Time in Zone, Model Comparison Visualization, Label Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v2 has.
Bindings
- input
    - image (image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version simplifies input by embedding metadata in the image object rather than requiring a separate metadata field.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
    - zone (list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
    - triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    - remove_out_of_zone_detections (boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
    - reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
- output
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v2",
"image": "$inputs.image",
"detections": "$steps.object_detection_model.predictions",
"zone": [
[
100,
100
],
[
100,
200
],
[
300,
200
],
[
300,
100
]
],
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}
v1¶
Class: TimeInZoneBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.
How This Block Works¶
This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:
- Receives tracked detection predictions with track IDs, an image, video metadata, and a polygon zone definition
- Validates that detections have track IDs (tracker_id must be present):
- Requires detections to come from a tracking block (e.g., Byte Tracker)
- Each object must have a unique tracker_id that persists across frames
- Raises an error if tracker_id is missing
- Initializes or retrieves a polygon zone for the video:
- Creates a PolygonZone object from zone coordinates for each unique video
- Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
- Stores zone configuration per video using video_identifier
- Configures triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
- Initializes or retrieves time tracking state for the video:
- Maintains a dictionary tracking when each track_id entered the zone
- Stores entry timestamps per video using video_identifier
- Maintains separate tracking state for each video
- Calculates current timestamp for time measurement:
- For video files: Calculates timestamp as frame_number / fps
- For streamed video: Uses frame_timestamp from metadata
- Provides accurate time measurement for duration calculation
- Checks which detections are in the zone:
- Uses polygon zone trigger to test if each detection's anchor point is inside the zone
- The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
- Returns boolean for each detection indicating zone membership
- Updates time tracking for each tracked object:
- For objects entering the zone: Records entry timestamp if not already tracked
- For objects in the zone: Calculates time spent as current_timestamp - entry_timestamp
- For objects leaving the zone:
- If reset_out_of_zone_detections is True: Removes entry timestamp (resets to 0)
- If reset_out_of_zone_detections is False: Keeps entry timestamp (continues tracking)
- Handles out-of-zone detections:
- If remove_out_of_zone_detections is True: Filters out detections outside the zone from output
- If remove_out_of_zone_detections is False: Includes out-of-zone detections with time = 0
- Adds time_in_zone information to each detection:
- Attaches time_in_zone value (in seconds) to each detection as metadata
- Objects in zone: Time represents duration spent in zone
- Objects outside the zone: Time is 0 (if kept in the output) or the detection is removed entirely (if remove_out_of_zone_detections is True)
- Returns detections with time_in_zone information:
- Outputs tracked detections enhanced with time_in_zone metadata
- Filtered or unfiltered based on remove_out_of_zone_detections setting
- Maintains all original detection properties plus time tracking information
The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
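A small worked example of the two reset modes, using the current_timestamp - entry_timestamp formula above (numbers are illustrative):

```python
# A track enters the zone at t=2s, leaves at t=5s, re-enters at t=8s; the current frame is at t=10s.
current_timestamp = 10.0

# reset_out_of_zone_detections=True: the entry timestamp was cleared at t=5s
# and re-recorded at t=8s, so the reported duration restarts on re-entry.
time_with_reset = current_timestamp - 8.0      # 2.0 seconds

# reset_out_of_zone_detections=False: the original entry timestamp (t=2s) is kept,
# so the block keeps measuring from the first entry.
time_without_reset = current_timestamp - 2.0   # 8.0 seconds
```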
Common Use Cases¶
- Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
- Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
- Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
- Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
- Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
- Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows
Connecting to Other Blocks¶
This block receives tracked detections, image, video metadata, and zone coordinates, and produces timed_detections with time_in_zone metadata:
- After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
- After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
- Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
- Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
- Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
- Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows
Requirements¶
This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The block requires video metadata with frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| zone | List[Any] | Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video. | ✅ |
| triggering_anchor | str | Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'. | ✅ |
| remove_out_of_zone_detections | bool | If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity. | ✅ |
| reset_out_of_zone_detections | bool | If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Time in Zone in version v1.
- inputs:
  Icon Visualization, Image Preprocessing, LMM, Blur Visualization, Detections Classes Replacement, Color Visualization, Contrast Equalization, Detections Merge, Llama 3.2 Vision, Velocity, Reference Path Visualization, SIFT, Buffer, OpenAI, SAM 3, Halo Visualization, Trace Visualization, Dimension Collapse, Roboflow Dataset Upload, Twilio SMS/MMS Notification, Detections Transformation, VLM as Detector, Single-Label Classification Model, Image Convert Grayscale, Path Deviation, Background Color Visualization, Multi-Label Classification Model, Camera Calibration, VLM as Detector, Triangle Visualization, Dynamic Zone, Ellipse Visualization, Seg Preview, Slack Notification, Absolute Static Crop, Time in Zone, Google Gemini, Webhook Sink, Line Counter Visualization, Florence-2 Model, Detections Consensus, QR Code Generator, Object Detection Model, Anthropic Claude, Image Slicer, Byte Tracker, Image Contours, Stability AI Inpainting, VLM as Classifier, Line Counter, Google Gemini, Motion Detection, SIFT Comparison, Byte Tracker, Dot Visualization, Camera Focus, YOLO-World Model, Keypoint Detection Model, Bounding Box Visualization, OCR Model, Background Subtraction, OpenAI, SAM 3, Dynamic Crop, Keypoint Visualization, Email Notification, Image Threshold, Anthropic Claude, Corner Visualization, Pixelate Visualization, SIFT Comparison, Twilio SMS Notification, Moondream2, Morphological Transformation, Roboflow Custom Metadata, Stitch Images, Detections Filter, Circle Visualization, Stability AI Image Generation, Image Blur, Template Matching, Detections List Roll-Up, Bounding Rectangle, Email Notification, Detection Event Log, EasyOCR, Google Gemini, Instance Segmentation Model, Classification Label Visualization, Detection Offset, Clip Comparison, Google Vision OCR, CogVLM, Stitch OCR Detections, Segment Anything 2 Model, LMM For Classification, Text Display, SAM 3, Mask Visualization, OpenAI, Local File Sink, Anthropic Claude, Polygon Zone Visualization, Polygon Visualization, Model Comparison Visualization, Label Visualization, JSON Parser, Perspective Correction, Overlap Filter, Image Slicer, Identify Changes, Instance Segmentation Model, Stability AI Outpainting, VLM as Classifier, Grid Visualization, Relative Static Crop, CSV Formatter, Size Measurement, Path Deviation, Camera Focus, Time in Zone, Florence-2 Model, Identify Outliers, Object Detection Model, Detections Stitch, Crop Visualization, Clip Comparison, Byte Tracker, Detections Stabilizer, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Time in Zone, Detections Combine, Depth Estimation, OpenAI
- outputs:
  Icon Visualization, Perspective Correction, Overlap Filter, Blur Visualization, Florence-2 Model, Roboflow Custom Metadata, Detections Consensus, Pixelate Visualization, Detections Filter, Detections Classes Replacement, Color Visualization, Detections Merge, Circle Visualization, Velocity, Byte Tracker, Detections List Roll-Up, Size Measurement, Bounding Rectangle, Stability AI Inpainting, Path Deviation, Line Counter, Time in Zone, Halo Visualization, Detection Event Log, Florence-2 Model, Trace Visualization, Roboflow Dataset Upload, Detections Stitch, Detections Transformation, Line Counter, Byte Tracker, Detection Offset, Dot Visualization, Camera Focus, Crop Visualization, Path Deviation, Background Color Visualization, Bounding Box Visualization, Stitch OCR Detections, Byte Tracker, Detections Stabilizer, Segment Anything 2 Model, Roboflow Dataset Upload, Dynamic Crop, Triangle Visualization, Distance Measurement, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Dynamic Zone, Ellipse Visualization, Time in Zone, Detections Combine, Mask Visualization, Corner Visualization, Polygon Visualization, Time in Zone, Model Comparison Visualization, Label Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Time in Zone in version v1 has.
Bindings
- input
    - image (image): Input image for the current video frame. Used for zone visualization and reference. The block uses the image dimensions to validate zone coordinates. The image metadata may be used for time calculation if frame timestamps are needed.
    - metadata (video_metadata): Video metadata containing frame rate (fps), frame number, frame timestamp, video identifier, and video source information required for time calculation and state management. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. The metadata must include valid fps for video files or frame_timestamp for streams to enable accurate time measurement.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
    - zone (list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
    - triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    - remove_out_of_zone_detections (boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
    - reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
- output
    - timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
Example JSON definition of step Time in Zone in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/time_in_zone@v1",
"image": "$inputs.image",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"zone": "$inputs.zones",
"triggering_anchor": "CENTER",
"remove_out_of_zone_detections": true,
"reset_out_of_zone_detections": true
}