Time in Zone

v3

Class: TimeInZoneBlockV3 (there are multiple versions of this block)

Source: inference.core.workflows.core_steps.analytics.time_in_zone.v3.TimeInZoneBlockV3

Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning

Calculate and track the time spent by tracked objects within one or more defined polygon zones. The block measures the duration of object presence in specific areas (with multiple zones, an object counts as 'in zone' if it is present in any of them), can filter detections based on zone membership, can reset time tracking when objects leave all zones, and enables zone-based analytics, dwell time analysis, and presence monitoring workflows.

How This Block Works

This block measures how long each tracked object has been inside one or more defined polygon zones by tracking entry and exit times for each unique track ID. The block supports multiple zones, treating objects as 'in zone' if they are present in any of the defined zones. The block:

  1. Receives tracked detection predictions with track IDs, an image with embedded video metadata, and polygon zone definition(s) (a single zone or a list of zones)
  2. Extracts video metadata from the image:
    • Accesses video_metadata from the WorkflowImageData object
    • Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
    • Uses video_identifier to maintain separate tracking state for different videos
  3. Validates that detections have track IDs (tracker_id must be present):
    • Requires detections to come from a tracking block (e.g., Byte Tracker)
    • Each object must have a unique tracker_id that persists across frames
    • Raises an error if tracker_id is missing
  4. Normalizes the zone input to a list of polygons:
    • Accepts a single polygon zone or a list of polygon zones
    • Automatically wraps a single polygon in a list for consistent processing
    • Validates nesting depth and coordinate format for all zones
  5. Initializes or retrieves polygon zones for the video:
    • Creates a list of PolygonZone objects from zone coordinates for each unique zone combination
    • Validates zone coordinates (each zone must be a list of at least 3 points, each with 2 coordinates)
    • Stores zone configurations in an OrderedDict zone cache (max 100 zone combinations)
    • Uses a zone key combining video_identifier and zone coordinates for cache lookup
    • Evicts the oldest entry (FIFO) when the cache exceeds 100 zone combinations
    • Configures the triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
  6. Initializes or retrieves time tracking state for the video:
    • Maintains a dictionary recording when each track_id entered any zone
    • Stores entry timestamps per video using video_identifier, keeping tracking state separate for each video
  7. Calculates the current timestamp for time measurement:
    • For video files: timestamp = frame_number / fps
    • For streamed video: uses frame_timestamp from the metadata
  8. Checks which detections are in any zone:
    • Tests each detection against all polygon zones using the polygon zone triggers
    • Builds a zones x detections membership matrix and reduces it with a logical OR: an object is 'in zone' if it is in ANY of the zones
    • The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
    • Returns a boolean for each detection indicating membership in any zone
  9. Updates time tracking for each tracked object (see the sketch below):
    • Objects entering any zone: records an entry timestamp if not already tracked
    • Objects in any zone: time spent = current_timestamp - entry_timestamp
    • Objects leaving all zones: if reset_out_of_zone_detections is True, removes the entry timestamp (resets to 0); if False, keeps it (tracking continues)
  10. Handles out-of-zone detections:
    • If remove_out_of_zone_detections is True: filters detections outside all zones out of the output
    • If remove_out_of_zone_detections is False: includes out-of-zone detections with time_in_zone = 0
  11. Adds time_in_zone information to each detection:
    • Attaches a time_in_zone value (in seconds) to each detection as metadata
    • Objects in any zone: the value is the duration spent in any zone
    • Objects outside all zones: the value is 0 (if kept in the output) or absent (if removed)
  12. Returns detections with time_in_zone information:
    • Outputs the tracked detections enhanced with time_in_zone metadata
    • Filtered or unfiltered based on the remove_out_of_zone_detections setting
    • Preserves all original detection properties plus the time tracking information

The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in any zone over multiple frames. Time is measured from when an object first enters any zone (based on its track_id) until the current frame, providing real-time duration tracking. When multiple zones are provided, objects are considered 'in zone' if their anchor point is inside any of the zones, allowing tracking across multiple areas as a single combined zone. The zone cache efficiently manages multiple zone configurations per video using FIFO eviction to limit memory usage. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
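
The per-frame update described above can be summarized in a short sketch. This is illustrative Python, not the block's actual implementation: the helper name update_time_in_zone and the entry_timestamps dictionary are hypothetical, and the zone objects are assumed to expose a trigger(detections) method returning one boolean per detection, as supervision's PolygonZone does.

import numpy as np

entry_timestamps = {}  # track_id -> timestamp of first entry into any zone (hypothetical state)

def update_time_in_zone(
    detections,              # sv.Detections with tracker_id set by a tracking block
    zones,                   # list of polygon-zone objects with a trigger() method
    fps,                     # frames per second (video files)
    frame_number,            # current frame index (video files)
    frame_timestamp=None,    # datetime of the frame (streamed video)
    remove_out_of_zone_detections=True,
    reset_out_of_zone_detections=True,
):
    # Video files derive time from the frame index; streams use the frame timestamp.
    now = frame_number / fps if frame_timestamp is None else frame_timestamp.timestamp()

    # zones x detections membership, reduced with a logical OR:
    # an object is "in zone" if its anchor point falls inside ANY zone.
    in_any_zone = np.logical_or.reduce([zone.trigger(detections) for zone in zones])

    times = []
    for tracker_id, inside in zip(detections.tracker_id, in_any_zone):
        if inside:
            entry_timestamps.setdefault(tracker_id, now)   # record first entry
            times.append(now - entry_timestamps[tracker_id])
        else:
            if reset_out_of_zone_detections:
                entry_timestamps.pop(tracker_id, None)     # reset on exit from all zones
            times.append(0.0)

    detections.data["time_in_zone"] = np.array(times)
    # Either drop out-of-zone detections or keep them with time_in_zone = 0.
    return detections[in_any_zone] if remove_out_of_zone_detections else detections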

Common Use Cases

  • Multi-Zone Dwell Time Analysis: Measure how long objects remain in any of multiple areas for behavior analysis (e.g., measure customer time in any store section, track time spent in multiple parking areas, analyze time in overlapping zones), enabling multi-zone dwell time analytics workflows
  • Zone-Based Monitoring: Monitor object presence across multiple defined areas for security and safety (e.g., detect loitering in any restricted area, monitor time in multiple danger zones, track presence across secure zones), enabling multi-zone monitoring workflows
  • Retail Analytics: Track customer time across multiple store sections for retail insights (e.g., measure time in any product aisle, analyze shopping patterns across departments, track engagement in multiple zones), enabling multi-zone retail analytics workflows
  • Occupancy Management: Measure time objects spend in any of multiple spaces for space utilization (e.g., track vehicle parking duration in multiple lots, measure table occupancy across zones, analyze space usage in multiple areas), enabling multi-zone occupancy management workflows
  • Safety Compliance: Monitor time violations across multiple restricted or time-limited zones (e.g., detect extended stays in any hazardous area, monitor time limit violations across zones, track safety compliance in multiple areas), enabling multi-zone safety monitoring workflows
  • Traffic Analysis: Measure time vehicles spend in any of multiple traffic zones or intersections (e.g., track time at multiple intersections, measure queue waiting time across zones, analyze traffic flow in multiple areas), enabling multi-zone traffic analytics workflows

Connecting to Other Blocks

This block receives an image with embedded video metadata, tracked detections, and zone coordinates (single or multiple zones), and produces timed_detections with time_in_zone metadata:

  • After Byte Tracker blocks to measure time for tracked objects across multiple zones (e.g., track time in multiple zones for tracked objects, measure dwell time with consistent IDs across areas, analyze tracked object presence in multiple zones), enabling tracking-to-time workflows
  • After zone definition blocks to apply time tracking to multiple defined areas (e.g., measure time across multiple polygon zones, track duration in custom multi-zone configurations, analyze zone-based presence across areas), enabling zone-to-time workflows
  • Before logic blocks like Continue If to make decisions based on time in any zone (e.g., continue if time exceeds threshold in any zone, filter based on dwell time across zones, trigger actions on time violations in multiple areas), enabling time-based decision workflows
  • Before analysis blocks to analyze time-based metrics across multiple zones (e.g., analyze dwell time patterns across zones, process time-in-zone data for multiple areas, work with duration metrics across zones), enabling time analysis workflows
  • Before notification blocks to alert on time violations or thresholds in any zone (e.g., alert on extended stays in any zone, notify on time limit violations across areas, trigger time-based alerts for multiple zones), enabling time-based notification workflows
  • Before data storage blocks to record time metrics across multiple zones (e.g., store dwell time data for multiple areas, log time-in-zone metrics across zones, record duration measurements for multiple zones), enabling time metrics logging workflows
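
As a concrete illustration of such a chain, the sketch below wires a detection model into Byte Tracker and then into Time in Zone. It is a hypothetical specification: only the roboflow_core/time_in_zone@v3 identifier comes from this page, while the detector and tracker block versions, step names, and model_id are assumptions to adjust for your installation.

# Hypothetical workflow specification: detector -> Byte Tracker -> Time in Zone.
# Block versions other than time_in_zone@v3 are assumptions; check your installation.
WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed version
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model
        },
        {
            "type": "roboflow_core/byte_tracker@v3",  # assumed version
            "name": "tracker",
            "image": "$inputs.image",
            "detections": "$steps.detector.predictions",
        },
        {
            "type": "roboflow_core/time_in_zone@v3",
            "name": "time_in_zone",
            "image": "$inputs.image",
            "detections": "$steps.tracker.tracked_detections",
            "zone": [[100, 100], [100, 200], [300, 200], [300, 100]],
            "triggering_anchor": "CENTER",
            "remove_out_of_zone_detections": True,
            "reset_out_of_zone_detections": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "timed_detections",
            "selector": "$steps.time_in_zone.timed_detections",
        }
    ],
}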

Version Differences

Enhanced from v2:

  • Multiple Zone Support: Supports tracking time across multiple polygon zones simultaneously, where objects are considered 'in zone' if they're present in any of the defined zones, enabling multi-zone time tracking and analysis
  • Flexible Zone Input: Accepts either a single polygon zone or a list of polygon zones, automatically normalizing the input to handle both formats seamlessly
  • Zone Cache Management: Implements a zone cache with FIFO eviction (max 100 zone combinations) to efficiently manage multiple zone configurations per video while limiting memory usage
  • Combined Zone Logic: Uses logical OR operation across all zones, allowing tracking across multiple areas as a unified zone system for comprehensive presence monitoring
  • Enhanced Zone Key System: Uses combined zone keys (video_identifier + zone coordinates) for cache lookup, enabling efficient storage and retrieval of zone configurations
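
The cache behaviour can be pictured in a few lines of Python. This is a minimal sketch assuming a 100-entry cap and a key built from the video identifier plus the zone coordinates; the real cache is internal to the block and its names differ.

from collections import OrderedDict

MAX_CACHED_ZONE_COMBINATIONS = 100  # cap described above
zone_cache = OrderedDict()          # (video_identifier, zone key) -> list of zone objects

def get_zones(video_identifier, zone_coordinates, build_zones):
    # The key combines the video and the exact zone coordinates.
    key = (video_identifier, repr(zone_coordinates))
    if key not in zone_cache:
        zone_cache[key] = build_zones(zone_coordinates)  # construct PolygonZone objects
        if len(zone_cache) > MAX_CACHED_ZONE_COMBINATIONS:
            zone_cache.popitem(last=False)  # FIFO eviction: drop the oldest entry
    return zone_cache[key]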

Requirements

This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone can be a single polygon or a list of polygons, where each polygon must be defined as a list of at least 3 points, with each point being a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs. When multiple zones are provided, objects are considered 'in zone' if they're present in any of the zones.
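
The "single polygon or list of polygons" rule amounts to a small normalization step. The helper below is purely illustrative (the block's real validation is stricter and raises workflow errors rather than assertions):

# Sketch of zone-input normalization: wrap a single polygon so downstream
# code always sees a list of polygons. Hypothetical helper, not the block's code.
def normalize_zones(zone):
    # A single polygon is a list of [x, y] points (nesting depth 2);
    # a list of polygons nests one level deeper (depth 3).
    is_single_polygon = all(
        isinstance(point, (list, tuple))
        and len(point) == 2
        and all(isinstance(coordinate, (int, float)) for coordinate in point)
        for point in zone
    )
    zones = [zone] if is_single_polygon else zone
    for polygon in zones:
        assert len(polygon) >= 3, "each zone needs at least 3 points"
    return zones

normalize_zones([(100, 100), (100, 200), (300, 200)])                         # one zone
normalize_zones([[(0, 0), (0, 9), (9, 9)], [(20, 20), (20, 30), (30, 30)]])   # two zones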

Type identifier

Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v3 to add the block as a step in your workflow.

Properties

  • name (str): Enter a unique identifier for this step.
  • zone (List[Any]): Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for a single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction.
  • triggering_anchor (str): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
  • remove_out_of_zone_detections (bool): If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity.
  • reset_out_of_zone_detections (bool): If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration.

Properties that also appear under the input Bindings below can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Time in Zone in version v3.

Input and Output Bindings

The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v3 has.

Bindings
  • input

    • image (image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version supports multiple zones per video with efficient zone cache management.
    • detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered any of the zones. Objects are considered 'in zone' if their anchor point is inside any of the provided zones. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside any zone are included in the output.
    • zone (list_of_values): Polygon zone coordinates defining one or more areas for time measurement. Can be a single polygon zone or a list of polygon zones. Each zone must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example for a single zone: [(100, 100), (100, 200), (300, 200), (300, 100)]. Example for multiple zones: [[(100, 100), (100, 200), (300, 200), (300, 100)], [(400, 400), (400, 500), (600, 500), (600, 400)]]. Objects are considered 'in zone' if their triggering_anchor point is inside ANY of the provided zones. Zone coordinates are validated and PolygonZone objects are created for each zone. Zone configurations are cached (max 100 combinations) with FIFO eviction.
    • triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon(s). When multiple zones are provided, the object is considered 'in zone' if its anchor point is inside ANY of the zones. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    • remove_out_of_zone_detections (boolean): If True (default), detections found outside all zones are filtered out and not included in the output. Only detections inside at least one zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside all zones. Use True to focus analysis only on objects in any zone, or False to maintain all detections with zone status. When multiple zones are provided, objects are considered 'in zone' if present in any zone. Default is True for cleaner output focused on zone activity.
    • reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves all zones, its time tracking is reset (entry timestamp is cleared). When the object re-enters any zone, time tracking starts from 0 again. If False, time tracking continues even after leaving all zones, and re-entry maintains cumulative time. Use True to measure current continuous time in any zone (resets on exit from all zones), or False to measure cumulative time across multiple entries. When multiple zones are provided, time is reset only when the object leaves all zones. Default is True for measuring continuous presence duration.
  • output

    • timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): The input detections enhanced with time_in_zone metadata, returned as an sv.Detections(...) object containing bounding boxes (object_detection_prediction) or bounding boxes and segmentation masks (instance_segmentation_prediction).
Example JSON definition of step Time in Zone in version v3
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [
            100,
            100
        ],
        [
            100,
            200
        ],
        [
            300,
            200
        ],
        [
            300,
            100
        ]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}
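
A workflow containing this step is normally executed frame by frame over a video source. A minimal sketch using InferencePipeline follows; treat it as an outline, since init_with_workflow parameter names can differ between inference versions, and WORKFLOW refers to a specification such as the one sketched earlier on this page.

# Sketch: running a workflow containing time_in_zone@v3 over a video file.
# init_with_workflow arguments may differ across inference versions.
from inference import InferencePipeline

def on_prediction(result, video_frame):
    detections = result["timed_detections"]
    # time_in_zone (seconds per tracked object) travels in the detections' data
    print(detections.tracker_id, detections.data.get("time_in_zone"))

pipeline = InferencePipeline.init_with_workflow(
    video_reference="input.mp4",      # file path, RTSP URL, or camera index
    workflow_specification=WORKFLOW,  # inline specification (see the sketch above)
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()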

v2

Class: TimeInZoneBlockV2 (there are multiple versions of this block)

Source: inference.core.workflows.core_steps.analytics.time_in_zone.v2.TimeInZoneBlockV2

Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning

Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.

How This Block Works

This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:

  1. Receives tracked detection predictions with track IDs, an image with embedded video metadata, and a polygon zone definition
  2. Extracts video metadata from the image:
    • Accesses video_metadata from the WorkflowImageData object
    • Extracts fps, frame_number, frame_timestamp, video_identifier, and video source information
    • Uses video_identifier to maintain separate tracking state for different videos
  3. Validates that detections have track IDs (tracker_id must be present):
    • Requires detections to come from a tracking block (e.g., Byte Tracker)
    • Each object must have a unique tracker_id that persists across frames
    • Raises an error if tracker_id is missing
  4. Initializes or retrieves a polygon zone for the video:
    • Creates a PolygonZone object from zone coordinates for each unique video
    • Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
    • Stores the zone configuration per video using video_identifier
    • Configures the triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
  5. Initializes or retrieves time tracking state for the video:
    • Maintains a dictionary recording when each track_id entered the zone
    • Stores entry timestamps per video using video_identifier, keeping tracking state separate for each video
  6. Calculates the current timestamp for time measurement:
    • For video files: timestamp = frame_number / fps
    • For streamed video: uses frame_timestamp from the metadata
  7. Checks which detections are in the zone (see the sketch below):
    • Uses the polygon zone trigger to test whether each detection's anchor point is inside the zone
    • The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
    • Returns a boolean for each detection indicating zone membership
  8. Updates time tracking for each tracked object:
    • Objects entering the zone: records an entry timestamp if not already tracked
    • Objects in the zone: time spent = current_timestamp - entry_timestamp
    • Objects leaving the zone: if reset_out_of_zone_detections is True, removes the entry timestamp (resets to 0); if False, keeps it (tracking continues)
  9. Handles out-of-zone detections:
    • If remove_out_of_zone_detections is True: filters detections outside the zone out of the output
    • If remove_out_of_zone_detections is False: includes out-of-zone detections with time_in_zone = 0
  10. Adds time_in_zone information to each detection:
    • Attaches a time_in_zone value (in seconds) to each detection as metadata
    • Objects in the zone: the value is the duration spent in the zone
    • Objects outside the zone: the value is 0 (if kept in the output) or absent (if removed)
  11. Returns detections with time_in_zone information:
    • Outputs the tracked detections enhanced with time_in_zone metadata
    • Filtered or unfiltered based on the remove_out_of_zone_detections setting
    • Preserves all original detection properties plus the time tracking information

The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
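
The zone-membership test itself comes down to a polygon trigger. A sketch using supervision's PolygonZone is shown below; constructor arguments have changed across supervision releases, so this assumes a recent version where triggering_anchors is accepted.

# Sketch of the per-frame zone-membership test, using supervision's PolygonZone.
# Assumes a recent supervision release (older ones required frame_resolution_wh).
import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [100, 200], [300, 200], [300, 100]])
zone = sv.PolygonZone(polygon=polygon, triggering_anchors=(sv.Position.CENTER,))

detections = sv.Detections(
    xyxy=np.array(
        [
            [150.0, 120.0, 250.0, 180.0],  # center (200, 150): inside the zone
            [400.0, 400.0, 500.0, 500.0],  # center (450, 450): outside the zone
        ]
    ),
    tracker_id=np.array([1, 2]),
)

in_zone = zone.trigger(detections)  # -> array([ True, False])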

Common Use Cases

  • Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
  • Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
  • Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
  • Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
  • Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
  • Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows

Connecting to Other Blocks

This block receives an image with embedded video metadata, tracked detections, and zone coordinates, and produces timed_detections with time_in_zone metadata:

  • After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
  • After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
  • Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
  • Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
  • Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
  • Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows

Version Differences

Enhanced from v1:

  • Simplified Input: Uses image input that contains embedded video metadata instead of requiring a separate metadata field, simplifying workflow connections and reducing input complexity
  • Improved Integration: Better integration with image-based workflows since video metadata is accessed directly from the image object rather than requiring separate metadata input
  • Streamlined Workflow: Reduces the number of inputs needed, making it easier to connect in workflows where image and metadata come from the same source

Requirements

This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The image's video_metadata should include frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v2 to add the block as a step in your workflow.

Properties

  • name (str): Enter a unique identifier for this step.
  • zone (List[Any]): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
  • triggering_anchor (str): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
  • remove_out_of_zone_detections (bool): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
  • reset_out_of_zone_detections (bool): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.

Properties that also appear under the input Bindings below can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Time in Zone in version v2.

Input and Output Bindings

The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v2 has.

Bindings
  • input

    • image (image): Input image for the current video frame containing embedded video metadata (fps, frame_number, frame_timestamp, video_identifier, video source) required for time calculation and state management. The block extracts video_metadata from the WorkflowImageData object. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. Used for zone visualization and reference. The image dimensions are used to validate zone coordinates. This version simplifies input by embedding metadata in the image object rather than requiring a separate metadata field.
    • detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
    • zone (list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [(100, 100), (100, 200), (300, 200), (300, 100)] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
    • triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    • remove_out_of_zone_detections (boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
    • reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
  • output

    • timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): The input detections enhanced with time_in_zone metadata, returned as an sv.Detections(...) object containing bounding boxes (object_detection_prediction) or bounding boxes and segmentation masks (instance_segmentation_prediction).
Example JSON definition of step Time in Zone in version v2
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v2",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "zone": [
        [
            100,
            100
        ],
        [
            100,
            200
        ],
        [
            300,
            200
        ],
        [
            300,
            100
        ]
    ],
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}

v1

Class: TimeInZoneBlockV1 (there are multiple versions of this block)

Source: inference.core.workflows.core_steps.analytics.time_in_zone.v1.TimeInZoneBlockV1

Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning

Calculate and track the time spent by tracked objects within a defined polygon zone, measure duration of object presence in specific areas, filter detections based on zone membership, reset time tracking when objects leave zones, and enable zone-based analytics, dwell time analysis, and presence monitoring workflows.

How This Block Works

This block measures how long each tracked object has been inside a defined polygon zone by tracking entry and exit times for each unique track ID. The block:

  1. Receives tracked detection predictions with track IDs, an image, video metadata, and a polygon zone definition
  2. Validates that detections have track IDs (tracker_id must be present):
    • Requires detections to come from a tracking block (e.g., Byte Tracker)
    • Each object must have a unique tracker_id that persists across frames
    • Raises an error if tracker_id is missing
  3. Initializes or retrieves a polygon zone for the video:
    • Creates a PolygonZone object from zone coordinates for each unique video
    • Validates zone coordinates (must be a list of at least 3 points, each with 2 coordinates)
    • Stores the zone configuration per video using video_identifier
    • Configures the triggering anchor point (e.g., CENTER, BOTTOM_CENTER) for zone detection
  4. Initializes or retrieves time tracking state for the video:
    • Maintains a dictionary recording when each track_id entered the zone
    • Stores entry timestamps per video using video_identifier, keeping tracking state separate for each video
  5. Calculates the current timestamp for time measurement:
    • For video files: timestamp = frame_number / fps
    • For streamed video: uses frame_timestamp from the metadata
  6. Checks which detections are in the zone:
    • Uses the polygon zone trigger to test whether each detection's anchor point is inside the zone
    • The triggering_anchor determines which point on the bounding box is checked (CENTER, BOTTOM_CENTER, etc.)
    • Returns a boolean for each detection indicating zone membership
  7. Updates time tracking for each tracked object (the reset semantics are illustrated below):
    • Objects entering the zone: records an entry timestamp if not already tracked
    • Objects in the zone: time spent = current_timestamp - entry_timestamp
    • Objects leaving the zone: if reset_out_of_zone_detections is True, removes the entry timestamp (resets to 0); if False, keeps it (tracking continues)
  8. Handles out-of-zone detections:
    • If remove_out_of_zone_detections is True: filters detections outside the zone out of the output
    • If remove_out_of_zone_detections is False: includes out-of-zone detections with time_in_zone = 0
  9. Adds time_in_zone information to each detection:
    • Attaches a time_in_zone value (in seconds) to each detection as metadata
    • Objects in the zone: the value is the duration spent in the zone
    • Objects outside the zone: the value is 0 (if kept in the output) or absent (if removed)
  10. Returns detections with time_in_zone information:
    • Outputs the tracked detections enhanced with time_in_zone metadata
    • Filtered or unfiltered based on the remove_out_of_zone_detections setting
    • Preserves all original detection properties plus the time tracking information

The block maintains persistent tracking state across frames, allowing accurate cumulative time measurement for objects that remain in the zone over multiple frames. Time is measured from when an object first enters the zone (based on its track_id) until the current frame, providing real-time duration tracking. The zone is defined as a polygon with multiple points, allowing flexible area definitions. The triggering anchor determines which part of the bounding box is used for zone detection, enabling different zone entry/exit behaviors based on object position.
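
The practical difference between the two reset_out_of_zone_detections modes is easiest to see on a toy timeline. The sketch below simulates a single track that is in the zone for two seconds, out for one, then back in; the simulate helper is purely illustrative.

# Toy illustration of the reset_out_of_zone_detections semantics described above,
# for a single track that is in the zone for t=0..2, out for t=2..3, in for t=3..6.
def simulate(reset_out_of_zone: bool):
    entry = None  # entry timestamp kept for this track_id
    report = {}
    for t in range(7):
        inside = t < 2 or t >= 3
        if inside:
            if entry is None:
                entry = t
            report[t] = t - entry
        else:
            if reset_out_of_zone:
                entry = None  # reset: re-entry starts from 0 again
            report[t] = 0
    return report

print(simulate(True))   # {0: 0, 1: 1, 2: 0, 3: 0, 4: 1, 5: 2, 6: 3}
# With reset disabled the entry timestamp is kept, so re-entry reports time
# measured from the original entry (including the second spent outside):
print(simulate(False))  # {0: 0, 1: 1, 2: 0, 3: 3, 4: 4, 5: 5, 6: 6}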

Common Use Cases

  • Dwell Time Analysis: Measure how long objects remain in specific areas for behavior analysis (e.g., measure customer dwell time in store sections, track time spent in parking spaces, analyze time in waiting areas), enabling dwell time analytics workflows
  • Zone-Based Monitoring: Monitor object presence in defined areas for security and safety (e.g., detect loitering in restricted areas, monitor time in danger zones, track presence in secure zones), enabling zone monitoring workflows
  • Retail Analytics: Track customer time in different store sections for retail insights (e.g., measure time in product aisles, analyze shopping patterns, track department engagement), enabling retail analytics workflows
  • Occupancy Management: Measure time objects spend in spaces for space utilization (e.g., track vehicle parking duration, measure table occupancy time, analyze space usage patterns), enabling occupancy management workflows
  • Safety Compliance: Monitor time violations in restricted or time-limited zones (e.g., detect extended stays in hazardous areas, monitor time limit violations, track safety compliance), enabling safety monitoring workflows
  • Traffic Analysis: Measure time vehicles spend in traffic zones or intersections (e.g., track time at intersections, measure queue waiting time, analyze traffic flow patterns), enabling traffic analytics workflows

Connecting to Other Blocks

This block receives tracked detections, image, video metadata, and zone coordinates, and produces timed_detections with time_in_zone metadata:

  • After Byte Tracker blocks to measure time for tracked objects (e.g., track time in zones for tracked objects, measure dwell time with consistent IDs, analyze tracked object presence), enabling tracking-to-time workflows
  • After zone definition blocks to apply time tracking to defined areas (e.g., measure time in polygon zones, track duration in custom zones, analyze zone-based presence), enabling zone-to-time workflows
  • Before logic blocks like Continue If to make decisions based on time in zone (e.g., continue if time exceeds threshold, filter based on dwell time, trigger actions on time violations), enabling time-based decision workflows
  • Before analysis blocks to analyze time-based metrics (e.g., analyze dwell time patterns, process time-in-zone data, work with duration metrics), enabling time analysis workflows
  • Before notification blocks to alert on time violations or thresholds (e.g., alert on extended stays, notify on time limit violations, trigger time-based alerts), enabling time-based notification workflows
  • Before data storage blocks to record time metrics (e.g., store dwell time data, log time-in-zone metrics, record duration measurements), enabling time metrics logging workflows

Requirements

This block requires tracked detections with tracker_id information (detections must come from a tracking block like Byte Tracker). The zone must be defined as a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates (x, y). The block requires video metadata with frame rate (fps) for video files or frame timestamps for streamed video to calculate accurate time measurements. The block maintains persistent tracking state across frames for each video, so it should be used in video workflows where frames are processed sequentially. For accurate time measurement, detections should be provided consistently across frames with valid track IDs.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/time_in_zone@v1 to add the block as a step in your workflow.

Properties

  • name (str): Enter a unique identifier for this step.
  • zone (List[Any]): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
  • triggering_anchor (str): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
  • remove_out_of_zone_detections (bool): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
  • reset_out_of_zone_detections (bool): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.

Properties that also appear under the input Bindings below can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Time in Zone in version v1.

Input and Output Bindings

The available connections depend on its binding kinds. Check what binding kinds Time in Zone in version v1 has.

Bindings
  • input

    • image (image): Input image for the current video frame. Used for zone visualization and reference. The block uses the image dimensions to validate zone coordinates. The image metadata may be used for time calculation if frame timestamps are needed.
    • metadata (video_metadata): Video metadata containing frame rate (fps), frame number, frame timestamp, video identifier, and video source information required for time calculation and state management. The fps and frame_number are used for video files to calculate timestamps (timestamp = frame_number / fps). For streamed video, frame_timestamp is used directly. The video_identifier is used to maintain separate tracking state and zone configurations for different videos. The metadata must include valid fps for video files or frame_timestamp for streams to enable accurate time measurement.
    • detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detection predictions (object detection or instance segmentation) with tracker_id information. Detections must come from a tracking block (e.g., Byte Tracker) that has assigned unique tracker_id values that persist across frames. Each detection must have a tracker_id to enable time tracking. The block calculates time_in_zone for each tracked object based on when its track_id first entered the zone. The output will include the same detections enhanced with time_in_zone metadata (duration in seconds). If remove_out_of_zone_detections is True, only detections inside the zone are included in the output.
    • zone (list_of_values): Polygon zone coordinates defining the area for time measurement. Must be a list of at least 3 points, where each point is a list or tuple of exactly 2 coordinates [x, y] or (x, y). Coordinates should be in pixel space matching the image dimensions. Example: [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] for a quadrilateral zone. The zone defines the polygon area where time tracking occurs. Objects are considered 'in zone' when their triggering_anchor point is inside this polygon. Zone coordinates are validated and a PolygonZone object is created for each video.
    • triggering_anchor (string): Point on the detection bounding box that must be inside the zone to consider the object 'in zone'. Options include: 'CENTER' (default, center of bounding box), 'BOTTOM_CENTER' (bottom center point), 'TOP_CENTER' (top center point), 'CENTER_LEFT' (center left point), 'CENTER_RIGHT' (center right point), and other Position enum values. The triggering anchor determines which part of the object's bounding box is checked against the zone polygon. Use CENTER for standard zone detection, BOTTOM_CENTER for ground-level zones (e.g., tracking feet/vehicle base), or other anchors based on detection needs. Default is 'CENTER'.
    • remove_out_of_zone_detections (boolean): If True (default), detections found outside the zone are filtered out and not included in the output. Only detections inside the zone are returned. If False, all detections are included in the output, with time_in_zone = 0 for objects outside the zone. Use True to focus analysis only on objects in the zone, or False to maintain all detections with zone status. Default is True for cleaner output focused on zone activity.
    • reset_out_of_zone_detections (boolean): If True (default), when a tracked object leaves the zone, its time tracking is reset (entry timestamp is cleared). When the object re-enters the zone, time tracking starts from 0 again. If False, time tracking continues even after leaving the zone, and re-entry maintains cumulative time. Use True to measure current continuous time in zone (resets on exit), or False to measure cumulative time across multiple entries. Default is True for measuring continuous presence duration.
  • output

    • timed_detections (Union[object_detection_prediction, instance_segmentation_prediction]): The input detections enhanced with time_in_zone metadata, returned as an sv.Detections(...) object containing bounding boxes (object_detection_prediction) or bounding boxes and segmentation masks (instance_segmentation_prediction).
Example JSON definition of step Time in Zone in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/time_in_zone@v1",
    "image": "$inputs.image",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "zone": "$inputs.zones",
    "triggering_anchor": "CENTER",
    "remove_out_of_zone_detections": true,
    "reset_out_of_zone_detections": true
}