Detection Event Log¶
Class: DetectionEventLogBlockV1
Source: inference.core.workflows.core_steps.analytics.detection_event_log.v1.DetectionEventLogBlockV1
This block maintains a log of detection events from tracked objects. It records when each object was first seen, its class, and the last time it was seen. Objects must be seen for a minimum number of frames (`frame_threshold`) before being logged. Stale events (not seen for `stale_frames` frames) are removed during periodic cleanup (every `flush_interval` frames).
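The bookkeeping described above can be sketched as follows. This is an illustrative model of the block's behavior, not its actual implementation; the class and method names (`EventLog`, `observe`) are invented for the example.

```python
class EventLog:
    """Illustrative sketch of the detection-event-log bookkeeping."""

    def __init__(self, frame_threshold=5, flush_interval=30, stale_frames=150):
        self.frame_threshold = frame_threshold
        self.flush_interval = flush_interval
        self.stale_frames = stale_frames
        self.pending = {}  # tracker_id -> event record not yet past frame_threshold
        self.logged = {}   # tracker_id -> event record that passed frame_threshold

    def observe(self, frame_number, tracked):
        # tracked: iterable of (tracker_id, class_name) pairs seen in this frame
        for tracker_id, class_name in tracked:
            entry = self.pending.setdefault(
                tracker_id,
                {"class": class_name, "first_seen": frame_number, "frames_seen": 0},
            )
            entry["frames_seen"] += 1
            entry["last_seen"] = frame_number
            # Promote to the log once seen for at least frame_threshold frames.
            if entry["frames_seen"] >= self.frame_threshold:
                self.logged[tracker_id] = entry
        # Periodic cleanup of stale events.
        if frame_number % self.flush_interval == 0:
            self._flush(frame_number)

    def _flush(self, frame_number):
        # Drop events not seen for more than stale_frames frames.
        for store in (self.pending, self.logged):
            stale = [tid for tid, e in store.items()
                     if frame_number - e["last_seen"] > self.stale_frames]
            for tid in stale:
                del store[tid]
```

The two-stage structure (pending vs. logged) mirrors the `total_pending` and `total_logged` outputs listed below.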
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/detection_event_log@v1` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `frame_threshold` | `int` | Number of frames an object must be seen before being logged. | ✅ |
| `flush_interval` | `int` | How often (in frames) to run the cleanup operation for stale events. | ✅ |
| `stale_frames` | `int` | Remove events that haven't been seen for this many frames. | ✅ |
| `reference_timestamp` | `float` | Unix timestamp when the video started. When provided, absolute timestamps (`first_seen_timestamp`, `last_seen_timestamp`) are included in the output, calculated as relative time + `reference_timestamp`. If not provided and the video metadata contains `frame_timestamp`, the reference timestamp is extracted automatically from the first frame. | ✅ |
| `fallback_fps` | `float` | Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Event Log in version v1.
- inputs:
- inputs: Byte Tracker, Google Vision OCR, Cosine Similarity, Detections Filter, Instance Segmentation Model, Path Deviation, SAM 3, VLM As Detector, EasyOCR, Bounding Rectangle, Time in Zone, Moondream2, Line Counter, Camera Focus, Detections Stabilizer, OC-SORT Tracker, Velocity, Detection Offset, Seg Preview, Gaze Detection, SORT Tracker, Detections Consensus, Mask Area Measurement, Dynamic Zone, Perspective Correction, YOLO-World Model, Detection Event Log, OCR Model, ByteTrack Tracker, Motion Detection, Detections Classes Replacement, Detections Stitch, PTZ Tracking (ONVIF), Detections Transformation, Detections List Roll-Up, Segment Anything 2 Model, Overlap Filter, Object Detection Model, Template Matching, Detections Merge, Identify Changes, Detections Combine, Dynamic Crop
- outputs: Mask Visualization, Identify Outliers, Image Blur, Byte Tracker, Crop Visualization, Model Monitoring Inference Aggregator, Anthropic Claude, Keypoint Visualization, SORT Tracker, Stability AI Inpainting, Keypoint Detection Model, Perspective Correction, Reference Path Visualization, Trace Visualization, Grid Visualization, Polygon Visualization, Stitch OCR Detections, PTZ Tracking (ONVIF), SIFT Comparison, Detections Combine, Dynamic Crop, Image Slicer, Distance Measurement, Image Threshold, Email Notification, Line Counter, Webhook Sink, Instance Segmentation Model, Pixel Color Count, Camera Focus, Roboflow Dataset Upload, Time in Zone, Detection Offset, Blur Visualization, QR Code Generator, Dot Visualization, Background Subtraction, Roboflow Custom Metadata, Bounding Box Visualization, Twilio SMS/MMS Notification, Detections Classes Replacement, Detections Transformation, Overlap Filter, Circle Visualization, Object Detection Model, Detections Merge, Path Deviation, Pixelate Visualization, Detections Filter, Florence-2 Model, Dominant Color, Detections Stabilizer, Corner Visualization, OC-SORT Tracker, Detections Consensus, Triangle Visualization, Stitch Images, Halo Visualization, Motion Detection, Detections Stitch, Detections List Roll-Up, Identify Changes, Size Measurement, Label Visualization, Bounding Rectangle, Line Counter Visualization, Slack Notification, Stability AI Outpainting, Heatmap Visualization, Ellipse Visualization, Icon Visualization, Background Color Visualization, Velocity, Text Display, Image Contours, Mask Area Measurement, Dynamic Zone, Detection Event Log, Color Visualization, Twilio SMS Notification, Absolute Static Crop, ByteTrack Tracker, Segment Anything 2 Model, Model Comparison Visualization, Classification Label Visualization, Image Preprocessing, Morphological Transformation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Event Log has in version v1.
Bindings

- input
    - `image` (`image`): Reference to the image for video metadata (frame number, timestamp).
    - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Tracked detections from byte tracker (must have `tracker_id`).
    - `frame_threshold` (`integer`): Number of frames an object must be seen before being logged.
    - `flush_interval` (`integer`): How often (in frames) to run the cleanup operation for stale events.
    - `stale_frames` (`integer`): Remove events that haven't been seen for this many frames.
    - `reference_timestamp` (`float`): Unix timestamp when the video started. When provided, absolute timestamps (`first_seen_timestamp`, `last_seen_timestamp`) are included in the output, calculated as relative time + `reference_timestamp`. If not provided and the video metadata contains `frame_timestamp`, the reference timestamp is extracted automatically from the first frame.
    - `fallback_fps` (`float`): Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps.
- output
    - `event_log` (`dictionary`): Dictionary.
    - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object if `object_detection_prediction`, or prediction with detected bounding boxes and segmentation masks in the form of an `sv.Detections(...)` object if `instance_segmentation_prediction`.
    - `total_logged` (`integer`): Integer value.
    - `total_pending` (`integer`): Integer value.
    - `complete_events` (`dictionary`): Dictionary.
Example JSON definition of step Detection Event Log in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_event_log@v1",
    "image": "$inputs.image",
    "detections": "$steps.byte_tracker.tracked_detections",
    "frame_threshold": 5,
    "flush_interval": 30,
    "stale_frames": 150,
    "reference_timestamp": 1726570875.0,
    "fallback_fps": 1.0
}
```
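For context, a step like the one above might be embedded in a full workflow definition roughly as follows. The surrounding step types and versions (object detection model, byte tracker) and the `model_id` are placeholder assumptions for illustration, not taken from this page.

```python
import json

# Hypothetical minimal workflow: detect -> track -> log events.
# "your-model/1" and the surrounding step type versions are placeholders.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed
            "name": "model",
            "image": "$inputs.image",
            "model_id": "your-model/1",  # placeholder
        },
        {
            "type": "roboflow_core/byte_tracker@v3",  # assumed version
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.model.predictions",
        },
        {
            "type": "roboflow_core/detection_event_log@v1",
            "name": "event_log",
            "image": "$inputs.image",
            "detections": "$steps.byte_tracker.tracked_detections",
            "frame_threshold": 5,
            "flush_interval": 30,
            "stale_frames": 150,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "events",
            "selector": "$steps.event_log.event_log",
        },
    ],
}

print(json.dumps(workflow, indent=2))
```

The detection event log consumes the tracker's `tracked_detections`, since its `detections` input requires a `tracker_id` on each detection.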