Detection Event Log¶
Class: DetectionEventLogBlockV1
Source: inference.core.workflows.core_steps.analytics.detection_event_log.v1.DetectionEventLogBlockV1
This block maintains a log of detection events from tracked objects. It records when each object was first seen, its class, and the last time it was seen. Objects must be seen for a minimum number of frames (`frame_threshold`) before being logged. Stale events (not seen for `stale_frames` frames) are removed during periodic cleanup (every `flush_interval` frames).
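The mechanics described above can be sketched in plain Python. Note this is an illustrative approximation, not the block's internal implementation; the `DetectionEvent` class and `update_log` function are hypothetical names chosen for the sketch.

```python
from dataclasses import dataclass

@dataclass
class DetectionEvent:
    # Hypothetical event record mirroring the fields the block describes.
    class_name: str
    first_seen_frame: int
    last_seen_frame: int
    frames_seen: int = 1

def update_log(log, seen, frame_no,
               frame_threshold=5, flush_interval=30, stale_frames=150):
    """Update `log` with `seen` (mapping tracker_id -> class name observed
    this frame) and return only events that qualify as logged."""
    for tracker_id, class_name in seen.items():
        event = log.get(tracker_id)
        if event is None:
            log[tracker_id] = DetectionEvent(class_name, frame_no, frame_no)
        else:
            event.last_seen_frame = frame_no
            event.frames_seen += 1
    # Periodic cleanup: every `flush_interval` frames, drop events that
    # have not been seen for more than `stale_frames` frames.
    if frame_no % flush_interval == 0:
        stale = [tid for tid, ev in log.items()
                 if frame_no - ev.last_seen_frame > stale_frames]
        for tid in stale:
            del log[tid]
    # Only objects seen for at least `frame_threshold` frames are logged.
    return {tid: ev for tid, ev in log.items()
            if ev.frames_seen >= frame_threshold}
```

An object tracked for six consecutive frames clears the default `frame_threshold` of 5 and appears in the returned mapping, while an object seen once remains pending.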
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/detection_event_log@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `frame_threshold` | `int` | Number of frames an object must be seen before being logged. | ✅ |
| `flush_interval` | `int` | How often (in frames) to run the cleanup operation for stale events. | ✅ |
| `stale_frames` | `int` | Remove events that haven't been seen for this many frames. | ✅ |
| `reference_timestamp` | `float` | Unix timestamp when the video started. When provided, absolute timestamps (`first_seen_timestamp`, `last_seen_timestamp`) are included in the output, calculated as relative time + `reference_timestamp`. If not provided and the video metadata contains `frame_timestamp`, the reference timestamp is extracted automatically from the first frame. | ✅ |
| `fallback_fps` | `float` | Fallback FPS to use when video metadata does not provide FPS information; used to calculate relative timestamps. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
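The timestamp behaviour described for `reference_timestamp` and `fallback_fps` reduces to simple arithmetic: relative time is the frame position divided by the FPS, and the absolute timestamp adds the Unix time at which the video started. A minimal sketch (the function name is illustrative, not part of the block's API):

```python
def absolute_timestamps(frame_number: int, fps: float,
                        reference_timestamp: float):
    """Relative time = frame position / FPS; the absolute timestamp
    adds the Unix time at which the video started."""
    relative = frame_number / fps
    return relative, reference_timestamp + relative

# 150 frames at 30 FPS is 5.0 seconds into the video.
rel, abs_ts = absolute_timestamps(150, 30.0, 1726570875.0)
```

When the source metadata lacks FPS, the block falls back to `fallback_fps` for the division above.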
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Event Log in version v1.
- inputs: Detections Stitch, Camera Focus, Moondream2, Time in Zone, Byte Tracker, Cosine Similarity, Camera Focus, Detections Filter, Time in Zone, SAM 3, EasyOCR, Instance Segmentation Model, Path Deviation, Identify Changes, Detections Consensus, Template Matching, Line Counter, Seg Preview, Detections Transformation, SAM 3, Path Deviation, Detections Stabilizer, Detections Classes Replacement, Detections Combine, Segment Anything 2 Model, Dynamic Crop, VLM As Detector, Detection Offset, Object Detection Model, Overlap Filter, Bounding Rectangle, Detection Event Log, Object Detection Model, PTZ Tracking (ONVIF), Byte Tracker, Google Vision OCR, Gaze Detection, OCR Model, Mask Area Measurement, Byte Tracker, Motion Detection, Time in Zone, SAM 3, Detections Merge, Detections List Roll-Up, Instance Segmentation Model, YOLO-World Model, Dynamic Zone, Velocity, VLM As Detector, Perspective Correction
- outputs: Image Threshold, Stitch Images, Byte Tracker, Size Measurement, Keypoint Detection Model, Mask Visualization, Instance Segmentation Model, Path Deviation, Crop Visualization, QR Code Generator, Detections Stabilizer, Segment Anything 2 Model, Overlap Filter, Object Detection Model, Slack Notification, Dot Visualization, Motion Detection, Email Notification, Detections List Roll-Up, Instance Segmentation Model, Roboflow Dataset Upload, Label Visualization, Stitch OCR Detections, Detections Filter, Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Dynamic Crop, Background Color Visualization, Object Detection Model, SIFT Comparison, Line Counter Visualization, Image Preprocessing, PTZ Tracking (ONVIF), Blur Visualization, Triangle Visualization, Trace Visualization, Email Notification, Twilio SMS/MMS Notification, Byte Tracker, Reference Path Visualization, Florence-2 Model, Perspective Correction, Stitch OCR Detections, Time in Zone, Circle Visualization, Detections Consensus, Detections Transformation, Stability AI Outpainting, Text Display, Anthropic Claude, Line Counter, Path Deviation, Detections Combine, Image Slicer, Keypoint Detection Model, Bounding Rectangle, Distance Measurement, Ellipse Visualization, Byte Tracker, Halo Visualization, Anthropic Claude, Model Comparison Visualization, Corner Visualization, Identify Outliers, Absolute Static Crop, Image Contours, Classification Label Visualization, Dominant Color, Image Slicer, Detections Stitch, Camera Focus, Time in Zone, Background Subtraction, Grid Visualization, Heatmap Visualization, Identify Changes, Line Counter, Bounding Box Visualization, Roboflow Dataset Upload, Polygon Visualization, Pixelate Visualization, Roboflow Custom Metadata, Pixel Color Count, Image Blur, SIFT Comparison, Detections Classes Replacement, Morphological Transformation, Stability AI Inpainting, Webhook Sink, Detection Offset, Detection Event Log, Icon Visualization, Twilio SMS Notification, Polygon Visualization, Anthropic Claude, Mask Area Measurement, Time in Zone, Detections Merge, Dynamic Zone, Halo Visualization, Velocity, Keypoint Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Detection Event Log in version v1 has below.
Bindings
- input
  - `image` (`image`): Reference to the image for video metadata (frame number, timestamp).
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Tracked detections from Byte Tracker (must have `tracker_id`).
  - `frame_threshold` (`integer`): Number of frames an object must be seen before being logged.
  - `flush_interval` (`integer`): How often (in frames) to run the cleanup operation for stale events.
  - `stale_frames` (`integer`): Remove events that haven't been seen for this many frames.
  - `reference_timestamp` (`float`): Unix timestamp when the video started. When provided, absolute timestamps (`first_seen_timestamp`, `last_seen_timestamp`) are included in the output, calculated as relative time + `reference_timestamp`. If not provided and the video metadata contains `frame_timestamp`, the reference timestamp is extracted automatically from the first frame.
  - `fallback_fps` (`float`): Fallback FPS to use when video metadata does not provide FPS information; used to calculate relative timestamps.
- output
  - `event_log` (`dictionary`): Dictionary.
  - `detections` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object if `object_detection_prediction`, or prediction with detected bounding boxes and segmentation masks in the form of an `sv.Detections(...)` object if `instance_segmentation_prediction`.
  - `total_logged` (`integer`): Integer value.
  - `total_pending` (`integer`): Integer value.
  - `complete_events` (`dictionary`): Dictionary.
Example JSON definition of step Detection Event Log in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/detection_event_log@v1",
  "image": "$inputs.image",
  "detections": "$steps.byte_tracker.tracked_detections",
  "frame_threshold": 5,
  "flush_interval": 30,
  "stale_frames": 150,
  "reference_timestamp": 1726570875.0,
  "fallback_fps": 1.0
}
```