Detection Event Log¶
Class: DetectionEventLogBlockV1
Source: inference.core.workflows.core_steps.analytics.detection_event_log.v1.DetectionEventLogBlockV1
This block maintains a log of detection events from tracked objects. It records when each object was first seen, its class, and the last time it was seen. Objects must be seen for a minimum number of frames (frame_threshold) before being logged. Stale events (not seen for stale_frames frames) are removed during a periodic cleanup that runs every flush_interval frames.
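The lifecycle above (pending until frame_threshold sightings, then logged; stale entries purged every flush_interval frames) can be sketched in plain Python. This is an illustrative model of the described behavior, not the block's actual implementation:

```python
# Hypothetical sketch of the event-log lifecycle described above;
# the real block's internals may differ.
class DetectionEventLog:
    def __init__(self, frame_threshold=5, flush_interval=30, stale_frames=150):
        self.frame_threshold = frame_threshold
        self.flush_interval = flush_interval
        self.stale_frames = stale_frames
        self.pending = {}  # tracker_id -> event record, not yet seen enough frames
        self.logged = {}   # tracker_id -> event record, promoted after frame_threshold sightings

    def update(self, frame_number, detections):
        """detections: iterable of (tracker_id, class_name) pairs for one frame."""
        for tracker_id, class_name in detections:
            record = self.logged.get(tracker_id) or self.pending.get(tracker_id)
            if record is None:
                self.pending[tracker_id] = {
                    "first_seen": frame_number,
                    "last_seen": frame_number,
                    "class": class_name,
                    "count": 1,
                }
                continue
            record["last_seen"] = frame_number
            record["count"] += 1
            # Promote once the object has been seen frame_threshold times.
            if tracker_id in self.pending and record["count"] >= self.frame_threshold:
                self.logged[tracker_id] = self.pending.pop(tracker_id)
        # Periodic cleanup of events not seen for stale_frames frames.
        if frame_number % self.flush_interval == 0:
            for store in (self.pending, self.logged):
                stale = [tid for tid, rec in store.items()
                         if frame_number - rec["last_seen"] > self.stale_frames]
                for tid in stale:
                    del store[tid]
```

The split between a pending and a logged store mirrors the documented behavior: short-lived spurious tracks never reach frame_threshold sightings and are silently dropped by the cleanup pass.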
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detection_event_log@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| frame_threshold | int | Number of frames an object must be seen before being logged. | ✅ |
| flush_interval | int | How often (in frames) to run the cleanup operation for stale events. | ✅ |
| stale_frames | int | Remove events that haven't been seen for this many frames. | ✅ |
| reference_timestamp | float | Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame. | ✅ |
| fallback_fps | float | Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
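For example, a ✅-marked property such as frame_threshold can be bound to a workflow input instead of a literal value (the input name frame_threshold below is illustrative; any input name works):

```json
{
  "name": "event_log",
  "type": "roboflow_core/detection_event_log@v1",
  "image": "$inputs.image",
  "detections": "$steps.byte_tracker.tracked_detections",
  "frame_threshold": "$inputs.frame_threshold"
}
```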
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Event Log in version v1.
- inputs: Moondream2, Detections Consensus, Detections Merge, Seg Preview, Instance Segmentation Model, Dynamic Zone, Dynamic Crop, VLM As Detector, VLM As Detector, SAM 3, Path Deviation, Detection Offset, Byte Tracker, Line Counter, Byte Tracker, SAM 3, Segment Anything 2 Model, Detections List Roll-Up, Object Detection Model, Template Matching, Path Deviation, Cosine Similarity, Google Vision OCR, Time in Zone, Bounding Rectangle, Detections Stitch, Time in Zone, Instance Segmentation Model, Detections Filter, Gaze Detection, Detections Stabilizer, EasyOCR, PTZ Tracking (ONVIF), Camera Focus, Perspective Correction, Detections Combine, Object Detection Model, SAM 3, Byte Tracker, Detection Event Log, Identify Changes, Detections Classes Replacement, OCR Model, YOLO-World Model, Overlap Filter, Time in Zone, Camera Focus, Motion Detection, Velocity, Detections Transformation
- outputs: Mask Visualization, Classification Label Visualization, Detections Consensus, Detections Merge, Instance Segmentation Model, Webhook Sink, Email Notification, QR Code Generator, Detection Offset, Corner Visualization, Stability AI Outpainting, Segment Anything 2 Model, Halo Visualization, Object Detection Model, Trace Visualization, Instance Segmentation Model, Text Display, Stitch Images, Slack Notification, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Color Visualization, Dot Visualization, Polygon Visualization, Object Detection Model, Anthropic Claude, Byte Tracker, Identify Changes, Detections Classes Replacement, Velocity, SIFT Comparison, Halo Visualization, Florence-2 Model, Blur Visualization, Label Visualization, Twilio SMS/MMS Notification, Ellipse Visualization, Model Monitoring Inference Aggregator, Detections List Roll-Up, Model Comparison Visualization, Background Color Visualization, Image Threshold, Size Measurement, Keypoint Detection Model, Polygon Visualization, Twilio SMS Notification, Bounding Box Visualization, Overlap Filter, Icon Visualization, Time in Zone, Florence-2 Model, Roboflow Dataset Upload, Anthropic Claude, Dynamic Zone, Dynamic Crop, Path Deviation, Image Blur, Byte Tracker, Line Counter, Stability AI Inpainting, Image Contours, Path Deviation, Morphological Transformation, Triangle Visualization, Bounding Rectangle, Detections Stitch, Detections Filter, Grid Visualization, Detections Stabilizer, Camera Focus, Detections Combine, Image Slicer, Line Counter Visualization, Keypoint Detection Model, Distance Measurement, SIFT Comparison, Dominant Color, Time in Zone, Background Subtraction, Image Slicer, Circle Visualization, Identify Outliers, Email Notification, Byte Tracker, Image Preprocessing, Time in Zone, Line Counter, Absolute Static Crop, Roboflow Custom Metadata, Stitch OCR Detections, Perspective Correction, Anthropic Claude, Pixelate Visualization, Reference Path Visualization, Keypoint Visualization, Detection Event Log, Stitch OCR Detections, Crop Visualization, Pixel Color Count, Motion Detection, Detections Transformation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Event Log in version v1 has.
Bindings
- input
  - image (image): Reference to the image for video metadata (frame number, timestamp).
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Tracked detections from byte tracker (must have tracker_id).
  - frame_threshold (integer): Number of frames an object must be seen before being logged.
  - flush_interval (integer): How often (in frames) to run the cleanup operation for stale events.
  - stale_frames (integer): Remove events that haven't been seen for this many frames.
  - reference_timestamp (float): Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame.
  - fallback_fps (float): Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps.
- output
  - event_log (dictionary): Dictionary.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
  - total_logged (integer): Integer value.
  - total_pending (integer): Integer value.
  - complete_events (dictionary): Dictionary.
Example JSON definition of step Detection Event Log in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_event_log@v1",
    "image": "$inputs.image",
    "detections": "$steps.byte_tracker.tracked_detections",
    "frame_threshold": 5,
    "flush_interval": 30,
    "stale_frames": 150,
    "reference_timestamp": 1726570875.0,
    "fallback_fps": 1.0
}
```
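The timestamp arithmetic described for reference_timestamp and fallback_fps can be worked through with a small helper. This is an illustrative sketch of the documented calculation (absolute timestamp = relative time + reference_timestamp), not the block's exact code:

```python
# Illustrative sketch of the documented timestamp calculation.
def absolute_timestamp(frame_number, fps, reference_timestamp, fallback_fps=1.0):
    # fallback_fps is used when video metadata provides no (or invalid) FPS.
    effective_fps = fps if fps and fps > 0 else fallback_fps
    relative_seconds = frame_number / effective_fps
    return reference_timestamp + relative_seconds

# With the example configuration above: frame 150 of a 30 FPS video
# that started at Unix time 1726570875.0 maps to 1726570880.0.
```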