Detection Event Log¶
Class: DetectionEventLogBlockV1
Source: inference.core.workflows.core_steps.analytics.detection_event_log.v1.DetectionEventLogBlockV1
This block maintains a log of detection events from tracked objects. It records when each object was first seen, its class, and the last time it was seen. Objects must be seen for a minimum number of frames (frame_threshold) before being logged. Stale events (not seen for stale_frames frames) are removed during periodic cleanup (every flush_interval frames).
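The lifecycle described above can be sketched in plain Python. This is a minimal illustration of the thresholding and cleanup rules only, not the actual block implementation; the class and field names are hypothetical:

```python
# Minimal sketch of the event-log lifecycle: objects are pending until
# seen for frame_threshold frames, then logged; stale entries are removed
# during cleanup every flush_interval frames. Illustrative only.

class DetectionEventLog:
    def __init__(self, frame_threshold=5, flush_interval=30, stale_frames=150):
        self.frame_threshold = frame_threshold
        self.flush_interval = flush_interval
        self.stale_frames = stale_frames
        self.pending = {}  # tracker_id -> event record, not yet past threshold
        self.logged = {}   # tracker_id -> event record, past threshold

    def update(self, frame_number, detections):
        # detections: iterable of (tracker_id, class_name) pairs.
        for tracker_id, class_name in detections:
            entry = self.pending.get(tracker_id) or self.logged.get(tracker_id)
            if entry is None:
                self.pending[tracker_id] = {
                    "class": class_name,
                    "first_seen_frame": frame_number,
                    "last_seen_frame": frame_number,
                    "frames_seen": 1,
                }
            else:
                entry["last_seen_frame"] = frame_number
                entry["frames_seen"] += 1
            # Promote to the log once seen for frame_threshold frames.
            if tracker_id in self.pending:
                if self.pending[tracker_id]["frames_seen"] >= self.frame_threshold:
                    self.logged[tracker_id] = self.pending.pop(tracker_id)
        # Periodic cleanup of stale events.
        if frame_number % self.flush_interval == 0:
            self._flush(frame_number)

    def _flush(self, frame_number):
        for store in (self.pending, self.logged):
            stale = [tid for tid, e in store.items()
                     if frame_number - e["last_seen_frame"] > self.stale_frames]
            for tid in stale:
                del store[tid]
```

For example, with frame_threshold=2 an object first appears in pending, moves to logged on its second sighting, and is dropped by a later flush once it has been absent longer than stale_frames.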
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detection_event_log@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| frame_threshold | int | Number of frames an object must be seen before being logged. | ✅ |
| flush_interval | int | How often (in frames) to run the cleanup operation for stale events. | ✅ |
| stale_frames | int | Remove events that haven't been seen for this many frames. | ✅ |
| reference_timestamp | float | Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame. | ✅ |
| fallback_fps | float | Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Event Log in version v1.
- inputs:
Google Vision OCR, VLM as Detector, Detections Consensus, OCR Model, Camera Focus, Detections Combine, SAM 3, Byte Tracker, PTZ Tracking (ONVIF), Time in Zone, Seg Preview, Camera Focus, Detections Filter, Detection Offset, Line Counter, Object Detection Model, Dynamic Crop, Detection Event Log, Detections Stabilizer, Detections Transformation, SAM 3, Detections List Roll-Up, Path Deviation, Time in Zone, Detections Stitch, YOLO-World Model, Segment Anything 2 Model, Byte Tracker, Moondream2, Overlap Filter, Object Detection Model, Gaze Detection, Instance Segmentation Model, Detections Merge, Identify Changes, SAM 3, Perspective Correction, Motion Detection, Cosine Similarity, Time in Zone, Detections Classes Replacement, Path Deviation, EasyOCR, Template Matching, Bounding Rectangle, Byte Tracker, Instance Segmentation Model, Velocity, Dynamic Zone, VLM as Detector
- outputs:
Image Slicer, Image Blur, Ellipse Visualization, Halo Visualization, Detection Offset, Line Counter, Detection Event Log, Stability AI Inpainting, Reference Path Visualization, Slack Notification, Circle Visualization, Background Subtraction, Roboflow Dataset Upload, Pixel Color Count, Detections Merge, Anthropic Claude, Line Counter, Pixelate Visualization, Byte Tracker, Email Notification, Image Contours, Stitch OCR Detections, Detections Consensus, Camera Focus, Byte Tracker, Dot Visualization, Anthropic Claude, Stitch OCR Detections, Dominant Color, Keypoint Visualization, Anthropic Claude, Trace Visualization, Detections Transformation, Crop Visualization, Absolute Static Crop, Segment Anything 2 Model, Byte Tracker, Overlap Filter, Image Preprocessing, Instance Segmentation Model, Identify Changes, Perspective Correction, Email Notification, Motion Detection, Halo Visualization, SIFT Comparison, Path Deviation, Polygon Visualization, QR Code Generator, Bounding Box Visualization, Size Measurement, Corner Visualization, Label Visualization, Florence-2 Model, SIFT Comparison, Webhook Sink, Stability AI Outpainting, Stitch Images, Model Comparison Visualization, Detections Filter, Distance Measurement, Polygon Visualization, Object Detection Model, Detections Stabilizer, Detections List Roll-Up, Path Deviation, Icon Visualization, Twilio SMS/MMS Notification, Model Monitoring Inference Aggregator, Object Detection Model, Color Visualization, Mask Visualization, Roboflow Dataset Upload, Time in Zone, Detections Classes Replacement, Image Slicer, Bounding Rectangle, Instance Segmentation Model, Keypoint Detection Model, Dynamic Zone, Text Display, Blur Visualization, Roboflow Custom Metadata, Triangle Visualization, Identify Outliers, Detections Combine, Classification Label Visualization, Image Threshold, PTZ Tracking (ONVIF), Time in Zone, Background Color Visualization, Grid Visualization, Dynamic Crop, Keypoint Detection Model, Line Counter Visualization, Florence-2 Model, Time in Zone, Detections Stitch, Twilio SMS Notification, Morphological Transformation, Velocity
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Event Log in version v1 has.
Bindings
- input
  - image (image): Reference to the image for video metadata (frame number, timestamp).
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Tracked detections from Byte Tracker (must have tracker_id).
  - frame_threshold (integer): Number of frames an object must be seen before being logged.
  - flush_interval (integer): How often (in frames) to run the cleanup operation for stale events.
  - stale_frames (integer): Remove events that haven't been seen for this many frames.
  - reference_timestamp (float): Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame.
  - fallback_fps (float): Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps.
- output
  - event_log (dictionary): Dictionary.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction.
  - total_logged (integer): Integer value.
  - total_pending (integer): Integer value.
  - complete_events (dictionary): Dictionary.
Example JSON definition of step Detection Event Log in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_event_log@v1",
    "image": "$inputs.image",
    "detections": "$steps.byte_tracker.tracked_detections",
    "frame_threshold": 5,
    "flush_interval": 30,
    "stale_frames": 150,
    "reference_timestamp": 1726570875.0,
    "fallback_fps": 1.0
}
```
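As a worked example of the timestamp arithmetic described for reference_timestamp: a relative timestamp is derived from the frame index and FPS (fallback_fps when metadata lacks FPS), and the absolute timestamp adds reference_timestamp on top. A hedged sketch, with hypothetical helper names that are not part of the block's API:

```python
# Illustrative sketch of the timestamp arithmetic described above;
# function names are hypothetical, not part of the block's API.

def relative_timestamp(frame_number, fps):
    # Seconds since the start of the video, derived from frame index.
    return frame_number / fps

def absolute_timestamp(frame_number, fps, reference_timestamp):
    # Unix time = video start time + time elapsed in the video.
    return reference_timestamp + relative_timestamp(frame_number, fps)

# With the example configuration above (reference_timestamp=1726570875.0),
# at 30 FPS, frame 150 falls 5 seconds into the video:
print(absolute_timestamp(150, 30.0, 1726570875.0))  # 1726570880.0
```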