Detection Event Log¶
Class: DetectionEventLogBlockV1
Source: inference.core.workflows.core_steps.analytics.detection_event_log.v1.DetectionEventLogBlockV1
This block maintains a log of detection events from tracked objects. It records when each object was first seen, its class, and the last time it was seen. Objects must be seen for a minimum number of frames (frame_threshold) before being logged. Stale events (not seen for stale_frames frames) are removed during a periodic cleanup that runs every flush_interval frames.
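The lifecycle described above can be sketched in plain Python. This is a simplified illustration of the logging rules (frame_threshold, flush_interval, stale_frames), not the block's actual implementation; the class and field names are hypothetical:

```python
# Simplified sketch of the event-log lifecycle: log an object once it has
# been seen frame_threshold times, and drop events not seen for more than
# stale_frames frames during a cleanup that runs every flush_interval frames.
# Illustrative only; not the actual block implementation.

class EventLog:
    def __init__(self, frame_threshold=5, flush_interval=30, stale_frames=150):
        self.frame_threshold = frame_threshold
        self.flush_interval = flush_interval
        self.stale_frames = stale_frames
        self.seen_counts = {}  # tracker_id -> frames observed so far
        self.events = {}       # tracker_id -> {"class", "first_seen", "last_seen"}

    def update(self, frame_number, detections):
        # detections: iterable of (tracker_id, class_name) pairs
        for tracker_id, class_name in detections:
            self.seen_counts[tracker_id] = self.seen_counts.get(tracker_id, 0) + 1
            if tracker_id in self.events:
                self.events[tracker_id]["last_seen"] = frame_number
            elif self.seen_counts[tracker_id] >= self.frame_threshold:
                # Seen long enough: record the event.
                self.events[tracker_id] = {
                    "class": class_name,
                    "first_seen": frame_number,
                    "last_seen": frame_number,
                }
        # Periodic cleanup of stale events.
        if frame_number % self.flush_interval == 0:
            stale = [
                tid for tid, ev in self.events.items()
                if frame_number - ev["last_seen"] > self.stale_frames
            ]
            for tid in stale:
                del self.events[tid]
                self.seen_counts.pop(tid, None)
```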
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detection_event_log@v1 to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| frame_threshold | int | Number of frames an object must be seen before being logged. | ✅ |
| flush_interval | int | How often (in frames) to run the cleanup operation for stale events. | ✅ |
| stale_frames | int | Remove events that haven't been seen for this many frames. | ✅ |
| reference_timestamp | float | Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame. | ✅ |
| fallback_fps | float | Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
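The timestamp arithmetic described for reference_timestamp and fallback_fps can be illustrated as follows. This is a minimal sketch under the stated formula (absolute timestamp = relative time + reference_timestamp); the helper function is hypothetical, not part of the block's API:

```python
# Relative timestamp of a frame, falling back to fallback_fps when the
# video metadata carries no FPS information (illustrative helper).
def relative_timestamp(frame_number, fps, fallback_fps=1.0):
    effective_fps = fps if fps else fallback_fps
    return frame_number / effective_fps

# Absolute timestamp = relative time + reference_timestamp (Unix seconds).
reference_timestamp = 1726570875.0
rel = relative_timestamp(frame_number=150, fps=None, fallback_fps=30.0)
absolute = reference_timestamp + rel  # 5 seconds into the video
```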
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Event Log in version v1.
- inputs:
- inputs: Motion Detection, Detections Stabilizer, PTZ Tracking (ONVIF), SAM 3, Byte Tracker, EasyOCR, Detections Classes Replacement, OCR Model, Time in Zone, Path Deviation, Detections Transformation, SAM 3, Seg Preview, Moondream2, YOLO-World Model, Time in Zone, Detections Combine, Dynamic Crop, Detections Stitch, Identify Changes, Template Matching, Instance Segmentation Model, SAM 3, Overlap Filter, Mask Area Measurement, Object Detection Model, Line Counter, Google Vision OCR, Camera Focus, Detections Filter, Detections Consensus, Gaze Detection, Velocity, Bounding Rectangle, Object Detection Model, VLM As Detector, Instance Segmentation Model, Segment Anything 2 Model, Path Deviation, Time in Zone, Camera Focus, VLM As Detector, Detection Offset, Detections List Roll-Up, Byte Tracker, Byte Tracker, Cosine Similarity, Perspective Correction, Detections Merge, Detection Event Log, Dynamic Zone
- outputs: Motion Detection, PTZ Tracking (ONVIF), Image Preprocessing, Webhook Sink, Detections Classes Replacement, Path Deviation, Stitch OCR Detections, Triangle Visualization, Slack Notification, Detections Stitch, Bounding Box Visualization, Stitch Images, Email Notification, Morphological Transformation, Icon Visualization, Mask Area Measurement, Anthropic Claude, Heatmap Visualization, Grid Visualization, Detections Filter, Bounding Rectangle, Model Comparison Visualization, Path Deviation, Distance Measurement, Email Notification, Twilio SMS Notification, Absolute Static Crop, Byte Tracker, Detections List Roll-Up, Halo Visualization, Detections Merge, Florence-2 Model, Stability AI Outpainting, Detection Event Log, Dynamic Zone, Time in Zone, Keypoint Visualization, Roboflow Custom Metadata, Background Color Visualization, Camera Focus, Instance Segmentation Model, Line Counter, Detections Consensus, Velocity, Polygon Visualization, Anthropic Claude, Time in Zone, Keypoint Detection Model, Mask Visualization, Line Counter, Detection Offset, Byte Tracker, Classification Label Visualization, Line Counter Visualization, Keypoint Detection Model, Identify Outliers, SIFT Comparison, Detections Stabilizer, Stitch OCR Detections, Byte Tracker, Detections Transformation, Roboflow Dataset Upload, Corner Visualization, Text Display, Detections Combine, Time in Zone, Dynamic Crop, Image Slicer, SIFT Comparison, Identify Changes, Halo Visualization, Overlap Filter, Object Detection Model, Size Measurement, QR Code Generator, Florence-2 Model, Image Blur, Instance Segmentation Model, Blur Visualization, Roboflow Dataset Upload, Dot Visualization, Dominant Color, Image Contours, Ellipse Visualization, Pixelate Visualization, Pixel Color Count, Reference Path Visualization, Crop Visualization, Twilio SMS/MMS Notification, Circle Visualization, Trace Visualization, Color Visualization, Object Detection Model, Segment Anything 2 Model, Polygon Visualization, Image Threshold, Model Monitoring Inference Aggregator, Stability AI Inpainting, Background Subtraction, Image Slicer, Perspective Correction, Label Visualization, Anthropic Claude
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Event Log in version v1 has.
Bindings
- input
    - image (image): Reference to the image for video metadata (frame number, timestamp).
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Tracked detections from Byte Tracker (must have tracker_id).
    - frame_threshold (integer): Number of frames an object must be seen before being logged.
    - flush_interval (integer): How often (in frames) to run the cleanup operation for stale events.
    - stale_frames (integer): Remove events that haven't been seen for this many frames.
    - reference_timestamp (float): Unix timestamp when the video started. When provided, absolute timestamps (first_seen_timestamp, last_seen_timestamp) are included in the output, calculated as relative time + reference_timestamp. If not provided and the video metadata contains frame_timestamp, the reference timestamp is automatically extracted from the first frame.
    - fallback_fps (float): Fallback FPS to use when video metadata does not provide FPS information. Used to calculate relative timestamps.
- output
    - event_log (dictionary): Dictionary.
    - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction.
    - total_logged (integer): Integer value.
    - total_pending (integer): Integer value.
    - complete_events (dictionary): Dictionary.
Example JSON definition of step Detection Event Log in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detection_event_log@v1",
"image": "$inputs.image",
"detections": "$steps.byte_tracker.tracked_detections",
"frame_threshold": 5,
"flush_interval": 30,
"stale_frames": 150,
"reference_timestamp": 1726570875.0,
"fallback_fps": 1.0
}
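Downstream steps and workflow outputs can reference this step's outputs with the same $steps selector syntax used in the example above. A sketch of a workflow outputs entry, assuming the step is named detection_event_log as in the example (the exact output schema of event_log is not documented here):

```json
{
    "outputs": [
        {
            "type": "JsonField",
            "name": "events",
            "selector": "$steps.detection_event_log.event_log"
        }
    ]
}
```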