Roboflow Vision Events¶
Class: RoboflowVisionEventsBlockV1
Source: inference.core.workflows.core_steps.sinks.roboflow.vision_events.v1.RoboflowVisionEventsBlockV1
Send images, model predictions, and event metadata to the Roboflow Vision Events API for monitoring, quality control, safety alerting, and custom event tracking.
How This Block Works¶
This block uploads workflow images and model predictions to the Roboflow Vision Events API, creating structured events that can be queried, filtered, and visualized in the Roboflow dashboard.
- Optionally uploads an input image and/or output image (visualization) to the Vision Events image storage via the public API
- Converts model predictions (object detection, classification, instance segmentation, or keypoint detection) into the Vision Events annotation format and attaches them to the input image
- Creates a vision event with the specified event type, use case, event data, and custom metadata
- Supports fire-and-forget mode for non-blocking execution
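The flow above can be sketched as a minimal workflow specification. This is an illustrative sketch, not a complete workflow: the step names and the `my-project/1` model ID are placeholders, and only the block's `type` identifier and selector syntax come from this page.

```python
# Minimal workflow specification sketch (step names and model ID are
# illustrative placeholders, not part of the block's documentation).
workflow_spec = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "my-project/1",  # placeholder model ID
        },
        {
            # The sink block documented on this page.
            "type": "roboflow_core/roboflow_vision_events@v1",
            "name": "vision_events",
            "input_image": "$inputs.image",
            "predictions": "$steps.detector.predictions",
            "event_type": "quality_check",
            "solution": "my-use-case",
            "qc_result": "pass",
            "fire_and_forget": True,  # non-blocking execution
        },
    ],
    "outputs": [],
}

step_types = [step["type"] for step in workflow_spec["steps"]]
print("roboflow_core/roboflow_vision_events@v1" in step_types)  # True
```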
Event Types¶
- quality_check: Manufacturing/inspection QA with pass/fail result and optional confidence
- inventory_count: Inventory tracking with location, item count, and item type
- safety_alert: Safety violations with alert type, severity (low/medium/high), and description
- custom: User-defined events with a free-form value string
- operator_feedback: Operator review/correction of previous events (correct/incorrect/inconclusive)
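As a quick reference for which event-type-specific properties pair with which event type, here is a small sketch. The mapping is inferred from the list above; `fields_for_event` is a hypothetical helper, not part of the block's API.

```python
# Hypothetical mapping of each event type to its type-specific
# properties, inferred from the event-type list above.
EVENT_TYPE_FIELDS = {
    "quality_check": ["qc_result"],
    "inventory_count": ["location", "item_count", "item_type"],
    "safety_alert": ["alert_type", "severity", "alert_description"],
    "custom": ["custom_value"],
    "operator_feedback": ["related_event_id", "feedback"],
}

def fields_for_event(event_type: str) -> list:
    """Return the event-type-specific properties for a given event type."""
    if event_type not in EVENT_TYPE_FIELDS:
        raise ValueError(f"Unknown event type: {event_type}")
    return EVENT_TYPE_FIELDS[event_type]

print(fields_for_event("safety_alert"))
# ['alert_type', 'severity', 'alert_description']
```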
Requirements¶
API Key Required: This block requires a valid Roboflow API key with vision-events:write
scope. The API key must be configured in your environment or workflow configuration.
Common Use Cases¶
- Quality Control: Automatically log inspection results with images and detection overlays
- Safety Monitoring: Send safety alerts when violations are detected in video streams
- Production Analytics: Track inventory counts and production metrics with visual evidence
- Active Monitoring: Fire-and-forget event logging from real-time video processing workflows
Type identifier¶
Use the identifier `roboflow_core/roboflow_vision_events@v1` in the step "type" field to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `event_type` | `str` | The type of vision event to create. | ✅ |
| `solution` | `str` | The use case to associate the event with. Events are namespaced by use case within a workspace. | ✅ |
| `external_id` | `str` | External identifier for correlation with other systems (max 1000 chars). | ✅ |
| `qc_result` | `str` | Quality check result: pass or fail. | ✅ |
| `location` | `str` | Location identifier for inventory count. | ✅ |
| `item_count` | `int` | Number of items counted. | ✅ |
| `item_type` | `str` | Type of item being counted. | ✅ |
| `alert_type` | `str` | Alert type identifier (e.g. no_hardhat, spill_detected). | ✅ |
| `severity` | `str` | Severity level for the safety alert. | ✅ |
| `alert_description` | `str` | Description of the safety alert. | ✅ |
| `custom_value` | `str` | Arbitrary value for custom events. | ✅ |
| `related_event_id` | `str` | The event ID of the event being reviewed. | ✅ |
| `feedback` | `str` | Operator feedback on the related event. | ✅ |
| `custom_metadata` | `Dict[str, Union[bool, float, int, str]]` | Flat key-value metadata to attach to the event. Keys must match pattern `[a-zA-Z0-9_ -]+` (max 100 chars). String values max 1000 chars. | ✅ |
| `fire_and_forget` | `bool` | If True, the event is sent asynchronously and the workflow continues without waiting. If False, the block waits for the API response. | ✅ |
| `disable_sink` | `bool` | If True, the block is disabled and no events are sent. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
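The `custom_metadata` constraints above can be checked client-side before running the workflow. The sketch below is based solely on the documented rules (key pattern `[a-zA-Z0-9_ -]+`, key max 100 chars, string values max 1000 chars); `validate_custom_metadata` is a hypothetical helper, not the block's actual validation code.

```python
import re

# Documented constraints for custom_metadata (see the properties table).
_KEY_PATTERN = re.compile(r"^[a-zA-Z0-9_ -]+$")

def validate_custom_metadata(metadata: dict) -> list:
    """Return a list of violations; an empty list means the metadata is valid."""
    errors = []
    for key, value in metadata.items():
        if not isinstance(key, str) or not _KEY_PATTERN.match(key) or len(key) > 100:
            errors.append(f"invalid key: {key!r}")
        if not isinstance(value, (bool, float, int, str)):
            errors.append(f"unsupported value type for {key!r}: {type(value).__name__}")
        elif isinstance(value, str) and len(value) > 1000:
            errors.append(f"string value too long for {key!r}")
    return errors

print(validate_custom_metadata({"camera_id": "cam_01", "shift": 2}))  # []
```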
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Roboflow Vision Events in version v1.
- inputs:
Single-Label Classification Model,OpenAI,Multi-Label Classification Model,Halo Visualization,SAM2 Video Tracker,Camera Focus,SIFT Comparison,Template Matching,Detections List Roll-Up,Qwen3-VL,OC-SORT Tracker,Cache Get,LMM,Email Notification,Camera Calibration,Background Subtraction,Image Contours,Time in Zone,PTZ Tracking (ONVIF),Label Visualization,Line Counter,Florence-2 Model,YOLO-World Model,Segment Anything 2 Model,SAM 3,Llama 3.2 Vision,Object Detection Model,EasyOCR,Roboflow Dataset Upload,Bounding Box Visualization,Icon Visualization,Anthropic Claude,Qwen3.5-VL,Background Color Visualization,Buffer,Dynamic Zone,Blur Visualization,Stitch Images,Contrast Equalization,Time in Zone,Perspective Correction,Triangle Visualization,Image Threshold,Property Definition,Environment Secrets Store,Trace Visualization,Florence-2 Model,Detections Merge,Byte Tracker,OpenAI,VLM As Classifier,Seg Preview,Keypoint Detection Model,Velocity,QR Code Generator,Single-Label Classification Model,Cosine Similarity,Expression,Gaze Detection,SORT Tracker,Image Blur,Overlap Filter,Detections Filter,Detections Transformation,Identify Changes,First Non Empty Or Default,Multi-Label Classification Model,Qwen2.5-VL,Byte Tracker,Grid Visualization,Moondream2,VLM As Detector,Webhook Sink,Twilio SMS/MMS Notification,Circle Visualization,Object Detection Model,Path Deviation,Local File Sink,JSON Parser,Stability AI Outpainting,CLIP Embedding Model,Reference Path Visualization,Bounding Rectangle,Continue If,CogVLM,Dimension Collapse,Size Measurement,Cache Set,Model Monitoring Inference Aggregator,Pixel Color Count,Stability AI Image Generation,CSV Formatter,Semantic Segmentation Model,Morphological Transformation,Clip Comparison,Time in Zone,Model Comparison Visualization,Path Deviation,Perception Encoder Embedding Model,Google Vision OCR,Multi-Label Classification Model,Image Slicer,SAM 3,S3 Sink,Delta Filter,Depth Estimation,Anthropic Claude,SIFT,Google Gemini,Barcode Detection,Image Convert 
Grayscale,VLM As Detector,LMM For Classification,Google Gemini,Data Aggregator,Color Visualization,SIFT Comparison,Polygon Visualization,Identify Outliers,Polygon Zone Visualization,Twilio SMS Notification,Detection Offset,Roboflow Vision Events,Email Notification,Absolute Static Crop,Detections Consensus,SAM 3,Instance Segmentation Model,Crop Visualization,Stitch OCR Detections,Single-Label Classification Model,Heatmap Visualization,Pixelate Visualization,Relative Static Crop,Camera Focus,Ellipse Visualization,Object Detection Model,Dominant Color,GLM-OCR,Keypoint Detection Model,Halo Visualization,Classification Label Visualization,SmolVLM2,Line Counter Visualization,Detection Event Log,Instance Segmentation Model,Text Display,Slack Notification,OpenAI,Detections Classes Replacement,ByteTrack Tracker,Instance Segmentation Model,VLM As Classifier,Mask Visualization,Anthropic Claude,Polygon Visualization,Rate Limiter,Dynamic Crop,Semantic Segmentation Model,Byte Tracker,Roboflow Custom Metadata,OpenAI,Image Slicer,Keypoint Detection Model,Dot Visualization,Image Preprocessing,Clip Comparison,Stability AI Inpainting,OCR Model,Stitch OCR Detections,Keypoint Visualization,Detections Stabilizer,Corner Visualization,Motion Detection,Distance Measurement,Mask Area Measurement,Detections Combine,QR Code Detection,Roboflow Dataset Upload,Inner Workflow,Line Counter,Google Gemini,Detections Stitch
- outputs:
Single-Label Classification Model,OpenAI,Anthropic Claude,Multi-Label Classification Model,Halo Visualization,Google Gemini,LMM For Classification,Template Matching,Google Gemini,Cache Get,Email Notification,LMM,Color Visualization,Camera Calibration,SIFT Comparison,Polygon Visualization,Twilio SMS Notification,Polygon Zone Visualization,Time in Zone,Roboflow Vision Events,Email Notification,Detections Consensus,PTZ Tracking (ONVIF),SAM 3,Label Visualization,Line Counter,Florence-2 Model,YOLO-World Model,Segment Anything 2 Model,SAM 3,Llama 3.2 Vision,Object Detection Model,Instance Segmentation Model,Crop Visualization,Roboflow Dataset Upload,Stitch OCR Detections,Bounding Box Visualization,Single-Label Classification Model,Heatmap Visualization,Icon Visualization,Anthropic Claude,Pixelate Visualization,Ellipse Visualization,Background Color Visualization,Dynamic Zone,Blur Visualization,Object Detection Model,GLM-OCR,Contrast Equalization,Time in Zone,Perspective Correction,Triangle Visualization,Keypoint Detection Model,Image Threshold,Halo Visualization,Trace Visualization,Classification Label Visualization,Florence-2 Model,Line Counter Visualization,OpenAI,Seg Preview,Instance Segmentation Model,Keypoint Detection Model,QR Code Generator,Text Display,Single-Label Classification Model,Slack Notification,OpenAI,Detections Classes Replacement,Gaze Detection,Image Blur,Instance Segmentation Model,Multi-Label Classification Model,Mask Visualization,Anthropic Claude,Polygon Visualization,Dynamic Crop,Moondream2,Webhook Sink,Roboflow Custom Metadata,OpenAI,Twilio SMS/MMS Notification,Circle Visualization,Object Detection Model,Path Deviation,Local File Sink,Stability AI Outpainting,CLIP Embedding Model,Keypoint Detection Model,Reference Path Visualization,CogVLM,Dot Visualization,Cache Set,Size Measurement,Model Monitoring Inference Aggregator,Pixel Color Count,Image Preprocessing,Stability AI Image Generation,Stability AI Inpainting,Stitch OCR Detections,Keypoint 
Visualization,Semantic Segmentation Model,Corner Visualization,Motion Detection,Time in Zone,Morphological Transformation,Distance Measurement,Model Comparison Visualization,Clip Comparison,Path Deviation,Roboflow Dataset Upload,Perception Encoder Embedding Model,Google Vision OCR,Google Gemini,Multi-Label Classification Model,Line Counter,Detections Stitch,SAM 3,S3 Sink,Depth Estimation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Roboflow Vision Events in version v1 has.
Bindings
- input
    - `input_image` (image): The original input image. Uploaded to the Vision Events API and used as the base image for detection annotations.
    - `output_image` (image): An optional output/visualized image (e.g., from a visualization block). Displayed as the primary image in the Vision Events dashboard.
    - `predictions` (Union[keypoint_detection_prediction, instance_segmentation_prediction, object_detection_prediction, classification_prediction]): Optional model predictions to include as detection annotations on the input image. Supports object detection, instance segmentation, keypoint detection, and classification predictions.
    - `event_type` (string): The type of vision event to create.
    - `solution` (Union[roboflow_solution, string]): The use case to associate the event with. Events are namespaced by use case within a workspace.
    - `external_id` (string): External identifier for correlation with other systems (max 1000 chars).
    - `qc_result` (string): Quality check result: pass or fail.
    - `location` (string): Location identifier for inventory count.
    - `item_count` (integer): Number of items counted.
    - `item_type` (string): Type of item being counted.
    - `alert_type` (string): Alert type identifier (e.g. no_hardhat, spill_detected).
    - `severity` (string): Severity level for the safety alert.
    - `alert_description` (string): Description of the safety alert.
    - `custom_value` (string): Arbitrary value for custom events.
    - `related_event_id` (string): The event ID of the event being reviewed.
    - `feedback` (string): Operator feedback on the related event.
    - `custom_metadata` (*): Flat key-value metadata to attach to the event. Keys must match pattern `[a-zA-Z0-9_ -]+` (max 100 chars). String values max 1000 chars.
    - `fire_and_forget` (boolean): If True, the event is sent asynchronously and the workflow continues without waiting. If False, the block waits for the API response.
    - `disable_sink` (boolean): If True, the block is disabled and no events are sent.
- output
Example JSON definition of step Roboflow Vision Events in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/roboflow_vision_events@v1",
    "input_image": "$inputs.image",
    "output_image": "$steps.visualization.image",
    "predictions": "$steps.object_detection_model.predictions",
    "event_type": "quality_check",
    "solution": "my-use-case",
    "external_id": "batch-2025-001",
    "qc_result": "pass",
    "location": "warehouse-A",
    "item_count": 42,
    "item_type": "widget",
    "alert_type": "no_hardhat",
    "severity": "high",
    "alert_description": "Worker detected without hardhat in zone B",
    "custom_value": "anomaly detected at 14:32",
    "related_event_id": "evt_abc123",
    "feedback": "correct",
    "custom_metadata": {
        "camera_id": "cam_01",
        "location": "$inputs.location"
    },
    "fire_and_forget": true,
    "disable_sink": false
}
```