Buffer¶
Class: BufferBlockV1
Source: inference.core.workflows.core_steps.fusion.buffer.v1.BufferBlockV1
Maintain a sliding-window buffer of the last N values by storing recent inputs in a FIFO (First-In-First-Out) queue. The newest element is added to the beginning of the buffer, and the oldest elements are automatically removed when the buffer exceeds the specified length. This enables temporal data collection, frame history tracking, batch processing preparation, and sliding-window analysis workflows.
How This Block Works¶
This block maintains a rolling buffer that stores the most recent values passed to it, creating a sliding window of data over time. The block:
- Receives input data of any type (images, detections, values, etc.) and configuration parameters (buffer length and padding option)
- Maintains an internal buffer that persists across workflow executions:
- Buffer is initialized as an empty list when the block is first created
- Buffer state persists for the lifetime of the workflow execution
- Each buffer block instance maintains its own separate buffer
- Adds new data to the buffer:
- Inserts the newest value at the beginning (index 0) of the buffer array
- Most recent values appear first in the buffer
- Older values are shifted to later positions in the array
- Manages buffer size:
- When the buffer exceeds the specified `length` parameter, removes the oldest elements
- Keeps only the most recent `length` values
- Automatically maintains the sliding window size
- Applies optional padding:
- If `pad` is True: fills the buffer with `None` values until it reaches exactly `length` elements
- Ensures a consistent buffer size even when fewer than `length` values have been received
- If `pad` is False: the buffer grows from 0 to `length` elements as values are added, then stays at `length`
- Returns the buffered array:
- Outputs a list containing the buffered values in order (newest first)
- List length equals `length` (if padding is enabled) or the current buffer size (if padding is disabled)
- Values are ordered from most recent (index 0) to oldest (last index)
The buffer implements a sliding window pattern where new data enters at the front and old data exits at the back when capacity is reached. This creates a temporal history of recent values, useful for operations that need to look back at previous frames, detections, or measurements. The buffer works with any data type, making it flexible for images, detections, numeric values, or other workflow outputs.
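The sliding-window mechanics described above can be sketched in plain Python. This is a simplified illustration of the behavior, not the block's actual source; the `SlidingBuffer` class and `push` method names are hypothetical:

```python
class SlidingBuffer:
    """Minimal sketch of the Buffer block's logic: newest-first FIFO with optional None padding."""

    def __init__(self, length: int, pad: bool = False):
        if length <= 0:
            raise ValueError("length must be greater than 0")
        self.length = length
        self.pad = pad
        self._buffer = []  # newest value lives at index 0

    def push(self, value):
        # Insert the newest value at the front, then trim away the oldest entries.
        self._buffer.insert(0, value)
        self._buffer = self._buffer[: self.length]
        if self.pad:
            # Pad with None so the output always has exactly `length` items.
            return self._buffer + [None] * (self.length - len(self._buffer))
        return list(self._buffer)


buf = SlidingBuffer(length=3)
buf.push("a")          # ["a"]
buf.push("b")          # ["b", "a"]
print(buf.push("c"))   # ["c", "b", "a"]
print(buf.push("d"))   # ["d", "c", "b"] ("a" has been evicted)
```

Note that the buffer mutates internal state on every call, which mirrors how the block's state persists across workflow executions.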
Common Use Cases¶
- Frame History Tracking: Maintain a history of recent video frames for temporal analysis (e.g., track frame sequences, maintain recent image history, collect frames for comparison), enabling temporal frame analysis workflows
- Detection History: Buffer recent detections for trend analysis or comparison (e.g., track detection changes over time, compare current vs previous detections, analyze detection patterns), enabling detection history workflows
- Batch Processing Preparation: Collect multiple values before processing them together (e.g., batch process recent images, aggregate multiple detections, prepare data for batch operations), enabling batch processing workflows
- Sliding Window Analysis: Perform analysis on a rolling window of data (e.g., analyze trends over recent frames, calculate moving averages, detect changes in sequences), enabling sliding window analysis workflows
- Visualization Sequences: Maintain recent data for animation or sequence visualization (e.g., create frame sequences, visualize temporal changes, display recent history), enabling temporal visualization workflows
- Temporal Comparison: Compare current values with recent historical values (e.g., compare current frame with previous frames, detect changes over time, analyze temporal patterns), enabling temporal comparison workflows
Connecting to Other Blocks¶
This block receives data of any type and produces a buffered output array:
- After any block that produces values to buffer (e.g., buffer images from image sources, buffer detections from detection models, buffer values from analytics blocks), enabling data buffering workflows
- Before blocks that process arrays to provide batched or historical data (e.g., process buffered images, analyze detection arrays, work with value sequences), enabling array processing workflows
- Before visualization blocks to display sequences or temporal data (e.g., visualize frame sequences, display detection history, show temporal patterns), enabling temporal visualization workflows
- Before analysis blocks that require historical data (e.g., analyze trends over time, compare current vs historical, process temporal sequences), enabling temporal analysis workflows
- Before aggregation blocks to provide multiple values for aggregation (e.g., aggregate buffered values, process multiple detections, combine recent data), enabling aggregation workflows
- In temporal processing pipelines where maintaining recent history is required (e.g., track changes over time, maintain frame sequences, collect data for temporal analysis), enabling temporal processing workflows
Requirements¶
This block works with any data type (images, detections, values, etc.). The buffer maintains state across workflow executions within the same workflow instance. The length parameter determines the maximum number of values to keep in the buffer. When pad is enabled, the buffer will always return exactly length elements (padded with None if needed). When pad is disabled, the buffer grows from 0 to length elements as values are added, then maintains length elements by removing oldest values. The buffer persists for the lifetime of the workflow execution and resets when the workflow is restarted.
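To make the `pad` semantics concrete, the snippet below computes what the block would output after a given sequence of executions. The `buffered_output` helper is hypothetical, written only to illustrate the behavior described above:

```python
def buffered_output(history, length, pad):
    """Return what the Buffer block would output given the values received so far.

    `history` lists received values oldest-first; the output is newest-first.
    """
    window = list(reversed(history))[:length]  # newest first, at most `length` items
    if pad:
        window += [None] * (length - len(window))
    return window


# With pad=True the output length is always `length`:
print(buffered_output([10], length=4, pad=True))               # [10, None, None, None]
# With pad=False the output grows until `length` values have been received:
print(buffered_output([10, 20], length=4, pad=False))          # [20, 10]
print(buffered_output([10, 20, 30, 40, 50], length=4, pad=False))  # [50, 40, 30, 20]
```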
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/buffer@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ✅ |
| `length` | `int` | Maximum number of elements to keep in the buffer. When the buffer exceeds this length, the oldest elements are automatically removed. Determines the size of the sliding window. Must be greater than 0. Typical values range from 2-10 for frame sequences, or higher for longer histories. | ✅ |
| `pad` | `bool` | Enable padding to maintain a consistent buffer size. If True, the buffer is padded with None values until it reaches exactly `length` elements, ensuring the output always has `length` items even when fewer values have been received. If False, the buffer grows from 0 to `length` as values are added, then maintains `length` by removing the oldest values. Use padding when downstream blocks require a fixed-size array. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Buffer in version v1.
- inputs:
Image Threshold,Email Notification,Corner Visualization,Roboflow Dataset Upload,Object Detection Model,Stitch OCR Detections,Dimension Collapse,Gaze Detection,Stability AI Image Generation,Time in Zone,Grid Visualization,Dynamic Crop,Image Slicer,Image Preprocessing,Instance Segmentation Model,SIFT,Line Counter Visualization,Detections Combine,Trace Visualization,Halo Visualization,ByteTrack Tracker,Cache Get,Roboflow Custom Metadata,Pixelate Visualization,Circle Visualization,Semantic Segmentation Model,S3 Sink,Detections Classes Replacement,Keypoint Detection Model,Twilio SMS Notification,Data Aggregator,Halo Visualization,SIFT Comparison,Anthropic Claude,OC-SORT Tracker,Detections Consensus,Polygon Visualization,Cosine Similarity,Identify Changes,Qwen3-VL,Crop Visualization,Roboflow Dataset Upload,Mask Visualization,Detection Offset,CLIP Embedding Model,Heatmap Visualization,Webhook Sink,Detections List Roll-Up,Cache Set,Google Vision OCR,Florence-2 Model,Florence-2 Model,Environment Secrets Store,VLM As Classifier,Overlap Filter,Anthropic Claude,OpenAI,VLM As Detector,OpenAI,PTZ Tracking (ONVIF),Bounding Rectangle,Background Color Visualization,Template Matching,Anthropic Claude,Background Subtraction,SIFT Comparison,Multi-Label Classification Model,Keypoint Visualization,Time in Zone,Detections Filter,Stitch OCR Detections,LMM,Detections Merge,Detections Transformation,Identify Outliers,Perception Encoder Embedding Model,SAM 3,Motion Detection,Dynamic Zone,Single-Label Classification Model,Seg Preview,Object Detection Model,Roboflow Vision Events,VLM As Classifier,Detections Stitch,Triangle Visualization,Distance Measurement,Google Gemini,Expression,Path Deviation,Image Contours,Model Comparison Visualization,Stability AI Outpainting,Image Slicer,Stitch Images,Image Blur,Barcode Detection,Ellipse Visualization,OpenAI,Time in Zone,Depth Estimation,EasyOCR,Absolute Static Crop,JSON Parser,Multi-Label Classification Model,CogVLM,Google Gemini,Continue If,Velocity,Relative Static Crop,Morphological Transformation,LMM For Classification,Detection Event Log,Dot Visualization,GLM-OCR,Model Monitoring Inference Aggregator,Keypoint Detection Model,Pixel Color Count,Image Convert Grayscale,Icon Visualization,QR Code Generator,First Non Empty Or Default,Detections Stabilizer,Camera Focus,SAM 3,OCR Model,Text Display,Qwen2.5-VL,Reference Path Visualization,Instance Segmentation Model,Llama 3.2 Vision,CSV Formatter,SORT Tracker,Byte Tracker,Label Visualization,Classification Label Visualization,Byte Tracker,Segment Anything 2 Model,Polygon Zone Visualization,Stability AI Inpainting,Rate Limiter,Google Gemini,Perspective Correction,SAM 3,Camera Calibration,Qwen3.5-VL,Size Measurement,Email Notification,Contrast Equalization,Line Counter,Path Deviation,SmolVLM2,Single-Label Classification Model,Delta Filter,Byte Tracker,Property Definition,Line Counter,Color Visualization,OpenAI,Dominant Color,QR Code Detection,Local File Sink,Mask Area Measurement,Clip Comparison,YOLO-World Model,Buffer,Clip Comparison,Twilio SMS/MMS Notification,Blur Visualization,Bounding Box Visualization,Camera Focus,Polygon Visualization,Moondream2,VLM As Detector,Slack Notification
- outputs:
Email Notification,Corner Visualization,Ellipse Visualization,OpenAI,Time in Zone,Object Detection Model,Roboflow Dataset Upload,Google Gemini,Time in Zone,Grid Visualization,Instance Segmentation Model,Line Counter Visualization,Trace Visualization,LMM For Classification,Halo Visualization,Dot Visualization,Keypoint Detection Model,Circle Visualization,Detections Classes Replacement,Keypoint Detection Model,Halo Visualization,Anthropic Claude,SAM 3,Detections Consensus,Polygon Visualization,Reference Path Visualization,Instance Segmentation Model,Llama 3.2 Vision,Crop Visualization,Roboflow Dataset Upload,Mask Visualization,Webhook Sink,Label Visualization,Cache Set,Classification Label Visualization,Detections List Roll-Up,Florence-2 Model,Florence-2 Model,Polygon Zone Visualization,VLM As Classifier,Google Gemini,SAM 3,Perspective Correction,OpenAI,Anthropic Claude,VLM As Detector,OpenAI,Size Measurement,Anthropic Claude,Email Notification,Keypoint Visualization,Line Counter,Time in Zone,Path Deviation,Line Counter,SAM 3,Motion Detection,Seg Preview,Color Visualization,Object Detection Model,VLM As Classifier,Clip Comparison,YOLO-World Model,Buffer,Triangle Visualization,Clip Comparison,Twilio SMS/MMS Notification,Bounding Box Visualization,Polygon Visualization,Google Gemini,Path Deviation,VLM As Detector
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Buffer in version v1 has.
Bindings
- input:
- `data` (`Union[*, list_of_values, image]`): Input data of any type to add to the buffer. Can be images, detections, values, or any other workflow output. Newest values are added to the beginning of the buffer array. The buffer maintains a sliding window of the most recent values.
- output:
- `output` (`list_of_values`): List of values of any type.
Example JSON definition of step Buffer in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/buffer@v1",
  "data": "$steps.visualization",
  "length": 5,
  "pad": true
}
```