Buffer¶
Class: BufferBlockV1
Source: inference.core.workflows.core_steps.fusion.buffer.v1.BufferBlockV1
Maintain a sliding window buffer of the last N values by storing recent inputs in a FIFO (First-In-First-Out) queue, with newest elements added to the beginning and oldest elements automatically removed when the buffer exceeds the specified length, enabling temporal data collection, frame history tracking, batch processing preparation, and sliding window analysis workflows.
How This Block Works¶
This block maintains a rolling buffer that stores the most recent values passed to it, creating a sliding window of data over time. The block:
- Receives input data of any type (images, detections, values, etc.) and configuration parameters (buffer length and padding option)
- Maintains an internal buffer that persists across workflow executions:
- Buffer is initialized as an empty list when the block is first created
- Buffer state persists for the lifetime of the workflow execution
- Each buffer block instance maintains its own separate buffer
- Adds new data to the buffer:
- Inserts the newest value at the beginning (index 0) of the buffer array
- Most recent values appear first in the buffer
- Older values are shifted to later positions in the array
- Manages buffer size:
- When the buffer exceeds the specified `length` parameter, removes the oldest elements
- Keeps only the most recent `length` values
- Automatically maintains the sliding window size
- Applies optional padding:
- If `pad` is True: fills the buffer with `None` values until it reaches exactly `length` elements
- Ensures a consistent buffer size even when fewer than `length` values have been received
- If `pad` is False: the buffer grows from 0 to `length` elements as values are added, then stays at `length`
- Returns the buffered array:
- Outputs a list containing the buffered values in order (newest first)
- List length equals `length` (if padding is enabled) or the current buffer size (if padding is disabled)
- Values are ordered from most recent (index 0) to oldest (last index)
The buffer implements a sliding window pattern where new data enters at the front and old data exits at the back when capacity is reached. This creates a temporal history of recent values, useful for operations that need to look back at previous frames, detections, or measurements. The buffer works with any data type, making it flexible for images, detections, numeric values, or other workflow outputs.
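The update logic described above can be sketched in a few lines of Python. This is only an illustrative approximation, not the block's actual source; the `length` and `pad` names mirror the block's parameters.

```python
from typing import Any, List, Optional


def update_buffer(
    buffer: List[Any], value: Any, length: int, pad: bool
) -> List[Optional[Any]]:
    """Illustrative approximation of the buffer update described above."""
    buffer.insert(0, value)  # newest value goes to index 0
    del buffer[length:]      # drop the oldest values beyond `length`
    if pad:
        # Pad with None so the output always contains exactly `length` items.
        return buffer + [None] * (length - len(buffer))
    return buffer


history: List[Any] = []
for frame in ["frame_1", "frame_2"]:
    print(update_buffer(history, frame, length=3, pad=True))
# ['frame_1', None, None]
# ['frame_2', 'frame_1', None]
```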
Common Use Cases¶
- Frame History Tracking: Maintain a history of recent video frames for temporal analysis (e.g., track frame sequences, maintain recent image history, collect frames for comparison), enabling temporal frame analysis workflows
- Detection History: Buffer recent detections for trend analysis or comparison (e.g., track detection changes over time, compare current vs previous detections, analyze detection patterns), enabling detection history workflows
- Batch Processing Preparation: Collect multiple values before processing them together (e.g., batch process recent images, aggregate multiple detections, prepare data for batch operations), enabling batch processing workflows
- Sliding Window Analysis: Perform analysis on a rolling window of data (e.g., analyze trends over recent frames, calculate moving averages, detect changes in sequences), enabling sliding window analysis workflows
- Visualization Sequences: Maintain recent data for animation or sequence visualization (e.g., create frame sequences, visualize temporal changes, display recent history), enabling temporal visualization workflows
- Temporal Comparison: Compare current values with recent historical values (e.g., compare current frame with previous frames, detect changes over time, analyze temporal patterns), enabling temporal comparison workflows
Connecting to Other Blocks¶
This block receives data of any type and produces a buffered output array:
- After any block that produces values to buffer (e.g., buffer images from image sources, buffer detections from detection models, buffer values from analytics blocks), enabling data buffering workflows
- Before blocks that process arrays to provide batched or historical data (e.g., process buffered images, analyze detection arrays, work with value sequences), enabling array processing workflows
- Before visualization blocks to display sequences or temporal data (e.g., visualize frame sequences, display detection history, show temporal patterns), enabling temporal visualization workflows
- Before analysis blocks that require historical data (e.g., analyze trends over time, compare current vs historical, process temporal sequences), enabling temporal analysis workflows
- Before aggregation blocks to provide multiple values for aggregation (e.g., aggregate buffered values, process multiple detections, combine recent data), enabling aggregation workflows
- In temporal processing pipelines where maintaining recent history is required (e.g., track changes over time, maintain frame sequences, collect data for temporal analysis), enabling temporal processing workflows
Requirements¶
This block works with any data type (images, detections, values, etc.). The buffer maintains state across workflow executions within the same workflow instance. The length parameter determines the maximum number of values to keep in the buffer. When pad is enabled, the buffer will always return exactly length elements (padded with None if needed). When pad is disabled, the buffer grows from 0 to length elements as values are added, then maintains length elements by removing oldest values. The buffer persists for the lifetime of the workflow execution and resets when the workflow is restarted.
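As a point of comparison, the unpadded behaviour resembles a `collections.deque` with `maxlen`: the newest item sits at index 0 and the oldest is discarded once the window is full. The analogy below is only illustrative; the block itself returns a plain list.

```python
from collections import deque

window = deque(maxlen=3)  # analogous to length=3 with pad disabled
for value in [1, 2, 3, 4]:
    window.appendleft(value)  # newest value at index 0
    print(list(window))
# [1]
# [2, 1]
# [3, 2, 1]
# [4, 3, 2]   <- oldest value (1) dropped once the window is full
```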
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/buffer@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `length` | `int` | Maximum number of elements to keep in the buffer. When the buffer exceeds this length, the oldest elements are automatically removed. Determines the size of the sliding window. Must be greater than 0. Typical values range from 2-10 for frame sequences, or higher for longer histories. | ❌ |
| `pad` | `bool` | Enable padding to maintain a consistent buffer size. If True, the buffer is padded with None values until it reaches exactly `length` elements, ensuring the output always has `length` items even when fewer values have been received. If False, the buffer grows from 0 to `length` as values are added, then maintains `length` by removing the oldest values. Use padding when downstream blocks require a fixed-size array. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Buffer in version v1.
- inputs:
Contrast Equalization,Clip Comparison,Detections Transformation,VLM as Detector,Polygon Visualization,Image Blur,SIFT Comparison,First Non Empty Or Default,Text Display,SIFT,Moondream2,Qwen3-VL,Google Vision OCR,Pixelate Visualization,Time in Zone,VLM as Classifier,Detection Offset,Detections Filter,Instance Segmentation Model,Perspective Correction,Halo Visualization,Image Threshold,Path Deviation,Keypoint Detection Model,CSV Formatter,Florence-2 Model,Detections Stabilizer,Twilio SMS Notification,Image Convert Grayscale,Perception Encoder Embedding Model,Corner Visualization,Dynamic Zone,Identify Changes,Icon Visualization,Expression,SAM 3,Qwen2.5-VL,Detections Consensus,Multi-Label Classification Model,Detections Stitch,Dynamic Crop,QR Code Detection,Continue If,Bounding Box Visualization,YOLO-World Model,Detection Event Log,Detections Classes Replacement,Blur Visualization,Camera Calibration,Line Counter,Dominant Color,Path Deviation,OpenAI,Camera Focus,CogVLM,Trace Visualization,Image Slicer,Absolute Static Crop,Dot Visualization,Label Visualization,Slack Notification,Google Gemini,Object Detection Model,LMM For Classification,Stitch OCR Detections,OpenAI,Classification Label Visualization,Stitch OCR Detections,Byte Tracker,Velocity,Twilio SMS/MMS Notification,Anthropic Claude,Clip Comparison,Gaze Detection,VLM as Detector,Webhook Sink,Llama 3.2 Vision,SIFT Comparison,Anthropic Claude,Delta Filter,Time in Zone,Local File Sink,QR Code Generator,SmolVLM2,Email Notification,CLIP Embedding Model,Roboflow Dataset Upload,Motion Detection,Model Comparison Visualization,Camera Focus,PTZ Tracking (ONVIF),LMM,Byte Tracker,Single-Label Classification Model,Mask Visualization,Anthropic Claude,Relative Static Crop,Cosine Similarity,SAM 3,Detections Merge,Object Detection Model,Keypoint Detection Model,Circle Visualization,Seg Preview,Property Definition,EasyOCR,Stability AI Inpainting,Multi-Label Classification Model,Reference Path Visualization,Time in Zone,Detections Combine,Crop Visualization,Ellipse Visualization,Overlap Filter,Line Counter,Image Preprocessing,Barcode Detection,Environment Secrets Store,Detections List Roll-Up,Background Subtraction,Segment Anything 2 Model,Image Contours,Image Slicer,Cache Set,Depth Estimation,Pixel Color Count,Stitch Images,VLM as Classifier,Cache Get,Model Monitoring Inference Aggregator,Instance Segmentation Model,Line Counter Visualization,Morphological Transformation,Polygon Zone Visualization,Single-Label Classification Model,Email Notification,OCR Model,Distance Measurement,Roboflow Custom Metadata,Google Gemini,Keypoint Visualization,OpenAI,Size Measurement,Color Visualization,Data Aggregator,Byte Tracker,Identify Outliers,Buffer,Florence-2 Model,Google Gemini,JSON Parser,Grid Visualization,Rate Limiter,OpenAI,Template Matching,Dimension Collapse,Bounding Rectangle,Background Color Visualization,Roboflow Dataset Upload,SAM 3,Stability AI Outpainting,Triangle Visualization,Stability AI Image Generation
- outputs:
Llama 3.2 Vision,Clip Comparison,Anthropic Claude,VLM as Detector,Time in Zone,Polygon Visualization,Email Notification,Roboflow Dataset Upload,Motion Detection,SAM 3,Mask Visualization,Anthropic Claude,Object Detection Model,Keypoint Detection Model,Circle Visualization,Seg Preview,Time in Zone,VLM as Classifier,Reference Path Visualization,Time in Zone,Instance Segmentation Model,Perspective Correction,Halo Visualization,Crop Visualization,Ellipse Visualization,Path Deviation,Keypoint Detection Model,Florence-2 Model,Corner Visualization,Line Counter,Detections List Roll-Up,SAM 3,Detections Consensus,Cache Set,VLM as Classifier,Bounding Box Visualization,YOLO-World Model,Instance Segmentation Model,Line Counter Visualization,Polygon Zone Visualization,Line Counter,Email Notification,Keypoint Visualization,Path Deviation,Google Gemini,OpenAI,OpenAI,Trace Visualization,Size Measurement,Color Visualization,Dot Visualization,Label Visualization,Buffer,Florence-2 Model,Google Gemini,Google Gemini,Grid Visualization,Object Detection Model,LMM For Classification,OpenAI,Classification Label Visualization,SAM 3,Roboflow Dataset Upload,Twilio SMS/MMS Notification,Anthropic Claude,Clip Comparison,Triangle Visualization,VLM as Detector,Webhook Sink
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Buffer in version v1 has.
Bindings
- input `data` (`Union[*, image, list_of_values]`): Input data of any type to add to the buffer. Can be images, detections, values, or any other workflow output. Newest values are added to the beginning of the buffer array. The buffer maintains a sliding window of the most recent values.
- output `output` (`list_of_values`): List of values of any type.
Example JSON definition of step Buffer in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/buffer@v1",
    "data": "$steps.visualization",
    "length": 5,
    "pad": true
}
```
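Once the step runs, downstream steps can consume the buffered array through a selector such as `$steps.<your_step_name_here>.output`; the exact property it is attached to depends on the consuming block.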