Data Aggregator

Class: DataAggregatorBlockV1

Source: inference.core.workflows.core_steps.analytics.data_aggregator.v1.DataAggregatorBlockV1

Collect and process data from workflow steps over configurable time-based or run-based intervals to generate statistical summaries and analytics reports. Supports multiple aggregation operations (sum, average, max, min, count, distinct values, value counts), with optional UQL-based data transformations, for comprehensive data stream analytics.

How This Block Works

This block collects and aggregates data from workflow steps over specified intervals to produce statistical summaries. Unlike most blocks that output data for every input, this block maintains internal state and outputs aggregated results only when the configured interval is reached. The block:

  1. Receives data inputs from other workflow steps (via data field mapping variable names to workflow step outputs)
  2. Optionally applies UQL (Universal Query Language) operations to transform the data before aggregation (e.g., extract class names from detections, calculate sequence lengths, filter or transform values) using data_operations for each input variable
  3. Accumulates data into internal aggregation states based on the specified aggregation_mode for each variable
  4. Tracks time elapsed or number of runs based on interval_unit (seconds, minutes, hours, or runs)
  5. Returns empty outputs on most runs (terminating downstream processing) while data accumulates internally
  6. When the interval threshold is reached (based on time elapsed or run count), computes and outputs aggregated statistics
  7. Flushes internal state after outputting aggregated results and starts collecting data for the next interval
  8. Produces output fields dynamically named as {variable_name}_{aggregation_mode} (e.g., predictions_avg, classes_distinct, count_values_counts)

The block supports multiple aggregation modes for numeric data (sum, avg, max, min, values_difference), counting operations (count, count_distinct), and value analysis (distinct, values_counts). For list-like data, operations automatically process each element (e.g., count adds list length, distinct adds each element to the distinct set). The interval can be time-based (useful for video streams where wall-clock time matters) or run-based (useful for video file processing where frame count matters more than elapsed time).
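
The lifecycle above can be pictured with a short sketch. The following is a minimal, illustrative rendering of the accumulate-and-flush cycle for a single numeric variable with a time-based interval; it is not the block's actual implementation, and the class and method names are invented for the example:

import time

class SketchAggregator:
    # Illustrative only: mimics the accumulate / flush cycle described above
    # for one numeric variable and a time-based interval.
    def __init__(self, interval_seconds: float):
        self.interval = interval_seconds
        self.window_start = time.monotonic()
        self.values = []                          # internal aggregation state (step 3)

    def run(self, value: float):
        self.values.append(value)                 # accumulate into the current window
        if time.monotonic() - self.window_start < self.interval:
            return None                           # empty output while collecting (step 5)
        result = {                                # compute and name outputs (steps 6 and 8)
            "value_sum": sum(self.values),
            "value_avg": sum(self.values) / len(self.values),
            "value_max": max(self.values),
            "value_min": min(self.values),
        }
        self.values = []                          # flush state for the next window (step 7)
        self.window_start = time.monotonic()
        return result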

Common Use Cases

  • Video Stream Analytics: Aggregate detection results over time intervals from live video streams (e.g., calculate average object counts per minute, track distinct classes seen per hour, compute min/max detection counts over 30-second windows), enabling real-time analytics and monitoring for continuous video processing workflows
  • Batch Video Processing: Aggregate statistics across video frames using run-based intervals (e.g., calculate average detections per 100 frames, count distinct objects across 500-frame windows, sum total detections per batch), enabling meaningful analytics for pre-recorded video files where frame count matters more than elapsed time; see the configuration sketch after this list
  • Time-Series Metrics Collection: Collect and summarize workflow metrics over time (e.g., aggregate detection counts, calculate average confidence scores, track distinct class occurrences, compute value distributions), enabling statistical analysis and reporting for production workflows
  • Model Performance Analysis: Analyze model predictions across multiple inputs (e.g., calculate average prediction counts, track distinct predicted classes, compute min/max confidence scores, count occurrences of each class), enabling comprehensive model performance evaluation and insights
  • Data Stream Summarization: Summarize high-frequency data streams into periodic reports (e.g., aggregate every 60 seconds of detections into summary statistics, compute hourly averages, generate per-run summaries), enabling efficient data reduction and analysis for high-volume workflows
  • Multi-Model Comparison: Aggregate results from multiple models for comparison (e.g., compare average detection counts across models, track distinct classes per model, compute aggregate statistics for model ensembles), enabling comparative analytics across different inference pipelines
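
As a concrete illustration of the Batch Video Processing pattern above, a run-based configuration could look like the following. The step name and the upstream selector are hypothetical, and the snippet is written as a Python dict, which maps one-to-one onto the JSON step definition; check the UQL reference for the exact operation schemas:

step = {
    "name": "per_batch_stats",                                # hypothetical step name
    "type": "roboflow_core/data_aggregator@v1",
    "data": {
        "detections": "$steps.object_detection.predictions"   # hypothetical upstream step
    },
    "data_operations": {
        # Reduce each detections object to the number of detections it contains.
        "detections": [{"type": "SequenceLength"}]
    },
    "aggregation_mode": {
        "detections": ["sum", "avg", "max", "min"]
    },
    "interval_unit": "runs",   # count workflow executions (frames), not wall-clock time
    "interval": 100            # emit detections_sum, detections_avg, ... every 100 frames
}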

Connecting to Other Blocks

This block receives data from workflow steps and outputs aggregated statistics periodically:

  • After detection or analysis blocks (e.g., Object Detection, Instance Segmentation, Classification) to aggregate prediction results over time or across frames, enabling statistical analysis of model outputs and detection patterns
  • After data processing blocks (e.g., Expression, Property Definition, Detections Filter) that produce numeric or list outputs to aggregate computed values, metrics, or transformed data over intervals
  • Before sink blocks (e.g., CSV Formatter, Local File Sink, Webhook Sink) to save periodic aggregated reports, enabling efficient storage and export of summarized analytics data instead of individual data points; see the pipeline sketch after this list
  • In video processing workflows to generate time-based or frame-based analytics reports, enabling comprehensive video analysis with periodic statistical summaries rather than per-frame outputs
  • Before visualization or reporting blocks that need aggregated data to create dashboards, charts, or summaries from time-series data, enabling visualization of trends and statistics
  • In analytics pipelines where high-frequency data needs to be reduced to periodic summaries, enabling efficient downstream processing and storage of statistical insights rather than raw high-volume data streams
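
As an illustration of the placement described above, a minimal detector-to-sink pipeline could be wired as follows. Only the aggregator step follows the schema documented on this page; the other two steps are abbreviated placeholders whose type identifiers and configuration are assumptions for the sketch:

steps = [
    {
        "name": "detector",
        "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed type id
        # ... model configuration elided ...
    },
    {
        "name": "aggregator",
        "type": "roboflow_core/data_aggregator@v1",
        "data": {"predictions": "$steps.detector.predictions"},
        "data_operations": {
            "predictions": [
                {"type": "DetectionsPropertyExtract", "property_name": "class_name"}
            ]
        },
        "aggregation_mode": {"predictions": ["values_counts"]},
        "interval_unit": "minutes",
        "interval": 1
    },
    {
        "name": "sink",
        "type": "roboflow_core/webhook_sink@v1",  # assumed type id
        # The sink would reference the dynamically named output, e.g.
        # "$steps.aggregator.predictions_values_counts".
        # ... sink configuration elided ...
    }
]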

Type identifier

Use the following identifier in step "type" field: roboflow_core/data_aggregator@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step.
data_operations Dict[str, List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]]] Optional dictionary mapping variable names (from data) to UQL (Universal Query Language) operation chains that transform data before aggregation. Operations are applied in sequence to extract, filter, or transform values (e.g., extract class names from detections using DetectionsPropertyExtract, calculate sequence length using SequenceLength, filter values, perform calculations). Keys must match variable names in data. Leave empty or omit variables that don't need transformation. Example: {'predictions': [{'type': 'DetectionsPropertyExtract', 'property_name': 'class_name'}]}.
aggregation_mode Dict[str, List[str]] Dictionary mapping variable names (from data) to lists of aggregation operations to compute. Each aggregation produces an output field named '{variable_name}_{aggregation_mode}'. Supported operations: 'sum' (sum of numeric values), 'avg' (average of numeric values), 'max'/'min' (maximum/minimum numeric values), 'count' (count values, adds list length for lists), 'distinct' (list of unique values), 'count_distinct' (number of unique values), 'values_counts' (dictionary of value occurrence counts), 'values_difference' (difference between max and min numeric values). For lists, operations process each element. Multiple aggregations per variable are supported. Example: {'predictions': ['distinct', 'count_distinct', 'avg']}.
interval_unit str Unit for measuring the aggregation interval: 'seconds', 'minutes', 'hours' (time-based, uses wall-clock time elapsed since last output - useful for video streams), or 'runs' (run-based, counts number of workflow executions - useful for video file processing where frame count matters more than time). Time-based intervals track elapsed time between aggregated outputs. Run-based intervals count the number of times the block receives data. The block outputs aggregated results and flushes state when the interval threshold is reached.
interval int Length of the aggregation interval in the units specified by interval_unit. Must be greater than 0. The block accumulates data internally and outputs aggregated results when this interval threshold is reached. For time-based units (seconds, minutes, hours), this is the duration elapsed since the last output. For 'runs', this is the number of workflow executions (e.g., frames processed) since the last output. After outputting results, the block resets its internal state and starts a new aggregation window. Most of the time, the block returns empty outputs while collecting data.

The Refs column marks possibility to parametrise the property with dynamic values available in workflow runtime. See Bindings for more info.
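
To make the aggregation modes concrete, the snippet below shows what each mode would yield for a small window of accumulated values. This is a plain-Python rendering of the documented semantics, not the block's own code:

from collections import Counter

window = ["car", "car", "truck", "car", "bus"]  # e.g. class names extracted via UQL
print(len(window))                  # count           -> 5
print(sorted(set(window)))          # distinct        -> ['bus', 'car', 'truck']
print(len(set(window)))             # count_distinct  -> 3
print(dict(Counter(window)))        # values_counts   -> {'car': 3, 'truck': 1, 'bus': 1}

numbers = [2, 5, 3]
print(sum(numbers))                 # sum               -> 10
print(sum(numbers) / len(numbers))  # avg               -> 3.33...
print(max(numbers), min(numbers))   # max, min          -> 5 2
print(max(numbers) - min(numbers))  # values_difference -> 3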

Available Connections

Compatible Blocks

Check what blocks you can connect to Data Aggregator in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Data Aggregator in version v1 has.

Bindings
  • input

    • data (*): Dictionary mapping variable names to data sources from workflow steps. Each key becomes a variable name for aggregation, and each value is a selector referencing workflow step outputs (e.g., predictions, metrics, computed values). These variables are used in aggregation_mode to specify which aggregations to compute. Example: {'predictions': '$steps.model.predictions', 'count': '$steps.counter.total'}.
  • output

    • * (*): Output fields are named dynamically as '{variable_name}_{aggregation_mode}'; each is equivalent of any element.

Example JSON definition of step Data Aggregator in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/data_aggregator@v1",
    "data": {
        "predictions": "$steps.model.predictions",
        "reference": "$inputs.reference_class_names"
    },
    "data_operations": {
        "predictions": [
            {
                "property_name": "class_name",
                "type": "DetectionsPropertyExtract"
            }
        ]
    },
    "aggregation_mode": {
        "predictions": [
            "distinct",
            "count_distinct"
        ]
    },
    "interval_unit": "seconds",
    "interval": 10
}
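
With this definition, the step emits empty outputs while collecting, then roughly every 10 seconds produces two fields named after the variable and the aggregation mode: predictions_distinct (the unique class names seen in the window) and predictions_count_distinct (how many unique class names there were). A downstream step would reference them with selectors such as $steps.<your_step_name_here>.predictions_distinct. An illustrative output for one window (the values are made up):

{
    "predictions_distinct": ["car", "truck", "bus"],
    "predictions_count_distinct": 3
}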