Data Aggregator

The Data Aggregator block collects and processes data from Workflows to generate time-based statistical summaries. It allows users to define custom aggregation strategies over specified intervals, making it suitable for creating analytics on data streams.

The block enables:

  • feeding it with data from other Workflow blocks and applying in-place operations (for instance, to extract desired values from model predictions)

  • using multiple aggregation modes, including sum, avg, max, min, count, and others

  • specifying the aggregation interval flexibly

Feeding Data Aggregator

You can specify the data to aggregate by referencing input sources using the data field. Optionally, for each specified data input you can apply a chain of UQL operations with the data_operations property.

For example, the following configuration:

data = {
    "predictions_model_a": "$steps.model_a.predictions",
    "predictions_model_b": "$steps.model_b.predictions",
}
data_operations = { 
    "predictions_model_a": [
        {"type": "DetectionsPropertyExtract", "property_name": "class_name"}
    ],
    "predictions_model_b": [{"type": "SequenceLength"}]
}

will, on each step run, extract the list of detected class names from predictions_model_a and calculate the number of predicted bounding boxes in predictions_model_b.
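
To make that concrete, here is a hedged plain-Python sketch of what those two chains produce on a single step run. The detection structures below are simplified, hypothetical stand-ins for the actual prediction objects, and the list comprehension and len call merely emulate the DetectionsPropertyExtract and SequenceLength operations:

# Simplified stand-ins for real model predictions (hypothetical values):
predictions_model_a = [
    {"class_name": "car", "confidence": 0.91},
    {"class_name": "person", "confidence": 0.87},
]
predictions_model_b = [
    {"class_name": "dog", "confidence": 0.78},
    {"class_name": "car", "confidence": 0.66},
]

# Emulates DetectionsPropertyExtract(property_name="class_name"):
extracted_a = [det["class_name"] for det in predictions_model_a]  # ['car', 'person']
# Emulates SequenceLength:
extracted_b = len(predictions_model_b)  # 2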

Specifying data aggregations

For each input referenced by the data property you can specify a list of aggregation operations, which include:

  • sum: Taking the sum of values (requires data to be numeric)

  • avg: Taking the average of values (requires data to be numeric)

  • max: Taking the max of values (requires data to be numeric)

  • min: Taking the min of values (requires data to be numeric)

  • count: Counting the values. If the provided value is a list, the operation will add the length of the list to the aggregated state.

  • distinct: Deduplicating encountered values, providing the list of unique values in the output. If the aggregated data is a list, the operation will add each element of the list to the aggregated state.

  • count_distinct: Counting distinct values, providing the number of different values that were encountered. If the aggregated data is a list, the operation will add each element of the list to the aggregated state.

  • values_counts: Counting occurrences of each distinct value, providing a dictionary mapping each unique value encountered to the number of observations. If the aggregated data is a list, the operation will add each element of the list to the aggregated state.

  • values_difference: Calculating the difference between the max and min observed values (requires data to be numeric)
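
To illustrate the semantics above, here is a minimal plain-Python sketch (not the block's internals) of how each mode behaves over one aggregation window; the sample values are hypothetical:

from collections import Counter

# Values fed to the block across several runs in one window; lists are
# flattened element by element before aggregation, per the rules above.
window = [["car", "person"], ["car"], ["car", "dog"]]
flat = [v for item in window for v in (item if isinstance(item, list) else [item])]

report = {
    "count": len(flat),                    # 5
    "distinct": sorted(set(flat)),         # ['car', 'dog', 'person']
    "count_distinct": len(set(flat)),      # 3
    "values_counts": dict(Counter(flat)),  # {'car': 3, 'person': 1, 'dog': 1}
}

numeric_window = [3, 7, 5]  # e.g. per-run detection counts
report.update({
    "sum": sum(numeric_window),                                      # 15
    "avg": sum(numeric_window) / len(numeric_window),                # 5.0
    "max": max(numeric_window),                                      # 7
    "min": min(numeric_window),                                      # 3
    "values_difference": max(numeric_window) - min(numeric_window),  # 4
})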

If we take the data and data_operations from the example above and specify aggregation_mode in the following way:

aggregation_mode = {
    "predictions_model_a": ["distinct", "count_distinct"],
    "predictions_model_b": ["avg"],
}

Our aggregation report will contain the following values:

{
    "predictions_model_a_distinct": ["car", "person", "dog"],
    "predictions_model_a_count_distinct": {"car": 378, "person": 128, "dog": 37},
    "predictions_model_b_avg": 7.35,
}

where:

  • predictions_model_a_distinct provides the distinct classes predicted by model A in the aggregation window

  • predictions_model_a_count_distinct provides the number of distinct classes predicted by model A in the aggregation window

  • predictions_model_b_avg provides the average number of bounding boxes predicted by model B in the aggregation window
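
As the example shows, each report key follows the pattern <data input name>_<aggregation mode>. A quick plain-Python sketch of assembling such a report from hypothetical per-run values collected in one window (again, an illustration, not the block's implementation):

window = {
    "predictions_model_a": ["car", "person", "car", "dog"],  # flattened class names
    "predictions_model_b": [5, 9, 8],                        # per-run box counts
}

classes = window["predictions_model_a"]
counts = window["predictions_model_b"]
report = {
    "predictions_model_a_distinct": sorted(set(classes)),     # ['car', 'dog', 'person']
    "predictions_model_a_count_distinct": len(set(classes)),  # 3
    "predictions_model_b_avg": sum(counts) / len(counts),     # 7.33...
}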

Interval nature of the block

Block behaviour is dictated by internal 'clock'

The behaviour of this block differs from other, more classical blocks that produce output for each input. The Data Aggregator block maintains an internal state that dictates when data will be produced; whenever a report is emitted, the block's internal aggregation state is flushed.

You can expect that most of the time, once fed with data, the block will produce empty outputs, effectively terminating downstream processing:

--- input_batch[0] ----> ┌───────────────────────┐ ---->  <Empty>
--- input_batch[1] ----> │                       │ ---->  <Empty>
        ...              │     Data Aggregator   │ ---->  <Empty>
        ...              │                       │ ---->  <Empty>           
--- input_batch[n] ----> └───────────────────────┘ ---->  <Empty>

But once in a while, the block will yield the aggregated data and flush its internal state:

--- input_batch[0] ----> ┌───────────────────────┐ ---->  <Empty>
--- input_batch[1] ----> │                       │ ---->  <Empty>
        ...              │     Data Aggregator   │ ---->  {<aggregated_report>}
        ...              │                       │ ---->  <Empty> # first datapoint added to new state          
--- input_batch[n] ----> └───────────────────────┘ ---->  <Empty>

The aggregation interval is set with the interval and interval_unit properties: interval specifies the length of the aggregation window, and interval_unit defines its units. You can specify the interval based on:

  • elapsed time: using ["seconds", "minutes", "hours"] as interval_unit will make the Data Aggregator yield the aggregated report based on the time that has elapsed since the last report was released. This setting is relevant for processing video streams.

  • number of runs: using runs as interval_unit will make the block yield a report after a fixed number of step runs. This setting is relevant for processing video files, where elapsed wall-clock time is not a meaningful basis for reports.
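
The following is a minimal sketch of that flushing logic, under the assumption that the block checks the interval on every run. It is an illustration, not the block's actual implementation:

import time

class IntervalClockSketch:
    """Illustrative only: decides when an aggregation window should flush."""

    SECONDS_PER_UNIT = {"seconds": 1, "minutes": 60, "hours": 3600}

    def __init__(self, interval: int, interval_unit: str):
        self.interval = interval
        self.interval_unit = interval_unit
        self.runs_since_flush = 0
        self.last_flush_time = time.monotonic()

    def should_flush(self) -> bool:
        self.runs_since_flush += 1
        if self.interval_unit == "runs":
            # Flush after a fixed number of step runs (video files).
            return self.runs_since_flush >= self.interval
        # Flush based on wall-clock time elapsed (video streams).
        elapsed = time.monotonic() - self.last_flush_time
        return elapsed >= self.interval * self.SECONDS_PER_UNIT[self.interval_unit]

    def reset(self) -> None:
        # Called when the aggregated report is emitted and state is flushed.
        self.runs_since_flush = 0
        self.last_flush_time = time.monotonic()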

Type identifier

Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/data_aggregator@v1

Properties

| Name | Type | Description | Refs |
|------|------|-------------|------|
| name | str | Enter a unique identifier for this step. | |
| data_operations | Dict[str, List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, RandomNumber, SequenceAggregate, SequenceApply, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, ToBoolean, ToNumber, ToString]]] | UQL definitions of operations to be performed on the defined data, applied per element of the data. | |
| aggregation_mode | Dict[str, List[str]] | Lists of aggregation operations to apply on each input data. | |
| interval_unit | str | Unit to measure interval. | |
| interval | int | Length of aggregation interval. | |

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Check what blocks you can connect to Data Aggregator in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Data Aggregator in version v1 has.

Bindings
  • input

    • data (*): References data to be used to construct each and every column.
  • output

    • * (*): Equivalent of any element.
Example JSON definition of step Data Aggregator in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/data_aggregator@v1",
    "data": {
        "predictions": "$steps.model.predictions",
        "reference": "$inputs.reference_class_names"
    },
    "data_operations": {
        "predictions": [
            {
                "property_name": "class_name",
                "type": "DetectionsPropertyExtract"
            }
        ]
    },
    "aggregation_mode": {
        "predictions": [
            "distinct",
            "count_distinct"
        ]
    },
    "interval_unit": "seconds",
    "interval": 10
}