
Model Monitoring Inference Aggregator

Class: ModelMonitoringInferenceAggregatorBlockV1

Source: inference.core.workflows.core_steps.sinks.roboflow.model_monitoring_inference_aggregator.v1.ModelMonitoringInferenceAggregatorBlockV1

This block 📊 streamlines inference data reporting by periodically aggregating predictions in memory and sending a curated sample of them to Roboflow Model Monitoring.

✨ Key Features

  • Effortless Aggregation: Collects and organizes predictions in-memory, ensuring only the most relevant and confident predictions are reported.

  • Customizable Reporting Intervals: Choose how frequently (in seconds) data should be sent—ensuring optimal balance between granularity and resource efficiency.

  • Debug-Friendly Mode: Fine-tune operations by enabling or disabling asynchronous background execution.

🔍 Why Use This Block?

This block is a game-changer for projects relying on video processing in Workflows. Its aggregation process identifies the most confident predictions across classes and sends them at regular intervals in small messages to the Roboflow backend, keeping the impact on video processing performance to a minimum (a conceptual sketch follows the list below).

Perfect for:

  • Monitoring production line performance in real-time 🏭.

  • Debugging and validating your model’s performance over time ⏱️.

  • Providing actionable insights from inference workflows with minimal overhead 🔧.
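To make the aggregation idea concrete, here is a minimal conceptual sketch in Python. It is not the block's actual implementation: it simply keeps the most confident prediction per class in memory and flushes that small sample every frequency seconds. The send_fn callback and the prediction dictionary layout are assumptions made for illustration.

import time

class PredictionAggregator:
    # Conceptual sketch only: keep the most confident prediction per class
    # in memory and flush a small sample at a fixed interval.

    def __init__(self, frequency_seconds, send_fn):
        self._frequency = frequency_seconds
        self._send_fn = send_fn  # e.g. a call reporting the sample to Model Monitoring
        self._best_per_class = {}
        self._last_flush = time.monotonic()

    def consume(self, predictions):
        # predictions: [{"class": "car", "confidence": 0.93, ...}, ...]
        for prediction in predictions:
            best = self._best_per_class.get(prediction["class"])
            if best is None or prediction["confidence"] > best["confidence"]:
                self._best_per_class[prediction["class"]] = prediction
        if time.monotonic() - self._last_flush >= self._frequency:
            self._flush()

    def _flush(self):
        if self._best_per_class:
            self._send_fn(list(self._best_per_class.values()))
        self._best_per_class = {}
        self._last_flush = time.monotonic()

# Example usage of the sketch:
aggregator = PredictionAggregator(frequency_seconds=5, send_fn=print)
aggregator.consume([{"class": "car", "confidence": 0.93}])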

🚨 Limitations

  • The block should not be relied on when running a Workflow in an inference server or via HTTP requests to the Roboflow hosted platform: the internal state is not persisted in memory shared across requests to the server, so aggregation only has the scope of a single request. We will solve that problem in future releases if it proves to be a serious limitation for clients.

  • This block does not have the ability to separate aggregations for multiple videos processed by InferencePipeline - it effectively aggregates data for all video feeds connected to a single process running InferencePipeline.

Type identifier

Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/model_monitoring_inference_aggregator@v1

Properties

Name | Type | Description | Refs
name | str | Enter a unique identifier for this step. |
frequency | int | Frequency of reporting (in seconds). For example, if 5 is provided, the block will report an aggregated sample of predictions every 5 seconds. |
unique_aggregator_key | str | Unique key used internally to track the session of inference results reporting. Must be unique for each step in your Workflow. |
fire_and_forget | bool | Boolean flag dictating whether the sink should be executed in the background, without waiting on the registration status before the end of the workflow run. Use True if best-effort registration is enough; use False while debugging or when error handling is needed. |

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
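As an illustration of such a binding, a property marked in the Refs column can be given a selector instead of a literal value. The snippet below is hypothetical: it assumes frequency accepts input bindings, and the reporting_frequency input name is made up for illustration.

{
    "name": "aggregator",
    "type": "roboflow_core/model_monitoring_inference_aggregator@v1",
    "predictions": "$steps.my_step.predictions",
    "frequency": "$inputs.reporting_frequency",
    "unique_aggregator_key": "session-1v73kdhfse",
    "fire_and_forget": true
}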

Available Connections

Compatible Blocks

Check what blocks you can connect to Model Monitoring Inference Aggregator in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Model Monitoring Inference Aggregator in version v1 has.

Bindings
  • input

  • output

    • error_status (boolean): Boolean flag.
    • message (string): String value.
Example JSON definition of step Model Monitoring Inference Aggregator in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/model_monitoring_inference_aggregator@v1",
    "predictions": "$steps.my_step.predictions",
    "model_id": "my_project/3",
    "frequency": "3",
    "unique_aggregator_key": "session-1v73kdhfse",
    "fire_and_forget": true
}
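Because the block is designed around video processing with InferencePipeline (see Limitations above), a typical way to run it is inside a workflow specification passed to the pipeline. The following is a minimal sketch, assuming the inference SDK's InferencePipeline.init_with_workflow entry point; the model ID, video source, detection step and output mapping are placeholders, not a definitive setup.

import os

from inference import InferencePipeline

# Placeholder workflow: an object detection step feeding the aggregator sink.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "InferenceImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "model",
            "images": "$inputs.image",
            "model_id": "my_project/3",  # placeholder model
        },
        {
            "type": "roboflow_core/model_monitoring_inference_aggregator@v1",
            "name": "aggregator",
            "predictions": "$steps.model.predictions",
            "model_id": "my_project/3",
            "frequency": 5,  # report an aggregated sample every 5 seconds
            "unique_aggregator_key": "session-1v73kdhfse",
            "fire_and_forget": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "error_status",
            "selector": "$steps.aggregator.error_status",
        },
    ],
}

pipeline = InferencePipeline.init_with_workflow(
    api_key=os.environ["ROBOFLOW_API_KEY"],
    video_reference="rtsp://camera.local/stream",  # placeholder video source
    workflow_specification=WORKFLOW_SPECIFICATION,
    on_prediction=lambda result, frame: None,  # handle workflow outputs here
)
pipeline.start()
pipeline.join()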