
Model Comparison Visualization

Class: ModelComparisonVisualizationBlockV1

Source: inference.core.workflows.core_steps.visualizations.model_comparison.v1.ModelComparisonVisualizationBlockV1

Compare predictions from two different models by color-coding areas where only one model detected objects, highlighting model differences while leaving overlapping predictions unchanged to visualize model agreement and disagreement.

How This Block Works

This block takes an image and predictions from two models (Model A and Model B) and creates a visual comparison overlay that highlights differences between the models. The block:

  1. Takes an image and two sets of predictions (predictions_a and predictions_b) as input
  2. Creates masks for areas predicted by each model (using bounding boxes or segmentation masks if available)
  3. Identifies four distinct regions:
     • Areas predicted only by Model A (colored with color_a, default green)
     • Areas predicted only by Model B (colored with color_b, default red)
     • Areas predicted by both models (left unchanged, allowing the original image to show through)
     • Areas predicted by neither model (colored with background_color, default black)
  4. Applies colored overlays to the identified regions using the specified opacity
  5. Returns an annotated image where model differences are visually distinguished with color coding

The block produces a comparison overlay that makes it easy to see where the models agree (unchanged areas) and where they disagree (color-coded areas). Areas where both models made predictions are left unchanged, allowing the original image to "shine through" and clearly showing model consensus. This visualization helps identify model strengths, weaknesses, and differences in detection behavior. The block works with object detection predictions (using bounding boxes) or instance segmentation predictions (using masks), making it versatile for comparing different model types.
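Conceptually, the region logic comes down to a few boolean mask operations plus an opacity blend. The sketch below, using numpy and supervision, is a minimal illustration of that idea and not the block's actual implementation; the helper names are hypothetical, and colors are given as BGR tuples as OpenCV-style images expect.

import numpy as np
import supervision as sv


def detections_to_mask(detections: sv.Detections, shape: tuple) -> np.ndarray:
    """Union of all areas predicted by one model: segmentation masks if present, else boxes."""
    mask = np.zeros(shape[:2], dtype=bool)
    if detections.mask is not None:
        for instance_mask in detections.mask:
            mask |= instance_mask.astype(bool)
    else:
        for x1, y1, x2, y2 in detections.xyxy.astype(int):
            mask[y1:y2, x1:x2] = True
    return mask


def compare_models(image, detections_a, detections_b, color_a=(0, 255, 0),
                   color_b=(0, 0, 255), background=(0, 0, 0), opacity=0.7):
    mask_a = detections_to_mask(detections_a, image.shape)
    mask_b = detections_to_mask(detections_b, image.shape)

    only_a = mask_a & ~mask_b        # predicted only by Model A -> color_a
    only_b = mask_b & ~mask_a        # predicted only by Model B -> color_b
    neither = ~(mask_a | mask_b)     # predicted by neither model -> background_color
    # mask_a & mask_b (agreement) is deliberately left untouched.

    out = image.astype(np.float32)
    for region, color in ((only_a, color_a), (only_b, color_b), (neither, background)):
        out[region] = (1 - opacity) * out[region] + opacity * np.asarray(color, dtype=np.float32)
    return out.astype(np.uint8)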

Common Use Cases

  • Model Evaluation and Comparison: Compare two models' detection performance side-by-side to identify where models agree, disagree, or have different detection behaviors for model evaluation, benchmarking, or selection workflows
  • Model Development and Debugging: Visualize differences between model versions, architectures, or configurations to understand how changes affect detection behavior, identify improvement opportunities, or debug model performance issues
  • Ensemble Model Analysis: Compare predictions from different models in ensemble workflows to understand model agreement patterns, identify complementary strengths, or analyze consensus areas for ensemble decision-making
  • Training Data Analysis: Compare model predictions to ground truth annotations or between training runs to identify patterns in detection differences, validate training improvements, or analyze model behavior across datasets
  • A/B Testing and Model Selection: Visually compare candidate models to evaluate relative performance, identify detection differences, or make informed model selection decisions for deployment
  • Quality Assurance and Validation: Validate model consistency, compare model performance on edge cases, or identify systematic differences between models for quality assurance, validation, or compliance workflows

Connecting to Other Blocks

The annotated image from this block can be connected to:

  • Model blocks (e.g., Object Detection Model, Instance Segmentation Model) to receive predictions_a and predictions_b from different models for comparison
  • Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save comparison visualizations for documentation, reporting, or analysis
  • Webhook blocks to send comparison visualizations to external systems, APIs, or web applications for display in dashboards, model monitoring tools, or evaluation interfaces
  • Notification blocks (e.g., Email Notification, Slack Notification) to send comparison visualizations as visual evidence in alerts or reports for model performance monitoring
  • Video output blocks to create annotated video streams or recordings with model comparison visualizations for live model evaluation, performance monitoring, or post-processing analysis

Type identifier

Use the following identifier in the step "type" field: roboflow_core/model_comparison_visualization@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step.
copy_image bool Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
color_a str Color used to highlight areas predicted only by Model A (that Model B did not predict). Can be specified as a color name (e.g., 'GREEN', 'BLUE'), hex color code (e.g., '#00FF00', '#FFFFFF'), or RGB format (e.g., 'rgb(0, 255, 0)'). Default is GREEN to indicate Model A's unique predictions.
color_b str Color used to highlight areas predicted only by Model B (that Model A did not predict). Can be specified as a color name (e.g., 'RED', 'BLUE'), hex color code (e.g., '#FF0000', '#FFFFFF'), or RGB format (e.g., 'rgb(255, 0, 0)'). Default is RED to indicate Model B's unique predictions.
background_color str Color used for areas predicted by neither model. Can be specified as a color name (e.g., 'BLACK', 'GRAY'), hex color code (e.g., '#000000', '#808080'), or RGB format (e.g., 'rgb(0, 0, 0)'). Default is BLACK to indicate areas where both models missed detections.
opacity float Opacity of the comparison overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls how transparent the color-coded overlays appear over the original image. Lower values create more transparent overlays where original image details remain more visible, while higher values create more opaque overlays with stronger color emphasis. Typical values range from 0.5 to 0.8 for balanced visibility.

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
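The color_a, color_b, and background_color properties accept color names, hex codes, or rgb(...) strings. The snippet below is a minimal sketch of how such strings could be resolved to RGB tuples; it is illustrative only and does not reproduce the block's actual parser (which may, for example, rely on supervision's color utilities).

import re

# Hypothetical subset of named colors, for illustration only.
NAMED_COLORS = {
    "GREEN": (0, 255, 0), "RED": (255, 0, 0), "BLUE": (0, 0, 255),
    "BLACK": (0, 0, 0), "WHITE": (255, 255, 255), "GRAY": (128, 128, 128),
}

def parse_color(value: str) -> tuple:
    value = value.strip()
    if value.upper() in NAMED_COLORS:                                   # e.g. 'GREEN'
        return NAMED_COLORS[value.upper()]
    if value.startswith("#") and len(value) == 7:                       # e.g. '#00FF00'
        return tuple(int(value[i:i + 2], 16) for i in (1, 3, 5))
    match = re.fullmatch(r"rgb\((\d+),\s*(\d+),\s*(\d+)\)", value)      # e.g. 'rgb(0, 255, 0)'
    if match:
        return tuple(int(channel) for channel in match.groups())
    raise ValueError(f"Unrecognized color format: {value}")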

Available Connections

Compatible Blocks

Check what blocks you can connect to Model Comparison Visualization in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Model Comparison Visualization in version v1 has.

Bindings
  • input

    • image (image): The image to visualize on.
    • copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    • predictions_a (Union[rle_instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Predictions from Model A (the first model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model A (and not by Model B) will be colored with color_a. Works with bounding boxes or masks depending on prediction type.
    • color_a (string): Color used to highlight areas predicted only by Model A (that Model B did not predict). Can be specified as a color name (e.g., 'GREEN', 'BLUE'), hex color code (e.g., '#00FF00', '#FFFFFF'), or RGB format (e.g., 'rgb(0, 255, 0)'). Default is GREEN to indicate Model A's unique predictions.
    • predictions_b (Union[rle_instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Predictions from Model B (the second model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model B (and not by Model A) will be colored with color_b. Works with bounding boxes or masks depending on prediction type.
    • color_b (string): Color used to highlight areas predicted only by Model B (that Model A did not predict). Can be specified as a color name (e.g., 'RED', 'BLUE'), hex color code (e.g., '#FF0000', '#FFFFFF'), or RGB format (e.g., 'rgb(255, 0, 0)'). Default is RED to indicate Model B's unique predictions.
    • background_color (string): Color used for areas predicted by neither model. Can be specified as a color name (e.g., 'BLACK', 'GRAY'), hex color code (e.g., '#000000', '#808080'), or RGB format (e.g., 'rgb(0, 0, 0)'). Default is BLACK to indicate areas where both models missed detections.
    • opacity (float_zero_to_one): Opacity of the comparison overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls how transparent the color-coded overlays appear over the original image. Lower values create more transparent overlays where original image details remain more visible, while higher values create more opaque overlays with stronger color emphasis. Typical values range from 0.5 to 0.8 for balanced visibility.
  • output

    • image (image): The annotated image with the model comparison overlay applied.
Example JSON definition of step Model Comparison Visualization in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/model_comparison_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions_a": "$steps.object_detection_model.predictions",
    "color_a": "GREEN",
    "predictions_b": "$steps.object_detection_model.predictions",
    "color_b": "RED",
    "background_color": "BLACK",
    "opacity": 0.7
}
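
For context, the sketch below shows how this step might sit inside a complete workflow specification, with two separate detection model steps feeding predictions_a and predictions_b. The model step type, model_id values, and output wiring are assumptions for illustration; adapt them to the blocks and models available in your workspace.

# Hypothetical workflow specification: two detection models compared by this block.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed model block identifier
            "name": "model_a",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model
        },
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "model_b",
            "image": "$inputs.image",
            "model_id": "your-project/2",  # a different model or version
        },
        {
            "type": "roboflow_core/model_comparison_visualization@v1",
            "name": "comparison",
            "image": "$inputs.image",
            "predictions_a": "$steps.model_a.predictions",
            "predictions_b": "$steps.model_b.predictions",
            "color_a": "GREEN",
            "color_b": "RED",
            "background_color": "BLACK",
            "opacity": 0.7,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "comparison_image", "selector": "$steps.comparison.image"},
    ],
}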