Model Comparison Visualization¶
Class: ModelComparisonVisualizationBlockV1
Compare predictions from two models by color-coding the areas that only one model detected, leaving overlapping predictions unchanged so that model agreement and disagreement are easy to see at a glance.
How This Block Works¶
This block takes an image and predictions from two models (Model A and Model B) and creates a visual comparison overlay that highlights differences between the models. The block:
- Takes an image and two sets of predictions (predictions_a and predictions_b) as input
- Creates masks for areas predicted by each model (using bounding boxes or segmentation masks if available)
- Identifies four distinct regions:
  - Areas predicted only by Model A (colored with color_a, default green)
  - Areas predicted only by Model B (colored with color_b, default red)
  - Areas predicted by both models (left unchanged, allowing the original image to show through)
  - Areas predicted by neither model (colored with background_color, default black)
- Applies colored overlays to the identified regions using the specified opacity
- Returns an annotated image where model differences are visually distinguished with color coding
The resulting overlay makes it easy to see where the models agree (unchanged areas) and where they disagree (color-coded areas). Areas where both models made predictions are left untouched, allowing the original image to "shine through" and clearly showing model consensus. This visualization helps identify model strengths, weaknesses, and differences in detection behavior. The block works with object detection predictions (using bounding boxes) or instance segmentation predictions (using masks), making it versatile for comparing different model types.
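For intuition, here is a minimal sketch of the region logic described above. This is not the block's actual implementation; it assumes each model's predictions have been reduced to NumPy arrays of xyxy bounding boxes and that the image is an OpenCV BGR array:

```python
import cv2
import numpy as np

def _rasterize(boxes: np.ndarray, h: int, w: int) -> np.ndarray:
    """Turn an (N, 4) array of xyxy boxes into a boolean coverage mask."""
    mask = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in np.clip(boxes.astype(int), 0, [w, h, w, h]):
        mask[y1:y2, x1:x2] = True
    return mask

def compare_models(image, boxes_a, boxes_b,
                   color_a=(0, 255, 0),     # BGR green
                   color_b=(0, 0, 255),     # BGR red
                   background=(0, 0, 0),    # BGR black
                   opacity=0.7):
    h, w = image.shape[:2]
    mask_a = _rasterize(boxes_a, h, w)
    mask_b = _rasterize(boxes_b, h, w)

    overlay = image.copy()
    overlay[mask_a & ~mask_b] = color_a        # only Model A predicted here
    overlay[mask_b & ~mask_a] = color_b        # only Model B predicted here
    overlay[~mask_a & ~mask_b] = background    # neither model predicted here
    # Pixels predicted by both models are left untouched in the overlay,
    # so blending returns the original image there (model consensus).
    return cv2.addWeighted(overlay, opacity, image, 1 - opacity, 0)
```

Because the consensus pixels are identical in the overlay and the source image, the final blend leaves them unchanged while the exclusive and background regions are tinted according to the opacity.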
Common Use Cases¶
- Model Evaluation and Comparison: Compare two models' detection performance side-by-side to identify where models agree, disagree, or have different detection behaviors for model evaluation, benchmarking, or selection workflows
- Model Development and Debugging: Visualize differences between model versions, architectures, or configurations to understand how changes affect detection behavior, identify improvement opportunities, or debug model performance issues
- Ensemble Model Analysis: Compare predictions from different models in ensemble workflows to understand model agreement patterns, identify complementary strengths, or analyze consensus areas for ensemble decision-making
- Training Data Analysis: Compare model predictions to ground truth annotations or between training runs to identify patterns in detection differences, validate training improvements, or analyze model behavior across datasets
- A/B Testing and Model Selection: Visually compare candidate models to evaluate relative performance, identify detection differences, or make informed model selection decisions for deployment
- Quality Assurance and Validation: Validate model consistency, compare model performance on edge cases, or identify systematic differences between models for quality assurance, validation, or compliance workflows
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Model blocks (e.g., Object Detection Model, Instance Segmentation Model) to receive predictions_a and predictions_b from different models for comparison (see the workflow fragment after this list)
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save comparison visualizations for documentation, reporting, or analysis
- Webhook blocks to send comparison visualizations to external systems, APIs, or web applications for display in dashboards, model monitoring tools, or evaluation interfaces
- Notification blocks (e.g., Email Notification, Slack Notification) to send comparison visualizations as visual evidence in alerts or reports for model performance monitoring
- Video output blocks to create annotated video streams or recordings with model comparison visualizations for live model evaluation, performance monitoring, or post-processing analysis
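As a concrete illustration of the model-block connection above, the fragment below sketches a workflow specification, expressed as a Python list of step definitions, in which two detection steps feed this block. The step names, model IDs, and the object detection type identifier are illustrative placeholders rather than values confirmed by this page:

```python
# Hypothetical workflow fragment: two detection model steps feed the
# comparison block. Step names and model IDs are placeholders.
comparison_steps = [
    {
        "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
        "name": "model_a",
        "image": "$inputs.image",
        "model_id": "my-project/1",
    },
    {
        "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
        "name": "model_b",
        "image": "$inputs.image",
        "model_id": "my-project/2",
    },
    {
        "type": "roboflow_core/model_comparison_visualization@v1",
        "name": "model_comparison",
        "image": "$inputs.image",
        "predictions_a": "$steps.model_a.predictions",
        "predictions_b": "$steps.model_b.predictions",
    },
]
```

The comparison step consumes $steps.model_a.predictions and $steps.model_b.predictions, matching the predictions_a and predictions_b bindings described below.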
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/model_comparison_visualization@v1 (see the example JSON definition at the end of this page).
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_a | str | Color used to highlight areas predicted only by Model A (that Model B did not predict). Can be specified as a color name (e.g., 'GREEN', 'BLUE'), hex color code (e.g., '#00FF00', '#FFFFFF'), or RGB format (e.g., 'rgb(0, 255, 0)'). Default is GREEN to indicate Model A's unique predictions. | ✅ |
| color_b | str | Color used to highlight areas predicted only by Model B (that Model A did not predict). Can be specified as a color name (e.g., 'RED', 'BLUE'), hex color code (e.g., '#FF0000', '#FFFFFF'), or RGB format (e.g., 'rgb(255, 0, 0)'). Default is RED to indicate Model B's unique predictions. | ✅ |
| background_color | str | Color used for areas predicted by neither model. Can be specified as a color name (e.g., 'BLACK', 'GRAY'), hex color code (e.g., '#000000', '#808080'), or RGB format (e.g., 'rgb(0, 0, 0)'). Default is BLACK to indicate areas where both models missed detections. | ✅ |
| opacity | float | Opacity of the comparison overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls how transparent the color-coded overlays appear over the original image. Lower values create more transparent overlays where original image details remain more visible, while higher values create more opaque overlays with stronger color emphasis. Typical values range from 0.5 to 0.8 for balanced visibility. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
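For example, a ✅-marked property such as color_a or opacity can be bound to a workflow input instead of a literal value. A minimal sketch, reusing the step from the fragment above; the input names comparison_color and comparison_opacity are placeholders, not part of the block's schema:

```python
# Sketch: the same comparison step with ✅-marked properties bound to
# workflow inputs, so they can be changed at runtime without editing
# the workflow definition.
parametrised_step = {
    "type": "roboflow_core/model_comparison_visualization@v1",
    "name": "model_comparison",
    "image": "$inputs.image",
    "predictions_a": "$steps.model_a.predictions",
    "predictions_b": "$steps.model_b.predictions",
    "color_a": "$inputs.comparison_color",      # placeholder input name
    "opacity": "$inputs.comparison_opacity",    # placeholder input name
}
```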
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Model Comparison Visualization in version v1.
- inputs:
Mask Visualization,Classification Label Visualization,Detections Consensus,Detections Merge,Instance Segmentation Model,Webhook Sink,Multi-Label Classification Model,Email Notification,QR Code Generator,VLM As Detector,LMM,SAM 3,Detection Offset,Corner Visualization,Image Convert Grayscale,Stability AI Outpainting,Segment Anything 2 Model,Halo Visualization,JSON Parser,Object Detection Model,Trace Visualization,Google Vision OCR,Instance Segmentation Model,CSV Formatter,Text Display,Stitch Images,Google Gemini,Local File Sink,Slack Notification,VLM As Classifier,Roboflow Dataset Upload,PTZ Tracking (ONVIF),Color Visualization,Dot Visualization,Polygon Visualization,Object Detection Model,Anthropic Claude,Byte Tracker,Contrast Equalization,Identify Changes,Detections Classes Replacement,Velocity,Moondream2,SIFT Comparison,Halo Visualization,Florence-2 Model,Blur Visualization,Label Visualization,Twilio SMS/MMS Notification,Ellipse Visualization,OpenAI,SIFT,Model Monitoring Inference Aggregator,Single-Label Classification Model,Detections List Roll-Up,OpenAI,Image Threshold,Background Color Visualization,Model Comparison Visualization,OpenAI,Keypoint Detection Model,Gaze Detection,Polygon Visualization,Twilio SMS Notification,SAM 3,Bounding Box Visualization,OCR Model,Overlap Filter,Icon Visualization,Time in Zone,Google Gemini,Florence-2 Model,Roboflow Dataset Upload,Anthropic Claude,Dynamic Zone,Dynamic Crop,VLM As Detector,Google Gemini,Path Deviation,Image Blur,Line Counter,Byte Tracker,Stability AI Inpainting,Template Matching,Image Contours,Path Deviation,Morphological Transformation,Triangle Visualization,Bounding Rectangle,Detections Stitch,Relative Static Crop,Detections Filter,Camera Calibration,Grid Visualization,Detections Stabilizer,Camera Focus,Image Slicer,Detections Combine,LMM For Classification,Line Counter Visualization,Keypoint Detection Model,Llama 3.2 Vision,SIFT Comparison,Camera Focus,Time in Zone,Background Subtraction,Image Slicer,Circle Visualization,Seg Preview,Identify Outliers,Clip Comparison,Email Notification,Byte Tracker,Image Preprocessing,SAM 3,Depth Estimation,Time in Zone,CogVLM,Absolute Static Crop,Roboflow Custom Metadata,EasyOCR,Stitch OCR Detections,Perspective Correction,Anthropic Claude,Pixelate Visualization,Stability AI Image Generation,Reference Path Visualization,Keypoint Visualization,VLM As Classifier,Detection Event Log,Polygon Zone Visualization,YOLO-World Model,Stitch OCR Detections,Crop Visualization,Motion Detection,OpenAI,Detections Transformation
- outputs:
Anthropic Claude,Mask Visualization,Classification Label Visualization,Instance Segmentation Model,Multi-Label Classification Model,Email Notification,Dynamic Crop,CLIP Embedding Model,VLM As Detector,VLM As Detector,Google Gemini,Multi-Label Classification Model,LMM,SAM 3,Image Blur,Corner Visualization,Image Convert Grayscale,Byte Tracker,Stability AI Outpainting,SmolVLM2,Segment Anything 2 Model,Halo Visualization,Stability AI Inpainting,Object Detection Model,Template Matching,Single-Label Classification Model,Image Contours,Trace Visualization,Google Vision OCR,Morphological Transformation,Triangle Visualization,Instance Segmentation Model,Clip Comparison,Detections Stitch,Relative Static Crop,Text Display,Stitch Images,Google Gemini,Camera Calibration,Detections Stabilizer,VLM As Classifier,Roboflow Dataset Upload,Camera Focus,Color Visualization,Dot Visualization,Image Slicer,Polygon Visualization,Object Detection Model,Anthropic Claude,LMM For Classification,Line Counter Visualization,Keypoint Detection Model,Buffer,Llama 3.2 Vision,Contrast Equalization,SIFT Comparison,Camera Focus,Perception Encoder Embedding Model,Dominant Color,Time in Zone,Background Subtraction,Image Slicer,Circle Visualization,Moondream2,Seg Preview,Halo Visualization,Florence-2 Model,Blur Visualization,Qwen3-VL,Twilio SMS/MMS Notification,Label Visualization,Barcode Detection,Clip Comparison,Ellipse Visualization,OpenAI,QR Code Detection,SIFT,Image Preprocessing,SAM 3,Single-Label Classification Model,OpenAI,Image Threshold,Background Color Visualization,Model Comparison Visualization,Depth Estimation,OpenAI,Motion Detection,Keypoint Detection Model,CogVLM,Absolute Static Crop,Gaze Detection,EasyOCR,Perspective Correction,Qwen2.5-VL,Anthropic Claude,Pixelate Visualization,Reference Path Visualization,Stability AI Image Generation,Keypoint Visualization,SAM 3,Polygon Visualization,VLM As Classifier,Bounding Box Visualization,Polygon Zone Visualization,OCR Model,YOLO-World Model,Icon Visualization,Crop Visualization,Pixel Color Count,Google Gemini,OpenAI,Florence-2 Model,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Model Comparison Visualization in version v1 has.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions_a (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Predictions from Model A (the first model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model A (and not by Model B) will be colored with color_a. Works with bounding boxes or masks depending on prediction type.
  - color_a (string): Color used to highlight areas predicted only by Model A (that Model B did not predict). Can be specified as a color name (e.g., 'GREEN', 'BLUE'), hex color code (e.g., '#00FF00', '#FFFFFF'), or RGB format (e.g., 'rgb(0, 255, 0)'). Default is GREEN to indicate Model A's unique predictions.
  - predictions_b (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Predictions from Model B (the second model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model B (and not by Model A) will be colored with color_b. Works with bounding boxes or masks depending on prediction type.
  - color_b (string): Color used to highlight areas predicted only by Model B (that Model A did not predict). Can be specified as a color name (e.g., 'RED', 'BLUE'), hex color code (e.g., '#FF0000', '#FFFFFF'), or RGB format (e.g., 'rgb(255, 0, 0)'). Default is RED to indicate Model B's unique predictions.
  - background_color (string): Color used for areas predicted by neither model. Can be specified as a color name (e.g., 'BLACK', 'GRAY'), hex color code (e.g., '#000000', '#808080'), or RGB format (e.g., 'rgb(0, 0, 0)'). Default is BLACK to indicate areas where both models missed detections.
  - opacity (float_zero_to_one): Opacity of the comparison overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls how transparent the color-coded overlays appear over the original image. Lower values create more transparent overlays where original image details remain more visible, while higher values create more opaque overlays with stronger color emphasis. Typical values range from 0.5 to 0.8 for balanced visibility.
- output
  - image (image): Image in workflows.
Example JSON definition of step Model Comparison Visualization in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/model_comparison_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions_a": "$steps.model_a.predictions",
    "color_a": "GREEN",
    "predictions_b": "$steps.model_b.predictions",
    "color_b": "RED",
    "background_color": "BLACK",
    "opacity": 0.7
}