Model Comparison Visualization¶
Class: ModelComparisonVisualizationBlockV1
Compare predictions from two different models by color-coding areas where only one model detected objects. Overlapping predictions are left unchanged, so the visualization highlights model disagreement while letting areas of agreement show through.
How This Block Works¶
This block takes an image and predictions from two models (Model A and Model B) and creates a visual comparison overlay that highlights differences between the models. The block:
- Takes an image and two sets of predictions (predictions_a and predictions_b) as input
- Creates masks for areas predicted by each model (using bounding boxes or segmentation masks if available)
- Identifies four distinct regions:
  - Areas predicted only by Model A (colored with color_a, default green)
  - Areas predicted only by Model B (colored with color_b, default red)
  - Areas predicted by both models (left unchanged, allowing the original image to show through)
  - Areas predicted by neither model (colored with background_color, default black)
- Applies colored overlays to the identified regions using the specified opacity
- Returns an annotated image where model differences are visually distinguished with color coding
The resulting overlay makes it easy to see where the models agree (unchanged areas) and where they disagree (color-coded areas). Because regions predicted by both models are left untouched, the original image "shines through" there, clearly showing model consensus. This visualization helps identify model strengths, weaknesses, and differences in detection behavior. The block works with object detection predictions (using bounding boxes) or instance segmentation predictions (using masks), making it versatile for comparing different model types.
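The region logic described above can be sketched with boolean masks. This is an illustrative reconstruction, not the block's actual implementation; the function and variable names (`region_mask`, `compare_regions`) are hypothetical, and boxes are assumed to be axis-aligned `(x1, y1, x2, y2)` pixel coordinates.

```python
import numpy as np

def region_mask(shape, boxes):
    """Rasterize (x1, y1, x2, y2) bounding boxes into a boolean mask."""
    mask = np.zeros(shape, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True
    return mask

def compare_regions(shape, boxes_a, boxes_b):
    """Split the image into the four mutually exclusive regions the block colors."""
    a = region_mask(shape, boxes_a)
    b = region_mask(shape, boxes_b)
    return {
        "only_a": a & ~b,    # colored with color_a
        "only_b": b & ~a,    # colored with color_b
        "both": a & b,       # left unchanged (original image shows through)
        "neither": ~a & ~b,  # colored with background_color
    }
```

The four masks partition the image, so every pixel receives exactly one treatment: one of the two highlight colors, the background color, or no change at all.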
Common Use Cases¶
- Model Evaluation and Comparison: Compare two models' detection performance side-by-side to identify where models agree, disagree, or have different detection behaviors for model evaluation, benchmarking, or selection workflows
- Model Development and Debugging: Visualize differences between model versions, architectures, or configurations to understand how changes affect detection behavior, identify improvement opportunities, or debug model performance issues
- Ensemble Model Analysis: Compare predictions from different models in ensemble workflows to understand model agreement patterns, identify complementary strengths, or analyze consensus areas for ensemble decision-making
- Training Data Analysis: Compare model predictions to ground truth annotations or between training runs to identify patterns in detection differences, validate training improvements, or analyze model behavior across datasets
- A/B Testing and Model Selection: Visually compare candidate models to evaluate relative performance, identify detection differences, or make informed model selection decisions for deployment
- Quality Assurance and Validation: Validate model consistency, compare model performance on edge cases, or identify systematic differences between models for quality assurance, validation, or compliance workflows
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Model blocks (e.g., Object Detection Model, Instance Segmentation Model) to receive predictions_a and predictions_b from different models for comparison
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save comparison visualizations for documentation, reporting, or analysis
- Webhook blocks to send comparison visualizations to external systems, APIs, or web applications for display in dashboards, model monitoring tools, or evaluation interfaces
- Notification blocks (e.g., Email Notification, Slack Notification) to send comparison visualizations as visual evidence in alerts or reports for model performance monitoring
- Video output blocks to create annotated video streams or recordings with model comparison visualizations for live model evaluation, performance monitoring, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/model_comparison_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_a | str | Color used to highlight areas predicted only by Model A (that Model B did not predict). Accepts a color name (e.g., 'GREEN', 'BLUE'), a hex code (e.g., '#00FF00', '#FFFFFF'), or an RGB string (e.g., 'rgb(0, 255, 0)'). Default is GREEN. | ✅ |
| color_b | str | Color used to highlight areas predicted only by Model B (that Model A did not predict). Accepts a color name (e.g., 'RED', 'BLUE'), a hex code (e.g., '#FF0000', '#FFFFFF'), or an RGB string (e.g., 'rgb(255, 0, 0)'). Default is RED. | ✅ |
| background_color | str | Color used for areas predicted by neither model. Accepts a color name (e.g., 'BLACK', 'GRAY'), a hex code (e.g., '#000000', '#808080'), or an RGB string (e.g., 'rgb(0, 0, 0)'). Default is BLACK. | ✅ |
| opacity | float | Opacity of the comparison overlay, from 0.0 (fully transparent) to 1.0 (fully opaque). Lower values keep more of the original image visible; higher values give stronger color emphasis. Typical values range from 0.5 to 0.8. | ✅ |
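The effect of `opacity` is standard alpha blending of a solid color over the original pixels. A minimal numpy sketch under that assumption (`apply_overlay` is an illustrative name, not the block's API, and the block's exact compositing may differ):

```python
import numpy as np

def apply_overlay(image, color, mask, opacity):
    """Blend a solid RGB `color` into `image` wherever `mask` is True.

    opacity=0.0 leaves the image unchanged; opacity=1.0 paints the
    region with the pure color.
    """
    out = image.astype(float)
    out[mask] = (1 - opacity) * out[mask] + opacity * np.asarray(color, dtype=float)
    return out.astype(np.uint8)
```

At the suggested opacity of 0.5 to 0.8, the colored regions remain tinted rather than solid, so underlying image detail stays readable.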
The Refs column indicates whether the property can be parametrized with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Model Comparison Visualization in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Instance Segmentation Model,Color Visualization,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Byte Tracker,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Object Detection Model,Camera Focus,Stitch OCR Detections,Qwen 3.5 API,OpenAI,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Detections Merge,Perspective Correction,Overlap Filter,Anthropic Claude,Bounding Box Visualization,Velocity,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Detections Combine,Object Detection Model,SAM2 Video Tracker,Detection Event Log,Byte Tracker,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,YOLO-World Model,Detection Offset,Instance Segmentation Model,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Template Matching,Mask Area Measurement,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON Parser,Keypoint Detection Model,Time in Zone,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Byte Tracker,Multi-Label Classification Model,Image Convert Grayscale,SAM 3,OpenAI,Corner Visualization,Dynamic Crop,Moondream2,Keypoint Visualization,Keypoint Detection Model,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Model Comparison Visualization in version v1 are listed below.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions_a (Union[object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Predictions from Model A (the first model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model A (and not by Model B) will be colored with color_a. Works with bounding boxes or masks depending on prediction type.
  - color_a (string): Color used to highlight areas predicted only by Model A (that Model B did not predict). Accepts a color name (e.g., 'GREEN', 'BLUE'), a hex code (e.g., '#00FF00', '#FFFFFF'), or an RGB string (e.g., 'rgb(0, 255, 0)'). Default is GREEN.
  - predictions_b (Union[object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Predictions from Model B (the second model being compared). Can be object detection, instance segmentation, or keypoint detection predictions. Areas predicted only by Model B (and not by Model A) will be colored with color_b. Works with bounding boxes or masks depending on prediction type.
  - color_b (string): Color used to highlight areas predicted only by Model B (that Model A did not predict). Accepts a color name (e.g., 'RED', 'BLUE'), a hex code (e.g., '#FF0000', '#FFFFFF'), or an RGB string (e.g., 'rgb(255, 0, 0)'). Default is RED.
  - background_color (string): Color used for areas predicted by neither model. Accepts a color name (e.g., 'BLACK', 'GRAY'), a hex code (e.g., '#000000', '#808080'), or an RGB string (e.g., 'rgb(0, 0, 0)'). Default is BLACK.
  - opacity (float_zero_to_one): Opacity of the comparison overlay, from 0.0 (fully transparent) to 1.0 (fully opaque). Lower values keep more of the original image visible; higher values give stronger color emphasis. Typical values range from 0.5 to 0.8.
- output
  - image (image): Image in workflows.
Example JSON definition of step Model Comparison Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/model_comparison_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions_a": "$steps.model_a.predictions",
"color_a": "GREEN",
"predictions_b": "$steps.model_b.predictions",
"color_b": "RED",
"background_color": "BLACK",
"opacity": 0.7
}
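To show the step in context, here is a hedged sketch of a full workflow specification wiring two separate model steps into the comparison block, written as a Python dict (equivalent to the JSON above). The model step type (roboflow_core/roboflow_object_detection_model@v2), the model IDs, and the step names are placeholders/assumptions, not values from this page.

```python
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Placeholder model step; substitute your actual model block and ID.
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "model_a",
            "image": "$inputs.image",
            "model_id": "model-a/1",
        },
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "model_b",
            "image": "$inputs.image",
            "model_id": "model-b/1",
        },
        {
            # The comparison block documented on this page, fed by both models.
            "type": "roboflow_core/model_comparison_visualization@v1",
            "name": "comparison",
            "image": "$inputs.image",
            "predictions_a": "$steps.model_a.predictions",
            "predictions_b": "$steps.model_b.predictions",
            "color_a": "GREEN",
            "color_b": "RED",
            "background_color": "BLACK",
            "opacity": 0.7,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "comparison_image",
            "selector": "$steps.comparison.image",
        }
    ],
}
```

Note that predictions_a and predictions_b reference two different model steps; pointing both at the same step would compare a model against itself and produce no color-coded differences.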