Mask Visualization¶
Class: MaskVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.mask.v1.MaskVisualizationBlockV1
Fill segmentation masks with semi-transparent color overlays, creating solid color fills that precisely follow the shape of detected objects from instance segmentation predictions.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and fills the mask regions with colored overlays. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object from the predictions
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Fills the mask regions with solid colors using Supervision's MaskAnnotator
- Blends the colored mask overlays with the original image using the specified opacity level
- Returns an annotated image where mask regions are filled with semi-transparent colors, while non-masked areas remain unchanged
The block fills the exact shape of each object's segmentation mask with colored overlays, creating solid color fills that precisely follow object boundaries. Unlike polygon visualization (which draws outlines) or bounding box visualizations (which use rectangular regions), mask visualization fills the entire mask area with color, providing clear visual indication of the segmented regions. The opacity parameter controls how transparent the mask overlay is, allowing you to see the original image details through the colored mask (lower opacity) or create more opaque fills (higher opacity) that better obscure background details. This block requires instance segmentation predictions with mask data, as it specifically works with segmentation masks to create precise, shape-following color fills.
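Conceptually, the fill described above is a per-pixel alpha blend: wherever the mask is set, the output pixel is `(1 - opacity) * original + opacity * color`; everywhere else the image passes through unchanged. The following is a minimal pure-Python sketch of that math, not the block's actual implementation (which delegates to Supervision's MaskAnnotator):

```python
def blend_pixel(pixel, color, opacity):
    """Alpha-blend one RGB pixel with a solid color:
    (1 - opacity) * pixel + opacity * color, rounded back to an int."""
    return tuple(round((1 - opacity) * p + opacity * c) for p, c in zip(pixel, color))

def fill_mask(image, mask, color, opacity):
    """image: 2-D grid (list of rows) of RGB tuples; mask: 2-D grid of bools
    of the same shape. Masked pixels are blended with `color`; unmasked
    pixels are returned unchanged."""
    return [
        [blend_pixel(px, color, opacity) if m else px for px, m in zip(row, mrow)]
        for row, mrow in zip(image, mask)
    ]

# 2x2 black image, top-left pixel masked, red fill at opacity 0.5
image = [[(0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]]
mask = [[True, False], [False, False]]
out = fill_mask(image, mask, (255, 0, 0), 0.5)
# out[0][0] is half-way to red; the three unmasked pixels stay (0, 0, 0)
```

This also shows why lower opacity values preserve more of the original image: as `opacity` approaches 0, the blended pixel approaches the original value.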
Common Use Cases¶
- Instance Segmentation Visualization: Visualize instance segmentation results by filling mask regions with colors to clearly show segmented objects, validate segmentation quality, or highlight detected regions in analysis workflows
- Precise Shape-Following Overlays: Fill objects with colors that exactly match their segmented shapes, useful for applications requiring accurate region visualization such as medical imaging, quality control, or precise object identification
- Mask-Based Object Highlighting: Highlight segmented objects with colored overlays that follow exact object boundaries, providing clear visual distinction between different objects or object classes
- Segmentation Model Validation: Visualize segmentation predictions with colored mask fills to verify model performance, identify segmentation errors, or validate mask accuracy in model development and debugging workflows
- Medical and Scientific Imaging: Display segmented regions in medical imaging, microscopy, or scientific analysis applications where colored mask overlays help visualize tissue boundaries, cell regions, or measured areas
- Mask Quality Inspection: Use colored mask fills to inspect segmentation quality, verify mask boundaries, or identify areas where segmentation may need improvement in training data or model outputs
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Polygon Visualization, Bounding Box Visualization) to combine mask fills with additional annotations (labels, outlines) for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with mask overlays for documentation, reporting, or analysis
- Webhook blocks to send visualized results with mask fills to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with mask overlays as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with mask fills for live monitoring, segmentation visualization, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/mask_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualized elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors in HEX format. | ✅ |
| color_axis | str | Choose how colors are assigned (by class, index, or track ID). | ✅ |
| opacity | float | Opacity of the mask overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Lower values (e.g., 0.3-0.5) create semi-transparent overlays that let the original image show through, while higher values (e.g., 0.7-1.0) create more opaque fills that obscure background details. Values from 0.4 to 0.7 typically give a balanced visualization where both the mask and the underlying image are visible. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Mask Visualization in version v1.
- inputs:
Dynamic Crop,OCR Model,Image Blur,Background Subtraction,Google Vision OCR,Google Gemini,Image Preprocessing,Local File Sink,Single-Label Classification Model,Bounding Box Visualization,Model Monitoring Inference Aggregator,Keypoint Detection Model,Camera Focus,Identify Outliers,Dot Visualization,Florence-2 Model,Roboflow Dataset Upload,CSV Formatter,Depth Estimation,Polygon Visualization,OpenAI,Line Counter,Image Slicer,Detections List Roll-Up,Line Counter Visualization,Heatmap Visualization,Morphological Transformation,Stability AI Image Generation,Google Gemini,Distance Measurement,Keypoint Visualization,Background Color Visualization,Label Visualization,Polygon Visualization,LMM,CogVLM,Time in Zone,Triangle Visualization,Stability AI Outpainting,Mask Visualization,Color Visualization,Detections Combine,Text Display,Bounding Rectangle,Reference Path Visualization,Llama 3.2 Vision,OpenAI,Image Threshold,Clip Comparison,Classification Label Visualization,Clip Comparison,Polygon Zone Visualization,Image Contours,VLM As Classifier,Roboflow Custom Metadata,Dynamic Zone,LMM For Classification,Velocity,Halo Visualization,Semantic Segmentation Model,Blur Visualization,Path Deviation,Absolute Static Crop,Anthropic Claude,SAM 3,Detections Transformation,Ellipse Visualization,Identify Changes,Crop Visualization,SIFT Comparison,Path Deviation,Trace Visualization,Twilio SMS Notification,Stitch Images,Detections Stabilizer,Size Measurement,Time in Zone,Motion Detection,Email Notification,SIFT Comparison,OpenAI,Seg Preview,Time in Zone,Instance Segmentation Model,Anthropic Claude,Multi-Label Classification Model,Email Notification,Slack Notification,Twilio SMS/MMS Notification,Detections Stitch,VLM As Detector,Camera Focus,SAM 3,Stitch OCR Detections,Perspective Correction,PTZ Tracking (ONVIF),Camera Calibration,Corner Visualization,Icon Visualization,Qwen3.5-VL,VLM As Detector,Halo Visualization,JSON Parser,Detection Event Log,Pixelate Visualization,Contrast Equalization,Dimension Collapse,VLM As Classifier,Instance Segmentation Model,Detections Classes Replacement,Relative Static Crop,Line Counter,Stitch OCR Detections,Webhook Sink,Circle Visualization,Image Convert Grayscale,Grid Visualization,Mask Area Measurement,Florence-2 Model,Buffer,SAM 3,SIFT,Object Detection Model,Template Matching,Detections Consensus,Anthropic Claude,Google Gemini,Model Comparison Visualization,Detection Offset,QR Code Generator,EasyOCR,Image Slicer,S3 Sink,Stability AI Inpainting,Segment Anything 2 Model,Detections Filter,OpenAI,Pixel Color Count,Roboflow Dataset Upload
- outputs:
Dynamic Crop,OCR Model,Barcode Detection,Motion Detection,Email Notification,Image Blur,Background Subtraction,Google Vision OCR,SIFT Comparison,Google Gemini,OpenAI,Image Preprocessing,Qwen2.5-VL,Seg Preview,Object Detection Model,Instance Segmentation Model,Single-Label Classification Model,Bounding Box Visualization,Multi-Label Classification Model,Anthropic Claude,Multi-Label Classification Model,Keypoint Detection Model,Detections Stitch,Twilio SMS/MMS Notification,Camera Focus,VLM As Detector,Gaze Detection,Florence-2 Model,Dot Visualization,Roboflow Dataset Upload,Camera Focus,SAM 3,Depth Estimation,Polygon Visualization,Moondream2,OpenAI,Perspective Correction,Image Slicer,Icon Visualization,Corner Visualization,Camera Calibration,Qwen3.5-VL,Line Counter Visualization,Heatmap Visualization,Google Gemini,Morphological Transformation,Stability AI Image Generation,Keypoint Visualization,VLM As Detector,Keypoint Detection Model,Halo Visualization,Background Color Visualization,Label Visualization,QR Code Detection,Polygon Visualization,Pixelate Visualization,LMM,CogVLM,Time in Zone,Single-Label Classification Model,Qwen3-VL,Contrast Equalization,Triangle Visualization,Stability AI Outpainting,Mask Visualization,VLM As Classifier,Color Visualization,Instance Segmentation Model,Dominant Color,Text Display,Relative Static Crop,Reference Path Visualization,OpenAI,Llama 3.2 Vision,Clip Comparison,Clip Comparison,Classification Label Visualization,Image Threshold,Circle Visualization,Polygon Zone Visualization,Image Contours,Image Convert Grayscale,VLM As Classifier,Byte Tracker,Buffer,Florence-2 Model,SmolVLM2,SAM 3,Perception Encoder Embedding Model,LMM For Classification,SIFT,YOLO-World Model,Halo Visualization,Template Matching,Object Detection Model,Semantic Segmentation Model,Anthropic Claude,Google Gemini,Model Comparison Visualization,Blur Visualization,EasyOCR,Absolute Static Crop,Image Slicer,Anthropic Claude,SAM 3,CLIP Embedding Model,Stability AI Inpainting,Ellipse Visualization,Crop Visualization,Trace Visualization,Segment Anything 2 Model,Stitch Images,Detections Stabilizer,OpenAI,Pixel Color Count,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Mask Visualization in version v1 has.
Bindings
- input
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[semantic_segmentation_prediction, instance_segmentation_prediction, rle_instance_segmentation_prediction]): Segmentation predictions containing masks for detected objects. The block uses segmentation masks to create colored fills that precisely follow object or class boundaries. Requires segmentation model outputs with mask data, which may be RLE-encoded.
    - color_palette (string): Select a color palette for the visualized elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors in HEX format.
    - color_axis (string): Choose how colors are assigned (by class, index, or track ID).
    - opacity (float_zero_to_one): Opacity of the mask overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Lower values (e.g., 0.3-0.5) create semi-transparent overlays that let the original image show through, while higher values (e.g., 0.7-1.0) create more opaque fills.
- output
    - image (image): Image in workflows.
Example JSON definition of step Mask Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/mask_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.instance_segmentation_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"opacity": 0.5
}
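For context, a step like the one above lives inside a complete workflow specification alongside the model that produces its predictions. The sketch below assembles such a specification in Python; the instance segmentation step type, the model ID, and the input/output names are illustrative assumptions, not part of this block's contract:

```python
import json

# Hypothetical workflow wrapping the Mask Visualization step. The model step
# type, "your-project/1" model ID, and output names are placeholders.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_instance_segmentation_model@v2",
            "name": "instance_segmentation_model",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model ID
        },
        {
            "type": "roboflow_core/mask_visualization@v1",
            "name": "mask_visualization",
            "image": "$inputs.image",
            "predictions": "$steps.instance_segmentation_model.predictions",
            "color_palette": "DEFAULT",
            "color_axis": "CLASS",
            "opacity": 0.5,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "annotated_image",
            "selector": "$steps.mask_visualization.image",
        }
    ],
}

# Serialize for submission to a workflow execution engine
spec = json.dumps(workflow, indent=2)
```

Note how the `$steps.instance_segmentation_model.predictions` selector wires the segmentation model's masks into this block, and the output selector exposes the annotated image.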