Halo Visualization¶
v2¶
Class: HaloVisualizationBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v2.HaloVisualizationBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
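The steps above can be sketched in plain NumPy. This is a minimal illustration of the technique (blur a colored mask, then alpha-blend it onto the image), not the block's actual implementation; the function and parameter names are chosen for this sketch only.

```python
import numpy as np

def box_blur(channel: np.ndarray, kernel_size: int) -> np.ndarray:
    """Average pooling over a kernel_size x kernel_size neighborhood."""
    pad = kernel_size // 2
    padded = np.pad(channel, pad, mode="constant")
    h, w = channel.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (kernel_size ** 2)

def halo_overlay(image, mask, color=(255, 0, 0), kernel_size=5, opacity=0.8):
    # Blur the binary mask: coverage fades from 1 inside the object to 0
    # outside it, so the color bleeds softly past the object boundary.
    coverage = box_blur(mask.astype(np.float64), kernel_size)
    alpha = (coverage * opacity)[..., None]  # per-pixel blend weight
    colored = np.broadcast_to(np.array(color, dtype=np.float64), image.shape)
    blended = image.astype(np.float64) * (1.0 - alpha) + colored * alpha
    return np.clip(blended, 0, 255).astype(np.uint8)
```

A larger `kernel_size` spreads non-zero coverage further from the mask edge (a wider halo), while `opacity` scales the blend weight everywhere (a more or less intense glow), matching the parameter behavior described above.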
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
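The chaining pattern described above can be sketched as a workflow fragment: a downstream visualization block reads the annotated image produced by this block instead of the raw input image. Step names here are hypothetical, and the downstream block's exact type identifier is illustrative; the `$steps.<name>.<field>` selector convention follows the one used in this page's JSON example.

```python
# Hypothetical workflow fragment: Halo Visualization feeding Label Visualization.
halo_step = {
    "name": "halo",
    "type": "roboflow_core/halo_visualization@v2",
    "image": "$inputs.image",
    "predictions": "$steps.segmentation_model.predictions",
}
label_step = {
    "name": "labels",
    "type": "roboflow_core/label_visualization@v1",  # illustrative identifier
    # Read the halo-annotated image rather than the raw input image,
    # so labels are drawn on top of the halos.
    "image": f"$steps.{halo_step['name']}.image",
    "predictions": "$steps.segmentation_model.predictions",
}
workflow_steps = [halo_step, label_step]
```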
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/halo_visualization@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v2.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Instance Segmentation Model,Distance Measurement,Color Visualization,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Stitch OCR Detections,Camera Focus,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Line Counter,Bounding Box Visualization,Velocity,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Detections Combine,SAM2 Video Tracker,Detection Event Log,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,Detection Offset,Instance Segmentation Model,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Mask Area Measurement,Template Matching,Stitch 
Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON Parser,Time in Zone,Reference Path Visualization,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Multi-Label Classification Model,Image Convert Grayscale,SAM 3,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Visualization,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison - outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon 
Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Halo Visualization in version v2 has.
Bindings
-
input
- image (image): The image to visualize on.
- copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
- predictions (Union[rle_instance_segmentation_prediction, instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
- color_palette (string): Select a color palette for the visualised elements.
- palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
- custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
- color_axis (string): Choose how bounding box colors are assigned.
- opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
- kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
-
output
- image (image): The annotated image with halo effects.
Example JSON definition of step Halo Visualization in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/halo_visualization@v2",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.instance_segmentation_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"opacity": 0.8,
"kernel_size": 40
}
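Before submitting a step definition like the one above, its documented constraints can be checked client-side. This is a purely illustrative sketch: the function name is hypothetical, the defaults mirror the values in the example rather than the block's actual defaults, and the workflow engine performs its own authoritative validation.

```python
import re

HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")

def validate_halo_step(step: dict) -> list[str]:
    """Collect configuration problems based on the documented constraints."""
    problems = []
    if step.get("type") != "roboflow_core/halo_visualization@v2":
        problems.append("unexpected step type")
    opacity = step.get("opacity", 0.8)
    if not 0.0 <= opacity <= 1.0:
        problems.append("opacity must be between 0.0 and 1.0")
    kernel_size = step.get("kernel_size", 40)
    if not (isinstance(kernel_size, int) and kernel_size > 0):
        problems.append("kernel_size must be a positive integer")
    for color in step.get("custom_colors", []):
        if not HEX_COLOR.match(color):
            problems.append(f"invalid HEX color: {color}")
    return problems
```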
v1¶
Class: HaloVisualizationBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v1.HaloVisualizationBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/halo_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Instance Segmentation Model,Distance Measurement,Color Visualization,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Stitch OCR Detections,Camera Focus,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Line Counter,Bounding Box Visualization,Velocity,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Detections Combine,SAM2 Video Tracker,Detection Event Log,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,Detection Offset,Instance Segmentation Model,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Mask Area Measurement,Template Matching,Stitch 
Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON Parser,Time in Zone,Reference Path Visualization,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Multi-Label Classification Model,Image Convert Grayscale,SAM 3,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Visualization,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison - outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon 
Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Halo Visualization in version v1 has.
Bindings
-
input
- image (image): The image to visualize on.
- copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
- predictions (Union[rle_instance_segmentation_prediction, instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
- color_palette (string): Select a color palette for the visualised elements.
- palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
- custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
- color_axis (string): Choose how bounding box colors are assigned.
- opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
- kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
-
output
- image (image): The annotated image with halo effects.
Example JSON definition of step Halo Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/halo_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.instance_segmentation_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"opacity": 0.8,
"kernel_size": 40
}