Halo Visualization¶
Class: HaloVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.halo.v1.HaloVisualizationBlockV1
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
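Conceptually, the halo is an alpha blend whose per-pixel alpha comes from a box-blurred (average-pooled) mask: alpha is near full strength inside the object and fades to zero over roughly kernel_size pixels past the boundary. Below is a minimal NumPy sketch of that idea (illustrative only, not the block's actual implementation; the single-color handling and the naive blur are simplifications):

```python
import numpy as np

def box_blur(arr: np.ndarray, k: int) -> np.ndarray:
    """Naive average-pooling blur with a k x k kernel (zero-padded borders)."""
    pad = k // 2
    padded = np.pad(arr, pad, mode="constant")
    out = np.zeros(arr.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + arr.shape[0], dx:dx + arr.shape[1]]
    return out / (k * k)

def halo(image: np.ndarray, mask: np.ndarray, color=(0, 255, 0),
         kernel_size=11, opacity=0.8) -> np.ndarray:
    """Blend a solid color over the image using a blurred-mask alpha,
    so the glow fades out smoothly past the object boundary."""
    alpha = box_blur(mask.astype(np.float32), kernel_size) * opacity
    alpha = alpha[..., None]  # H x W x 1, broadcasts over the color channels
    out = image.astype(np.float32) * (1 - alpha) + np.array(color, np.float32) * alpha
    return np.clip(out, 0, 255).astype(np.uint8)

# A 20x20 gray image with a small square mask in the center.
img = np.full((20, 20, 3), 128, dtype=np.uint8)
msk = np.zeros((20, 20), dtype=bool)
msk[8:12, 8:12] = True
result = halo(img, msk, kernel_size=5)
```

With a larger kernel_size, the alpha ramp outside the mask gets wider, which is the "larger kernel = wider halo" behaviour described above; opacity scales the whole ramp, controlling glow intensity.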
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
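As a sketch, a workflow definition that feeds an instance segmentation model into Halo Visualization and then stacks a Label Visualization on top could look like the following. Only the halo step's type identifier comes from this page; the other step type identifiers, the model_id, and the output names are illustrative placeholders:

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_instance_segmentation_model@v1",
      "name": "segmentation_model",
      "image": "$inputs.image",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/halo_visualization@v1",
      "name": "halo",
      "image": "$inputs.image",
      "predictions": "$steps.segmentation_model.predictions",
      "opacity": 0.8,
      "kernel_size": 40
    },
    {
      "type": "roboflow_core/label_visualization@v1",
      "name": "labels",
      "image": "$steps.halo.image",
      "predictions": "$steps.segmentation_model.predictions"
    }
  ],
  "outputs": [
    { "type": "JsonField", "name": "annotated_image", "selector": "$steps.labels.image" }
  ]
}
```

Setting copy_image to true on each visualization step keeps the original input image untouched while the annotations stack.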
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/halo_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
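Together, color_palette, palette_size, custom_colors, and color_axis decide which color each detection receives. A hypothetical sketch of that lookup (the function names and wrap-around behaviour are assumptions for illustration, not the block's internals):

```python
# Illustrative palette lookup; not the block's actual implementation.
CUSTOM_PALETTE = ["#FF0000", "#00FF00", "#0000FF"]  # as in the custom_colors property

def hex_to_bgr(hex_color: str) -> tuple:
    """Convert a '#RRGGBB' string to an OpenCV-style (B, G, R) tuple."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return (b, g, r)

def color_for(detection_index: int, class_id: int, palette: list,
              color_axis: str = "CLASS") -> tuple:
    """Pick a palette color keyed by class (CLASS) or by detection order (INDEX),
    wrapping around when there are more keys than palette entries."""
    key = class_id if color_axis == "CLASS" else detection_index
    return hex_to_bgr(palette[key % len(palette)])
```

With color_axis set to CLASS, all detections of one class share a color; with INDEX, colors cycle per detection, which helps tell apart overlapping instances of the same class.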
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v1.
- inputs:
Google Vision OCR,VLM as Detector,SIFT,Identify Changes,Dynamic Zone,EasyOCR,Icon Visualization,OpenAI,Circle Visualization,Pixelate Visualization,Twilio SMS/MMS Notification,Line Counter Visualization,Detection Event Log,Camera Calibration,Halo Visualization,Relative Static Crop,QR Code Generator,Seg Preview,Identify Outliers,Detections Stabilizer,Bounding Rectangle,Image Slicer,Florence-2 Model,Camera Focus,Anthropic Claude,OCR Model,Email Notification,Mask Visualization,Anthropic Claude,Contrast Equalization,PTZ Tracking (ONVIF),Clip Comparison,SIFT Comparison,Detections Stitch,Instance Segmentation Model,Depth Estimation,Google Gemini,Stitch Images,Path Deviation,Template Matching,Florence-2 Model,Triangle Visualization,Label Visualization,LMM,SIFT Comparison,Detection Offset,Text Display,CSV Formatter,Keypoint Visualization,Pixel Color Count,Detections Combine,JSON Parser,VLM as Classifier,Distance Measurement,Color Visualization,Image Contours,Path Deviation,Roboflow Dataset Upload,Bounding Box Visualization,Morphological Transformation,Line Counter,Detections Classes Replacement,OpenAI,Single-Label Classification Model,Llama 3.2 Vision,Multi-Label Classification Model,Roboflow Dataset Upload,Velocity,Model Comparison Visualization,Time in Zone,Google Gemini,Polygon Visualization,Dimension Collapse,Image Slicer,Dynamic Crop,Image Threshold,Local File Sink,Reference Path Visualization,Trace Visualization,Email Notification,Time in Zone,Webhook Sink,Blur Visualization,OpenAI,Ellipse Visualization,Crop Visualization,Polygon Zone Visualization,Buffer,OpenAI,Grid Visualization,Stability AI Inpainting,Detections Transformation,Classification Label Visualization,VLM as Classifier,SAM 3,Instance Segmentation Model,Roboflow Custom Metadata,Stitch OCR Detections,Size Measurement,Model Monitoring Inference Aggregator,Image Blur,Corner Visualization,LMM For Classification,Stability AI Outpainting,Time in Zone,SAM 3,Absolute Static Crop,SAM 3,Detections List Roll-Up,VLM as Detector,Background Subtraction,Segment Anything 2 Model,Keypoint Detection Model,Stability AI Image Generation,Line Counter,Detections Consensus,Camera Focus,Detections Filter,Clip Comparison,Twilio SMS Notification,Dot Visualization,CogVLM,Perspective Correction,Image Convert Grayscale,Background Color Visualization,Slack Notification,Motion Detection,Image Preprocessing,Object Detection Model,Anthropic Claude,Google Gemini
- outputs:
Byte Tracker,Google Vision OCR,VLM as Detector,Roboflow Dataset Upload,Single-Label Classification Model,SIFT,Qwen3-VL,Model Comparison Visualization,Google Gemini,Polygon Visualization,EasyOCR,Image Slicer,Dynamic Crop,Image Threshold,Icon Visualization,Reference Path Visualization,OpenAI,Circle Visualization,Trace Visualization,Email Notification,Pixelate Visualization,Twilio SMS/MMS Notification,Line Counter Visualization,Camera Calibration,Time in Zone,Halo Visualization,Blur Visualization,Relative Static Crop,OpenAI,QR Code Detection,Ellipse Visualization,Dominant Color,Seg Preview,Crop Visualization,Polygon Zone Visualization,Buffer,Multi-Label Classification Model,Barcode Detection,Perception Encoder Embedding Model,Detections Stabilizer,Moondream2,OpenAI,Stability AI Inpainting,Image Slicer,CLIP Embedding Model,Keypoint Detection Model,Florence-2 Model,VLM as Classifier,SAM 3,Classification Label Visualization,Instance Segmentation Model,Camera Focus,Image Blur,Corner Visualization,Anthropic Claude,Gaze Detection,OCR Model,LMM For Classification,Object Detection Model,Mask Visualization,Anthropic Claude,Stability AI Outpainting,Contrast Equalization,SAM 3,Absolute Static Crop,SAM 3,Qwen2.5-VL,Clip Comparison,VLM as Detector,Background Subtraction,Detections Stitch,Instance Segmentation Model,SmolVLM2,Depth Estimation,Google Gemini,Segment Anything 2 Model,Stitch Images,Keypoint Detection Model,Template Matching,Stability AI Image Generation,YOLO-World Model,Florence-2 Model,Triangle Visualization,Label Visualization,LMM,Camera Focus,SIFT Comparison,Text Display,Clip Comparison,Keypoint Visualization,Dot Visualization,Pixel Color Count,CogVLM,Perspective Correction,Image Convert Grayscale,VLM as Classifier,Color Visualization,Background Color Visualization,Image Contours,Motion Detection,Object Detection Model,Image Preprocessing,Roboflow Dataset Upload,Anthropic Claude,Bounding Box Visualization,Google Gemini,Morphological Transformation,OpenAI,Llama 3.2 Vision,Multi-Label Classification Model,Single-Label Classification Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Halo Visualization in version v1 has.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[instance_segmentation_prediction, rle_instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
  - color_palette (string): Select a color palette for the visualised elements.
  - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
  - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
  - color_axis (string): Choose how bounding box colors are assigned.
  - opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
  - kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
- output
  - image (image): The annotated image with halo effects overlaid around each detected object.
Example JSON definition of step Halo Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/halo_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.instance_segmentation_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"opacity": 0.8,
"kernel_size": 40
}