Halo Visualization¶
v2¶
Class: HaloVisualizationBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v2.HaloVisualizationBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
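The pipeline above — paint each mask in its assigned color, box-blur the colored overlay, then alpha-blend it over the image — can be sketched in a few lines of numpy. This is a minimal illustration of the approach, not the block's actual implementation; the blending rule (per-pixel alpha proportional to halo brightness) is a simplifying assumption, and an odd `kernel_size` is assumed.

```python
import numpy as np

def box_blur(channel: np.ndarray, k: int) -> np.ndarray:
    """Average-pooling (box) blur of one channel via an integral image. Odd k assumed."""
    pad = k // 2
    p = np.pad(channel.astype(np.float64), pad, mode="constant")
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    h, w = channel.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # Sum of the k-by-k window around (i, j), divided by the window area.
            out[i, j] = (ii[i + k, j + k] - ii[i, j + k]
                         - ii[i + k, j] + ii[i, j]) / (k * k)
    return out

def halo(image: np.ndarray, mask: np.ndarray, color,
         opacity: float = 0.8, kernel_size: int = 5) -> np.ndarray:
    """Paint `color` where `mask` is set, blur it, and alpha-blend over `image`."""
    colored = np.zeros(image.shape, dtype=np.float64)
    colored[mask] = color
    blurred = np.stack([box_blur(colored[..., c], kernel_size)
                        for c in range(image.shape[-1])], axis=-1)
    # Per-pixel alpha proportional to halo brightness, scaled by `opacity`,
    # so the glow fades out smoothly beyond the object boundary.
    peak = max(blurred.max(), 1e-9)
    alpha = opacity * blurred.max(axis=-1, keepdims=True) / peak
    return (image * (1.0 - alpha) + blurred * alpha).astype(np.uint8)
```

Because the blur spreads mask values outward, pixels just outside the object receive a partial glow while distant pixels are left untouched — exactly the soft falloff described above.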
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/halo_visualization@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualized elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
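As a sketch of such parametrisation, any property marked ✅ in the Refs column can reference a workflow input instead of a hard-coded literal. The input name `halo_opacity` below is a hypothetical example, not a name defined by this block:

```python
# Step definition (as a Python dict) binding `opacity` to a workflow input.
# "halo_opacity" is a hypothetical input name; any ✅-marked property can be
# bound this way instead of being hard-coded.
step = {
    "type": "roboflow_core/halo_visualization@v2",
    "name": "halo",
    "image": "$inputs.image",
    "predictions": "$steps.instance_segmentation_model.predictions",
    "opacity": "$inputs.halo_opacity",  # resolved at workflow runtime
    "kernel_size": 40,
}

print(step["opacity"])  # → $inputs.halo_opacity
```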
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v2.
- inputs:
Dynamic Crop,OCR Model,Image Blur,Background Subtraction,Google Vision OCR,Google Gemini,Image Preprocessing,Local File Sink,Single-Label Classification Model,Bounding Box Visualization,Model Monitoring Inference Aggregator,Keypoint Detection Model,Camera Focus,Identify Outliers,Dot Visualization,Florence-2 Model,Roboflow Dataset Upload,CSV Formatter,Depth Estimation,Polygon Visualization,OpenAI,Line Counter,Image Slicer,Detections List Roll-Up,Line Counter Visualization,Heatmap Visualization,Morphological Transformation,Stability AI Image Generation,Google Gemini,Distance Measurement,Keypoint Visualization,Background Color Visualization,Label Visualization,Polygon Visualization,LMM,CogVLM,Time in Zone,Triangle Visualization,Stability AI Outpainting,Mask Visualization,Color Visualization,Detections Combine,Text Display,Bounding Rectangle,Reference Path Visualization,Llama 3.2 Vision,OpenAI,Image Threshold,Clip Comparison,Classification Label Visualization,Clip Comparison,Polygon Zone Visualization,Image Contours,VLM As Classifier,Roboflow Custom Metadata,Dynamic Zone,LMM For Classification,Velocity,Halo Visualization,Blur Visualization,Path Deviation,Absolute Static Crop,Anthropic Claude,SAM 3,Detections Transformation,Ellipse Visualization,Identify Changes,Crop Visualization,SIFT Comparison,Path Deviation,Trace Visualization,Twilio SMS Notification,Stitch Images,Detections Stabilizer,Size Measurement,Time in Zone,Motion Detection,Email Notification,SIFT Comparison,OpenAI,Seg Preview,Time in Zone,Instance Segmentation Model,Anthropic Claude,Multi-Label Classification Model,Email Notification,Slack Notification,Twilio SMS/MMS Notification,Detections Stitch,VLM As Detector,Camera Focus,SAM 3,Stitch OCR Detections,Perspective Correction,PTZ Tracking (ONVIF),Camera Calibration,Corner Visualization,Icon Visualization,Qwen3.5-VL,VLM As Detector,Halo Visualization,JSON Parser,Detection Event Log,Pixelate Visualization,Contrast Equalization,Dimension Collapse,VLM As Classifier,Instance Segmentation Model,Detections Classes Replacement,Relative Static Crop,Line Counter,Stitch OCR Detections,Webhook Sink,Circle Visualization,Image Convert Grayscale,Grid Visualization,Mask Area Measurement,Florence-2 Model,Buffer,SAM 3,SIFT,Object Detection Model,Template Matching,Detections Consensus,Anthropic Claude,Google Gemini,Model Comparison Visualization,Detection Offset,QR Code Generator,EasyOCR,Image Slicer,S3 Sink,Stability AI Inpainting,Segment Anything 2 Model,Detections Filter,OpenAI,Pixel Color Count,Roboflow Dataset Upload
- outputs:
Dynamic Crop,OCR Model,Barcode Detection,Motion Detection,Email Notification,Image Blur,Background Subtraction,Google Vision OCR,SIFT Comparison,Google Gemini,OpenAI,Image Preprocessing,Qwen2.5-VL,Seg Preview,Object Detection Model,Instance Segmentation Model,Single-Label Classification Model,Bounding Box Visualization,Multi-Label Classification Model,Anthropic Claude,Multi-Label Classification Model,Keypoint Detection Model,Detections Stitch,Twilio SMS/MMS Notification,Camera Focus,VLM As Detector,Gaze Detection,Florence-2 Model,Dot Visualization,Roboflow Dataset Upload,Camera Focus,SAM 3,Depth Estimation,Polygon Visualization,Moondream2,OpenAI,Perspective Correction,Image Slicer,Icon Visualization,Corner Visualization,Camera Calibration,Qwen3.5-VL,Line Counter Visualization,Heatmap Visualization,Google Gemini,Morphological Transformation,Stability AI Image Generation,Keypoint Visualization,VLM As Detector,Keypoint Detection Model,Halo Visualization,Background Color Visualization,Label Visualization,QR Code Detection,Polygon Visualization,Pixelate Visualization,LMM,CogVLM,Time in Zone,Single-Label Classification Model,Qwen3-VL,Contrast Equalization,Triangle Visualization,Stability AI Outpainting,Mask Visualization,VLM As Classifier,Color Visualization,Instance Segmentation Model,Dominant Color,Text Display,Relative Static Crop,Reference Path Visualization,OpenAI,Llama 3.2 Vision,Clip Comparison,Clip Comparison,Classification Label Visualization,Image Threshold,Circle Visualization,Polygon Zone Visualization,Image Contours,Image Convert Grayscale,VLM As Classifier,Byte Tracker,Buffer,Florence-2 Model,SmolVLM2,SAM 3,Perception Encoder Embedding Model,LMM For Classification,SIFT,YOLO-World Model,Halo Visualization,Template Matching,Object Detection Model,Semantic Segmentation Model,Anthropic Claude,Google Gemini,Model Comparison Visualization,Blur Visualization,EasyOCR,Absolute Static Crop,Image Slicer,Anthropic Claude,SAM 3,CLIP Embedding Model,Stability AI Inpainting,Ellipse Visualization,Crop Visualization,Trace Visualization,Segment Anything 2 Model,Stitch Images,Detections Stabilizer,OpenAI,Pixel Color Count,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Halo Visualization in version v2 has.
Bindings
- input
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[instance_segmentation_prediction, rle_instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
    - color_palette (string): Select a color palette for the visualized elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
    - kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
- output
    - image (image): The annotated image with the halo effect applied.
Example JSON definition of step Halo Visualization in version v2
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/halo_visualization@v2",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions": "$steps.instance_segmentation_model.predictions",
    "color_palette": "DEFAULT",
    "palette_size": 10,
    "custom_colors": [
        "#FF0000",
        "#00FF00",
        "#0000FF"
    ],
    "color_axis": "CLASS",
    "opacity": 0.8,
    "kernel_size": 40
}
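The step above can be embedded in a full workflow definition. The sketch below (in Python dict form) wires an instance segmentation step into the halo step; the segmentation step's type identifier and the placeholder model ID are illustrative assumptions, not part of this block's specification:

```python
# Minimal workflow specification (sketch) chaining segmentation -> halo.
# The segmentation step type and "<your_model_id_here>" are illustrative.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_instance_segmentation_model@v2",
            "name": "instance_segmentation_model",
            "image": "$inputs.image",
            "model_id": "<your_model_id_here>",
        },
        {
            "type": "roboflow_core/halo_visualization@v2",
            "name": "halo",
            "image": "$inputs.image",
            "predictions": "$steps.instance_segmentation_model.predictions",
            "opacity": 0.8,
            "kernel_size": 40,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "annotated_image",
         "selector": "$steps.halo.image"},
    ],
}

# The halo step consumes the segmentation step's predictions by step name:
seg_name = workflow_specification["steps"][0]["name"]
assert workflow_specification["steps"][1]["predictions"] == f"$steps.{seg_name}.predictions"
```

Note how the `$steps.<name>.<output>` selectors carry data between steps: renaming a step requires updating every selector that references it.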
v1¶
Class: HaloVisualizationBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v1.HaloVisualizationBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
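The "larger kernel = wider halo" relationship can be illustrated with a 1-D box-blur analogue (a numpy sketch, not the block's implementation): averaging a single "object" sample with a k-wide kernel spreads nonzero halo intensity across exactly k samples.

```python
import numpy as np

def halo_width(kernel_size: int) -> int:
    """Box-blur a 1-D impulse 'mask' and count how many samples
    receive nonzero halo intensity."""
    mask = np.zeros(101)
    mask[50] = 1.0  # a single 'object' pixel
    kernel = np.ones(kernel_size) / kernel_size
    blurred = np.convolve(mask, kernel, mode="same")
    return int(np.count_nonzero(blurred))

print(halo_width(21), halo_width(41))  # → 21 41
```

Doubling the kernel size roughly doubles the halo's footprint, which is why values around 40 give a noticeable but not overwhelming glow on typical image resolutions.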
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/halo_visualization@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualized elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v1.
- inputs:
Dynamic Crop,OCR Model,Image Blur,Background Subtraction,Google Vision OCR,Google Gemini,Image Preprocessing,Local File Sink,Single-Label Classification Model,Bounding Box Visualization,Model Monitoring Inference Aggregator,Keypoint Detection Model,Camera Focus,Identify Outliers,Dot Visualization,Florence-2 Model,Roboflow Dataset Upload,CSV Formatter,Depth Estimation,Polygon Visualization,OpenAI,Line Counter,Image Slicer,Detections List Roll-Up,Line Counter Visualization,Heatmap Visualization,Morphological Transformation,Stability AI Image Generation,Google Gemini,Distance Measurement,Keypoint Visualization,Background Color Visualization,Label Visualization,Polygon Visualization,LMM,CogVLM,Time in Zone,Triangle Visualization,Stability AI Outpainting,Mask Visualization,Color Visualization,Detections Combine,Text Display,Bounding Rectangle,Reference Path Visualization,Llama 3.2 Vision,OpenAI,Image Threshold,Clip Comparison,Classification Label Visualization,Clip Comparison,Polygon Zone Visualization,Image Contours,VLM As Classifier,Roboflow Custom Metadata,Dynamic Zone,LMM For Classification,Velocity,Halo Visualization,Blur Visualization,Path Deviation,Absolute Static Crop,Anthropic Claude,SAM 3,Detections Transformation,Ellipse Visualization,Identify Changes,Crop Visualization,SIFT Comparison,Path Deviation,Trace Visualization,Twilio SMS Notification,Stitch Images,Detections Stabilizer,Size Measurement,Time in Zone,Motion Detection,Email Notification,SIFT Comparison,OpenAI,Seg Preview,Time in Zone,Instance Segmentation Model,Anthropic Claude,Multi-Label Classification Model,Email Notification,Slack Notification,Twilio SMS/MMS Notification,Detections Stitch,VLM As Detector,Camera Focus,SAM 3,Stitch OCR Detections,Perspective Correction,PTZ Tracking (ONVIF),Camera Calibration,Corner Visualization,Icon Visualization,Qwen3.5-VL,VLM As Detector,Halo Visualization,JSON Parser,Detection Event Log,Pixelate Visualization,Contrast Equalization,Dimension Collapse,VLM As Classifier,Instance Segmentation Model,Detections Classes Replacement,Relative Static Crop,Line Counter,Stitch OCR Detections,Webhook Sink,Circle Visualization,Image Convert Grayscale,Grid Visualization,Mask Area Measurement,Florence-2 Model,Buffer,SAM 3,SIFT,Object Detection Model,Template Matching,Detections Consensus,Anthropic Claude,Google Gemini,Model Comparison Visualization,Detection Offset,QR Code Generator,EasyOCR,Image Slicer,S3 Sink,Stability AI Inpainting,Segment Anything 2 Model,Detections Filter,OpenAI,Pixel Color Count,Roboflow Dataset Upload
- outputs:
Dynamic Crop,OCR Model,Barcode Detection,Motion Detection,Email Notification,Image Blur,Background Subtraction,Google Vision OCR,SIFT Comparison,Google Gemini,OpenAI,Image Preprocessing,Qwen2.5-VL,Seg Preview,Object Detection Model,Instance Segmentation Model,Single-Label Classification Model,Bounding Box Visualization,Multi-Label Classification Model,Anthropic Claude,Multi-Label Classification Model,Keypoint Detection Model,Detections Stitch,Twilio SMS/MMS Notification,Camera Focus,VLM As Detector,Gaze Detection,Florence-2 Model,Dot Visualization,Roboflow Dataset Upload,Camera Focus,SAM 3,Depth Estimation,Polygon Visualization,Moondream2,OpenAI,Perspective Correction,Image Slicer,Icon Visualization,Corner Visualization,Camera Calibration,Qwen3.5-VL,Line Counter Visualization,Heatmap Visualization,Google Gemini,Morphological Transformation,Stability AI Image Generation,Keypoint Visualization,VLM As Detector,Keypoint Detection Model,Halo Visualization,Background Color Visualization,Label Visualization,QR Code Detection,Polygon Visualization,Pixelate Visualization,LMM,CogVLM,Time in Zone,Single-Label Classification Model,Qwen3-VL,Contrast Equalization,Triangle Visualization,Stability AI Outpainting,Mask Visualization,VLM As Classifier,Color Visualization,Instance Segmentation Model,Dominant Color,Text Display,Relative Static Crop,Reference Path Visualization,OpenAI,Llama 3.2 Vision,Clip Comparison,Clip Comparison,Classification Label Visualization,Image Threshold,Circle Visualization,Polygon Zone Visualization,Image Contours,Image Convert Grayscale,VLM As Classifier,Byte Tracker,Buffer,Florence-2 Model,SmolVLM2,SAM 3,Perception Encoder Embedding Model,LMM For Classification,SIFT,YOLO-World Model,Halo Visualization,Template Matching,Object Detection Model,Semantic Segmentation Model,Anthropic Claude,Google Gemini,Model Comparison Visualization,Blur Visualization,EasyOCR,Absolute Static Crop,Image Slicer,Anthropic Claude,SAM 3,CLIP Embedding Model,Stability AI Inpainting,Ellipse Visualization,Crop Visualization,Trace Visualization,Segment Anything 2 Model,Stitch Images,Detections Stabilizer,OpenAI,Pixel Color Count,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Halo Visualization in version v1 has.
Bindings
- input
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[instance_segmentation_prediction, rle_instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
    - color_palette (string): Select a color palette for the visualized elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
    - kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
- output
    - image (image): The annotated image with the halo effect applied.
Example JSON definition of step Halo Visualization in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/halo_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions": "$steps.instance_segmentation_model.predictions",
    "color_palette": "DEFAULT",
    "palette_size": 10,
    "custom_colors": [
        "#FF0000",
        "#00FF00",
        "#0000FF"
    ],
    "color_axis": "CLASS",
    "opacity": 0.8,
    "kernel_size": 40
}