Halo Visualization¶
v2¶
Class: HaloVisualizationBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v2.HaloVisualizationBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
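The pipeline above can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the block's actual code: `box_blur` and `halo_overlay` are hypothetical names, and a simple additive blend stands in for the block's exact compositing.

```python
import numpy as np

def box_blur(arr, k):
    """Average-pool (box blur) each channel with a k-wide separable kernel."""
    kernel = np.ones(k, dtype=np.float32) / k
    out = arr.astype(np.float32)
    # Separable blur: convolve rows, then columns, per channel.
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, out)
    return out

def halo_overlay(image, masks, colors, kernel_size=40, opacity=0.8):
    """Blend blurred, colored masks over `image` to approximate a halo effect.

    image:  HxWx3 uint8 image
    masks:  list of HxW boolean arrays, one per detection
    colors: list of 3-tuples, one per detection
    """
    overlay = np.zeros_like(image, dtype=np.float32)
    # Paint largest masks first so smaller objects stay visible on top.
    order = sorted(range(len(masks)), key=lambda i: masks[i].sum(), reverse=True)
    for i in order:
        overlay[masks[i]] = colors[i]
    # Box blur (average pooling) diffuses mask edges into a soft glow.
    blurred = box_blur(overlay, kernel_size)
    # Additive blend scaled by opacity; the glow brightens pixels near objects.
    out = image.astype(np.float32) + opacity * blurred
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the blur spreads mask intensity outward, pixels just outside the object boundary receive a fraction of the mask color, which is what produces the glow.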
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/halo_visualization@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
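As the kernel_size description notes, a box blur lets the halo reach roughly kernel_size // 2 pixels past the object edge. A quick 1-D check illustrates this; `blur_reach` is a hypothetical helper, not part of the block:

```python
import numpy as np

def blur_reach(kernel_size):
    """How many pixels past a mask edge receive glow after a box blur."""
    # 1-D mask edge: the "object" occupies indices 0..49 of a 100-px line.
    line = np.zeros(100, dtype=np.float32)
    line[:50] = 1.0
    kernel = np.ones(kernel_size, dtype=np.float32) / kernel_size
    blurred = np.convolve(line, kernel, mode="same")
    # Count pixels beyond the edge (index 49) that are still nonzero.
    return int(np.count_nonzero(blurred[50:] > 0))
```

For an odd kernel of width k, the reach is k // 2 pixels, so doubling kernel_size roughly doubles how far the halo spreads.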
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v2.
- inputs:
Pixelate Visualization,Roboflow Custom Metadata,Halo Visualization,Anthropic Claude,OpenAI,Halo Visualization,Ellipse Visualization,Webhook Sink,Dynamic Crop,Image Convert Grayscale,Circle Visualization,Florence-2 Model,Image Slicer,Stability AI Outpainting,Dynamic Zone,Detections Combine,EasyOCR,OCR Model,Anthropic Claude,Size Measurement,Clip Comparison,Twilio SMS Notification,Stability AI Inpainting,SIFT Comparison,Stitch OCR Detections,Image Blur,VLM As Classifier,QR Code Generator,JSON Parser,SIFT,OpenAI,Path Deviation,Line Counter,Detections Stitch,PTZ Tracking (ONVIF),Slack Notification,Detections Filter,Detection Offset,Dimension Collapse,Detections List Roll-Up,Keypoint Detection Model,Segment Anything 2 Model,LMM,Image Threshold,Relative Static Crop,Identify Changes,Crop Visualization,Template Matching,Stitch Images,Perspective Correction,Motion Detection,Camera Focus,Line Counter Visualization,Color Visualization,Morphological Transformation,Llama 3.2 Vision,Line Counter,Google Gemini,SAM 3,Time in Zone,Pixel Color Count,Identify Outliers,Buffer,OpenAI,Roboflow Dataset Upload,Google Gemini,Object Detection Model,Polygon Visualization,Heatmap Visualization,Time in Zone,Distance Measurement,Contrast Equalization,Trace Visualization,Grid Visualization,Model Monitoring Inference Aggregator,Local File Sink,Detection Event Log,Corner Visualization,Polygon Zone Visualization,Model Comparison Visualization,Keypoint Visualization,Text Display,Google Vision OCR,Detections Stabilizer,Roboflow Dataset Upload,Reference Path Visualization,CogVLM,Instance Segmentation Model,LMM For Classification,VLM As Detector,Image Slicer,Icon Visualization,Background Color Visualization,Absolute Static Crop,Google Gemini,Label Visualization,Image Preprocessing,Classification Label Visualization,Mask Visualization,Single-Label Classification Model,VLM As Classifier,Detections Consensus,Bounding Box Visualization,Mask Area Measurement,OpenAI,Triangle Visualization,Dot Visualization,Email Notification,Twilio SMS/MMS Notification,Image Contours,Anthropic Claude,Instance Segmentation Model,Email Notification,Seg Preview,Stitch OCR Detections,Florence-2 Model,Background Subtraction,Bounding Rectangle,SAM 3,Camera Calibration,VLM As Detector,Clip Comparison,Detections Classes Replacement,Velocity,Blur Visualization,Path Deviation,CSV Formatter,Camera Focus,Detections Transformation,SAM 3,SIFT Comparison,Multi-Label Classification Model,Qwen3.5-VL,Time in Zone,Stability AI Image Generation,Depth Estimation,Polygon Visualization
- outputs:
Barcode Detection,Heatmap Visualization,Pixelate Visualization,Halo Visualization,Anthropic Claude,OpenAI,Halo Visualization,Contrast Equalization,Trace Visualization,Ellipse Visualization,Dynamic Crop,Image Convert Grayscale,Corner Visualization,Polygon Zone Visualization,Circle Visualization,Model Comparison Visualization,Florence-2 Model,Image Slicer,Stability AI Outpainting,Keypoint Visualization,Text Display,EasyOCR,Google Vision OCR,Moondream2,OCR Model,Anthropic Claude,Qwen2.5-VL,Clip Comparison,Detections Stabilizer,Stability AI Inpainting,Roboflow Dataset Upload,SIFT Comparison,Reference Path Visualization,VLM As Classifier,Image Blur,CogVLM,Instance Segmentation Model,LMM For Classification,VLM As Detector,Object Detection Model,SmolVLM2,Image Slicer,Qwen3-VL,Icon Visualization,Background Color Visualization,Google Gemini,Absolute Static Crop,OpenAI,SIFT,Label Visualization,Classification Label Visualization,Image Preprocessing,Mask Visualization,Stability AI Image Generation,Single-Label Classification Model,VLM As Classifier,Dominant Color,Detections Stitch,YOLO-World Model,Bounding Box Visualization,Byte Tracker,OpenAI,Roboflow Dataset Upload,Triangle Visualization,Keypoint Detection Model,Perception Encoder Embedding Model,Dot Visualization,Email Notification,Twilio SMS/MMS Notification,Segment Anything 2 Model,QR Code Detection,Image Contours,Anthropic Claude,Instance Segmentation Model,Seg Preview,LMM,Florence-2 Model,Background Subtraction,SAM 3,Camera Calibration,Image Threshold,Multi-Label Classification Model,VLM As Detector,Single-Label Classification Model,Clip Comparison,Relative Static Crop,Gaze Detection,Keypoint Detection Model,Crop Visualization,Template Matching,Stitch Images,Blur Visualization,Perspective Correction,Motion Detection,Camera Focus,Camera Focus,Line Counter Visualization,Color Visualization,Llama 3.2 Vision,Google Gemini,Morphological Transformation,SAM 3,SAM 3,Buffer,Pixel Color Count,Multi-Label Classification Model,CLIP Embedding Model,OpenAI,Qwen3.5-VL,Time in Zone,Google Gemini,Depth Estimation,Polygon Visualization,Object Detection Model,Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Halo Visualization has in version v2.
Bindings
- input:
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[rle_instance_segmentation_prediction, instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
  - color_palette (string): Select a color palette for the visualised elements.
  - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
  - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
  - color_axis (string): Choose how bounding box colors are assigned.
  - opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
  - kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
- output:
  - image (image): The annotated image with halo effects overlaid on detected objects.
Example JSON definition of step Halo Visualization in version v2
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/halo_visualization@v2",
  "image": "$inputs.image",
  "copy_image": true,
  "predictions": "$steps.instance_segmentation_model.predictions",
  "color_palette": "DEFAULT",
  "palette_size": 10,
  "custom_colors": [
    "#FF0000",
    "#00FF00",
    "#0000FF"
  ],
  "color_axis": "CLASS",
  "opacity": 0.8,
  "kernel_size": 40
}
```
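In context, this step is typically preceded by an instance segmentation model whose predictions feed the `predictions` field. The sketch below assembles such a minimal workflow specification as a Python dict. It is a plausible sketch, not an official template: `build_halo_workflow` is a hypothetical helper, and the model step type and `model_id` are assumptions to verify against your inference version.

```python
def build_halo_workflow(model_id="my-project/1", kernel_size=40, opacity=0.8):
    """Assemble a minimal workflow spec: segmentation model -> halo visualization."""
    return {
        "version": "1.0",
        "inputs": [{"type": "WorkflowImage", "name": "image"}],
        "steps": [
            {
                # Assumed instance segmentation block identifier; check your version.
                "type": "roboflow_core/roboflow_instance_segmentation_model@v2",
                "name": "segmentation",
                "image": "$inputs.image",
                "model_id": model_id,
            },
            {
                "type": "roboflow_core/halo_visualization@v2",
                "name": "halo",
                "image": "$inputs.image",
                # Wire the segmentation masks into the halo block.
                "predictions": "$steps.segmentation.predictions",
                "opacity": opacity,
                "kernel_size": kernel_size,
            },
        ],
        "outputs": [
            {"type": "JsonField", "name": "annotated", "selector": "$steps.halo.image"}
        ],
    }
```

The resulting dict can be passed wherever a workflow specification is accepted, with the halo step's selectors resolving against the segmentation step's outputs.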
v1¶
Class: HaloVisualizationBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.visualizations.halo.v1.HaloVisualizationBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Create a soft, glowing halo effect around detected objects by blurring and overlaying colored masks, providing a distinctive visual style that highlights object boundaries with a smooth, illuminated appearance.
How This Block Works¶
This block takes an image and instance segmentation predictions (with masks) and creates a glowing halo effect around each detected object. The block:
- Takes an image and instance segmentation predictions (with masks) as input
- Extracts segmentation masks for each detected object (uses masks from predictions, or creates bounding box masks if masks are not available)
- Applies color styling to each mask based on the selected color palette, with colors assigned by class, index, or track ID
- Creates colored mask overlays for each detection, combining masks from largest to smallest area (to handle overlapping objects correctly)
- Applies a blur filter (average pooling with specified kernel size) to the colored masks, creating a soft, diffused halo effect around object edges
- Blends the blurred halo overlay with the original image using the specified opacity level, creating a glowing appearance around detected objects
- Returns an annotated image with soft halo effects overlaid around each detected object
The block creates halos by blurring the colored masks, which produces a soft, glowing effect that extends beyond the object boundaries. Unlike hard-edged visualizations (like bounding boxes or polygons), halos provide a smooth, illuminated appearance that makes objects stand out while maintaining a visually appealing aesthetic. The blur kernel size controls how far the halo extends beyond the object (larger kernel = wider halo), and the opacity controls the intensity of the glow effect. This block requires instance segmentation predictions with masks, as it uses mask shapes to create the halo effect around object perimeters.
Common Use Cases¶
- Artistic and Aesthetic Visualizations: Create visually appealing, glowing effects around detected objects for artistic presentations, design applications, or user interfaces where soft, illuminated halos provide a modern, polished appearance
- Soft Object Highlighting: Highlight detected objects with gentle, diffused halos when hard edges would be too harsh or distracting, useful for presentations, marketing materials, or consumer-facing applications
- Overlapping Object Visualization: Use halos to visualize overlapping or closely-spaced objects where hard boundaries would create visual clutter, allowing multiple objects to be distinguished while maintaining visual clarity
- Brand and Design Applications: Integrate halo effects into brand visuals, promotional materials, or design systems where soft, glowing annotations match design aesthetics better than angular bounding boxes
- Visual Emphasis and Focus: Draw attention to detected objects with glowing halos that create a natural visual focus point, useful in dashboards, monitoring interfaces, or interactive applications
- Mask-Based Object Highlighting: Visualize instance segmentation results with soft halo effects, providing an alternative to solid mask overlays when you want to show object boundaries without obscuring image details
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Dot Visualization, Bounding Box Visualization) to combine halo effects with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with halo effects for documentation, reporting, or analysis
- Webhook blocks to send visualized results with halo effects to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with halo effects as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with halo effects for live monitoring, artistic visualizations, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/halo_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| opacity | float | Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects. | ✅ |
| kernel_size | int | Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Halo Visualization in version v1.
- inputs:
Pixelate Visualization,Roboflow Custom Metadata,Halo Visualization,Anthropic Claude,OpenAI,Halo Visualization,Ellipse Visualization,Webhook Sink,Dynamic Crop,Image Convert Grayscale,Circle Visualization,Florence-2 Model,Image Slicer,Stability AI Outpainting,Dynamic Zone,Detections Combine,EasyOCR,OCR Model,Anthropic Claude,Size Measurement,Clip Comparison,Twilio SMS Notification,Stability AI Inpainting,SIFT Comparison,Stitch OCR Detections,Image Blur,VLM As Classifier,QR Code Generator,JSON Parser,SIFT,OpenAI,Path Deviation,Line Counter,Detections Stitch,PTZ Tracking (ONVIF),Slack Notification,Detections Filter,Detection Offset,Dimension Collapse,Detections List Roll-Up,Keypoint Detection Model,Segment Anything 2 Model,LMM,Image Threshold,Relative Static Crop,Identify Changes,Crop Visualization,Template Matching,Stitch Images,Perspective Correction,Motion Detection,Camera Focus,Line Counter Visualization,Color Visualization,Morphological Transformation,Llama 3.2 Vision,Line Counter,Google Gemini,SAM 3,Time in Zone,Pixel Color Count,Identify Outliers,Buffer,OpenAI,Roboflow Dataset Upload,Google Gemini,Object Detection Model,Polygon Visualization,Heatmap Visualization,Time in Zone,Distance Measurement,Contrast Equalization,Trace Visualization,Grid Visualization,Model Monitoring Inference Aggregator,Local File Sink,Detection Event Log,Corner Visualization,Polygon Zone Visualization,Model Comparison Visualization,Keypoint Visualization,Text Display,Google Vision OCR,Detections Stabilizer,Roboflow Dataset Upload,Reference Path Visualization,CogVLM,Instance Segmentation Model,LMM For Classification,VLM As Detector,Image Slicer,Icon Visualization,Background Color Visualization,Absolute Static Crop,Google Gemini,Label Visualization,Image Preprocessing,Classification Label Visualization,Mask Visualization,Single-Label Classification Model,VLM As Classifier,Detections Consensus,Bounding Box Visualization,Mask Area Measurement,OpenAI,Triangle Visualization,Dot Visualization,Email Notification,Twilio SMS/MMS Notification,Image Contours,Anthropic Claude,Instance Segmentation Model,Email Notification,Seg Preview,Stitch OCR Detections,Florence-2 Model,Background Subtraction,Bounding Rectangle,SAM 3,Camera Calibration,VLM As Detector,Clip Comparison,Detections Classes Replacement,Velocity,Blur Visualization,Path Deviation,CSV Formatter,Camera Focus,Detections Transformation,SAM 3,SIFT Comparison,Multi-Label Classification Model,Qwen3.5-VL,Time in Zone,Stability AI Image Generation,Depth Estimation,Polygon Visualization
- outputs:
Barcode Detection,Heatmap Visualization,Pixelate Visualization,Halo Visualization,Anthropic Claude,OpenAI,Halo Visualization,Contrast Equalization,Trace Visualization,Ellipse Visualization,Dynamic Crop,Image Convert Grayscale,Corner Visualization,Polygon Zone Visualization,Circle Visualization,Model Comparison Visualization,Florence-2 Model,Image Slicer,Stability AI Outpainting,Keypoint Visualization,Text Display,EasyOCR,Google Vision OCR,Moondream2,OCR Model,Anthropic Claude,Qwen2.5-VL,Clip Comparison,Detections Stabilizer,Stability AI Inpainting,Roboflow Dataset Upload,SIFT Comparison,Reference Path Visualization,VLM As Classifier,Image Blur,CogVLM,Instance Segmentation Model,LMM For Classification,VLM As Detector,Object Detection Model,SmolVLM2,Image Slicer,Qwen3-VL,Icon Visualization,Background Color Visualization,Google Gemini,Absolute Static Crop,OpenAI,SIFT,Label Visualization,Classification Label Visualization,Image Preprocessing,Mask Visualization,Stability AI Image Generation,Single-Label Classification Model,VLM As Classifier,Dominant Color,Detections Stitch,YOLO-World Model,Bounding Box Visualization,Byte Tracker,OpenAI,Roboflow Dataset Upload,Triangle Visualization,Keypoint Detection Model,Perception Encoder Embedding Model,Dot Visualization,Email Notification,Twilio SMS/MMS Notification,Segment Anything 2 Model,QR Code Detection,Image Contours,Anthropic Claude,Instance Segmentation Model,Seg Preview,LMM,Florence-2 Model,Background Subtraction,SAM 3,Camera Calibration,Image Threshold,Multi-Label Classification Model,VLM As Detector,Single-Label Classification Model,Clip Comparison,Relative Static Crop,Gaze Detection,Keypoint Detection Model,Crop Visualization,Template Matching,Stitch Images,Blur Visualization,Perspective Correction,Motion Detection,Camera Focus,Camera Focus,Line Counter Visualization,Color Visualization,Llama 3.2 Vision,Google Gemini,Morphological Transformation,SAM 3,SAM 3,Buffer,Pixel Color Count,Multi-Label Classification Model,CLIP Embedding Model,OpenAI,Qwen3.5-VL,Time in Zone,Google Gemini,Depth Estimation,Polygon Visualization,Object Detection Model,Polygon Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Halo Visualization has in version v1.
Bindings
- input:
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[rle_instance_segmentation_prediction, instance_segmentation_prediction]): Instance segmentation predictions containing masks for detected objects. The block uses segmentation masks to create halo effects around object boundaries. If masks are not available, it will create masks from bounding boxes. Requires instance segmentation model outputs with mask data.
  - color_palette (string): Select a color palette for the visualised elements.
  - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
  - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
  - color_axis (string): Choose how bounding box colors are assigned.
  - opacity (float_zero_to_one): Opacity of the halo overlay, ranging from 0.0 (fully transparent) to 1.0 (fully opaque). Controls the intensity of the glowing halo effect. Lower values create more subtle, softer halos that blend with the background, while higher values create more intense, visible glows. Typical values range from 0.5 to 0.9 for balanced visual effects.
  - kernel_size (integer): Size of the blur kernel (in pixels) used for creating the halo effect. This controls how far the halo extends beyond the object boundaries and how soft/diffused the glow appears. Larger values create wider, more spread-out halos with smoother gradients, while smaller values create tighter, more concentrated glows. Values typically range from 20 to 80 pixels, with 40 being a good default for most use cases.
- output:
  - image (image): The annotated image with halo effects overlaid on detected objects.
Example JSON definition of step Halo Visualization in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/halo_visualization@v1",
  "image": "$inputs.image",
  "copy_image": true,
  "predictions": "$steps.instance_segmentation_model.predictions",
  "color_palette": "DEFAULT",
  "palette_size": 10,
  "custom_colors": [
    "#FF0000",
    "#00FF00",
    "#0000FF"
  ],
  "color_axis": "CLASS",
  "opacity": 0.8,
  "kernel_size": 40
}
```