Blur Visualization¶
Class: BlurVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.blur.v1.BlurVisualizationBlockV1
Apply blur effects to detected objects in an image, obscuring their details while preserving the background. This is useful for privacy protection, content filtering, or visual emphasis.
How This Block Works¶
This block takes an image and detection predictions and applies a blur effect to the detected objects, leaving the background unchanged. The block:
- Takes an image and predictions as input
- Identifies detected regions from bounding boxes or segmentation masks
- Applies a blur effect (using average pooling) to the detected object regions
- Preserves the background and areas outside detected objects unchanged
- Returns an annotated image where detected objects are blurred, while the rest of the image remains sharp
The block works with both object detection predictions (using bounding boxes) and instance segmentation predictions (using masks). When masks are available, it blurs the exact shape of detected objects; otherwise, it blurs rectangular bounding box regions. The blur intensity is controlled by the kernel size parameter, where larger kernel sizes create stronger blur effects. This creates a visual effect that obscures or anonymizes detected objects while maintaining context from the surrounding image, making it ideal for privacy protection, content filtering, or focusing attention on the background.
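The per-region average-pooling blur described above can be sketched in plain NumPy. This is a minimal illustration of the technique, not the block's actual implementation (which operates on workflow detection objects rather than raw box tuples):

```python
import numpy as np

def average_pool_blur(region, k):
    """Average-pooling blur: each output pixel is the mean of a k x k
    neighborhood; edge pixels use an edge-replicated window."""
    pad = k // 2
    padded = np.pad(
        region.astype(np.float32),
        ((pad, pad), (pad, pad), (0, 0)),
        mode="edge",
    )
    out = np.zeros(region.shape, dtype=np.float32)
    # Sum all k*k shifted views, then divide to get the window mean.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
    return (out / (k * k)).astype(region.dtype)

def blur_detections(image, boxes, kernel_size=15):
    """Blur each (x1, y1, x2, y2) box region, leaving the background sharp."""
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        if x2 <= x1 or y2 <= y1:
            continue  # skip degenerate boxes
        out[y1:y2, x1:x2] = average_pool_blur(out[y1:y2, x1:x2], kernel_size)
    return out
```

Larger `kernel_size` values average over a wider neighborhood, which is why they produce a stronger blur.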
Common Use Cases¶
- Privacy Protection and Anonymization: Blur faces, people, license plates, or other sensitive information in images or videos to protect privacy, comply with data protection regulations, or anonymize content before sharing or publishing
- Content Filtering and Moderation: Obscure inappropriate or sensitive content in images or videos for content moderation workflows, safe content previews, or user-generated content filtering
- Visual Emphasis and Focus: Blur detected objects to draw attention to other parts of the image, create visual contrast between blurred foreground objects and sharp backgrounds, or emphasize specific elements in composition
- Product Photography and E-commerce: Blur detected distracting elements or secondary products in images to keep the main subject sharp and prominent for product photography, catalog creation, or e-commerce image preparation
- Security and Surveillance: Anonymize people, vehicles, or other identifiable elements in security footage or surveillance images while preserving scene context for analysis, reporting, or public sharing
- Documentation and Reporting: Create anonymized or censored versions of images for reports, documentation, or case studies where sensitive information needs to be obscured but overall context should remain visible
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Bounding Box Visualization, Polygon Visualization) to add additional annotations on top of blurred objects for comprehensive visualization or to indicate what was blurred
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save blurred images for documentation, reporting, or archiving privacy-protected content
- Webhook blocks to send blurred images to external systems, APIs, or web applications for content moderation, privacy-compliant sharing, or anonymized analysis
- Notification blocks (e.g., Email Notification, Slack Notification) to send blurred images as privacy-protected visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with blurred objects for live monitoring, privacy-compliant video processing, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/blur_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `copy_image` | `bool` | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| `kernel_size` | `int` | Size of the blur kernel used for average pooling. Larger values create stronger blur effects, making objects more obscured. Smaller values create subtle blur effects. Typical values range from 5 (light blur) to 51 (strong blur). Must be an odd number for optimal blurring performance. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
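For example, since `kernel_size` is marked as parametrisable, it can be bound to a workflow input instead of a literal value. The step name and the `$inputs.kernel_size` input name below are illustrative:

```json
{
    "name": "blur",
    "type": "roboflow_core/blur_visualization@v1",
    "image": "$inputs.image",
    "predictions": "$steps.object_detection_model.predictions",
    "kernel_size": "$inputs.kernel_size"
}
```

This lets callers adjust blur strength per request without editing the workflow definition.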
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Blur Visualization in version v1.
- inputs:
Mask Visualization, Classification Label Visualization, Detections Consensus, Detections Merge, Instance Segmentation Model, Webhook Sink, Dynamic Zone, Email Notification, QR Code Generator, Dynamic Crop, VLM As Detector, VLM As Detector, SAM 3, Path Deviation, Image Blur, Detection Offset, Corner Visualization, Image Convert Grayscale, Line Counter, Byte Tracker, Stability AI Outpainting, Segment Anything 2 Model, Halo Visualization, Stability AI Inpainting, JSON Parser, Object Detection Model, Template Matching, Image Contours, Path Deviation, Trace Visualization, Google Vision OCR, Morphological Transformation, Triangle Visualization, Bounding Rectangle, Detections Stitch, Instance Segmentation Model, Relative Static Crop, Text Display, Stitch Images, Detections Filter, Camera Calibration, Grid Visualization, Local File Sink, Slack Notification, Detections Stabilizer, VLM As Classifier, Roboflow Dataset Upload, Camera Focus, PTZ Tracking (ONVIF), Color Visualization, Dot Visualization, Image Slicer, Polygon Visualization, Detections Combine, Object Detection Model, Line Counter Visualization, Keypoint Detection Model, Byte Tracker, Contrast Equalization, Distance Measurement, Identify Changes, Detections Classes Replacement, SIFT Comparison, Camera Focus, Time in Zone, Background Subtraction, Velocity, Image Slicer, Circle Visualization, Moondream2, SIFT Comparison, Identify Outliers, Halo Visualization, Seg Preview, Blur Visualization, Label Visualization, Twilio SMS/MMS Notification, Email Notification, Ellipse Visualization, SIFT, Byte Tracker, Image Preprocessing, Model Monitoring Inference Aggregator, SAM 3, Detections List Roll-Up, Image Threshold, Background Color Visualization, Model Comparison Visualization, Depth Estimation, Time in Zone, Line Counter, Keypoint Detection Model, Absolute Static Crop, Roboflow Custom Metadata, Gaze Detection, EasyOCR, Perspective Correction, Pixelate Visualization, Stability AI Image Generation, Reference Path Visualization, Keypoint Visualization, Polygon Visualization, Twilio SMS Notification, SAM 3, VLM As Classifier, Bounding Box Visualization, Detection Event Log, Polygon Zone Visualization, OCR Model, YOLO-World Model, Overlap Filter, Icon Visualization, Crop Visualization, Time in Zone, Pixel Color Count, Motion Detection, Detections Transformation, Roboflow Dataset Upload
- outputs: Anthropic Claude, Mask Visualization, Classification Label Visualization, Instance Segmentation Model, Multi-Label Classification Model, Email Notification, Dynamic Crop, CLIP Embedding Model, VLM As Detector, VLM As Detector, Google Gemini, Multi-Label Classification Model, LMM, SAM 3, Image Blur, Corner Visualization, Image Convert Grayscale, Byte Tracker, Stability AI Outpainting, SmolVLM2, Segment Anything 2 Model, Halo Visualization, Stability AI Inpainting, Object Detection Model, Template Matching, Single-Label Classification Model, Image Contours, Trace Visualization, Google Vision OCR, Morphological Transformation, Triangle Visualization, Instance Segmentation Model, Clip Comparison, Detections Stitch, Relative Static Crop, Text Display, Stitch Images, Google Gemini, Camera Calibration, Detections Stabilizer, VLM As Classifier, Roboflow Dataset Upload, Camera Focus, Color Visualization, Dot Visualization, Image Slicer, Polygon Visualization, Object Detection Model, Anthropic Claude, LMM For Classification, Line Counter Visualization, Keypoint Detection Model, Buffer, Llama 3.2 Vision, Contrast Equalization, SIFT Comparison, Camera Focus, Perception Encoder Embedding Model, Dominant Color, Time in Zone, Background Subtraction, Image Slicer, Circle Visualization, Moondream2, Seg Preview, Halo Visualization, Florence-2 Model, Blur Visualization, Qwen3-VL, Twilio SMS/MMS Notification, Label Visualization, Barcode Detection, Clip Comparison, Ellipse Visualization, OpenAI, QR Code Detection, SIFT, Image Preprocessing, SAM 3, Single-Label Classification Model, OpenAI, Image Threshold, Background Color Visualization, Model Comparison Visualization, Depth Estimation, OpenAI, Motion Detection, Keypoint Detection Model, CogVLM, Absolute Static Crop, Gaze Detection, EasyOCR, Perspective Correction, Qwen2.5-VL, Anthropic Claude, Pixelate Visualization, Reference Path Visualization, Stability AI Image Generation, Keypoint Visualization, SAM 3, Polygon Visualization, VLM As Classifier, Bounding Box Visualization, Polygon Zone Visualization, OCR Model, YOLO-World Model, Icon Visualization, Crop Visualization, Pixel Color Count, Google Gemini, OpenAI, Florence-2 Model, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Blur Visualization in version v1 has.
Bindings
- input:
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction]): Model predictions to visualize.
  - kernel_size (integer): Size of the blur kernel used for average pooling. Larger values create stronger blur effects, making objects more obscured. Smaller values create subtle blur effects. Typical values range from 5 (light blur) to 51 (strong blur). Must be an odd number for optimal blurring performance.
- output:
  - image (image): Image in workflows.
Example JSON definition of step Blur Visualization in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/blur_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions": "$steps.object_detection_model.predictions",
    "kernel_size": 15
}
```
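A full workflow definition pairing a detection step with this block can be assembled programmatically. This is a sketch: the detector's type string and model id are assumptions, and only the blur step's fields come from this page. The helper also nudges even kernel sizes up to the next odd value, per the odd-kernel recommendation above:

```python
def make_blur_workflow(kernel_size=15):
    """Build a minimal workflow dict: detect objects, then blur them."""
    if kernel_size % 2 == 0:
        # Docs recommend odd kernel sizes for optimal blurring performance.
        kernel_size += 1
    return {
        "version": "1.0",
        "inputs": [{"type": "WorkflowImage", "name": "image"}],
        "steps": [
            {
                # Assumption: a generic Roboflow object detection step.
                "type": "roboflow_core/roboflow_object_detection_model@v1",
                "name": "detector",
                "image": "$inputs.image",
                "model_id": "yolov8n-640",  # illustrative model id
            },
            {
                "type": "roboflow_core/blur_visualization@v1",
                "name": "blur",
                "image": "$inputs.image",
                "predictions": "$steps.detector.predictions",
                "kernel_size": kernel_size,
            },
        ],
        "outputs": [
            {"type": "JsonField", "name": "blurred", "selector": "$steps.blur.image"}
        ],
    }
```

The resulting dict can then be submitted to an inference server that executes workflow definitions.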