Blur Visualization¶
Class: BlurVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.blur.v1.BlurVisualizationBlockV1
Apply blur effects to detected objects in an image, obscuring their details while preserving the background. This is useful for privacy protection, content filtering, or visual emphasis.
How This Block Works¶
This block takes an image and detection predictions and applies a blur effect to the detected objects, leaving the background unchanged. The block:
- Takes an image and predictions as input
- Identifies detected regions from bounding boxes or segmentation masks
- Applies a blur effect (using average pooling) to the detected object regions
- Preserves the background and areas outside detected objects unchanged
- Returns an annotated image where detected objects are blurred, while the rest of the image remains sharp
The block works with both object detection predictions (using bounding boxes) and instance segmentation predictions (using masks). When masks are available, it blurs the exact shape of detected objects; otherwise, it blurs rectangular bounding box regions. The blur intensity is controlled by the kernel size parameter, where larger kernel sizes create stronger blur effects. This creates a visual effect that obscures or anonymizes detected objects while maintaining context from the surrounding image, making it ideal for privacy protection, content filtering, or focusing attention on the background.
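The average-pooling blur described above can be sketched in plain NumPy. This is a simplified illustration of the bounding-box path only (the real block builds on the supervision library's annotators and also handles masks); the function name and box format are assumptions made for the example:

```python
import numpy as np

def blur_region(image, box, kernel_size=15):
    """Mean-blur a rectangular region of `image` in place.

    A simplified sketch of the block's bounding-box behavior:
    each pixel in the region becomes the average of its
    kernel_size x kernel_size neighborhood; pixels outside
    the box are left untouched.
    """
    x1, y1, x2, y2 = box
    region = image[y1:y2, x1:x2].astype(float)
    k = kernel_size
    pad = k // 2  # odd kernel -> symmetric padding around each pixel
    padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(region)
    # Accumulate all k*k shifted copies, then divide: average pooling.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
    image[y1:y2, x1:x2] = (out / (k * k)).astype(image.dtype)
    return image
```

Averaging over a larger neighborhood smooths away more detail, which is why larger kernel sizes produce stronger blur.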
Common Use Cases¶
- Privacy Protection and Anonymization: Blur faces, people, license plates, or other sensitive information in images or videos to protect privacy, comply with data protection regulations, or anonymize content before sharing or publishing
- Content Filtering and Moderation: Obscure inappropriate or sensitive content in images or videos for content moderation workflows, safe content previews, or user-generated content filtering
- Visual Emphasis and Focus: Blur detected objects to draw attention to other parts of the image, create visual contrast between blurred foreground objects and sharp backgrounds, or emphasize specific elements in composition
- Product Photography and E-commerce: Blur detected distracting elements or secondary products in images to keep the main subject sharp and prominent for product photography, catalog creation, or e-commerce image preparation
- Security and Surveillance: Anonymize people, vehicles, or other identifiable elements in security footage or surveillance images while preserving scene context for analysis, reporting, or public sharing
- Documentation and Reporting: Create anonymized or censored versions of images for reports, documentation, or case studies where sensitive information needs to be obscured but overall context should remain visible
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Bounding Box Visualization, Polygon Visualization) to add additional annotations on top of blurred objects for comprehensive visualization or to indicate what was blurred
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save blurred images for documentation, reporting, or archiving privacy-protected content
- Webhook blocks to send blurred images to external systems, APIs, or web applications for content moderation, privacy-compliant sharing, or anonymized analysis
- Notification blocks (e.g., Email Notification, Slack Notification) to send blurred images as privacy-protected visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with blurred objects for live monitoring, privacy-compliant video processing, or post-processing analysis
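As a concrete illustration of chaining, the sketch below shows (as Python dicts) how a downstream step could consume this block's image output through a `$steps.<name>.<field>` selector. The step names and the label-visualization type identifier are assumptions for the example, not taken from this page:

```python
# Hypothetical fragment of a workflow "steps" list: a detector named
# "detector" feeds Blur Visualization, and a label visualization then
# draws on the blurred image rather than the original.
steps = [
    {
        "name": "blur",
        "type": "roboflow_core/blur_visualization@v1",
        "image": "$inputs.image",
        "predictions": "$steps.detector.predictions",
        "kernel_size": 25,
    },
    {
        "name": "labels",
        "type": "roboflow_core/label_visualization@v1",  # assumed identifier
        "image": "$steps.blur.image",  # consume the blurred image
        "predictions": "$steps.detector.predictions",
    },
]
```

Because the second step references `$steps.blur.image` instead of `$inputs.image`, its labels are drawn on top of the blurred objects.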
Type identifier¶
Use the following identifier in step "type" field: roboflow_core/blur_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| kernel_size | int | Size of the blur kernel used for average pooling. Larger values create stronger blur effects, making objects more obscured. Smaller values create subtle blur effects. Typical values range from 5 (light blur) to 51 (strong blur). Must be an odd number for optimal blurring performance. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Blur Visualization in version v1.
- inputs:
Triangle Visualization,Morphological Transformation,Roboflow Dataset Upload,Ellipse Visualization,Detections Stitch,Detections Classes Replacement,Blur Visualization,Halo Visualization,Camera Focus,Motion Detection,Model Comparison Visualization,VLM As Detector,Keypoint Visualization,Pixelate Visualization,Image Slicer,SIFT Comparison,Roboflow Dataset Upload,Line Counter Visualization,Line Counter,Label Visualization,SIFT Comparison,QR Code Generator,Keypoint Detection Model,Dynamic Zone,Distance Measurement,Email Notification,SAM 3,Slack Notification,Detections Stabilizer,Object Detection Model,Path Deviation,Corner Visualization,Image Slicer,EasyOCR,SAM 3,Object Detection Model,Detections Combine,Bounding Box Visualization,Keypoint Detection Model,Background Subtraction,Pixel Color Count,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Polygon Visualization,Image Blur,VLM As Classifier,Time in Zone,Relative Static Crop,Detections Merge,Heatmap Visualization,Mask Visualization,Image Preprocessing,Byte Tracker,Twilio SMS Notification,VLM As Detector,PTZ Tracking (ONVIF),Instance Segmentation Model,Detections Transformation,OCR Model,SIFT,Stitch Images,Stability AI Outpainting,Moondream2,Dynamic Crop,Model Monitoring Inference Aggregator,Circle Visualization,Detection Offset,Byte Tracker,Color Visualization,Trace Visualization,YOLO-World Model,Icon Visualization,Dot Visualization,Time in Zone,Email Notification,Velocity,Instance Segmentation Model,Path Deviation,Camera Focus,Twilio SMS/MMS Notification,Depth Estimation,Contrast Equalization,Roboflow Custom Metadata,Segment Anything 2 Model,Grid Visualization,Text Display,Bounding Rectangle,Reference Path Visualization,Image Threshold,Detections Filter,Perspective Correction,Image Contours,Polygon Zone Visualization,Detection Event Log,Polygon Visualization,Local File Sink,Identify Changes,Halo Visualization,JSON Parser,Byte Tracker,Google Vision OCR,Stability AI Inpainting,Time in Zone,SAM 3,Identify Outliers,Crop Visualization,Detections Consensus,Template Matching,Detections List Roll-Up,Webhook Sink,Absolute Static Crop,Classification Label Visualization,VLM As Classifier,Overlap Filter,Seg Preview,Line Counter,Gaze Detection,Stability AI Image Generation
- outputs:
Triangle Visualization,Detections Stitch,Roboflow Dataset Upload,Ellipse Visualization,Morphological Transformation,LMM,Florence-2 Model,Blur Visualization,CLIP Embedding Model,Anthropic Claude,Halo Visualization,Google Gemini,Camera Focus,Llama 3.2 Vision,Motion Detection,Model Comparison Visualization,VLM As Detector,Keypoint Visualization,Qwen3-VL,Pixelate Visualization,Image Slicer,Roboflow Dataset Upload,Line Counter Visualization,Keypoint Detection Model,SmolVLM2,Label Visualization,SIFT Comparison,Clip Comparison,Buffer,SAM 3,Detections Stabilizer,Object Detection Model,EasyOCR,Image Slicer,Corner Visualization,Florence-2 Model,Object Detection Model,SAM 3,QR Code Detection,Anthropic Claude,OpenAI,Google Gemini,Perception Encoder Embedding Model,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Pixel Color Count,Background Subtraction,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Polygon Visualization,Image Blur,VLM As Classifier,Relative Static Crop,Clip Comparison,Heatmap Visualization,CogVLM,Mask Visualization,Image Preprocessing,VLM As Detector,Instance Segmentation Model,OpenAI,OCR Model,SIFT,Stitch Images,Single-Label Classification Model,Moondream2,Stability AI Outpainting,Dynamic Crop,Circle Visualization,Byte Tracker,OpenAI,Trace Visualization,Color Visualization,Qwen2.5-VL,Icon Visualization,YOLO-World Model,Dot Visualization,Time in Zone,Email Notification,Instance Segmentation Model,Camera Focus,Twilio SMS/MMS Notification,Contrast Equalization,Depth Estimation,Segment Anything 2 Model,LMM For Classification,Text Display,Dominant Color,Reference Path Visualization,Image Threshold,Perspective Correction,Multi-Label Classification Model,Image Contours,Polygon Zone Visualization,Polygon Visualization,Halo Visualization,Google Vision OCR,SAM 3,Stability AI Inpainting,Crop Visualization,Template Matching,Google Gemini,Barcode Detection,VLM As Classifier,Classification Label Visualization,Seg Preview,OpenAI,Absolute Static Crop,Multi-Label Classification Model,Single-Label Classification Model,Gaze Detection,Stability AI Image Generation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Blur Visualization in version v1 has.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Model predictions to visualize.
  - kernel_size (integer): Size of the blur kernel used for average pooling. Larger values create stronger blur effects, making objects more obscured. Smaller values create subtle blur effects. Typical values range from 5 (light blur) to 51 (strong blur). Must be an odd number for optimal blurring performance.
- output
  - image (image): Image in workflows.
Example JSON definition of step Blur Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/blur_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.object_detection_model.predictions",
"kernel_size": 15
}
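For context, the step definition above sits inside a larger workflow specification. The sketch below embeds it in a minimal spec as a Python dict and serializes it to JSON; the surrounding structure (version, inputs, outputs, JsonField selectors) follows the general Workflows pattern, and the input and output names are assumptions for the example:

```python
import json

# A minimal, hypothetical workflow specification embedding the step above.
# The detection step named "object_detection_model" is assumed to be
# defined elsewhere in the "steps" list.
spec = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "blur",
            "type": "roboflow_core/blur_visualization@v1",
            "image": "$inputs.image",
            "copy_image": True,
            "predictions": "$steps.object_detection_model.predictions",
            "kernel_size": 15,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "blurred_image",
            "selector": "$steps.blur.image",  # expose the block's image output
        }
    ],
}
print(json.dumps(spec, indent=2))
```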