Background Subtraction¶
Class: BackgroundSubtractionBlockV1
Create motion masks from video streams using OpenCV's background subtraction algorithm.
How This Block Works¶
This block uses background subtraction (specifically the MOG2 algorithm) to identify pixels that differ from a learned background model and outputs a mask image highlighting motion areas. The block maintains state across frames to build and update the background model:
- Initializes background model - on the first frame, creates a background subtractor using the specified history and threshold parameters
- Processes each frame - applies background subtraction to identify pixels that differ from the learned background model
- Creates motion mask - generates a foreground mask where white pixels represent motion areas and black pixels represent the background
- Converts to image format - converts the single-channel mask to a 3-channel image format required by workflows
- Returns mask image - outputs the motion mask as an image that can be visualized or processed further
The output mask image shows motion areas as white pixels against a black background, making it easy to visualize where motion occurred in the frame. This mask can be used for further analysis, visualization, or as input to other processing steps.
Common Use Cases¶
- Motion Visualization: Create visual motion masks to see where movement occurs in video streams for monitoring, analysis, or debugging purposes
- Preprocessing for Motion Models: Generate motion masks as input data for training or inference with motion-based models that require mask data
- Motion Area Extraction: Extract regions of motion from video frames for further processing, analysis, or feature extraction
- Video Analysis: Analyze motion patterns by processing mask images to identify movement trends, activity levels, or motion characteristics
- Background Removal: Use motion masks to separate foreground (moving) objects from static background for segmentation or isolation tasks
- Motion-based Filtering: Use motion masks to filter or focus processing on areas where motion occurs, ignoring static background regions
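As a concrete example of motion-based filtering, downstream processing can be gated on how much of the mask is white. The `motion_ratio` helper and the 1% activity threshold below are illustrative choices, not part of the block:

```python
import numpy as np

def motion_ratio(mask: np.ndarray) -> float:
    """Fraction of pixels flagged as motion in a 3-channel mask."""
    gray = mask[:, :, 0]             # all three channels are identical in the mask
    return float(np.count_nonzero(gray > 0)) / gray.size

mask = np.zeros((100, 100, 3), dtype=np.uint8)
mask[20:40, 20:40] = 255             # 400 of 10_000 pixels in motion -> ratio 0.04
if motion_ratio(mask) > 0.01:        # hypothetical 1% activity threshold
    pass                             # run detection / alerting only when motion is present
```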
Connecting to Other Blocks¶
The motion mask image from this block can be connected to:
- Visualization blocks to display the motion mask overlayed on original images or as standalone visualizations
- Object detection blocks to run detection models only on motion regions identified by the mask
- Image processing blocks to apply additional transformations, filters, or analysis to motion mask images
- Data storage blocks (e.g., Local File Sink, Roboflow Dataset Upload) to save motion masks for training data, analysis, or documentation
- Conditional logic blocks to route workflow execution based on the presence or absence of motion in mask images
- Model training blocks to use motion masks as training data for motion-based models or segmentation tasks
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/background_subtraction@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ✅ |
| threshold | int | Threshold value for the squared Mahalanobis distance used by the MOG2 background subtraction algorithm. Controls sensitivity to motion: smaller values increase sensitivity (detect smaller changes) but may produce more false positives; larger values decrease sensitivity (only detect significant changes) but may miss subtle motion. Recommended range is 8-32. Default is 16. | ✅ |
| history | int | Number of previous frames used to build the background model. Controls how quickly the background adapts to changes: larger values (e.g., 50-100) create a more stable background model that's less sensitive to temporary changes but adapts slowly to permanent background changes. Smaller values (e.g., 10-20) allow faster adaptation but may treat moving objects as background if they stop moving. Default is 30 frames. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Background Subtraction in version v1.
- inputs:
Icon Visualization,Stability AI Inpainting,Triangle Visualization,Background Color Visualization,Pixel Color Count,Depth Estimation,Label Visualization,Line Counter,Model Comparison Visualization,Dot Visualization,Image Blur,Camera Calibration,Polygon Zone Visualization,SIFT,Stitch Images,Trace Visualization,Contrast Equalization,Morphological Transformation,Corner Visualization,Perspective Correction,Halo Visualization,Stability AI Image Generation,Reference Path Visualization,Relative Static Crop,Dynamic Crop,Line Counter Visualization,Image Slicer,Image Threshold,SIFT Comparison,Keypoint Visualization,Template Matching,Absolute Static Crop,Heatmap Visualization,Blur Visualization,Circle Visualization,Text Display,Camera Focus,Crop Visualization,Grid Visualization,Classification Label Visualization,Polygon Visualization,Camera Focus,Polygon Visualization,Image Slicer,Image Preprocessing,Color Visualization,Line Counter,SIFT Comparison,Bounding Box Visualization,Image Contours,Halo Visualization,Stability AI Outpainting,Detection Event Log,Background Subtraction,Image Convert Grayscale,QR Code Generator,Pixelate Visualization,Ellipse Visualization,Distance Measurement,Mask Visualization
- outputs:
Icon Visualization,Moondream2,Instance Segmentation Model,Label Visualization,Multi-Label Classification Model,Dot Visualization,Camera Calibration,Trace Visualization,SAM 3,Time in Zone,Semantic Segmentation Model,Relative Static Crop,Image Threshold,Keypoint Visualization,Template Matching,Single-Label Classification Model,Blur Visualization,Circle Visualization,Keypoint Detection Model,Crop Visualization,Email Notification,Classification Label Visualization,OpenAI,Google Gemini,OpenAI,YOLO-World Model,Twilio SMS/MMS Notification,Keypoint Detection Model,Anthropic Claude,Google Gemini,Image Convert Grayscale,Stability AI Inpainting,Depth Estimation,Model Comparison Visualization,SAM2 Video Tracker,Motion Detection,Google Gemini,CLIP Embedding Model,Florence-2 Model,LMM,Dynamic Crop,SAM 3,Barcode Detection,Clip Comparison,Perception Encoder Embedding Model,Detections Stabilizer,Byte Tracker,VLM As Detector,Multi-Label Classification Model,Polygon Visualization,Mask Visualization,Semantic Segmentation Model,Seg Preview,Image Slicer,SORT Tracker,OC-SORT Tracker,OCR Model,Qwen3-VL,Object Detection Model,Object Detection Model,GLM-OCR,Roboflow Dataset Upload,Object Detection Model,Polygon Zone Visualization,Detections Stitch,SIFT,Morphological Transformation,Instance Segmentation Model,Perspective Correction,Anthropic Claude,Image Slicer,Absolute Static Crop,SmolVLM2,EasyOCR,Camera Focus,SAM 3,Gaze Detection,Polygon Visualization,OpenAI,VLM As Classifier,Color Visualization,Image Contours,Roboflow Vision Events,OpenAI,Single-Label Classification Model,Llama 3.2 Vision,VLM As Detector,Instance Segmentation Model,Clip Comparison,LMM For Classification,Triangle Visualization,Pixel Color Count,Background Color Visualization,Qwen3.5-VL,CogVLM,Qwen2.5-VL,Image Blur,Anthropic Claude,Stitch Images,Dominant Color,Contrast Equalization,Corner Visualization,Stability AI Image Generation,Halo Visualization,Reference Path Visualization,QR Code Detection,Buffer,Line Counter Visualization,ByteTrack Tracker,Multi-Label Classification Model,Keypoint Detection Model,Roboflow Dataset Upload,Heatmap Visualization,Text Display,VLM As Classifier,Segment Anything 2 Model,Camera Focus,Single-Label Classification Model,Image Preprocessing,SIFT Comparison,Stability AI Outpainting,Bounding Box Visualization,Halo Visualization,Background Subtraction,Pixelate Visualization,Google Vision OCR,Ellipse Visualization,Florence-2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Background Subtraction in version v1 has.
Bindings
- input
    - image (image): The input image or video frame to process for background subtraction. The block processes frames sequentially to build a background model - each frame updates the background model and creates a motion mask showing areas that differ from the learned background. Can be connected from workflow inputs or previous steps.
    - threshold (integer): Threshold value for the squared Mahalanobis distance used by the MOG2 background subtraction algorithm. Controls sensitivity to motion: smaller values increase sensitivity (detect smaller changes) but may produce more false positives; larger values decrease sensitivity (only detect significant changes) but may miss subtle motion. Recommended range is 8-32. Default is 16.
    - history (integer): Number of previous frames used to build the background model. Controls how quickly the background adapts to changes: larger values (e.g., 50-100) create a more stable background model that's less sensitive to temporary changes but adapts slowly to permanent background changes. Smaller values (e.g., 10-20) allow faster adaptation but may treat moving objects as background if they stop moving. Default is 30 frames.
- output
    - image (image): Image in workflows.
Example JSON definition of step Background Subtraction in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/background_subtraction@v1",
  "image": "$inputs.image",
  "threshold": 16,
  "history": 30
}
```
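When building workflow definitions programmatically, the same step can be expressed as a plain Python dict and serialized. This is just the JSON example above in Python form; the step name is illustrative:

```python
import json

# The same step definition as the JSON example; only documented fields are used.
step = {
    "name": "motion_mask",     # illustrative step name
    "type": "roboflow_core/background_subtraction@v1",
    "image": "$inputs.image",
    "threshold": 16,
    "history": 30,
}
serialized = json.dumps(step)
```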