
Background Subtraction

Class: BackgroundSubtractionBlockV1

Source: inference.core.workflows.core_steps.classical_cv.background_subtraction.v1.BackgroundSubtractionBlockV1

Create motion masks from video streams using OpenCV's background subtraction algorithm.

How This Block Works

This block uses background subtraction (specifically the MOG2 algorithm) to identify pixels that differ from a learned background model and outputs a mask image highlighting motion areas. The block maintains state across frames to build and update the background model:

  1. Initializes background model - on the first frame, creates a background subtractor using the specified history and threshold parameters
  2. Processes each frame - applies background subtraction to identify pixels that differ from the learned background model
  3. Creates motion mask - generates a foreground mask where white pixels represent motion areas and black pixels represent the background
  4. Converts to image format - converts the single-channel mask to a 3-channel image format required by workflows
  5. Returns mask image - outputs the motion mask as an image that can be visualized or processed further

The output mask image shows motion areas as white pixels against a black background, making it easy to visualize where motion occurred in the frame. This mask can be used for further analysis, visualization, or as input to other processing steps.
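For intuition, the following is a minimal OpenCV sketch of the same pipeline outside of a workflow. The video path is a placeholder, and detectShadows=False is an assumption made here to keep the mask strictly binary; the block's actual shadow handling is not documented on this page.

import cv2

# 1. Initialize the background model with the block's default parameters.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=30,          # frames used to build the background model
    varThreshold=16,     # squared Mahalanobis distance threshold
    detectShadows=False  # assumption: keep the mask strictly black/white
)

cap = cv2.VideoCapture("video.mp4")  # placeholder input path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 2-3. Update the model and get a single-channel foreground mask
    # (white pixels = motion, black pixels = background).
    mask = subtractor.apply(frame)
    # 4. Convert the mask to the 3-channel image format workflows expect.
    mask_bgr = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    # 5. The mask image can now be visualized or processed further.
    cv2.imshow("motion mask", mask_bgr)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()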

Common Use Cases

  • Motion Visualization: Create visual motion masks to see where movement occurs in video streams for monitoring, analysis, or debugging purposes
  • Preprocessing for Motion Models: Generate motion masks as input data for training or inference with motion-based models that require mask data
  • Motion Area Extraction: Extract regions of motion from video frames for further processing, analysis, or feature extraction
  • Video Analysis: Analyze motion patterns by processing mask images to identify movement trends, activity levels, or motion characteristics
  • Background Removal: Use motion masks to separate foreground (moving) objects from static background for segmentation or isolation tasks
  • Motion-based Filtering: Use motion masks to filter or focus processing on areas where motion occurs, ignoring static background regions

Connecting to Other Blocks

The motion mask image from this block can be connected to the following blocks (a wiring sketch follows the list):

  • Visualization blocks to display the motion mask overlaid on original images or as standalone visualizations
  • Object detection blocks to run detection models only on motion regions identified by the mask
  • Image processing blocks to apply additional transformations, filters, or analysis to motion mask images
  • Data storage blocks (e.g., Local File Sink, Roboflow Dataset Upload) to save motion masks for training data, analysis, or documentation
  • Conditional logic blocks to route workflow execution based on the presence or absence of motion in mask images
  • Model training blocks to use motion masks as training data for motion-based models or segmentation tasks
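As a rough sketch of the wiring, a downstream step consumes the mask by referencing this step's output in its own image field. The downstream step name and block type below are placeholders, not real identifiers:

{
    "name": "<downstream_step_name_here>",
    "type": "<downstream_block_type_here>",
    "image": "$steps.<your_step_name_here>.image"
}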

Type identifier

Use the following identifier in step "type" field: roboflow_core/background_subtraction@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step.
threshold int Threshold value for the squared Mahalanobis distance used by the MOG2 background subtraction algorithm. Controls sensitivity to motion: smaller values increase sensitivity (detect smaller changes) but may produce more false positives; larger values decrease sensitivity (only detect significant changes) but may miss subtle motion. Recommended range is 8-32. Default is 16.
history int Number of previous frames used to build the background model. Controls how quickly the background adapts to changes: larger values (e.g., 50-100) create a more stable background model that's less sensitive to temporary changes but adapts slowly to permanent background changes; smaller values (e.g., 10-20) allow faster adaptation but may treat moving objects as background if they stop moving. Default is 30 frames.

The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
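For example, the threshold property can be bound to a workflow input instead of a hard-coded literal; the input name motion_threshold below is arbitrary:

{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/background_subtraction@v1",
    "image": "$inputs.image",
    "threshold": "$inputs.motion_threshold",
    "history": 30
}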

Available Connections

Compatible Blocks

Check what blocks you can connect to Background Subtraction in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Background Subtraction in version v1 has.

Bindings
  • input

    • image (image): The input image or video frame to process for background subtraction. The block processes frames sequentially to build a background model: each frame updates the background model and creates a motion mask showing areas that differ from the learned background. Can be connected from workflow inputs or previous steps.
    • threshold (integer): Threshold value for the squared Mahalanobis distance used by the MOG2 background subtraction algorithm. Controls sensitivity to motion: smaller values increase sensitivity (detect smaller changes) but may produce more false positives; larger values decrease sensitivity (only detect significant changes) but may miss subtle motion. Recommended range is 8-32. Default is 16.
    • history (integer): Number of previous frames used to build the background model. Controls how quickly the background adapts to changes: larger values (e.g., 50-100) create a more stable background model that's less sensitive to temporary changes but adapts slowly to permanent background changes; smaller values (e.g., 10-20) allow faster adaptation but may treat moving objects as background if they stop moving. Default is 30 frames.
  • output

    • image (image): The motion mask image, where white pixels mark motion areas and black pixels mark the learned background.
Example JSON definition of step Background Subtraction in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/background_subtraction@v1",
    "image": "$inputs.image",
    "threshold": 16,
    "history": 30
}