
Mask Area Measurement

Class: MaskAreaMeasurementBlockV1

Source: inference.core.workflows.core_steps.classical_cv.mask_area_measurement.v1.MaskAreaMeasurementBlockV1

Measure the area of detected objects. For instance segmentation masks, the area is computed by counting non-zero mask pixels (correctly handling holes). For bounding-box-only detections, the area is width multiplied by height. Optionally converts pixel areas to real-world units using a pixels_per_unit calibration value.

How This Block Works

This block calculates the area of each detected object and stores two values per detection:

  • area_px — area in square pixels (always computed)
  • area_converted — area in real-world units: area_px / (pixels_per_unit ** 2) (equals area_px when pixels_per_unit is 1.0)

Both values are attached to each detection and included in the serialized JSON output. The block returns the input detections with these fields added, so downstream blocks (e.g., label visualization) can display the area values.
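
The snippet below shows how a downstream step might read these fields. It is a minimal sketch assuming the block follows the usual supervision convention of storing custom per-detection fields in the sv.Detections data dictionary; the example values mirror the calibration example later on this page.

import numpy as np
import supervision as sv

# Stand-in for the block's output: detections enriched with per-object areas.
# (Assumption: the fields live in the sv.Detections `data` dictionary.)
detections = sv.Detections(xyxy=np.array([[10.0, 10.0, 140.0, 140.0]]))
detections.data["area_px"] = np.array([16900.0])
detections.data["area_converted"] = np.array([1.0])  # with pixels_per_unit = 130

for px, converted in zip(detections.data["area_px"], detections.data["area_converted"]):
    print(f"area_px={px:.0f} px^2, area_converted={converted:.2f} units^2")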

Area Computation

The block operates in two modes depending on the type of predictions it receives:

  1. Mask Pixel Area (Instance Segmentation): When the input detections include segmentation masks, the block counts the non-zero pixels in each mask using cv2.countNonZero. This correctly handles masks with holes — hole pixels are zero and are excluded from the count.

  2. Bounding Box Area (Object Detection): When no segmentation mask is available, the block falls back to computing the area as the bounding box width multiplied by height (w * h), as sketched below.
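
A minimal sketch of both modes (illustrative only; the helper name and exact mask handling are assumptions, not the block's actual source):

import cv2
import numpy as np

def measure_area(mask, xyxy) -> float:
    """Return an object's area in square pixels (hypothetical helper)."""
    if mask is not None:
        # Mode 1: count non-zero mask pixels; hole pixels are zero, so they are excluded.
        return float(cv2.countNonZero(mask.astype(np.uint8)))
    # Mode 2: no mask available, fall back to bounding-box area (w * h).
    x1, y1, x2, y2 = xyxy
    return float((x2 - x1) * (y2 - y1))

# A 100x100 mask with a 20x20 hole: 10000 - 400 = 9600 pixels.
mask = np.ones((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 0
print(measure_area(mask, np.array([0, 0, 100, 100])))    # 9600.0
print(measure_area(None, np.array([10, 10, 140, 140])))  # 16900.0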

Unit Conversion

Set the pixels_per_unit input to convert pixel areas to real-world units (e.g., cm², in², mm²). Because area is two-dimensional, the conversion squares the ratio:

area_converted = area_px / (pixels_per_unit ** 2)

For example, if your calibration is 130 pixels/cm, a detection with area_px = 16900 would have area_converted = 16900 / (130 ** 2) = 16900 / 16900 = 1.0 cm².

How to determine pixels_per_unit: Place an object of known size in the camera's field of view (e.g., a ruler or calibration target). Measure its length in pixels in the image and divide by its real-world length. For instance, if a 10 cm reference object spans 1300 pixels, then pixels_per_cm = 1300 / 10 = 130. If you are using perspective correction, the calibration object must be placed on the same plane from which the perspective correction was calculated.
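
The calibration arithmetic from the example above, as a runnable sketch:

pixels_per_cm = 1300 / 10   # a 10 cm reference object spans 1300 pixels -> 130.0 px/cm

area_px = 16900.0           # measured pixel area of a detection
area_cm2 = area_px / (pixels_per_cm ** 2)   # square the ratio: area is two-dimensional
print(area_cm2)             # 1.0 (cm^2)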

Common Use Cases

  • Size-Based Filtering: Filter out small noise detections by chaining with a filtering block to keep only detections above a minimum area threshold (see the sketch after this list).
  • Quality Control: Verify that manufactured components meet size specifications by comparing measured areas against expected ranges.
  • Agricultural Analysis: Measure leaf area, crop coverage, or canopy extent from aerial or close-up imagery.
  • Medical Imaging: Quantify the area of wounds, lesions, or anatomical structures. Use pixels_per_unit to get real-world measurements for clinical documentation.
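
A sketch of size-based filtering in plain Python, assuming (as above) that the measured areas live in the detections' data dictionary; inside a workflow you would typically chain a filtering block instead:

import numpy as np
import supervision as sv

def filter_by_min_area(detections: sv.Detections, min_area_px: float) -> sv.Detections:
    """Drop detections smaller than the threshold (hypothetical helper)."""
    areas = np.asarray(detections.data["area_px"], dtype=float)
    # Boolean indexing on sv.Detections also filters the custom data fields.
    return detections[areas >= min_area_px]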

Connecting to Other Blocks

  • Upstream -- Detection and Segmentation Models: Connect the output of an object detection or instance segmentation model to the predictions input. Instance segmentation models (which produce masks) yield more accurate area measurements than bounding-box-only detections. A minimal chained-step sketch follows this list.
  • Upstream -- Camera Calibration Block: Use roboflow_core/camera_calibration@v1 upstream to correct lens distortion before detection.
  • Upstream -- Perspective Correction Block: Use roboflow_core/perspective_correction@v1 upstream to transform angled images to a top-down view so that area measurements reflect true object footprints.
  • Downstream -- Visualization: Pass the output predictions to label or polygon visualization blocks. The area_px and area_converted fields are available for display as labels.
  • Downstream -- Filtering Blocks: Use the enriched detections with a filtering block to keep only detections whose area meets a threshold.
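
A minimal sketch of such a chain as workflow JSON. The segmentation model step is illustrative; its type identifier and fields may differ in your inference version:

{
    "steps": [
        {
            "type": "roboflow_core/roboflow_instance_segmentation_model@v2",
            "name": "model",
            "images": "$inputs.image",
            "model_id": "<your_model_id>"
        },
        {
            "type": "roboflow_core/mask_area_measurement@v1",
            "name": "area_measurement",
            "predictions": "$steps.model.predictions",
            "pixels_per_unit": 130.0
        }
    ]
}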

Requirements

This block requires detection predictions from an object detection or instance segmentation model. No additional environment variables, API keys, or external dependencies are needed beyond OpenCV and NumPy (included with inference). For the most accurate area measurements, use instance segmentation models that produce per-object masks.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/mask_area_measurement@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step. -
pixels_per_unit float Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as area_px / (pixels_per_unit ** 2). Default 1.0 means no conversion (area_converted equals area_px). ✓

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Mask Area Measurement in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Mask Area Measurement in version v1 has.

Bindings
  • input

    • predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Detection predictions to measure areas for.
    • pixels_per_unit (float): Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as area_px / (pixels_per_unit ** 2). Default 1.0 means no conversion (area_converted equals area_px).
  • output

    • predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions in the form of an sv.Detections(...) object: bounding boxes only for object_detection_prediction, or bounding boxes plus segmentation masks for instance_segmentation_prediction.
Example JSON definition of step Mask Area Measurement in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/mask_area_measurement@v1",
    "predictions": "$steps.model.predictions",
    "pixels_per_unit": 1.0
}
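
Downstream steps can then reference the enriched detections with the standard selector syntax, e.g. $steps.<your_step_name_here>.predictions.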