Mask Area Measurement¶
Class: MaskAreaMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.mask_area_measurement.v1.MaskAreaMeasurementBlockV1
Measure the area of detected objects. For instance segmentation masks, the area is computed by counting non-zero mask pixels (correctly handling holes). For bounding-box-only detections, the area is width multiplied by height. Optionally converts pixel areas to real-world units using a pixels_per_unit calibration value.
How This Block Works¶
This block calculates the area of each detected object and stores two values per detection:
- `area_px` — area in square pixels (always computed)
- `area_converted` — area in real-world units: `area_px / (pixels_per_unit ** 2)` (equals `area_px` when `pixels_per_unit` is 1.0)
Both values are attached to each detection and included in the serialized JSON output. The block returns the input detections with these fields added, so downstream blocks (e.g., label visualization) can display the area values.
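For illustration, the enriched detections can be inspected downstream like this. This is a minimal sketch: the `area_px` and `area_converted` key names come from this page, and the sketch assumes the values live in the `data` dictionary of the `sv.Detections` object.

```python
import supervision as sv

def print_areas(detections: sv.Detections) -> None:
    # Assumes the block stored its measurements under the "area_px" and
    # "area_converted" keys of the detections' data dictionary.
    for i in range(len(detections)):
        area_px = detections.data["area_px"][i]
        area_converted = detections.data["area_converted"][i]
        print(f"detection {i}: {area_px:.0f} px² -> {area_converted:.2f} units²")
```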
Area Computation¶
The block operates in two modes depending on the type of predictions it receives:
- Mask Pixel Area (Instance Segmentation): When the input detections include segmentation masks, the block counts the non-zero pixels in each mask using `cv2.countNonZero`. This correctly handles masks with holes — hole pixels are zero and are excluded from the count.
- Bounding Box Area (Object Detection): When no segmentation mask is available, the block falls back to computing the area as the bounding box width multiplied by height (`w * h`). Both modes are sketched below.
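The logic of both modes can be reproduced in a few lines. This is an illustrative re-implementation, not the block's actual source:

```python
import cv2
import numpy as np

def measure_area(mask: np.ndarray | None, w: float, h: float) -> float:
    """Return the area of one detection in square pixels."""
    if mask is not None:
        # Instance segmentation: count non-zero mask pixels. Hole pixels
        # are zero, so they are correctly excluded from the count.
        return float(cv2.countNonZero(mask.astype(np.uint8)))
    # Object detection fallback: bounding box width * height.
    return w * h
```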
Unit Conversion¶
Set the pixels_per_unit input to convert pixel areas to real-world units (e.g., cm², in², mm²). Because area is two-dimensional, the conversion squares the ratio:
area_converted = area_px / (pixels_per_unit ** 2)
For example, if your calibration is 130 pixels/cm, a detection with area_px = 16900 would have area_converted = 16900 / 16900 = 1.0 cm².
How to determine pixels_per_unit: Place an object of known size in the camera's field of view (e.g., a ruler or calibration target). Measure its length in pixels in the image and divide by its real-world length. For instance, if a 10 cm reference object spans 1300 pixels, then pixels_per_cm = 1300 / 10 = 130. If you are using perspective correction, the calibration object must be placed on the same plane from which the perspective correction was calculated.
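To make the arithmetic concrete, here is the calibration from the example above carried through in Python:

```python
# A 10 cm reference object spans 1300 pixels in the image.
pixels_per_unit = 1300 / 10          # 130.0 pixels per cm

# A detection covering 16900 mask pixels (e.g., a 130 x 130 px square):
area_px = 16900
area_converted = area_px / pixels_per_unit ** 2
print(area_converted)                # 1.0 (cm²)
```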
Common Use Cases¶
- Size-Based Filtering: Filter out small noise detections by chaining with a filtering block to keep only detections above a minimum area threshold.
- Quality Control: Verify that manufactured components meet size specifications by comparing measured areas against expected ranges.
- Agricultural Analysis: Measure leaf area, crop coverage, or canopy extent from aerial or close-up imagery.
- Medical Imaging: Quantify the area of wounds, lesions, or anatomical structures. Use `pixels_per_unit` to get real-world measurements for clinical documentation.
Connecting to Other Blocks¶
- Upstream -- Detection and Segmentation Models: Connect the output of an object detection or instance segmentation model to the `predictions` input. Instance segmentation models (which produce masks) yield more accurate area measurements than bounding-box-only detections.
- Upstream -- Camera Calibration Block: Use `roboflow_core/camera_calibration@v1` upstream to correct lens distortion before detection.
- Upstream -- Perspective Correction Block: Use `roboflow_core/perspective_correction@v1` upstream to transform angled images to a top-down view so that area measurements reflect true object footprints.
- Downstream -- Visualization: Pass the output `predictions` to label or polygon visualization blocks. The `area_px` and `area_converted` fields are available for display as labels. A minimal example chain follows this list.
- Downstream -- Filtering Blocks: Use the enriched detections with a filtering block to keep only detections whose area meets a threshold.
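As an illustration, this block's output can be fed into a visualization step like so. Step names are placeholders, `$steps.model.predictions` is assumed to come from an upstream detection or segmentation model, and the label visualization type identifier should be confirmed against that block's own documentation:

```json
[
  {
    "type": "roboflow_core/mask_area_measurement@v1",
    "name": "area_measurement",
    "predictions": "$steps.model.predictions",
    "pixels_per_unit": 130.0
  },
  {
    "type": "roboflow_core/label_visualization@v1",
    "name": "labels",
    "image": "$inputs.image",
    "predictions": "$steps.area_measurement.predictions"
  }
]
```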
Requirements¶
This block requires detection predictions from an object detection or instance segmentation model. No additional environment variables, API keys, or external dependencies are needed beyond OpenCV and NumPy (included with inference). For the most accurate area measurements, use instance segmentation models that produce per-object masks.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/mask_area_measurement@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `pixels_per_unit` | `float` | Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as `area_px / (pixels_per_unit ** 2)`. Default 1.0 means no conversion (`area_converted` equals `area_px`). | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
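For example, since `pixels_per_unit` is marked ✅, it can be bound to a workflow input so calibration is supplied at runtime. The input name `pixels_per_unit` below is an arbitrary choice:

```json
{
  "name": "area_measurement",
  "type": "roboflow_core/mask_area_measurement@v1",
  "predictions": "$steps.model.predictions",
  "pixels_per_unit": "$inputs.pixels_per_unit"
}
```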
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Mask Area Measurement in version v1.
- inputs: Object Detection Model, Gaze Detection, SAM 3, Detections Stabilizer, Instance Segmentation Model, VLM As Detector, Overlap Filter, Time in Zone, Segment Anything 2 Model, Camera Focus, Bounding Rectangle, Detections Combine, PTZ Tracking (ONVIF), Detections Classes Replacement, Detections Stitch, Detections Filter, Motion Detection, Moondream2, Velocity, Dynamic Zone, Cosine Similarity, Template Matching, Detections Consensus, Identify Changes, Detections Transformation, Detection Event Log, Detections Merge, Byte Tracker, Detection Offset, EasyOCR, Google Vision OCR, Path Deviation, Detections List Roll-Up, Dynamic Crop, Line Counter, Perspective Correction, YOLO-World Model, Seg Preview, Mask Area Measurement, OCR Model
- outputs: Stitch OCR Detections, Detections Stabilizer, Heatmap Visualization, Ellipse Visualization, Distance Measurement, Overlap Filter, Time in Zone, Segment Anything 2 Model, Bounding Rectangle, Detections Combine, Roboflow Custom Metadata, PTZ Tracking (ONVIF), Detections Classes Replacement, Detections Stitch, Label Visualization, Detections Filter, Dynamic Zone, Velocity, Polygon Visualization, Size Measurement, Crop Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Transformation, Bounding Box Visualization, Detection Event Log, Blur Visualization, Dot Visualization, Corner Visualization, Detections Merge, Model Monitoring Inference Aggregator, Color Visualization, Model Comparison Visualization, Icon Visualization, Byte Tracker, Detection Offset, Pixelate Visualization, Mask Visualization, Halo Visualization, Background Color Visualization, Circle Visualization, Florence-2 Model, Dynamic Crop, Path Deviation, Detections List Roll-Up, Line Counter, Perspective Correction, Mask Area Measurement, Stability AI Inpainting, Camera Focus, Trace Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Mask Area Measurement in version v1 has.
Bindings
- input
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Detection predictions to measure areas for.
  - `pixels_per_unit` (`float`): Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as `area_px / (pixels_per_unit ** 2)`. Default 1.0 means no conversion (`area_converted` equals `area_px`).
- output
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes as an `sv.Detections(...)` object if `object_detection_prediction`, or with detected bounding boxes and segmentation masks as an `sv.Detections(...)` object if `instance_segmentation_prediction`.
Example JSON definition of step Mask Area Measurement in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/mask_area_measurement@v1",
  "predictions": "$steps.model.predictions",
  "pixels_per_unit": 1.0
}
```