Mask Area Measurement¶
Class: MaskAreaMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.mask_area_measurement.v1.MaskAreaMeasurementBlockV1
Measure the area of detected objects. For instance segmentation masks, the area is computed by counting non-zero mask pixels (correctly handling holes). For bounding-box-only detections, the area is width multiplied by height. Optionally converts pixel areas to real-world units using a pixels_per_unit calibration value.
How This Block Works¶
This block calculates the area of each detected object and stores two values per detection:
- `area_px`: area in square pixels (always computed)
- `area_converted`: area in real-world units, `area_px / (pixels_per_unit ** 2)` (equals `area_px` when `pixels_per_unit` is 1.0)
Both values are attached to each detection and included in the serialized JSON output. The block returns the input detections with these fields added, so downstream blocks (e.g., label visualization) can display the area values.
Area Computation¶
The block operates in two modes depending on the type of predictions it receives:
- Mask Pixel Area (Instance Segmentation): When the input detections include segmentation masks, the block counts the non-zero pixels in each mask using `cv2.countNonZero`. This correctly handles masks with holes: hole pixels are zero, so they are excluded from the count.
- Bounding Box Area (Object Detection): When no segmentation mask is available, the block falls back to computing the area as the bounding box width multiplied by height (`w * h`).
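The two modes can be sketched in a few lines. This is a minimal illustration, not the block's actual implementation; `np.count_nonzero` stands in for the `cv2.countNonZero` call the block uses, and both exclude zero-valued hole pixels:

```python
import numpy as np

def detection_area_px(mask, box):
    """Return the area in square pixels for a single detection.

    mask: binary (H x W) array, or None for box-only detections.
    box: (x_min, y_min, x_max, y_max).
    """
    if mask is not None:
        # Mask mode: count non-zero pixels (holes are zero, so not counted).
        return float(np.count_nonzero(mask))
    # Fallback mode: bounding box width * height.
    x_min, y_min, x_max, y_max = box
    return float((x_max - x_min) * (y_max - y_min))

# A 5x5 mask with a one-pixel hole: 25 - 1 = 24 px.
mask = np.ones((5, 5), dtype=np.uint8)
mask[2, 2] = 0
print(detection_area_px(mask, (0, 0, 5, 5)))      # 24.0
print(detection_area_px(None, (10, 10, 30, 25)))  # 300.0
```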
Unit Conversion¶
Set the `pixels_per_unit` input to convert pixel areas to real-world units (e.g., cm², in², mm²). Because area is two-dimensional, the conversion squares the ratio:
`area_converted = area_px / (pixels_per_unit ** 2)`
For example, with a calibration of 130 pixels/cm, a detection with `area_px = 16900` has `area_converted = 16900 / (130 ** 2) = 1.0` cm².
How to determine `pixels_per_unit`: Place an object of known size in the camera's field of view (e.g., a ruler or calibration target). Measure its length in pixels in the image and divide by its real-world length. For instance, if a 10 cm reference object spans 1300 pixels, then `pixels_per_cm = 1300 / 10 = 130`. If you are using perspective correction, the calibration object must be placed on the same plane from which the perspective correction was calculated.
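The calibration arithmetic can be checked in a few lines, using the numbers from the worked example above:

```python
# Calibration: a 10 cm reference object spans 1300 pixels in the image.
reference_span_px = 1300.0
reference_length_cm = 10.0
pixels_per_cm = reference_span_px / reference_length_cm   # 130.0

# Area is two-dimensional, so the conversion squares the ratio.
area_px = 16900.0
area_cm2 = area_px / (pixels_per_cm ** 2)                 # 1.0
print(pixels_per_cm, area_cm2)
```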
Common Use Cases¶
- Size-Based Filtering: Filter out small noise detections by chaining with a filtering block to keep only detections above a minimum area threshold.
- Quality Control: Verify that manufactured components meet size specifications by comparing measured areas against expected ranges.
- Agricultural Analysis: Measure leaf area, crop coverage, or canopy extent from aerial or close-up imagery.
- Medical Imaging: Quantify the area of wounds, lesions, or anatomical structures. Use `pixels_per_unit` to get real-world measurements for clinical documentation.
Connecting to Other Blocks¶
- Upstream -- Detection and Segmentation Models: Connect the output of an object detection or instance segmentation model to the `predictions` input. Instance segmentation models (which produce masks) yield more accurate area measurements than bounding-box-only detections.
- Upstream -- Camera Calibration Block: Use `roboflow_core/camera_calibration@v1` upstream to correct lens distortion before detection.
- Upstream -- Perspective Correction Block: Use `roboflow_core/perspective_correction@v1` upstream to transform angled images to a top-down view so that area measurements reflect true object footprints.
- Downstream -- Visualization: Pass the output `predictions` to label or polygon visualization blocks. The `area_px` and `area_converted` fields are available for display as labels.
- Downstream -- Filtering Blocks: Use the enriched detections with a filtering block to keep only detections whose area meets a threshold.
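As an illustration of the model-to-measurement-to-visualization chain described above, a workflow fragment might look like the following. The step names are placeholders, `model_id` is hypothetical, and the model and visualization type identifiers should be verified against your inference version:

```json
{
  "steps": [
    {
      "type": "roboflow_core/roboflow_instance_segmentation_model@v2",
      "name": "model",
      "image": "$inputs.image",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/mask_area_measurement@v1",
      "name": "area",
      "predictions": "$steps.model.predictions",
      "pixels_per_unit": 130.0
    },
    {
      "type": "roboflow_core/label_visualization@v1",
      "name": "labels",
      "image": "$inputs.image",
      "predictions": "$steps.area.predictions"
    }
  ]
}
```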
Requirements¶
This block requires detection predictions from an object detection or instance segmentation model. No additional environment variables, API keys, or external dependencies are needed beyond OpenCV and NumPy (included with inference). For the most accurate area measurements, use instance segmentation models that produce per-object masks.
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/mask_area_measurement@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `pixels_per_unit` | `float` | Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as `area_px / (pixels_per_unit ** 2)`. Default 1.0 means no conversion (`area_converted` equals `area_px`). | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Mask Area Measurement in version v1.
- inputs: Instance Segmentation Model, Dynamic Crop, Dynamic Zone, Segment Anything 2 Model, Path Deviation, Detection Offset, Byte Tracker, VLM As Detector, Mask Area Measurement, Time in Zone, Camera Focus, Object Detection Model, Detections Combine, Moondream2, Google Vision OCR, VLM As Detector, Camera Focus, Detections Consensus, OC-SORT Tracker, Template Matching, Detections Classes Replacement, Overlap Filter, Cosine Similarity, Detections List Roll-Up, SAM 3, ByteTrack Tracker, SAM 3, Bounding Rectangle, Perspective Correction, Seg Preview, Detections Stitch, Detections Transformation, Detection Event Log, Velocity, Motion Detection, OCR Model, Identify Changes, Detections Stabilizer, Byte Tracker, Gaze Detection, Line Counter, Instance Segmentation Model, Object Detection Model, Byte Tracker, Time in Zone, Detections Merge, YOLO-World Model, PTZ Tracking (ONVIF), SAM 3, Time in Zone, SORT Tracker, EasyOCR, Path Deviation, Detections Filter
- outputs: Roboflow Dataset Upload, Byte Tracker, Detection Offset, Blur Visualization, Time in Zone, Detections Combine, Triangle Visualization, Stitch OCR Detections, Heatmap Visualization, Camera Focus, Polygon Visualization, OC-SORT Tracker, Size Measurement, Icon Visualization, Halo Visualization, Detections Classes Replacement, Background Color Visualization, Overlap Filter, Florence-2 Model, Halo Visualization, Detections List Roll-Up, Circle Visualization, Ellipse Visualization, Detections Stitch, Detections Transformation, Line Counter, Detections Stabilizer, Florence-2 Model, Line Counter, Distance Measurement, Byte Tracker, Stitch OCR Detections, Time in Zone, Detections Merge, Dot Visualization, PTZ Tracking (ONVIF), Time in Zone, Polygon Visualization, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, SORT Tracker, Path Deviation, Pixelate Visualization, Model Comparison Visualization, Dynamic Crop, Dynamic Zone, Segment Anything 2 Model, Path Deviation, Corner Visualization, Color Visualization, Mask Area Measurement, Crop Visualization, Detections Consensus, Bounding Box Visualization, Mask Visualization, ByteTrack Tracker, Trace Visualization, Label Visualization, Bounding Rectangle, Perspective Correction, Velocity, Detection Event Log, Byte Tracker, Roboflow Dataset Upload, Roboflow Vision Events, Stability AI Inpainting, Detections Filter
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Mask Area Measurement in version v1 has.
Bindings
- input
  - `predictions` (`Union[instance_segmentation_prediction, object_detection_prediction]`): Detection predictions to measure areas for.
  - `pixels_per_unit` (`float`): Number of pixels per real-world unit of length (e.g., pixels per cm). The converted area is computed as `area_px / (pixels_per_unit ** 2)`. Default 1.0 means no conversion (`area_converted` equals `area_px`).
- output
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction]`): Prediction with detected bounding boxes in form of `sv.Detections(...)` object if `object_detection_prediction`, or prediction with detected bounding boxes and segmentation masks in form of `sv.Detections(...)` object if `instance_segmentation_prediction`.
Example JSON definition of step Mask Area Measurement in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/mask_area_measurement@v1",
"predictions": "$steps.model.predictions",
"pixels_per_unit": 1.0
}