Size Measurement¶
Class: SizeMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.size_measurement.v1.SizeMeasurementBlockV1
How This Block Works¶
The [Size Measurement Block](https://www.youtube.com/watch?v=FQY7TSHfZeI) calculates the dimensions of objects relative to a reference object. It uses one model to detect the reference object and another to detect the objects to measure. The block outputs the dimensions of the objects in terms of the reference object.
- Reference Object: This is the known object used as a baseline for measurements. Its dimensions are known and used to scale the measurements of other objects.
- Object to Measure: This is the object whose dimensions are being calculated. The block measures these dimensions relative to the reference object.
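Under the hood, the block scales each pixel measurement by the ratio between the reference object's known size and its detected size in pixels. The sketch below is illustrative only (the function name is ours, not the block's source):

```python
def to_real_units(object_px: float, reference_px: float, reference_real: float) -> float:
    """Convert a pixel measurement to real-world units via the reference object."""
    # real size = pixel size * (known reference size / reference size in pixels)
    return object_px * (reference_real / reference_px)
```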
Block Usage¶
To use the Size Measurement Block, follow these steps:
- Select Models: Choose a model to detect the reference object and another model to detect the objects you want to measure.
- Configure Inputs: Provide the predictions from both models as inputs to the block (see the sketch after this list).
- Set Reference Dimensions: Specify the known dimensions of the reference object in the format 'width,height' or as a tuple (width, height).
- Run the Block: Execute the block to calculate the dimensions of the detected objects relative to the reference object.
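The wiring described in the steps above amounts to pointing the block's inputs at the outputs of two detection steps. Below is a minimal workflow-specification sketch; the model IDs and step names are placeholders, and your workflow may differ:

```python
# Hypothetical workflow specification; model IDs and step names are placeholders.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Detects the reference object (e.g. a calibration card).
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "reference_model",
            "image": "$inputs.image",
            "model_id": "reference-detector/1",  # placeholder model ID
        },
        {
            # Detects the objects to measure.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_model",
            "image": "$inputs.image",
            "model_id": "package-detector/1",  # placeholder model ID
        },
        {
            # Size Measurement ties the two sets of predictions together.
            "type": "roboflow_core/size_measurement@v1",
            "name": "size_measurement",
            "object_predictions": "$steps.object_model.predictions",
            "reference_predictions": "$steps.reference_model.predictions",
            "reference_dimensions": [5.0, 3.0],  # known width, height of the reference
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "dimensions",
            "selector": "$steps.size_measurement.dimensions",
        }
    ],
}
```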
Example¶
Imagine you have a scene with a calibration card and several packages. The calibration card has known dimensions of 5.0 inches by 3.0 inches. You want to measure the dimensions of packages in the scene.
- Reference Object: Calibration card with dimensions 5.0 inches (width) by 3.0 inches (height).
- Objects to Measure: Packages detected in the scene.
The block will use the known dimensions of the calibration card to calculate the dimensions of each package. For example, if a package is detected with a width of 100 pixels and a height of 60 pixels, and the calibration card is detected with a width of 50 pixels and a height of 30 pixels, the block will calculate the package's dimensions as:
- Width: (100 pixels / 50 pixels) * 5.0 inches = 10.0 inches
- Height: (60 pixels / 30 pixels) * 3.0 inches = 6.0 inches
This allows you to obtain the real-world dimensions of the packages based on the reference object's known size.
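Plugging the example's numbers into the earlier sketch confirms the arithmetic:

```python
width_in = to_real_units(object_px=100, reference_px=50, reference_real=5.0)   # 10.0 inches
height_in = to_real_units(object_px=60, reference_px=30, reference_real=3.0)   # 6.0 inches
```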
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/size_measurement@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `reference_predictions` | `List[Any]` | Reference object used to calculate the dimensions of the specified objects. If multiple objects are provided, the highest confidence prediction will be used. | ✅ |
| `reference_dimensions` | `Union[List[float], Tuple[float, float], str]` | Dimensions of the reference object in desired units (e.g. inches). Used to convert the pixel dimensions of the other objects to real-world units. | ✅ |
The Refs column marks the possibility to parametrise the property with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Size Measurement in version v1.
- inputs: Dynamic Crop, Time in Zone, Motion Detection, OCR Model, Email Notification, OpenAI, Google Vision OCR, Seg Preview, Time in Zone, Google Gemini, Instance Segmentation Model, Object Detection Model, Local File Sink, Single-Label Classification Model, Model Monitoring Inference Aggregator, Anthropic Claude, Multi-Label Classification Model, Keypoint Detection Model, Detections Stitch, Email Notification, Slack Notification, Twilio SMS/MMS Notification, VLM As Detector, Florence-2 Model, Roboflow Dataset Upload, CSV Formatter, Camera Focus, SAM 3, Stitch OCR Detections, Perspective Correction, Moondream2, PTZ Tracking (ONVIF), Line Counter, Detections List Roll-Up, Overlap Filter, OpenAI, Qwen3.5-VL, Google Gemini, Byte Tracker, VLM As Detector, Detection Event Log, LMM, CogVLM, Time in Zone, Dimension Collapse, VLM As Classifier, Instance Segmentation Model, Detections Classes Replacement, Detections Combine, Bounding Rectangle, Stitch OCR Detections, Llama 3.2 Vision, OpenAI, Clip Comparison, Clip Comparison, Webhook Sink, Mask Area Measurement, Byte Tracker, Florence-2 Model, Buffer, SAM 3, Roboflow Custom Metadata, Dynamic Zone, LMM For Classification, Velocity, YOLO-World Model, Object Detection Model, Byte Tracker, Detections Consensus, Template Matching, Anthropic Claude, Google Gemini, Detection Offset, EasyOCR, Path Deviation, S3 Sink, Anthropic Claude, SAM 3, Detections Transformation, Path Deviation, Segment Anything 2 Model, Twilio SMS Notification, Size Measurement, Detections Filter, Detections Stabilizer, OpenAI, Detections Merge, Roboflow Dataset Upload
- outputs: Time in Zone, Motion Detection, Email Notification, OpenAI, Seg Preview, Time in Zone, Google Gemini, Instance Segmentation Model, Object Detection Model, Bounding Box Visualization, Anthropic Claude, Keypoint Detection Model, Email Notification, Twilio SMS/MMS Notification, VLM As Detector, Dot Visualization, Florence-2 Model, Roboflow Dataset Upload, Cache Set, SAM 3, Polygon Visualization, Perspective Correction, OpenAI, Line Counter, Detections List Roll-Up, Corner Visualization, Line Counter Visualization, Google Gemini, VLM As Detector, Keypoint Visualization, Halo Visualization, Keypoint Detection Model, Label Visualization, Polygon Visualization, Time in Zone, Triangle Visualization, Mask Visualization, VLM As Classifier, Color Visualization, Instance Segmentation Model, Detections Classes Replacement, Line Counter, Reference Path Visualization, Llama 3.2 Vision, Clip Comparison, Clip Comparison, Classification Label Visualization, Webhook Sink, Circle Visualization, Polygon Zone Visualization, Grid Visualization, VLM As Classifier, Buffer, Florence-2 Model, SAM 3, LMM For Classification, YOLO-World Model, Halo Visualization, Object Detection Model, Detections Consensus, Anthropic Claude, Google Gemini, Path Deviation, Anthropic Claude, SAM 3, Ellipse Visualization, Path Deviation, Crop Visualization, Trace Visualization, Size Measurement, OpenAI, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of Size Measurement in version v1 are listed below.
Bindings
- input
  - `object_predictions` (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to measure the dimensions of.
  - `reference_predictions` (Union[object_detection_prediction, list_of_values, instance_segmentation_prediction]): Reference object used to calculate the dimensions of the specified objects. If multiple objects are provided, the highest confidence prediction will be used.
  - `reference_dimensions` (Union[string, list_of_values]): Dimensions of the reference object in desired units (e.g. inches). Used to convert the pixel dimensions of the other objects to real-world units.
- output
  - `dimensions` (list_of_values): List of values of any type.
Example JSON definition of step Size Measurement in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/size_measurement@v1",
"object_predictions": "$segmentation.object_predictions",
"reference_predictions": "$segmentation.reference_predictions",
"reference_dimensions": [
4.5,
3.0
]
}
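If you run workflows through the hosted API, a workflow containing this step can be invoked with the inference_sdk client. The sketch below assumes a saved workflow; the API key, workspace name, workflow ID, and image path are placeholders:

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_API_KEY>",  # placeholder
)
result = client.run_workflow(
    workspace_name="<your-workspace>",   # placeholder
    workflow_id="<your-workflow-id>",    # placeholder
    images={"image": "scene_with_calibration_card.jpg"},  # placeholder image
)
# One result dict per input image; "dimensions" matches the output binding above.
print(result[0]["dimensions"])
```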