Size Measurement¶
Class: SizeMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.size_measurement.v1.SizeMeasurementBlockV1
How This Block Works¶
The [Size Measurement Block](https://www.youtube.com/watch?v=FQY7TSHfZeI) calculates the dimensions of objects relative to a reference object. It uses one model to detect the reference object and another to detect the objects to measure. The block outputs the dimensions of the objects in terms of the reference object.
- Reference Object: This is the known object used as a baseline for measurements. Its dimensions are known and used to scale the measurements of other objects.
- Object to Measure: This is the object whose dimensions are being calculated. The block measures these dimensions relative to the reference object.
Block Usage¶
To use the Size Measurement Block, follow these steps:
- Select Models: Choose a model to detect the reference object and another model to detect the objects you want to measure.
- Configure Inputs: Provide the predictions from both models as inputs to the block.
- Set Reference Dimensions: Specify the known dimensions of the reference object in the format 'width,height' or as a tuple (width, height).
- Run the Block: Execute the block to calculate the dimensions of the detected objects relative to the reference object.
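The reference dimensions may arrive either as a 'width,height' string or as a two-element sequence. A minimal sketch of how such input could be normalised (the helper name and behaviour are illustrative assumptions, not the block's actual implementation):

```python
from typing import Sequence, Tuple, Union


def parse_reference_dimensions(
    dims: Union[str, Sequence[float]],
) -> Tuple[float, float]:
    """Normalise reference dimensions to a (width, height) float tuple.

    Accepts a 'width,height' string (e.g. "5.0,3.0") or any
    two-element sequence such as (5.0, 3.0) or [5.0, 3.0].
    """
    if isinstance(dims, str):
        parts = [p.strip() for p in dims.split(",")]
    else:
        parts = list(dims)
    if len(parts) != 2:
        raise ValueError("Expected exactly two values: width and height")
    width, height = float(parts[0]), float(parts[1])
    if width <= 0 or height <= 0:
        raise ValueError("Reference dimensions must be positive")
    return width, height
```

For example, `parse_reference_dimensions("5.0,3.0")` and `parse_reference_dimensions((5.0, 3.0))` both yield `(5.0, 3.0)`.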
Example¶
Imagine you have a scene with a calibration card and several packages. The calibration card has known dimensions of 5.0 inches by 3.0 inches. You want to measure the dimensions of packages in the scene.
- Reference Object: Calibration card with dimensions 5.0 inches (width) by 3.0 inches (height).
- Objects to Measure: Packages detected in the scene.
The block will use the known dimensions of the calibration card to calculate the dimensions of each package. For example, if a package is detected with a width of 100 pixels and a height of 60 pixels, and the calibration card is detected with a width of 50 pixels and a height of 30 pixels, the block will calculate the package's dimensions as:
- Width: (100 pixels / 50 pixels) * 5.0 inches = 10.0 inches
- Height: (60 pixels / 30 pixels) * 3.0 inches = 6.0 inches
This allows you to obtain the real-world dimensions of the packages based on the reference object's known size.
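The per-axis scaling described above can be sketched in plain Python. The box representation and function name here are illustrative assumptions, not the block's internals:

```python
from typing import Tuple

# (width_px, height_px) of a detected bounding box
Box = Tuple[float, float]


def measure_object(
    object_px: Box,
    reference_px: Box,
    reference_real: Tuple[float, float],
) -> Tuple[float, float]:
    """Convert an object's pixel dimensions to real-world units.

    Each axis is scaled independently by the ratio between the
    reference object's known size and its detected pixel size.
    """
    ref_w_px, ref_h_px = reference_px
    ref_w_real, ref_h_real = reference_real
    obj_w_px, obj_h_px = object_px
    width = (obj_w_px / ref_w_px) * ref_w_real
    height = (obj_h_px / ref_h_px) * ref_h_real
    return width, height


# Example from the text: a 100x60 px package measured against a
# 50x30 px calibration card known to be 5.0 x 3.0 inches.
print(measure_object((100, 60), (50, 30), (5.0, 3.0)))  # -> (10.0, 6.0)
```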
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/size_measurement@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| reference_predictions | List[Any] | Reference object used to calculate the dimensions of the specified objects. If multiple objects are provided, the highest confidence prediction will be used. | ✅ |
| reference_dimensions | Union[List[float], Tuple[float, float], str] | Dimensions of the reference object in desired units (e.g. inches). Will be used to convert the pixel dimensions of the other objects to real-world units. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Size Measurement in version v1.
- inputs:
  Roboflow Dataset Upload,Mask Edge Snap,OCR Model,Instance Segmentation Model,Bounding Rectangle,ByteTrack Tracker,Byte Tracker,Detections Consensus,Detections Classes Replacement,Webhook Sink,Stitch OCR Detections,Object Detection Model,Camera Focus,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,SORT Tracker,Florence-2 Model,Detections Transformation,Path Deviation,GLM-OCR,S3 Sink,Path Deviation,Seg Preview,Twilio SMS Notification,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,Line Counter,Twilio SMS/MMS Notification,Motion Detection,CSV Formatter,Detections Merge,Perspective Correction,Overlap Filter,Anthropic Claude,Velocity,Roboflow Vision Events,VLM As Detector,Google Gemini,Qwen3.5-VL,Per-Class Confidence Filter,Segment Anything 2 Model,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Email Notification,Google Gemma API,Google Vision OCR,Google Gemini,EasyOCR,Detections Combine,Object Detection Model,SAM2 Video Tracker,Detection Event Log,Byte Tracker,OpenAI,Anthropic Claude,Time in Zone,Roboflow Custom Metadata,YOLO-World Model,Detection Offset,Instance Segmentation Model,Single-Label Classification Model,Detections List Roll-Up,VLM As Classifier,Mask Area Measurement,Template Matching,Qwen 3.6 API,Instance Segmentation Model,CogVLM,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Local File Sink,Detections Filter,Time in Zone,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Multi-Label Classification Model,Byte Tracker,SAM 3,OpenAI,Dynamic Crop,Moondream2,LMM For Classification,Keypoint Detection Model,PTZ Tracking (ONVIF),Stitch OCR Detections
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Object Detection Model,Email Notification,Google Gemma API,Instance Segmentation Model,Google Gemini,Color Visualization,Object Detection Model,OpenAI,Ellipse Visualization,Polygon Visualization,Anthropic Claude,Detections Consensus,Detections Classes Replacement,Time in Zone,Cache Set,Webhook Sink,Trace Visualization,Object Detection Model,Qwen 3.5 API,YOLO-World Model,Buffer,SAM 3,Instance Segmentation Model,Detections List Roll-Up,Size Measurement,VLM As Classifier,Qwen 3.6 API,Florence-2 Model,Halo Visualization,Instance Segmentation Model,Crop Visualization,Florence-2 Model,Path Deviation,Time in Zone,Dot Visualization,Path Deviation,SAM 3,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Keypoint Detection Model,Line Counter,Twilio SMS/MMS Notification,Time in Zone,Polygon Zone Visualization,Reference Path Visualization,Motion Detection,Anthropic Claude,Clip Comparison,VLM As Detector,Perspective Correction,Anthropic Claude,Line Counter,Bounding Box Visualization,Classification Label Visualization,Polygon Visualization,SAM 3,VLM As Detector,Google Gemini,Label Visualization,OpenAI,Corner Visualization,Grid Visualization,Keypoint Detection Model,Keypoint Visualization,Triangle Visualization,Halo Visualization,Circle Visualization,Mask Visualization,LMM For Classification,OpenAI,Keypoint Detection Model,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Size Measurement in version v1 has.
Bindings
- input
  - object_predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Model predictions to measure the dimensions of.
  - reference_predictions (Union[object_detection_prediction, instance_segmentation_prediction, list_of_values]): Reference object used to calculate the dimensions of the specified objects. If multiple objects are provided, the highest confidence prediction will be used.
  - reference_dimensions (Union[string, list_of_values]): Dimensions of the reference object in desired units (e.g. inches). Will be used to convert the pixel dimensions of the other objects to real-world units.
- output
  - dimensions (list_of_values): List of values of any type.
Example JSON definition of step Size Measurement in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/size_measurement@v1",
"object_predictions": "$segmentation.object_predictions",
"reference_predictions": "$segmentation.reference_predictions",
"reference_dimensions": [
4.5,
3.0
]
}