Size Measurement¶
Class: SizeMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.size_measurement.v1.SizeMeasurementBlockV1
The Size Measurement Block calculates the dimensions of objects relative to a reference object. It uses one model to detect the reference object and another to detect the objects to measure. The block outputs each measured object's dimensions in the real-world units of the reference object's known size.
- Reference Object: This is the known object used as a baseline for measurements. Its dimensions are known and used to scale the measurements of other objects.
- Object to Measure: This is the object whose dimensions are being calculated. The block measures these dimensions relative to the reference object.
Block Usage¶
To use the Size Measurement Block, follow these steps (a configuration sketch follows the list):
- Select Models: Choose a model to detect the reference object and another model to detect the objects you want to measure.
- Configure Inputs: Provide the predictions from both models as inputs to the block.
- Set Reference Dimensions: Specify the known dimensions of the reference object in the format 'width,height' or as a tuple (width, height).
- Run the Block: Execute the block to calculate the dimensions of the detected objects relative to the reference object.
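As a sketch of those steps, the workflow specification below wires two detection steps into the block. This is illustrative only: the model IDs, step names, and the choice of roboflow_core/roboflow_object_detection_model@v1 as the detector blocks are placeholder assumptions, not requirements of this block.

```python
# A minimal, illustrative workflow specification. Model IDs and step names
# are hypothetical placeholders -- substitute your own detection models.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Detects the reference object (e.g., a calibration card).
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "reference_model",
            "image": "$inputs.image",
            "model_id": "calibration-card-detector/1",  # placeholder
        },
        {
            # Detects the objects to measure.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_model",
            "image": "$inputs.image",
            "model_id": "package-detector/1",  # placeholder
        },
        {
            # Scales each detected object by the reference object's known size.
            "type": "roboflow_core/size_measurement@v1",
            "name": "size_measurement",
            "reference_predictions": "$steps.reference_model.predictions",
            "object_predictions": "$steps.object_model.predictions",
            "reference_dimensions": "5.0,3.0",  # width,height in inches
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "dimensions",
            "selector": "$steps.size_measurement.dimensions",
        }
    ],
}
```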
Example¶
Imagine you have a scene with a calibration card and several packages. The calibration card has known dimensions of 5.0 inches by 3.0 inches. You want to measure the dimensions of packages in the scene.
- Reference Object: Calibration card with dimensions 5.0 inches (width) by 3.0 inches (height).
- Objects to Measure: Packages detected in the scene.
The block will use the known dimensions of the calibration card to calculate the dimensions of each package. For example, if a package is detected with a width of 100 pixels and a height of 60 pixels, and the calibration card is detected with a width of 50 pixels and a height of 30 pixels, the block will calculate the package's dimensions as:
- Width: (100 pixels / 50 pixels) * 5.0 inches = 10.0 inches
- Height: (60 pixels / 30 pixels) * 3.0 inches = 6.0 inches
This allows you to obtain the real-world dimensions of the packages based on the reference object's known size.
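The same scaling arithmetic, written out as a small Python sketch (the function name and detection values are illustrative, not part of the block's API):

```python
def scale_dimensions(obj_px, ref_px, ref_real):
    """Scale an object's pixel dimensions by a reference object of known size.

    obj_px:   (width_px, height_px) of the detected object
    ref_px:   (width_px, height_px) of the detected reference object
    ref_real: (width, height) of the reference object in real-world units
    """
    width = obj_px[0] / ref_px[0] * ref_real[0]
    height = obj_px[1] / ref_px[1] * ref_real[1]
    return width, height

# Package: 100x60 px; calibration card: 50x30 px, known to be 5.0x3.0 inches.
print(scale_dimensions((100, 60), (50, 30), (5.0, 3.0)))  # (10.0, 6.0)
```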
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/size_measurement@v1
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
reference_dimensions | Union[List[float], Tuple[float, float], str] | Dimensions of the reference object (width, height) in desired units (e.g., inches), given as a string in the format 'width,height' or as a tuple (width, height). | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
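Since reference_dimensions accepts either form, the snippet below shows how the 'width,height' string corresponds to the tuple form (the parsing shown is illustrative, not the block's internal code):

```python
# Illustrative only: both forms carry the same (width, height) pair.
as_string = "5.0,3.0"
as_tuple = (5.0, 3.0)

parsed = tuple(float(part) for part in as_string.split(","))
assert parsed == as_tuple  # width=5.0, height=3.0
```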
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Size Measurement in version v1.
- inputs: Time in Zone, Detections Stitch, Path Deviation, Florence-2 Model, Multi-Label Classification Model, LMM For Classification, Instance Segmentation Model, Keypoint Detection Model, Single-Label Classification Model, OCR Model, Object Detection Model, Perspective Correction, Local File Sink, Line Counter, Detections Filter, YOLO-World Model, Model Monitoring Inference Aggregator, VLM as Classifier, VLM as Detector, Dimension Collapse, Google Vision OCR, Size Measurement, Detections Consensus, Email Notification, OpenAI, Byte Tracker, Webhook Sink, CogVLM, Detections Classes Replacement, Template Matching, Roboflow Custom Metadata, Detection Offset, Buffer, Roboflow Dataset Upload, Clip Comparison, Stitch OCR Detections, Slack Notification, Anthropic Claude, Dynamic Zone, Google Gemini, Segment Anything 2 Model, LMM, Detections Stabilizer, Twilio SMS Notification, CSV Formatter, Llama 3.2 Vision, Bounding Rectangle, Detections Transformation
- outputs: Time in Zone, Florence-2 Model, Path Deviation, LMM For Classification, Keypoint Detection Model, Line Counter, Instance Segmentation Model, Corner Visualization, Mask Visualization, Object Detection Model, Perspective Correction, YOLO-World Model, Polygon Zone Visualization, Polygon Visualization, VLM as Classifier, Halo Visualization, VLM as Detector, Grid Visualization, Trace Visualization, Email Notification, Webhook Sink, OpenAI, Detections Consensus, Size Measurement, Cache Set, Crop Visualization, Buffer, Clip Comparison, Anthropic Claude, Circle Visualization, Dot Visualization, Google Gemini, Bounding Box Visualization, Label Visualization, Classification Label Visualization, Line Counter Visualization, Ellipse Visualization, Reference Path Visualization, Triangle Visualization, Color Visualization, Llama 3.2 Vision
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Size Measurement in version v1 has.
Bindings
- input
  - reference_predictions (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions from the reference object model.
  - object_predictions (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions from the model that detects the object to measure.
  - reference_dimensions (Union[string, list_of_values]): Dimensions of the reference object (width, height) in desired units (e.g., inches), given as a string in the format 'width,height' or as a tuple (width, height).
- output
  - dimensions (list_of_values): List of values of any type.
Example JSON definition of step Size Measurement in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/size_measurement@v1",
"reference_predictions": "$segmentation.reference_predictions",
"object_predictions": "$segmentation.object_predictions",
"reference_dimensions": "5.0,5.0"
}
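One way to execute a workflow containing this step is the inference_sdk HTTP client. The sketch below is a minimal example, assuming a locally running inference server and the WORKFLOW_SPECIFICATION dict from the earlier sketch; the URL, API key, and image path are placeholders.

```python
from inference_sdk import InferenceHTTPClient

# Sketch only: assumes an inference server at localhost:9001 and the
# WORKFLOW_SPECIFICATION dict shown earlier; the API key and image path
# are placeholders.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_API_KEY>",
)
result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/scene.jpg"},
)
print(result[0]["dimensions"])  # dimensions output for the first image
```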