Distance Measurement¶
Class: DistanceMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.distance_measurement.v1.DistanceMeasurementBlockV1
Calculate the distance between two bounding boxes on a 2D plane, leveraging a perpendicular camera view and either a reference object or a pixel-to-unit scaling ratio for precise measurements.
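The calibration idea can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the block's internal implementation: it assumes axis-aligned boxes in `(x_min, y_min, x_max, y_max)` pixel coordinates, reads the distance as the gap between the nearest edges of the two boxes along the chosen axis, and converts pixels to centimeters either from a known pixel ratio or from a reference object of known real-world width. The function names and the nearest-edge interpretation are assumptions made for this example.

```python
# Illustrative sketch only -- not the block's internal implementation.
# Boxes are assumed to be axis-aligned (x_min, y_min, x_max, y_max) in pixels.

def pixel_gap(box_a, box_b, axis="horizontal"):
    """Pixel gap between the nearest edges of two boxes along one axis."""
    if axis == "horizontal":
        lo, hi = sorted([box_a, box_b], key=lambda b: b[0])
        return max(0.0, hi[0] - lo[2])  # left edge of right box minus right edge of left box
    lo, hi = sorted([box_a, box_b], key=lambda b: b[1])
    return max(0.0, hi[1] - lo[3])      # top edge of lower box minus bottom edge of upper box

def distance_cm_from_pixel_ratio(gap_px, pixel_ratio):
    """pixel_ratio is pixels per centimeter, e.g. 100 pixels == 1 centimeter."""
    return gap_px / pixel_ratio

def distance_cm_from_reference(gap_px, reference_box, reference_width_cm):
    """Derive pixels-per-centimeter from a reference object of known width."""
    reference_width_px = reference_box[2] - reference_box[0]
    return gap_px / (reference_width_px / reference_width_cm)

car = (100, 200, 300, 400)
person = (450, 180, 520, 400)
print(distance_cm_from_pixel_ratio(pixel_gap(car, person), pixel_ratio=100))  # 1.5 (cm)
```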
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/distance_measurement@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
object_1_class_name | str | The class name of the first object. | ❌ |
object_2_class_name | str | The class name of the second object. | ❌ |
reference_axis | str | The axis along which the distance will be measured. | ❌ |
calibration_method | str | Select how to calibrate the measurement of distance between objects. | ❌ |
reference_object_class_name | str | The class name of the reference object. | ✅ |
reference_width | float | Width of the reference object in centimeters. | ✅ |
reference_height | float | Height of the reference object in centimeters. | ✅ |
pixel_ratio | float | The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
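For example, a property marked ✅ (such as pixel_ratio) can be bound to a workflow input instead of a literal value. The step below is a minimal sketch of that pattern using the standard `$inputs.<name>` selector syntax; the input name `measurement_pixel_ratio` is a placeholder invented for this example, and only ✅-marked properties accept such selectors.

```python
# Minimal sketch: binding the ✅-marked pixel_ratio property to a workflow input.
# The input name "measurement_pixel_ratio" is a placeholder chosen for this example.
distance_step = {
    "name": "distance",
    "type": "roboflow_core/distance_measurement@v1",
    "predictions": "$steps.model.predictions",         # detections from an upstream model step
    "object_1_class_name": "car",
    "object_2_class_name": "person",
    "pixel_ratio": "$inputs.measurement_pixel_ratio",  # resolved at runtime from workflow inputs
    # remaining required properties (reference_axis, calibration_method, ...) omitted for brevity
}
```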
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Distance Measurement in version v1.
- inputs: Local File Sink, Byte Tracker, Webhook Sink, LMM For Classification, Time in Zone, VLM as Classifier, OpenAI, Cosine Similarity, Detections Stitch, Time in Zone, Velocity, Perspective Correction, CSV Formatter, Object Detection Model, Detections Transformation, LMM, Byte Tracker, Overlap Filter, PTZ Tracking (ONVIF), Florence-2 Model, Florence-2 Model, OpenAI, Roboflow Custom Metadata, Object Detection Model, Multi-Label Classification Model, Detections Combine, Detection Offset, CogVLM, Dynamic Zone, EasyOCR, Byte Tracker, Model Monitoring Inference Aggregator, Gaze Detection, Line Counter, Anthropic Claude, Time in Zone, Stitch OCR Detections, VLM as Detector, Path Deviation, OpenAI, Slack Notification, Google Vision OCR, Keypoint Detection Model, Twilio SMS Notification, Identify Changes, Camera Focus, Roboflow Dataset Upload, Email Notification, Detections Filter, YOLO-World Model, Instance Segmentation Model, Clip Comparison, Template Matching, Detections Classes Replacement, Bounding Rectangle, OCR Model, Detections Stabilizer, Llama 3.2 Vision, Instance Segmentation Model, VLM as Detector, Google Gemini, Dynamic Crop, Roboflow Dataset Upload, Detections Merge, Path Deviation, Single-Label Classification Model, Detections Consensus, Moondream2, Segment Anything 2 Model
- outputs: Byte Tracker, Identify Outliers, Dot Visualization, Morphological Transformation, Blur Visualization, Perspective Correction, Corner Visualization, Pixel Color Count, PTZ Tracking (ONVIF), Grid Visualization, Image Threshold, Halo Visualization, Keypoint Detection Model, Detection Offset, Byte Tracker, Line Counter Visualization, Stitch OCR Detections, Stability AI Outpainting, Twilio SMS Notification, Keypoint Detection Model, Identify Changes, Email Notification, Instance Segmentation Model, Image Slicer, Keypoint Visualization, Detections Stabilizer, Bounding Box Visualization, Instance Segmentation Model, Reference Path Visualization, Mask Visualization, Image Preprocessing, Webhook Sink, Image Slicer, QR Code Generator, SIFT Comparison, Dominant Color, Trace Visualization, Object Detection Model, Byte Tracker, Crop Visualization, SIFT Comparison, Object Detection Model, Pixelate Visualization, Dynamic Zone, Anthropic Claude, Polygon Visualization, Image Contours, Slack Notification, Triangle Visualization, Classification Label Visualization, Detections Classes Replacement, Circle Visualization, Image Blur, Label Visualization, Absolute Static Crop, Stability AI Inpainting, Icon Visualization, Ellipse Visualization, Color Visualization, Detections Consensus, Stitch Images
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Distance Measurement in version v1 has.
Bindings
- input
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): The output of a detection model describing the bounding boxes that will be used to measure the objects.
  - reference_object_class_name (string): The class name of the reference object.
  - reference_width (float): Width of the reference object in centimeters.
  - reference_height (float): Height of the reference object in centimeters.
  - pixel_ratio (float): The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels.
- output
Example JSON definition of step Distance Measurement in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/distance_measurement@v1",
  "predictions": "$steps.model.predictions",
  "object_1_class_name": "car",
  "object_2_class_name": "person",
  "reference_axis": "vertical",
  "calibration_method": "<block_does_not_provide_example>",
  "reference_object_class_name": "reference-object",
  "reference_width": 2.5,
  "reference_height": 2.5,
  "pixel_ratio": 100
}
```
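In practice the step sits between a detection step and a workflow output. The sketch below shows one way a complete workflow specification might look, written as a Python dict so it can be saved as JSON or passed to the inference SDK. It is a hedged example, not official documentation: the object-detection step identifier and its field names, the model_id placeholder, and the wildcard output selector are assumptions based on common Roboflow Workflows patterns rather than content from this page.

```python
# Illustrative workflow specification embedding the Distance Measurement step.
# Assumptions (not taken from this page): the object-detection step identifier and
# its field names, the model_id placeholder, and the wildcard output selector.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "pixel_ratio"},
    ],
    "steps": [
        {
            # Upstream detector producing the bounding boxes to measure (assumed identifier).
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "model",
            "images": "$inputs.image",
            "model_id": "<your_model_id>",
        },
        {
            "type": "roboflow_core/distance_measurement@v1",
            "name": "measure",
            "predictions": "$steps.model.predictions",
            "object_1_class_name": "car",
            "object_2_class_name": "person",
            "reference_axis": "vertical",
            "calibration_method": "<block_does_not_provide_example>",
            "pixel_ratio": "$inputs.pixel_ratio",
        },
    ],
    "outputs": [
        # A wildcard selector exposes every field the step produces, so no output
        # field names need to be guessed in this sketch.
        {"type": "JsonField", "name": "distance", "selector": "$steps.measure.*"},
    ],
}
```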