Depth Estimation¶
Class: DepthEstimationBlockV1
Source: inference.core.workflows.core_steps.models.foundation.depth_estimation.v1.DepthEstimationBlockV1
This workflow block performs depth estimation on images using Apple's DepthPro model. It analyzes the spatial relationships
and depth information in images to create a depth map where:

- Each pixel's value represents its relative distance from the camera
- Lower values (darker colors) indicate closer objects
- Higher values (lighter colors) indicate further objects

The model outputs:

1. A depth map showing the relative distances of objects in the scene
2. The camera's field of view (in degrees)
3. The camera's focal length

This is particularly useful for:

- Understanding 3D structure from 2D images
- Creating depth-aware visualizations
- Analyzing spatial relationships in scenes
- Applications in augmented reality and 3D reconstruction

The model runs efficiently on Apple Silicon (M1-M4) using Metal Performance Shaders (MPS) for accelerated inference.
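For example, the `normalized_depth` output can be turned into a depth-aware visualization. Below is a minimal sketch using NumPy and OpenCV, assuming the array is a 2D float map already scaled to the [0, 1] range; the helper name `depth_to_image` is illustrative and not part of the block:

```python
import numpy as np
import cv2  # opencv-python


def depth_to_image(normalized_depth: np.ndarray) -> np.ndarray:
    """Render a depth map as an 8-bit image (darker = closer, lighter = further).

    Assumes `normalized_depth` is a 2D float array scaled to [0, 1].
    """
    depth_8bit = (np.clip(normalized_depth, 0.0, 1.0) * 255).astype(np.uint8)
    # Optionally apply a colormap for a more readable, depth-aware visualization.
    return cv2.applyColorMap(depth_8bit, cv2.COLORMAP_VIRIDIS)
```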
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/depth_estimation@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `model_version` | `str` | The Depth Estimation model to be used for inference. | ✅ |
The Refs column indicates whether the property can be parameterised with dynamic values available
at workflow runtime. See Bindings for more info.
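For example, assuming `model_version` accepts such a dynamic reference, the step can read the model name from a workflow input instead of a hard-coded literal. A hypothetical sketch (the input name `depth_model` is illustrative):

```python
# Hypothetical step definition with `model_version` bound to a workflow input
# named `depth_model` rather than a literal value.
depth_step = {
    "name": "depth",
    "type": "roboflow_core/depth_estimation@v1",
    "images": "$inputs.image",
    "model_version": "$inputs.depth_model",
}
```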
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Depth Estimation in version v1.
- inputs:
Absolute Static Crop,Relative Static Crop,Polygon Visualization,Keypoint Visualization,Icon Visualization,Blur Visualization,Trace Visualization,Color Visualization,Image Contours,Polygon Zone Visualization,Camera Focus,Halo Visualization,Bounding Box Visualization,SIFT,Camera Calibration,Triangle Visualization,Classification Label Visualization,Background Color Visualization,Dynamic Crop,Dot Visualization,Pixelate Visualization,Stability AI Inpainting,Image Threshold,Reference Path Visualization,Corner Visualization,Ellipse Visualization,Image Slicer,Stitch Images,Crop Visualization,Morphological Transformation,Grid Visualization,Image Preprocessing,Mask Visualization,Line Counter Visualization,SIFT Comparison,QR Code Generator,Depth Estimation,Image Slicer,Perspective Correction,Image Convert Grayscale,Stability AI Image Generation,Contrast Equalization,Label Visualization,Image Blur,Model Comparison Visualization,Circle Visualization,Stability AI Outpainting
- outputs:
Absolute Static Crop,VLM as Detector,Relative Static Crop,Keypoint Visualization,Clip Comparison,Object Detection Model,LMM For Classification,SmolVLM2,VLM as Classifier,Google Vision OCR,Seg Preview,Color Visualization,Trace Visualization,Instance Segmentation Model,Polygon Zone Visualization,Camera Focus,Halo Visualization,OCR Model,Camera Calibration,VLM as Classifier,Triangle Visualization,Single-Label Classification Model,Segment Anything 2 Model,Stability AI Inpainting,Image Threshold,Reference Path Visualization,Corner Visualization,Gaze Detection,Ellipse Visualization,OpenAI,Single-Label Classification Model,Morphological Transformation,Image Preprocessing,CogVLM,Line Counter Visualization,OpenAI,YOLO-World Model,SIFT Comparison,Florence-2 Model,Roboflow Dataset Upload,Label Visualization,Model Comparison Visualization,Multi-Label Classification Model,Multi-Label Classification Model,Roboflow Dataset Upload,Template Matching,OpenAI,Polygon Visualization,Instance Segmentation Model,Detections Stabilizer,Llama 3.2 Vision,Icon Visualization,Time in Zone,Blur Visualization,Image Contours,Clip Comparison,VLM as Detector,Object Detection Model,Moondream2,Bounding Box Visualization,SIFT,Classification Label Visualization,Background Color Visualization,Dynamic Crop,Dot Visualization,Pixelate Visualization,Dominant Color,QR Code Detection,Byte Tracker,Buffer,CLIP Embedding Model,Image Slicer,Stitch Images,Crop Visualization,Qwen2.5-VL,Keypoint Detection Model,Perception Encoder Embedding Model,Mask Visualization,SIFT Comparison,Barcode Detection,Pixel Color Count,Depth Estimation,Google Gemini,Image Slicer,Perspective Correction,Keypoint Detection Model,Image Convert Grayscale,Stability AI Image Generation,EasyOCR,Contrast Equalization,Anthropic Claude,Image Blur,Circle Visualization,Stability AI Outpainting,LMM,Detections Stitch,Florence-2 Model
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Depth Estimation in version v1 has.
Bindings

- input
    - images (image): The image to infer on.
- output
    - image (image): Image in workflows.
    - normalized_depth (numpy_array): Numpy array.
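For a step named `depth`, these outputs can be referenced as `$steps.depth.image` and `$steps.depth.normalized_depth` in the workflow's `outputs` section. A minimal sketch of reading them from a workflow run result, assuming the common list-of-dicts result layout (one entry per input image) and that the outputs are exposed under the same names:

```python
import numpy as np

# `result` is assumed to be the per-image list of output dictionaries
# returned by a workflow run (see the end-to-end sketch after the JSON example below).
outputs = result[0]

depth_image = outputs["image"]                               # image output of the step
normalized_depth = np.asarray(outputs["normalized_depth"])   # raw depth values

print("depth array shape:", normalized_depth.shape)
print("min (closest):", normalized_depth.min(), "max (furthest):", normalized_depth.max())
```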
Example JSON definition of step Depth Estimation in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/depth_estimation@v1",
"images": "$inputs.image",
"model_version": "depth-anything-v2/small"
}
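A hedged end-to-end sketch wrapping the step above in a complete workflow specification and running it against a local inference server with `inference_sdk`. The input/output names, the server URL, and the exact `run_workflow` parameters are assumptions that may vary across inference_sdk versions:

```python
from inference_sdk import InferenceHTTPClient

# Assumed workflow specification wrapping the Depth Estimation step.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "depth",
            "type": "roboflow_core/depth_estimation@v1",
            "images": "$inputs.image",
            "model_version": "depth-anything-v2/small",
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "image", "selector": "$steps.depth.image"},
        {"type": "JsonField", "name": "normalized_depth", "selector": "$steps.depth.normalized_depth"},
    ],
}

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # assumed local inference server
    api_key="<YOUR_API_KEY>",
)

result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/image.jpg"},
)
print(result[0].keys())
```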