You can use Depth-Anything-V2-Small to estimate the depth of objects in images, creating a depth map where:
- Each pixel's value represents its relative distance from the camera
- Lower values (darker colors) indicate objects closer to the camera
- Higher values (lighter colors) indicate objects farther from the camera (see the toy example after this list)
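As a toy illustration of this convention, the sketch below builds a small hypothetical depth map with NumPy and reads off its closest and farthest pixels (the array values are made up for the example):

```python
import numpy as np

# A hypothetical 2x2 normalized depth map (values are made up).
# Lower values mean closer to the camera, higher values mean farther away.
depth = np.array([[0.1, 0.8],
                  [0.4, 0.9]])

closest = np.unravel_index(np.argmin(depth), depth.shape)
farthest = np.unravel_index(np.argmax(depth), depth.shape)

print(f"Closest pixel (row, col): {closest}")    # (0, 0) -> value 0.1
print(f"Farthest pixel (row, col): {farthest}")  # (1, 1) -> value 0.9
```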
You can deploy Depth-Anything-V2-Small with Inference.
Create a new Python file called app.py and add the following code:
```python
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

from inference.models.depth_estimation.depthestimation import DepthEstimator

# Initialize the model
model = DepthEstimator()

# Load an image
image = Image.open("your_image.jpg")

# Run inference
results = model.predict(image)

# Get the depth map and visualization
depth_map = results[0]['normalized_depth']
visualization = results[0]['image']

# Convert the visualization to a numpy array for display
visualization_array = visualization.numpy()

# Display the results
plt.figure(figsize=(12, 6))

plt.subplot(1, 2, 1)
plt.imshow(image)
plt.title('Original Image')
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(visualization_array)
plt.title('Depth Map')
plt.axis('off')

plt.show()
```
In this code, we:
1. Load the Depth-Anything-V2-Small model
2. Load an image for depth estimation
3. Run inference to get the depth map
4. Display both the original image and the depth map visualization
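If you want to reuse the raw depth values without re-running inference, one option is to save the normalized depth map to disk. This is a minimal sketch that assumes `depth_map` from the script above can be converted to a NumPy array (the exact return type may differ):

```python
import numpy as np

# Persist the normalized depth values for later processing.
np.save("depth_map.npy", np.asarray(depth_map))

# Reload them in another script or session.
reloaded = np.load("depth_map.npy")
print(reloaded.shape, reloaded.min(), reloaded.max())
```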
The depth map visualization uses a viridis colormap (see the sketch after this list), where:
- Darker colors (purple/blue) represent objects closer to the camera
- Lighter colors (yellow/green) represent objects further from the camera
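If you prefer to inspect the depth values directly rather than rely on the pre-rendered visualization, you can plot the normalized depth map yourself with the same colormap and add a colorbar as a legend. This is a minimal sketch that assumes `depth_map` from the script above converts to a 2-D NumPy array with lower values closer to the camera:

```python
import numpy as np
import matplotlib.pyplot as plt

# Convert the normalized depth output to a 2-D array (assumption: it
# converts cleanly and any singleton dimensions can be squeezed away).
depth_array = np.squeeze(np.asarray(depth_map, dtype=np.float32))

# Render with the viridis colormap and label the near-to-far direction.
plt.imshow(depth_array, cmap="viridis")
plt.colorbar(label="Relative depth (darker = closer, lighter = farther)")
plt.axis("off")
plt.show()
```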
To use Depth-Anything-V2-Small with Inference, you will need a Hugging Face token. If you don't already have one, sign up for a free Hugging Face account.
Then, set your Hugging Face token as an environment variable:
```bash
export HUGGING_FACE_HUB_TOKEN=your_token_here
```
Or you can log in using the Hugging Face CLI:
```bash
huggingface-cli login
```
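Alternatively, you can set the token from Python before the model is initialized. This is a minimal sketch; it simply sets the same environment variable shown above, and `your_token_here` is a placeholder:

```python
import os

# Set the Hugging Face token before importing/initializing the model.
# Avoid hard-coding real tokens in source control; load them from a
# secret store or environment configuration instead.
os.environ["HUGGING_FACE_HUB_TOKEN"] = "your_token_here"
```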
Then, run the Python script you have created:
```bash
python app.py
```
The script will display both the original image and the depth map visualization.
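If you run the script on a machine without a display (for example, a remote server), `plt.show()` will not open a window. One option, sketched below, is to replace the final `plt.show()` call in app.py so the figure is written to disk instead (the filename is arbitrary):

```python
# Instead of plt.show(), save the side-by-side figure to a file.
plt.savefig("depth_comparison.png", bbox_inches="tight", dpi=150)
plt.close()
```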