Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block applies OpenCV's camera calibration model (a camera matrix plus distortion coefficients) to remove lens distortion from an image.
Please refer to the OpenCV documentation, which describes the camera calibration methodology: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
This block requires the following parameters to perform the correction:
- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical center along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)
Based on the above parameters, the camera matrix is built as [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], and the distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3), following the OpenCV convention.
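For readers who want to reproduce the correction outside a workflow, the sketch below shows the equivalent OpenCV operation: assembling the camera matrix and the 5-tuple of distortion coefficients described above and applying cv2.undistort. All parameter values and file names are placeholders, and this illustrates the underlying model rather than the block's exact implementation.

```python
import cv2
import numpy as np

# Intrinsic parameters (placeholder values) - the same quantities the block expects.
fx, fy = 1000.0, 1000.0        # focal lengths along x and y
cx, cy = 640.0, 360.0          # optical center along x and y
k1, k2, k3 = -0.1, 0.01, 0.0   # radial distortion coefficients
p1, p2 = 0.001, 0.001          # tangential distortion coefficients

# Camera matrix built exactly as described above.
camera_matrix = np.array(
    [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]],
    dtype=np.float64,
)

# Distortion coefficients passed as the 5-tuple (k1, k2, p1, p2, k3).
dist_coeffs = np.array([k1, k2, p1, p2, k3], dtype=np.float64)

image = cv2.imread("distorted.jpg")  # placeholder input image
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted.jpg", undistorted)
```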
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera-calibration@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
fx | float | Focal length along the x-axis. | ✅ |
fy | float | Focal length along the y-axis. | ✅ |
cx | float | Optical center along the x-axis. | ✅ |
cy | float | Optical center along the y-axis. | ✅ |
k1 | float | Radial distortion coefficient k1. | ✅ |
k2 | float | Radial distortion coefficient k2. | ✅ |
k3 | float | Radial distortion coefficient k3. | ✅ |
p1 | float | Distortion coefficient p1. | ✅ |
p2 | float | Distortion coefficient p2. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
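As an illustration of the Refs column, a property marked ✅ (for example fx) can either be given a literal value or bound to a runtime value via an input selector, as in the sketch below. The input name fx_input is hypothetical; only the $inputs.<name> selector form and the step properties come from this page.

```python
# Step with all intrinsics hard-coded (placeholder values).
step_static = {
    "name": "camera_calibration",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": 1000.0,
    "fy": 1000.0,
    "cx": 640.0,
    "cy": 360.0,
    "k1": -0.1,
    "k2": 0.01,
    "k3": 0.0,
    "p1": 0.001,
    "p2": 0.001,
}

# Same step with fx parametrised at workflow runtime through a workflow input
# (the input name "fx_input" is hypothetical).
step_dynamic = {
    **step_static,
    "fx": "$inputs.fx_input",
}
```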
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs: Image Threshold, Reference Path Visualization, Depth Estimation, Bounding Box Visualization, Background Color Visualization, Icon Visualization, Polygon Visualization, Trace Visualization, Ellipse Visualization, Pixelate Visualization, SIFT, Grid Visualization, Dot Visualization, Camera Focus, Classification Label Visualization, Stitch Images, Line Counter Visualization, Image Blur, SIFT Comparison, Cosine Similarity, Keypoint Visualization, Polygon Zone Visualization, Model Comparison Visualization, Crop Visualization, Corner Visualization, Triangle Visualization, Absolute Static Crop, Stability AI Image Generation, Image Contours, Dynamic Crop, Stability AI Inpainting, Image Slicer, QR Code Generator, Gaze Detection, Stability AI Outpainting, Camera Calibration, Image Convert Grayscale, Image Preprocessing, Circle Visualization, Blur Visualization, Identify Changes, Color Visualization, Relative Static Crop, Halo Visualization, Image Slicer, Perspective Correction, Mask Visualization, Label Visualization
- outputs: Background Color Visualization, Perception Encoder Embedding Model, VLM as Classifier, Trace Visualization, Google Vision OCR, OpenAI, Dot Visualization, Classification Label Visualization, Stitch Images, Line Counter Visualization, SIFT Comparison, Anthropic Claude, Pixel Color Count, LMM, Qwen2.5-VL, Multi-Label Classification Model, Model Comparison Visualization, OCR Model, Stability AI Image Generation, Object Detection Model, Absolute Static Crop, Image Contours, Dynamic Crop, LMM For Classification, Image Slicer, Gaze Detection, CogVLM, QR Code Detection, Object Detection Model, Image Preprocessing, Circle Visualization, CLIP Embedding Model, Florence-2 Model, VLM as Detector, Keypoint Detection Model, Keypoint Detection Model, Relative Static Crop, Image Slicer, YOLO-World Model, Moondream2, Label Visualization, Roboflow Dataset Upload, Image Threshold, Reference Path Visualization, Depth Estimation, Bounding Box Visualization, Icon Visualization, Polygon Visualization, Florence-2 Model, Byte Tracker, Dominant Color, VLM as Classifier, Buffer, Pixelate Visualization, SIFT, Ellipse Visualization, Camera Focus, Instance Segmentation Model, Clip Comparison, Single-Label Classification Model, Image Blur, Keypoint Visualization, Polygon Zone Visualization, Template Matching, Crop Visualization, Corner Visualization, Detections Stabilizer, Triangle Visualization, Clip Comparison, Stability AI Inpainting, OpenAI, OpenAI, Stability AI Outpainting, Camera Calibration, Detections Stitch, Multi-Label Classification Model, Image Convert Grayscale, Single-Label Classification Model, Llama 3.2 Vision, Roboflow Dataset Upload, Segment Anything 2 Model, Blur Visualization, VLM as Detector, Color Visualization, Instance Segmentation Model, SmolVLM2, Barcode Detection, Halo Visualization, Google Gemini, Mask Visualization, Perspective Correction, Time in Zone
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Camera Calibration in version v1 has.
Bindings
- input
    - image (image): Image to remove distortions from.
    - fx (float): Focal length along the x-axis.
    - fy (float): Focal length along the y-axis.
    - cx (float): Optical center along the x-axis.
    - cy (float): Optical center along the y-axis.
    - k1 (float): Radial distortion coefficient k1.
    - k2 (float): Radial distortion coefficient k2.
    - k3 (float): Radial distortion coefficient k3.
    - p1 (float): Distortion coefficient p1.
    - p2 (float): Distortion coefficient p2.
- output
    - calibrated_image (image): Image in workflows.
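Putting these bindings together, a minimal end-to-end workflow specification might look like the sketch below. The input/output entry types (WorkflowImage, JsonField) and the $steps.<step_name>.calibrated_image selector follow general Workflows conventions and are assumptions here; only the step's type identifier, its properties, and the calibrated_image output name come from the documentation above. Intrinsic values are placeholders.

```python
# Hypothetical minimal workflow specification (Python dict) wiring the
# bindings documented above into a single camera-calibration step.
CAMERA_CALIBRATION_WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},  # assumed input declaration style
    ],
    "steps": [
        {
            "name": "calibration",
            "type": "roboflow_core/camera-calibration@v1",
            "image": "$inputs.image",
            "fx": 1000.0,
            "fy": 1000.0,
            "cx": 640.0,
            "cy": 360.0,
            "k1": -0.1,
            "k2": 0.01,
            "k3": 0.0,
            "p1": 0.001,
            "p2": 0.001,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",  # assumed output declaration style
            "name": "corrected_image",
            "selector": "$steps.calibration.calibrated_image",
        },
    ],
}
```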
Example JSON definition of step Camera Calibration in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera-calibration@v1",
"image": "$inputs.image",
"fx": 0.123,
"fy": 0.123,
"cx": 0.123,
"cy": 0.123,
"k1": 0.123,
"k2": 0.123,
"k3": 0.123,
"p1": 0.123,
"p2": 0.123
}
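To try the block end to end against a running inference server, something along these lines could work. It assumes the workflow dict sketched earlier on this page, a server reachable at the given URL, and an inference_sdk version whose InferenceHTTPClient.run_workflow accepts an ad-hoc specification argument; verify these details against your installed SDK, as they are assumptions rather than something this page documents.

```python
from inference_sdk import InferenceHTTPClient

# Assumptions: a local inference server on port 9001 and an SDK version whose
# run_workflow accepts a "specification" dict (such as the sketch above).
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_API_KEY>",
)

result = client.run_workflow(
    specification=CAMERA_CALIBRATION_WORKFLOW,  # workflow dict sketched above
    images={"image": "distorted.jpg"},          # placeholder image path
)
print(result)
```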