Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block uses the OpenCV calibrateCamera function to remove lens distortions from an image.
Please refer to the OpenCV documentation, where the camera calibration methodology is described:
https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
This block requires the following parameters in order to perform the calibration:

- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical centers along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)

Based on the above parameters, the camera matrix is built as follows: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
The distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3).
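For reference, the undistortion described above can be reproduced directly with OpenCV. The snippet below is a minimal sketch, assuming the camera matrix and the 5-tuple of distortion coefficients are assembled exactly as described and applied with cv2.undistort; all numeric values and file names are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients (illustrative values only).
fx, fy = 1000.0, 1000.0        # focal lengths along x and y
cx, cy = 640.0, 360.0          # optical centers along x and y
k1, k2, k3 = -0.25, 0.08, 0.0  # radial distortion coefficients
p1, p2 = 0.001, -0.0005        # tangential distortion coefficients

# Camera matrix built as [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
camera_matrix = np.array(
    [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]],
    dtype=np.float64,
)

# Distortion coefficients passed as the 5-tuple (k1, k2, p1, p2, k3)
dist_coeffs = np.array([k1, k2, p1, p2, k3], dtype=np.float64)

image = cv2.imread("distorted.jpg")
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted.jpg", undistorted)
```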
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera-calibration@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| fx | float | Focal length along the x-axis. | ✅ |
| fy | float | Focal length along the y-axis. | ✅ |
| cx | float | Optical center along the x-axis. | ✅ |
| cy | float | Optical center along the y-axis. | ✅ |
| k1 | float | Radial distortion coefficient k1. | ✅ |
| k2 | float | Radial distortion coefficient k2. | ✅ |
| k3 | float | Radial distortion coefficient k3. | ✅ |
| p1 | float | Distortion coefficient p1. | ✅ |
| p2 | float | Distortion coefficient p2. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
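For example, any property marked ✅ in the Refs column can be bound to a workflow input instead of a hard-coded number. A minimal sketch of such a step definition (expressed as a Python dict) is shown below; the input names fx, fy, cx and cy are hypothetical and would need to be declared in the workflow's inputs.

```python
# Hypothetical step definition: the focal lengths and optical centers are
# resolved from workflow inputs at runtime, while the distortion coefficients
# stay hard-coded. All numeric values are illustrative only.
camera_calibration_step = {
    "name": "camera_calibration",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": "$inputs.fx",
    "fy": "$inputs.fy",
    "cx": "$inputs.cx",
    "cy": "$inputs.cy",
    "k1": -0.25,
    "k2": 0.08,
    "k3": 0.0,
    "p1": 0.001,
    "p2": -0.0005,
}
```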
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs: Image Convert Grayscale, Absolute Static Crop, Relative Static Crop, Label Visualization, Line Counter Visualization, Gaze Detection, Background Color Visualization, Stitch Images, Camera Focus, Image Contours, Image Preprocessing, Image Slicer, Reference Path Visualization, SIFT Comparison, Triangle Visualization, Grid Visualization, Polygon Zone Visualization, Keypoint Visualization, Depth Estimation, Bounding Box Visualization, Image Blur, Perspective Correction, Halo Visualization, Ellipse Visualization, Color Visualization, Crop Visualization, Identify Changes, Dot Visualization, Pixelate Visualization, Model Comparison Visualization, Classification Label Visualization, Camera Calibration, Stability AI Image Generation, Polygon Visualization, Trace Visualization, Corner Visualization, Image Threshold, Blur Visualization, Image Slicer, Stability AI Inpainting, Mask Visualization, SIFT, Cosine Similarity, Circle Visualization, Dynamic Crop
- outputs: LMM, Buffer, Image Convert Grayscale, VLM as Detector, Absolute Static Crop, Multi-Label Classification Model, Relative Static Crop, Line Counter Visualization, Gaze Detection, Background Color Visualization, OCR Model, Camera Focus, Image Contours, Image Slicer, Reference Path Visualization, Keypoint Detection Model, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Triangle Visualization, Detections Stabilizer, Multi-Label Classification Model, Depth Estimation, Google Vision OCR, Llama 3.2 Vision, Roboflow Dataset Upload, Clip Comparison, Perspective Correction, Object Detection Model, Crop Visualization, Dot Visualization, Model Comparison Visualization, Instance Segmentation Model, Classification Label Visualization, Camera Calibration, Qwen2.5-VL, Stability AI Image Generation, Trace Visualization, Time in Zone, Corner Visualization, Image Threshold, Blur Visualization, QR Code Detection, CogVLM, Stability AI Inpainting, Keypoint Detection Model, SIFT, Circle Visualization, OpenAI, Moondream2, Florence-2 Model, Label Visualization, Stitch Images, Image Preprocessing, Detections Stitch, Template Matching, Byte Tracker, SmolVLM2, Dominant Color, Polygon Zone Visualization, Keypoint Visualization, LMM For Classification, Bounding Box Visualization, CLIP Embedding Model, OpenAI, Halo Visualization, Google Gemini, Ellipse Visualization, Image Blur, Color Visualization, Barcode Detection, Pixelate Visualization, Single-Label Classification Model, Pixel Color Count, YOLO-World Model, VLM as Detector, Roboflow Dataset Upload, Segment Anything 2 Model, Polygon Visualization, Single-Label Classification Model, VLM as Classifier, Image Slicer, Clip Comparison, Mask Visualization, VLM as Classifier, Anthropic Claude, Florence-2 Model, Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Camera Calibration in version v1 has.
Bindings
- input
  - image (image): Image to remove distortions from.
  - fx (float): Focal length along the x-axis.
  - fy (float): Focal length along the y-axis.
  - cx (float): Optical center along the x-axis.
  - cy (float): Optical center along the y-axis.
  - k1 (float): Radial distortion coefficient k1.
  - k2 (float): Radial distortion coefficient k2.
  - k3 (float): Radial distortion coefficient k3.
  - p1 (float): Distortion coefficient p1.
  - p2 (float): Distortion coefficient p2.
- output
  - calibrated_image (image): Image in workflows.
Example JSON definition of step Camera Calibration in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera-calibration@v1",
"image": "$inputs.image",
"fx": 0.123,
"fy": 0.123,
"cx": 0.123,
"cy": 0.123,
"k1": 0.123,
"k2": 0.123,
"k3": 0.123,
"p1": 0.123,
"p2": 0.123
}
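A downstream block can consume the undistorted image through the calibrated_image output. Below is a minimal sketch, assuming a step named camera_calibration as in the earlier example; the downstream block type is left as a placeholder for any compatible block listed under outputs above.

```python
# Hypothetical wiring: a later step reads this block's output via a step selector.
downstream_step = {
    "name": "downstream_step",
    "type": "<compatible_block_type_here>",
    "image": "$steps.camera_calibration.calibrated_image",
}
```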