Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block uses OpenCV's camera calibration model to remove lens distortion from an image.
Please refer to the OpenCV documentation, where the camera calibration methodology is described:
https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
This block requires the following parameters in order to perform the calibration:
- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical center along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)
Based on the above parameters, the camera matrix is built as follows: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
The distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3).
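For orientation, here is a minimal sketch of how the camera matrix and distortion coefficients described above map onto an OpenCV undistortion call. This is not the block's internal implementation; the intrinsics and distortion values below are made-up numbers for a hypothetical camera.

```python
import cv2
import numpy as np

# Example intrinsics and distortion coefficients (hypothetical values).
fx, fy = 1000.0, 1000.0        # focal lengths, in pixels
cx, cy = 640.0, 360.0          # optical center, in pixels
k1, k2, k3 = -0.1, 0.01, 0.0   # radial distortion
p1, p2 = 0.0, 0.0              # tangential distortion

# Camera matrix, built exactly as described above.
camera_matrix = np.array(
    [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]],
    dtype=np.float64,
)

# Distortion coefficients passed as the 5-tuple (k1, k2, p1, p2, k3).
dist_coeffs = np.array([k1, k2, p1, p2, k3], dtype=np.float64)

image = cv2.imread("frame.jpg")  # any BGR image loaded with OpenCV
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("frame_undistorted.jpg", undistorted)
```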
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera-calibration@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| fx | float | Focal length along the x-axis. | ✅ |
| fy | float | Focal length along the y-axis. | ✅ |
| cx | float | Optical center along the x-axis. | ✅ |
| cy | float | Optical center along the y-axis. | ✅ |
| k1 | float | Radial distortion coefficient k1. | ✅ |
| k2 | float | Radial distortion coefficient k2. | ✅ |
| k3 | float | Radial distortion coefficient k3. | ✅ |
| p1 | float | Distortion coefficient p1. | ✅ |
| p2 | float | Distortion coefficient p2. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
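To make this concrete: a property marked ✅ (for example fx) accepts either a literal value or a selector that is resolved from workflow inputs at runtime. The fragment below is illustrative only; a complete step definition appears in the example at the end of this page.

```python
# Two ways to fill a parametrisable property such as "fx" in a step definition:
fx_static = 0.123          # literal value fixed in the workflow definition
fx_dynamic = "$inputs.fx"  # selector resolved from workflow inputs at runtime
```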
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs:
Gaze Detection, Depth Estimation, Image Threshold, Blur Visualization, Pixelate Visualization, Circle Visualization, Corner Visualization, QR Code Generator, Relative Static Crop, Label Visualization, Image Blur, Image Slicer, Background Subtraction, Image Contours, Classification Label Visualization, Color Visualization, Mask Visualization, SIFT Comparison, Stability AI Outpainting, Line Counter Visualization, Reference Path Visualization, Image Convert Grayscale, Grid Visualization, Bounding Box Visualization, Triangle Visualization, Perspective Correction, Dot Visualization, Halo Visualization, Image Slicer, Background Color Visualization, Polygon Visualization, Trace Visualization, Camera Calibration, Morphological Transformation, Model Comparison Visualization, Absolute Static Crop, Stability AI Image Generation, Stability AI Inpainting, Image Preprocessing, SIFT, Camera Focus, Dynamic Crop, Ellipse Visualization, Icon Visualization, Crop Visualization, Camera Focus, Stitch Images, Identify Changes, Cosine Similarity, Contrast Equalization, Keypoint Visualization, Polygon Zone Visualization
- outputs:
QR Code Detection, Byte Tracker, Circle Visualization, Google Vision OCR, Multi-Label Classification Model, Detections Stitch, Label Visualization, OpenAI, Image Slicer, Mask Visualization, Classification Label Visualization, Stability AI Outpainting, Florence-2 Model, Color Visualization, Instance Segmentation Model, Moondream2, Email Notification, VLM as Classifier, Halo Visualization, Time in Zone, Roboflow Dataset Upload, Image Slicer, Pixel Color Count, Polygon Visualization, Clip Comparison, OCR Model, VLM as Classifier, Model Comparison Visualization, LMM For Classification, Template Matching, Single-Label Classification Model, Llama 3.2 Vision, OpenAI, Seg Preview, SAM 3, Image Preprocessing, Stability AI Inpainting, Perception Encoder Embedding Model, SIFT, Dynamic Crop, Ellipse Visualization, Camera Focus, Anthropic Claude, Keypoint Detection Model, VLM as Detector, Stitch Images, Single-Label Classification Model, Contrast Equalization, Gaze Detection, Depth Estimation, Keypoint Detection Model, Image Threshold, Blur Visualization, Pixelate Visualization, Dominant Color, Corner Visualization, VLM as Detector, EasyOCR, Relative Static Crop, Image Blur, Qwen2.5-VL, Background Subtraction, Image Contours, SIFT Comparison, Barcode Detection, Twilio SMS/MMS Notification, Roboflow Dataset Upload, Google Gemini, Line Counter Visualization, Reference Path Visualization, YOLO-World Model, Segment Anything 2 Model, Image Convert Grayscale, Instance Segmentation Model, Bounding Box Visualization, Triangle Visualization, Perspective Correction, Dot Visualization, SmolVLM2, OpenAI, OpenAI, SAM 3, Background Color Visualization, Multi-Label Classification Model, SAM 3, Trace Visualization, Morphological Transformation, Camera Calibration, Anthropic Claude, Florence-2 Model, Motion Detection, Absolute Static Crop, Stability AI Image Generation, Google Gemini, Clip Comparison, CLIP Embedding Model, Camera Focus, Icon Visualization, Object Detection Model, Crop Visualization, Detections Stabilizer, CogVLM, Object Detection Model, Buffer, LMM, Keypoint Visualization, Polygon Zone Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Camera Calibration in version v1 has.
Bindings
- input
    - image (image): Image to remove distortions from.
    - fx (float): Focal length along the x-axis.
    - fy (float): Focal length along the y-axis.
    - cx (float): Optical center along the x-axis.
    - cy (float): Optical center along the y-axis.
    - k1 (float): Radial distortion coefficient k1.
    - k2 (float): Radial distortion coefficient k2.
    - k3 (float): Radial distortion coefficient k3.
    - p1 (float): Distortion coefficient p1.
    - p2 (float): Distortion coefficient p2.
- output
    - calibrated_image (image): Image in workflows.
Example JSON definition of step Camera Calibration in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera-calibration@v1",
"image": "$inputs.image",
"fx": 0.123,
"fy": 0.123,
"cx": 0.123,
"cy": 0.123,
"k1": 0.123,
"k2": 0.123,
"k3": 0.123,
"p1": 0.123,
"p2": 0.123
}
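For context, below is a sketch of how this step could sit inside a complete workflow specification, with the focal lengths and optical center bound to runtime parameters and the undistorted image exposed as a workflow output. The input and output type names (WorkflowImage, WorkflowParameter, JsonField), the step name camera_calibration, and all numeric values are assumptions made for illustration; verify them against the current Workflows documentation.

```python
# Hypothetical workflow specification embedding the Camera Calibration step.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "fx"},
        {"type": "WorkflowParameter", "name": "fy"},
        {"type": "WorkflowParameter", "name": "cx"},
        {"type": "WorkflowParameter", "name": "cy"},
    ],
    "steps": [
        {
            "name": "camera_calibration",
            "type": "roboflow_core/camera-calibration@v1",
            "image": "$inputs.image",
            # Intrinsics are bound to workflow parameters (see the Refs column);
            # distortion coefficients are given here as static example values.
            "fx": "$inputs.fx",
            "fy": "$inputs.fy",
            "cx": "$inputs.cx",
            "cy": "$inputs.cy",
            "k1": -0.1,
            "k2": 0.01,
            "k3": 0.0,
            "p1": 0.0,
            "p2": 0.0,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "calibrated_image",
            "selector": "$steps.camera_calibration.calibrated_image",
        }
    ],
}
```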