Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block uses the OpenCV calibrateCamera function to remove lens distortions from an image.
Please refer to the OpenCV documentation, where the camera calibration methodology is described:
https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
This block requires the following parameters in order to perform the calibration:
- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical center along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)
Based on the above parameters, the camera matrix is built as follows:

    [[fx  0 cx]
     [ 0 fy cy]
     [ 0  0  1]]

The distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3).
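The block builds the camera matrix and coefficient tuple from the step parameters internally. Purely as an illustrative sketch of the same operation in OpenCV (not this block's source code), with placeholder parameter values and file paths:

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients -- substitute the values
# obtained from your own camera calibration.
fx, fy = 1000.0, 1000.0        # focal lengths along x and y (pixels)
cx, cy = 640.0, 360.0          # optical center along x and y (pixels)
k1, k2, k3 = -0.1, 0.01, 0.0   # radial distortion coefficients
p1, p2 = 0.001, 0.001          # tangential distortion coefficients

# Camera matrix built as described above.
camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Distortion coefficients passed as the 5-tuple (k1, k2, p1, p2, k3).
dist_coeffs = np.array([k1, k2, p1, p2, k3])

image = cv2.imread("distorted.jpg")  # placeholder input path
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted.jpg", undistorted)
```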
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/camera-calibration@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| fx | float | Focal length along the x-axis. | ✅ |
| fy | float | Focal length along the y-axis. | ✅ |
| cx | float | Optical center along the x-axis. | ✅ |
| cy | float | Optical center along the y-axis. | ✅ |
| k1 | float | Radial distortion coefficient k1. | ✅ |
| k2 | float | Radial distortion coefficient k2. | ✅ |
| k3 | float | Radial distortion coefficient k3. | ✅ |
| p1 | float | Tangential distortion coefficient p1. | ✅ |
| p2 | float | Tangential distortion coefficient p2. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs:
Label Visualization,Cosine Similarity,Blur Visualization,Background Color Visualization,Contrast Equalization,Bounding Box Visualization,Camera Calibration,Polygon Visualization,Stability AI Outpainting,Image Slicer,Keypoint Visualization,Reference Path Visualization,Pixelate Visualization,Icon Visualization,Identify Changes,Triangle Visualization,Model Comparison Visualization,Corner Visualization,Image Preprocessing,Color Visualization,SIFT Comparison,Line Counter Visualization,Grid Visualization,Stitch Images,Halo Visualization,Stability AI Image Generation,QR Code Generator,Circle Visualization,Image Contours,Relative Static Crop,Dot Visualization,Polygon Zone Visualization,Ellipse Visualization,Image Blur,Absolute Static Crop,Depth Estimation,Image Slicer,Morphological Transformation,Stability AI Inpainting,Dynamic Crop,Gaze Detection,Camera Focus,Crop Visualization,Image Threshold,Perspective Correction,Image Convert Grayscale,Mask Visualization,Trace Visualization,Classification Label Visualization,SIFT
- outputs:
Google Vision OCR,Label Visualization,LMM For Classification,Blur Visualization,Background Color Visualization,Contrast Equalization,Reference Path Visualization,Keypoint Visualization,Stability AI Outpainting,Bounding Box Visualization,Image Slicer,Pixelate Visualization,Single-Label Classification Model,Clip Comparison,SAM 3,Perception Encoder Embedding Model,Seg Preview,Byte Tracker,Image Preprocessing,SAM 3,Color Visualization,SIFT Comparison,Qwen2.5-VL,Object Detection Model,Dominant Color,Anthropic Claude,Circle Visualization,Image Contours,Object Detection Model,QR Code Detection,Polygon Zone Visualization,Ellipse Visualization,Email Notification,Clip Comparison,Moondream2,VLM as Classifier,OCR Model,Absolute Static Crop,Depth Estimation,LMM,Time in Zone,Morphological Transformation,Roboflow Dataset Upload,Gaze Detection,Crop Visualization,OpenAI,Florence-2 Model,Barcode Detection,Image Convert Grayscale,SAM 3,CogVLM,VLM as Detector,Multi-Label Classification Model,Classification Label Visualization,Buffer,Keypoint Detection Model,Segment Anything 2 Model,Keypoint Detection Model,YOLO-World Model,Polygon Visualization,CLIP Embedding Model,Camera Calibration,Icon Visualization,Triangle Visualization,Template Matching,Roboflow Dataset Upload,Anthropic Claude,Model Comparison Visualization,Corner Visualization,Florence-2 Model,Google Gemini,Google Gemini,EasyOCR,VLM as Detector,Line Counter Visualization,SmolVLM2,Halo Visualization,Stability AI Image Generation,Relative Static Crop,Dot Visualization,Detections Stitch,Llama 3.2 Vision,Image Blur,OpenAI,Instance Segmentation Model,Multi-Label Classification Model,Image Slicer,OpenAI,Stability AI Inpainting,Dynamic Crop,Single-Label Classification Model,Camera Focus,Pixel Color Count,Detections Stabilizer,Instance Segmentation Model,VLM as Classifier,Mask Visualization,Perspective Correction,Image Threshold,OpenAI,Trace Visualization,Stitch Images,SIFT
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Camera Calibration in version v1 has.
Bindings
- input
  - image (image): Image to remove distortions from.
  - fx (float): Focal length along the x-axis.
  - fy (float): Focal length along the y-axis.
  - cx (float): Optical center along the x-axis.
  - cy (float): Optical center along the y-axis.
  - k1 (float): Radial distortion coefficient k1.
  - k2 (float): Radial distortion coefficient k2.
  - k3 (float): Radial distortion coefficient k3.
  - p1 (float): Tangential distortion coefficient p1.
  - p2 (float): Tangential distortion coefficient p2.
- output
  - calibrated_image (image): Image in workflows.
Example JSON definition of step Camera Calibration in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera-calibration@v1",
"image": "$inputs.image",
"fx": 0.123,
"fy": 0.123,
"cx": 0.123,
"cy": 0.123,
"k1": 0.123,
"k2": 0.123,
"k3": 0.123,
"p1": 0.123,
"p2": 0.123
}