Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block uses OpenCV's camera model to remove lens distortion from an image. The calibration parameters are supplied as inputs to the block rather than estimated by it; please refer to the OpenCV documentation, where the camera calibration methodology is described:
https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
This block requires the following parameters in order to perform the undistortion:
- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical center along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)
Based on the above parameters, the camera matrix is built as follows:
[[fx  0  cx]
 [ 0  fy  cy]
 [ 0   0   1]]
The distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3).
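To make the parameter layout concrete, here is a minimal numpy sketch of the camera matrix and the distortion model those coefficients parameterize. This follows the same matrix layout and (k1, k2, p1, p2, k3) ordering that OpenCV undistortion functions such as cv2.undistort expect; all numeric values below are placeholders, not real calibration results:

```python
import numpy as np

# Placeholder intrinsics -- illustrative values, not a real calibration.
fx, fy = 1000.0, 1000.0        # focal lengths (pixels)
cx, cy = 640.0, 360.0          # optical center (pixels)
k1, k2, k3 = 0.1, -0.05, 0.0   # radial distortion coefficients
p1, p2 = 0.001, -0.001         # tangential distortion coefficients

# Camera matrix assembled exactly as described above.
camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Distortion coefficients in OpenCV's (k1, k2, p1, p2, k3) order.
dist_coeffs = np.array([k1, k2, p1, p2, k3])

def distort_point(x, y):
    """Apply the radial + tangential distortion model to a normalized point."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Project a distorted normalized point back to pixel coordinates.
x_d, y_d = distort_point(0.1, 0.2)
u, v = fx * x_d + cx, fy * y_d + cy
```

Undistortion inverts this model: given the matrix and coefficients, the block maps each output pixel back through the distortion to sample the source image.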
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/camera-calibration@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| fx | float | Focal length along the x-axis. | ✅ |
| fy | float | Focal length along the y-axis. | ✅ |
| cx | float | Optical center along the x-axis. | ✅ |
| cy | float | Optical center along the y-axis. | ✅ |
| k1 | float | Radial distortion coefficient k1. | ✅ |
| k2 | float | Radial distortion coefficient k2. | ✅ |
| k3 | float | Radial distortion coefficient k3. | ✅ |
| p1 | float | Tangential distortion coefficient p1. | ✅ |
| p2 | float | Tangential distortion coefficient p2. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
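Since every numeric property above accepts dynamic values, a step definition can reference workflow inputs instead of literals. A minimal sketch is below; the input selector names such as calibration_fx are illustrative placeholders, not names required by the block:

```python
# Camera Calibration step with fx/fy bound to workflow inputs rather than
# hard-coded; the "$inputs.*" selector names here are illustrative.
step = {
    "name": "camera_calibration",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": "$inputs.calibration_fx",  # dynamic: resolved at workflow runtime
    "fy": "$inputs.calibration_fy",
    "cx": 640.0,                     # literals and selectors can be mixed
    "cy": 360.0,
    "k1": 0.0,
    "k2": 0.0,
    "k3": 0.0,
    "p1": 0.0,
    "p2": 0.0,
}
```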
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs:
Polygon Visualization,Grid Visualization,Background Color Visualization,Stitch Images,Image Slicer,Corner Visualization,Trace Visualization,Classification Label Visualization,Camera Calibration,Mask Visualization,Bounding Box Visualization,Model Comparison Visualization,Image Convert Grayscale,Pixelate Visualization,Polygon Zone Visualization,Relative Static Crop,Crop Visualization,Ellipse Visualization,Triangle Visualization,Image Slicer,Camera Focus,Gaze Detection,Icon Visualization,QR Code Generator,Color Visualization,Label Visualization,Keypoint Visualization,Contrast Equalization,Identify Changes,Image Contours,Blur Visualization,Dot Visualization,Circle Visualization,Stability AI Image Generation,Perspective Correction,Absolute Static Crop,Reference Path Visualization,Morphological Transformation,Cosine Similarity,Image Blur,Image Threshold,SIFT,Line Counter Visualization,Depth Estimation,Stability AI Outpainting,Halo Visualization,Stability AI Inpainting,Image Preprocessing,Dynamic Crop,SIFT Comparison
- outputs:
LMM,Background Color Visualization,Stitch Images,Image Slicer,VLM as Classifier,Corner Visualization,Camera Calibration,Mask Visualization,CLIP Embedding Model,Object Detection Model,Barcode Detection,Model Comparison Visualization,QR Code Detection,Keypoint Detection Model,Pixelate Visualization,Time in Zone,Anthropic Claude,Relative Static Crop,Google Gemini,Florence-2 Model,Multi-Label Classification Model,Ellipse Visualization,Triangle Visualization,Segment Anything 2 Model,Camera Focus,OCR Model,Dominant Color,Label Visualization,Pixel Color Count,SmolVLM2,Florence-2 Model,LMM For Classification,Blur Visualization,Single-Label Classification Model,Dot Visualization,Stability AI Image Generation,Google Vision OCR,Llama 3.2 Vision,EasyOCR,Detections Stabilizer,Absolute Static Crop,SAM 3,Morphological Transformation,Image Blur,Clip Comparison,Image Threshold,Depth Estimation,Stability AI Outpainting,Halo Visualization,Qwen2.5-VL,Stability AI Inpainting,Byte Tracker,Polygon Visualization,OpenAI,Roboflow Dataset Upload,Template Matching,CogVLM,Classification Label Visualization,Trace Visualization,Email Notification,VLM as Detector,Instance Segmentation Model,Bounding Box Visualization,Image Convert Grayscale,Perception Encoder Embedding Model,Polygon Zone Visualization,OpenAI,Clip Comparison,Detections Stitch,Keypoint Detection Model,Crop Visualization,Object Detection Model,Image Slicer,YOLO-World Model,Multi-Label Classification Model,Gaze Detection,Moondream2,Seg Preview,Icon Visualization,Color Visualization,Roboflow Dataset Upload,Keypoint Visualization,Contrast Equalization,Buffer,Image Contours,Instance Segmentation Model,Circle Visualization,OpenAI,VLM as Classifier,Reference Path Visualization,VLM as Detector,Dynamic Crop,Single-Label Classification Model,SIFT,Line Counter Visualization,Image Preprocessing,Perspective Correction,SIFT Comparison
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Camera Calibration in version v1 has.
Bindings
- input
  - image (image): Image to remove distortions from.
  - fx (float): Focal length along the x-axis.
  - fy (float): Focal length along the y-axis.
  - cx (float): Optical center along the x-axis.
  - cy (float): Optical center along the y-axis.
  - k1 (float): Radial distortion coefficient k1.
  - k2 (float): Radial distortion coefficient k2.
  - k3 (float): Radial distortion coefficient k3.
  - p1 (float): Tangential distortion coefficient p1.
  - p2 (float): Tangential distortion coefficient p2.
- output
  - calibrated_image (image): Image with lens distortion removed.
Example JSON definition of step Camera Calibration in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": 0.123,
    "fy": 0.123,
    "cx": 0.123,
    "cy": 0.123,
    "k1": 0.123,
    "k2": 0.123,
    "k3": 0.123,
    "p1": 0.123,
    "p2": 0.123
}
```