Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
This block uses the OpenCV calibrateCamera function to remove lens distortions from an image.
Please refer to the OpenCV documentation, where the camera calibration methodology is described:
https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html

This block requires the following parameters in order to perform the calibration:

- Lens focal length along the x-axis and y-axis (fx, fy)
- Lens optical center along the x-axis and y-axis (cx, cy)
- Radial distortion coefficients (k1, k2, k3)
- Tangential distortion coefficients (p1, p2)

Based on the above parameters, the camera matrix is built as follows: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

The distortion coefficients are passed as the 5-tuple (k1, k2, p1, p2, k3).
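For reference, the sketch below shows how these parameters are typically used with OpenCV to undistort an image. It is a minimal illustration rather than the block's exact implementation; the image path and all numeric values are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients - substitute the values
# obtained from your own calibration procedure.
fx, fy = 2399.0, 2399.0        # focal lengths (pixels)
cx, cy = 1920.0, 1080.0        # optical center (pixels)
k1, k2, k3 = -0.11, 0.02, 0.0  # radial distortion coefficients
p1, p2 = 0.0, 0.0              # tangential distortion coefficients

# Camera matrix built exactly as described above.
camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Distortion coefficients passed as the 5-tuple (k1, k2, p1, p2, k3).
dist_coeffs = np.array([k1, k2, p1, p2, k3])

image = cv2.imread("frame.jpg")  # placeholder input image
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
cv2.imwrite("frame_undistorted.jpg", undistorted)
```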
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera-calibration@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
fx | float | Focal length along the x-axis. | ✅ |
fy | float | Focal length along the y-axis. | ✅ |
cx | float | Optical center along the x-axis. | ✅ |
cy | float | Optical center along the y-axis. | ✅ |
k1 | float | Radial distortion coefficient k1. | ✅ |
k2 | float | Radial distortion coefficient k2. | ✅ |
k3 | float | Radial distortion coefficient k3. | ✅ |
p1 | float | Tangential distortion coefficient p1. | ✅ |
p2 | float | Tangential distortion coefficient p2. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at
workflow runtime. See Bindings for more info.
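For illustration, a property marked ✅ may be given either as a literal value or as a selector resolved at runtime. The sketch below shows both forms for fx; it assumes a workflow parameter named fx has been declared, and the numeric value is a placeholder.

```python
# Hypothetical step configurations illustrating the Refs column: properties
# marked as parametrisable accept either literals or runtime selectors.
step_with_literal = {
    "type": "roboflow_core/camera-calibration@v1",
    "name": "camera_calibration",
    "image": "$inputs.image",
    "fx": 2399.0,  # literal value baked into the workflow definition
    # fy, cx, cy, k1, k2, k3, p1, p2 configured the same way
}

step_with_selector = {
    "type": "roboflow_core/camera-calibration@v1",
    "name": "camera_calibration",
    "image": "$inputs.image",
    "fx": "$inputs.fx",  # resolved from a workflow parameter at runtime
    # fy, cx, cy, k1, k2, k3, p1, p2 may mix literals and selectors
}
```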
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs: Crop Visualization, SIFT, Stability AI Image Generation, Triangle Visualization, Blur Visualization, Background Color Visualization, Relative Static Crop, Color Visualization, Image Contours, Camera Focus, Corner Visualization, Line Counter Visualization, Icon Visualization, Mask Visualization, Image Convert Grayscale, Circle Visualization, Image Blur, Pixelate Visualization, Cosine Similarity, Absolute Static Crop, Model Comparison Visualization, Gaze Detection, Image Threshold, Reference Path Visualization, Image Slicer, Stitch Images, Depth Estimation, Trace Visualization, Image Preprocessing, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, Grid Visualization, Bounding Box Visualization, Camera Calibration, Polygon Zone Visualization, Ellipse Visualization, QR Code Generator, Halo Visualization, Perspective Correction, Stability AI Inpainting, Image Slicer, SIFT Comparison, Label Visualization, Identify Changes, Dynamic Crop
- outputs: QR Code Detection, Anthropic Claude, Crop Visualization, SIFT, LMM For Classification, Blur Visualization, Line Counter Visualization, Color Visualization, Image Contours, Camera Focus, Mask Visualization, Image Convert Grayscale, Google Gemini, Circle Visualization, Absolute Static Crop, VLM as Classifier, Object Detection Model, Multi-Label Classification Model, Keypoint Detection Model, Stitch Images, Trace Visualization, Image Preprocessing, Qwen2.5-VL, OCR Model, Object Detection Model, Clip Comparison, SmolVLM2, Polygon Zone Visualization, LMM, YOLO-World Model, Halo Visualization, CLIP Embedding Model, Florence-2 Model, Moondream2, Perspective Correction, Stability AI Inpainting, Buffer, Template Matching, Label Visualization, VLM as Detector, Pixel Color Count, Segment Anything 2 Model, Perception Encoder Embedding Model, Stability AI Image Generation, Keypoint Detection Model, Triangle Visualization, Background Color Visualization, Relative Static Crop, Detections Stabilizer, Corner Visualization, Multi-Label Classification Model, Icon Visualization, Pixelate Visualization, Image Blur, Gaze Detection, Model Comparison Visualization, VLM as Detector, Llama 3.2 Vision, Time in Zone, Instance Segmentation Model, Image Threshold, VLM as Classifier, Google Vision OCR, Reference Path Visualization, Image Slicer, Roboflow Dataset Upload, CogVLM, Byte Tracker, Barcode Detection, Depth Estimation, Roboflow Dataset Upload, Single-Label Classification Model, OpenAI, Classification Label Visualization, Polygon Visualization, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, OpenAI, Single-Label Classification Model, Bounding Box Visualization, Camera Calibration, Ellipse Visualization, OpenAI, Florence-2 Model, Image Slicer, Instance Segmentation Model, SIFT Comparison, Detections Stitch, Dominant Color, Clip Comparison, Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Camera Calibration in version v1 has.
Bindings
- input:
  - image (image): Image to remove distortions from.
  - fx (float): Focal length along the x-axis.
  - fy (float): Focal length along the y-axis.
  - cx (float): Optical center along the x-axis.
  - cy (float): Optical center along the y-axis.
  - k1 (float): Radial distortion coefficient k1.
  - k2 (float): Radial distortion coefficient k2.
  - k3 (float): Radial distortion coefficient k3.
  - p1 (float): Tangential distortion coefficient p1.
  - p2 (float): Tangential distortion coefficient p2.
- output:
  - calibrated_image (image): Image with lens distortion removed.
Example JSON definition of step Camera Calibration in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera-calibration@v1",
"image": "$inputs.image",
"fx": 0.123,
"fy": 0.123,
"cx": 0.123,
"cy": 0.123,
"k1": 0.123,
"k2": 0.123,
"k3": 0.123,
"p1": 0.123,
"p2": 0.123
}
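As a follow-up, here is a minimal, hypothetical sketch of how such a step could be embedded in a complete workflow specification and executed with inference_sdk. The server URL, API key, image path, and all calibration values are placeholder assumptions, and the exact client API should be verified against the inference_sdk documentation.

```python
from inference_sdk import InferenceHTTPClient

# Hypothetical workflow specification embedding the camera calibration step.
CALIBRATION_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/camera-calibration@v1",
            "name": "camera_calibration",
            "image": "$inputs.image",
            "fx": 2399.0,   # placeholder intrinsics
            "fy": 2399.0,
            "cx": 1920.0,
            "cy": 1080.0,
            "k1": -0.11,    # placeholder distortion coefficients
            "k2": 0.02,
            "k3": 0.0,
            "p1": 0.0,
            "p2": 0.0,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "calibrated_image",
            "selector": "$steps.camera_calibration.calibrated_image",
        }
    ],
}

# Assumes an inference server running locally on port 9001.
client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<YOUR_API_KEY>")
result = client.run_workflow(
    specification=CALIBRATION_WORKFLOW,
    images={"image": "frame.jpg"},  # path, URL, or numpy array
)
```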