Camera Calibration¶
Class: CameraCalibrationBlockV1
Source: inference.core.workflows.core_steps.transformations.camera_calibration.v1.CameraCalibrationBlockV1
Remove lens distortions from images using camera calibration parameters (focal lengths, optical centers, and distortion coefficients) to correct radial and tangential distortions introduced by camera lenses, producing undistorted images suitable for accurate measurement, geometric analysis, and precision computer vision applications.
How This Block Works¶
Camera lenses introduce distortions that cause straight lines to appear curved and objects near image edges to appear stretched or compressed. This block corrects these distortions using known camera calibration parameters. The block:
- Receives input images and camera calibration parameters (focal lengths fx/fy, optical centers cx/cy, radial distortion coefficients k1/k2/k3, tangential distortion coefficients p1/p2)
- Constructs a camera matrix from the intrinsic parameters (focal lengths and optical centers) in the standard OpenCV format: 3x3 matrix with fx, fy on the diagonal, cx, cy as the optical center, and 1 in the bottom-right corner
- Assembles distortion coefficients into a 5-element array (k1, k2, p1, p2, k3) representing radial and tangential distortion parameters
- Computes an optimal new camera matrix using OpenCV's `getOptimalNewCameraMatrix` to maximize the usable image area after correction (removes black borders that result from distortion correction)
- Applies OpenCV's `undistort` function to correct both radial distortions (barrel and pincushion distortion causing curved lines) and tangential distortions (lens misalignment causing skewed images)
- Returns the corrected, undistorted image with straight lines straightened, edge distortions removed, and geometric accuracy restored
The block uses OpenCV's camera calibration functions under the hood, following standard computer vision camera calibration methodology (see OpenCV calibration tutorial for details on obtaining calibration parameters). Radial distortion coefficients (k1, k2, k3) correct barrel/pincushion distortion where image points are displaced radially from the optical center. Tangential distortion coefficients (p1, p2) correct distortion caused by lens misalignment. The calibration parameters must be obtained beforehand through a camera calibration process (typically using checkerboard patterns) or provided by the camera manufacturer.
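For reference, the correction described above maps onto standard OpenCV calls roughly as in the sketch below. The parameter values and file name are illustrative assumptions, not the block's exact implementation:

```python
import cv2
import numpy as np

# Illustrative calibration values only -- real values must come from
# calibrating the actual camera (or from the manufacturer).
fx, fy = 1000.0, 1000.0        # focal lengths in pixels
cx, cy = 960.0, 540.0          # optical center (principal point)
k1, k2, k3 = -0.25, 0.08, 0.0  # radial distortion coefficients
p1, p2 = 0.0, 0.0              # tangential distortion coefficients

image = cv2.imread("frame.jpg")  # hypothetical input image
h, w = image.shape[:2]

# 3x3 camera matrix in the standard OpenCV layout
camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Distortion coefficients in OpenCV's (k1, k2, p1, p2, k3) order
dist_coeffs = np.array([k1, k2, p1, p2, k3])

# Optimal new camera matrix for the corrected image; the free scaling
# parameter (1.0 here) controls how much of the original frame is kept
# and is an illustrative choice, not necessarily what the block uses.
new_camera_matrix, roi = cv2.getOptimalNewCameraMatrix(
    camera_matrix, dist_coeffs, (w, h), 1.0, (w, h)
)

# Correct radial and tangential distortion
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs, None, new_camera_matrix)
```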
Requirements¶
Camera Calibration Parameters: This block requires pre-computed calibration parameters for the camera that captured the images:
- Focal lengths (fx, fy): Pixel focal lengths along the x and y axes (may differ for non-square pixels)
- Optical centers (cx, cy): Principal point coordinates (the image center in ideal cameras)
- Radial distortion coefficients (k1, k2, k3): Correct barrel and pincushion distortion
- Tangential distortion coefficients (p1, p2): Correct lens misalignment distortion
These parameters are typically obtained using OpenCV's camera calibration process with a checkerboard pattern or similar calibration target. See OpenCV camera calibration documentation for calibration methodology.
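The snippet below is a rough sketch of that calibration process following the standard OpenCV checkerboard workflow; the pattern size, image folder, and variable names are illustrative assumptions:

```python
import glob

import cv2
import numpy as np

# Inner-corner count of the calibration checkerboard (assumed 9x6 here)
pattern_size = (9, 6)

# 3D corner coordinates in the board's own plane (z = 0), in board units
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

object_points, image_points = [], []
for path in glob.glob("calibration_images/*.jpg"):  # hypothetical folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size, None)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# Calibrate: returns the camera matrix and distortion coefficients
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None
)

# Values to plug into the block's properties
fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
k1, k2, p1, p2, k3 = dist_coeffs.ravel()[:5]
```

Calibration accuracy improves with many views of the checkerboard at varied angles and distances; refining corner locations with cv2.cornerSubPix before calibration is also common.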
Common Use Cases¶
- Measurement and Metrology Applications: Correct lens distortions for accurate measurement workflows (e.g., remove distortions before measuring object sizes, correct geometric distortions for precision measurements, undistort images for dimensional analysis), enabling accurate measurements from camera images
- Geometric Analysis Workflows: Prepare images for geometric computer vision tasks (e.g., undistort images before line detection, correct distortions for geometric shape analysis, prepare images for accurate angle measurements), enabling precise geometric analysis with corrected images
- Multi-Camera Systems: Standardize images from multiple cameras with different lens characteristics (e.g., undistort images from different camera angles, correct wide-angle lens distortions, standardize images from multiple cameras for stereo vision), enabling consistent image geometry across camera setups
- Pre-Processing for Precision Models: Prepare images for models requiring high geometric accuracy (e.g., undistort images before running geometric models, correct distortions for accurate feature detection, prepare images for precise pose estimation), enabling better accuracy for geometric computer vision tasks
- Wide-Angle and Fisheye Correction: Correct severe distortions from wide-angle or fisheye lenses (e.g., correct barrel distortion from wide-angle lenses, remove fisheye distortion effects, straighten curved lines in wide-angle images), enabling use of wide-angle lenses with standard computer vision workflows
- Video Stabilization Preparation: Correct lens distortions as part of video stabilization pipelines (e.g., undistort video frames before stabilization, correct camera-specific distortions in video streams, prepare frames for motion analysis), enabling more accurate video processing
Connecting to Other Blocks¶
This block receives images and produces undistorted images:
- After image loading blocks to correct lens distortions before processing, enabling accurate analysis with geometrically correct images
- Before measurement and analysis blocks that require geometric accuracy (e.g., size measurement, angle measurement, distance calculation, geometric shape analysis), enabling precise measurements from undistorted images
- Before geometric computer vision blocks that analyze lines, shapes, or spatial relationships (e.g., line detection, contour analysis, geometric pattern matching, pose estimation), enabling accurate geometric analysis with corrected images
- In multi-camera workflows to standardize images from different cameras before processing (e.g., undistort images from different camera angles, correct camera-specific distortions before comparison, standardize images for stereo vision), enabling consistent processing across camera setups
- Before detection or classification blocks in precision applications where geometric accuracy matters (e.g., detect objects in undistorted images for accurate localization, classify objects in geometrically correct images, run models requiring precise spatial relationships), enabling improved accuracy for detection and classification tasks
- In video processing workflows to correct distortions in video frames (e.g., undistort video frames for motion analysis, correct camera distortions in video streams, prepare frames for tracking algorithms), enabling accurate video analysis with corrected frames
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/camera-calibration@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `fx` | `float` | Focal length along the x-axis in pixels. Part of the camera's intrinsic parameters. Typically obtained through camera calibration (e.g., using OpenCV calibration with a checkerboard pattern). Represents the camera's horizontal focal length. For square pixels, fx and fy are usually equal. Must be obtained from camera calibration or manufacturer specifications. | ✅ |
| `fy` | `float` | Focal length along the y-axis in pixels. Part of the camera's intrinsic parameters. Typically obtained through camera calibration (e.g., using OpenCV calibration with a checkerboard pattern). Represents the camera's vertical focal length. For square pixels, fx and fy are usually equal. Must be obtained from camera calibration or manufacturer specifications. | ✅ |
| `cx` | `float` | Optical center (principal point) x-coordinate in pixels. Part of the camera's intrinsic parameters representing the x-coordinate of the camera's principal point (image center in ideal cameras). Typically near half the image width. Obtained through camera calibration. Used with cy to define the optical center of the camera. | ✅ |
| `cy` | `float` | Optical center (principal point) y-coordinate in pixels. Part of the camera's intrinsic parameters representing the y-coordinate of the camera's principal point (image center in ideal cameras). Typically near half the image height. Obtained through camera calibration. Used with cx to define the optical center of the camera. | ✅ |
| `k1` | `float` | First radial distortion coefficient. Part of the camera's distortion parameters used to correct barrel and pincushion distortion (where straight lines appear curved). k1 is typically the dominant radial distortion term. Positive values often indicate barrel distortion, negative values indicate pincushion distortion. Obtained through camera calibration. | ✅ |
| `k2` | `float` | Second radial distortion coefficient. Part of the camera's distortion parameters used to correct higher-order radial distortion effects. k2 helps correct more complex radial distortion patterns beyond the first-order k1 term. Obtained through camera calibration. Often smaller in magnitude than k1. | ✅ |
| `k3` | `float` | Third radial distortion coefficient. Part of the camera's distortion parameters used to correct additional higher-order radial distortion effects. k3 is typically the smallest radial distortion term and is used for very precise distortion correction, especially for wide-angle lenses. Obtained through camera calibration. Often set to 0 for standard lenses. | ✅ |
| `p1` | `float` | First tangential distortion coefficient. Part of the camera's distortion parameters used to correct tangential distortion caused by lens misalignment. p1 corrects skew distortions where the lens is not perfectly aligned with the image sensor. Obtained through camera calibration. For well-aligned lenses, p1 and p2 are often close to zero. | ✅ |
| `p2` | `float` | Second tangential distortion coefficient. Part of the camera's distortion parameters used to correct additional tangential distortion effects. p2 works together with p1 to correct lens misalignment distortions. Obtained through camera calibration. For well-aligned lenses, p1 and p2 are often close to zero. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
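For example, any property marked ✅ can reference a workflow input selector instead of a literal value. Below is a minimal sketch of such a step definition, written as a Python dict; the workflow parameter names fx, fy, cx, and cy are assumptions that would need matching workflow inputs:

```python
# Hypothetical step definition with the intrinsics bound to workflow inputs.
# Assumes the workflow declares input parameters named "fx", "fy", "cx", "cy".
calibration_step = {
    "name": "camera_calibration",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": "$inputs.fx",
    "fy": "$inputs.fy",
    "cx": "$inputs.cx",
    "cy": "$inputs.cy",
    # Distortion coefficients can still be fixed literals (illustrative values)
    "k1": -0.25,
    "k2": 0.08,
    "k3": 0.0,
    "p1": 0.0,
    "p2": 0.0,
}
```

Downstream steps can then consume the undistorted result through this step's calibrated_image output (typically referenced as $steps.camera_calibration.calibrated_image).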
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Calibration in version v1.
- inputs:
Contrast Equalization,Image Contours,Image Slicer,Depth Estimation,Polygon Visualization,QR Code Generator,Image Blur,SIFT Comparison,Stitch Images,Dynamic Crop,Bounding Box Visualization,Text Display,Model Comparison Visualization,Camera Focus,SIFT,Line Counter Visualization,Blur Visualization,Morphological Transformation,Camera Calibration,Polygon Zone Visualization,Mask Visualization,Relative Static Crop,Cosine Similarity,Keypoint Visualization,Circle Visualization,Camera Focus,Trace Visualization,Pixelate Visualization,Color Visualization,Absolute Static Crop,Image Slicer,Stability AI Inpainting,Reference Path Visualization,Dot Visualization,Label Visualization,Perspective Correction,Ellipse Visualization,Crop Visualization,Halo Visualization,Image Threshold,Grid Visualization,Image Convert Grayscale,Corner Visualization,Image Preprocessing,Classification Label Visualization,Background Color Visualization,Stability AI Outpainting,Identify Changes,Icon Visualization,Gaze Detection,Triangle Visualization,Stability AI Image Generation,Background Subtraction
- outputs:
Contrast Equalization,Llama 3.2 Vision,Clip Comparison,Anthropic Claude,VLM as Detector,Polygon Visualization,Image Blur,SIFT Comparison,SmolVLM2,CLIP Embedding Model,Roboflow Dataset Upload,Text Display,Motion Detection,SIFT,Model Comparison Visualization,Camera Focus,Moondream2,LMM,Qwen3-VL,Single-Label Classification Model,Google Vision OCR,SAM 3,Anthropic Claude,Relative Static Crop,Mask Visualization,Object Detection Model,Keypoint Detection Model,Circle Visualization,Seg Preview,EasyOCR,Pixelate Visualization,Stability AI Inpainting,Multi-Label Classification Model,Time in Zone,VLM as Classifier,Reference Path Visualization,Instance Segmentation Model,Perspective Correction,Halo Visualization,Image Threshold,Ellipse Visualization,Crop Visualization,Keypoint Detection Model,Florence-2 Model,Detections Stabilizer,Image Convert Grayscale,Perception Encoder Embedding Model,Corner Visualization,Image Preprocessing,Barcode Detection,Icon Visualization,SAM 3,Background Subtraction,Segment Anything 2 Model,Qwen2.5-VL,Image Slicer,Image Contours,Depth Estimation,Multi-Label Classification Model,Pixel Color Count,Detections Stitch,Stitch Images,QR Code Detection,Dynamic Crop,Bounding Box Visualization,Anthropic Claude,VLM as Classifier,YOLO-World Model,Instance Segmentation Model,Line Counter Visualization,Blur Visualization,Morphological Transformation,Camera Calibration,Polygon Zone Visualization,Single-Label Classification Model,Email Notification,Stability AI Image Generation,Dominant Color,OCR Model,Keypoint Visualization,Google Gemini,OpenAI,Camera Focus,Trace Visualization,CogVLM,OpenAI,Image Slicer,Absolute Static Crop,Color Visualization,Dot Visualization,Label Visualization,Buffer,Florence-2 Model,Google Gemini,Google Gemini,Object Detection Model,LMM For Classification,Template Matching,OpenAI,OpenAI,Classification Label Visualization,Background Color Visualization,Stability AI Outpainting,Byte Tracker,SAM 3,Twilio SMS/MMS Notification,Roboflow Dataset Upload,Gaze Detection,Clip Comparison,Triangle Visualization,VLM as Detector
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Camera Calibration in version v1 has.
Bindings
- input
    - `image` (image): Input image to remove lens distortions from. The image will be corrected for radial and tangential distortions using the provided camera calibration parameters. Works with images from cameras with known calibration parameters. The undistorted output image will have corrected geometry with straight lines straightened and edge distortions removed.
    - `fx` (float): Focal length along the x-axis in pixels. Part of the camera's intrinsic parameters. Typically obtained through camera calibration (e.g., using OpenCV calibration with a checkerboard pattern). Represents the camera's horizontal focal length. For square pixels, fx and fy are usually equal. Must be obtained from camera calibration or manufacturer specifications.
    - `fy` (float): Focal length along the y-axis in pixels. Part of the camera's intrinsic parameters. Typically obtained through camera calibration (e.g., using OpenCV calibration with a checkerboard pattern). Represents the camera's vertical focal length. For square pixels, fx and fy are usually equal. Must be obtained from camera calibration or manufacturer specifications.
    - `cx` (float): Optical center (principal point) x-coordinate in pixels. Part of the camera's intrinsic parameters representing the x-coordinate of the camera's principal point (image center in ideal cameras). Typically near half the image width. Obtained through camera calibration. Used with cy to define the optical center of the camera.
    - `cy` (float): Optical center (principal point) y-coordinate in pixels. Part of the camera's intrinsic parameters representing the y-coordinate of the camera's principal point (image center in ideal cameras). Typically near half the image height. Obtained through camera calibration. Used with cx to define the optical center of the camera.
    - `k1` (float): First radial distortion coefficient. Part of the camera's distortion parameters used to correct barrel and pincushion distortion (where straight lines appear curved). k1 is typically the dominant radial distortion term. Positive values often indicate barrel distortion, negative values indicate pincushion distortion. Obtained through camera calibration.
    - `k2` (float): Second radial distortion coefficient. Part of the camera's distortion parameters used to correct higher-order radial distortion effects. k2 helps correct more complex radial distortion patterns beyond the first-order k1 term. Obtained through camera calibration. Often smaller in magnitude than k1.
    - `k3` (float): Third radial distortion coefficient. Part of the camera's distortion parameters used to correct additional higher-order radial distortion effects. k3 is typically the smallest radial distortion term and is used for very precise distortion correction, especially for wide-angle lenses. Obtained through camera calibration. Often set to 0 for standard lenses.
    - `p1` (float): First tangential distortion coefficient. Part of the camera's distortion parameters used to correct tangential distortion caused by lens misalignment. p1 corrects skew distortions where the lens is not perfectly aligned with the image sensor. Obtained through camera calibration. For well-aligned lenses, p1 and p2 are often close to zero.
    - `p2` (float): Second tangential distortion coefficient. Part of the camera's distortion parameters used to correct additional tangential distortion effects. p2 works together with p1 to correct lens misalignment distortions. Obtained through camera calibration. For well-aligned lenses, p1 and p2 are often close to zero.
- output
    - `calibrated_image` (image): The undistorted image with lens distortions corrected.
Example JSON definition of step Camera Calibration in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/camera-calibration@v1",
    "image": "$inputs.image",
    "fx": 0.123,
    "fy": 0.123,
    "cx": 0.123,
    "cy": 0.123,
    "k1": 0.123,
    "k2": 0.123,
    "k3": 0.123,
    "p1": 0.123,
    "p2": 0.123
}
```