Stitch Images¶
Class: StitchImagesBlockV1
Source: inference.core.workflows.core_steps.transformations.stitch_images.v1.StitchImagesBlockV1
Stitch two overlapping images into a single panoramic image using SIFT (Scale-Invariant Feature Transform) feature matching and homography-based alignment. The block automatically detects features common to both images, computes the geometric transformation between them, and blends the images into a seamless panoramic composition.
How This Block Works¶
This block stitches two overlapping images together by detecting common features, calculating geometric transformations, and aligning the images into a single panoramic result. The block:
- Receives two input images (image1 and image2) that contain overlapping regions with sufficient detail for feature matching
- Detects keypoints and computes descriptors using SIFT (Scale-Invariant Feature Transform) for both images:
    - Identifies distinctive feature points (keypoints) in each image that are invariant to scale and rotation
    - Computes feature descriptors (128-dimensional vectors) describing the visual characteristics around each keypoint
- Matches keypoints between the two images using brute-force matching:
    - Finds the best matching descriptors for each keypoint in image1 among all keypoints in image2
    - Uses k-nearest-neighbor matching (configurable via count_of_best_matches_per_query_descriptor) to find multiple potential matches per query keypoint
- Filters good matches using Lowe's ratio test:
    - Compares the distance to the best match with the distance to the second-best match
    - Keeps matches where the best-match distance is less than 0.75 times the second-best-match distance (reduces false matches)
- Determines image ordering based on keypoint positions (identifies which image should be placed first based on the spatial distribution of matched features)
- Calculates the homography transformation matrix using RANSAC (Random Sample Consensus):
    - Finds a perspective transformation matrix that maps points from one image to the other
    - Uses RANSAC to robustly estimate the transformation while filtering out outlier matches
    - The configurable maximum reprojection error (max_allowed_reprojection_error) controls which point pairs are considered inliers
- Calculates canvas size and translation:
    - Determines the canvas size needed to contain both images after transformation
    - Calculates the translation needed to ensure both images fit within the canvas boundaries
- Warps the second image using the homography transformation:
    - Applies the perspective transformation to align the second image with the first
    - Combines the homography matrix with the translation matrix for correct positioning
- Stitches the images together:
    - Places the first image onto the warped second image's canvas
    - Creates the final stitched panoramic image containing both input images, aligned and blended
- Returns the stitched image, or None if stitching fails (e.g., insufficient matches, transformation calculation failure)
The block uses SIFT for robust feature detection, which works well with images containing sufficient detail and texture. The RANSAC-based homography calculation handles perspective distortions and ensures robust alignment even when some matches are incorrect. The reprojection error threshold controls the sensitivity of the alignment: lower values require more precise matches, while higher values (useful for low-detail images) allow more tolerance for matching variations.
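The pipeline described above maps closely onto plain OpenCV primitives. The following is a minimal sketch, not the block's actual source code: `stitch_pair` and its parameter defaults are hypothetical names mirroring `max_allowed_reprojection_error` and `count_of_best_matches_per_query_descriptor`, the 0.75 ratio follows Lowe's test as described above, and the image-ordering step is omitted for brevity.

```python
import cv2
import numpy as np

def stitch_pair(image1, image2, max_reproj_error=3.0, k_best_matches=2):
    """Illustrative sketch of SIFT + RANSAC stitching, not the block's source."""
    # 1. Detect SIFT keypoints and compute 128-dimensional descriptors.
    gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return None  # not enough texture for feature detection

    # 2. Brute-force k-NN matching: k candidate matches per query descriptor
    #    (the block's count_of_best_matches_per_query_descriptor).
    matcher = cv2.BFMatcher()
    knn_matches = matcher.knnMatch(des1, des2, k=k_best_matches)

    # 3. Lowe's ratio test: keep a match only if its distance is below
    #    0.75x the distance of the second-best candidate.
    good = [m[0] for m in knn_matches
            if len(m) >= 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:
        return None  # a homography needs at least 4 point pairs

    # 4. Robust homography with RANSAC; max_reproj_error is passed as
    #    cv2.findHomography's ransacReprojThreshold.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, max_reproj_error)
    if H is None:
        return None

    # 5. Canvas size and translation: warp image2's corners into image1's
    #    frame and take the bounding box of both images' corners.
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((corners1, cv2.perspectiveTransform(corners2, H)))
    x_min, y_min = np.int32(all_corners.min(axis=0).ravel() - 0.5)
    x_max, y_max = np.int32(all_corners.max(axis=0).ravel() + 0.5)
    translation = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)

    # 6. Warp image2 with translation @ homography, then overlay image1.
    canvas = cv2.warpPerspective(image2, translation @ H, (x_max - x_min, y_max - y_min))
    canvas[-y_min:h1 - y_min, -x_min:w1 - x_min] = image1
    return canvas
```

Composing the translation with the homography before warping keeps content that would otherwise land at negative coordinates inside the output canvas; overlaying image1 last is a simple overwrite blend, a simplification of the blending the block performs.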
Common Use Cases¶
- Panoramic Image Creation: Stitch overlapping images into wide panoramic views, e.g., combining overlapping camera shots or frames from a rotating camera into a single panorama
- Wide-Area Scene Reconstruction: Combine multiple overlapping views of a scene into a single comprehensive image, e.g., merging overlapping surveillance camera views or images captured from multiple viewpoints
- Multi-Image Mosaicking: Build image mosaics from overlapping tiles or sections, e.g., stitching image tiles for large-scale mapping or combining overlapping satellite image sections
- Scene Documentation: Combine multiple overlapping photos to document large scenes or areas that cannot be captured in a single shot
- Video Frame Stitching: Stitch overlapping frames from video sequences, e.g., creating panoramic views from consecutive frames of a moving camera
- Multi-Camera View Combination: Merge overlapping feeds from multiple cameras into a single unified view for monitoring
Connecting to Other Blocks¶
This block receives two images and produces a single stitched image:
- After image input or preprocessing blocks, to stitch images that have been enhanced or filtered
- After crop blocks, to stitch cropped regions from different sources into a panorama
- After transformation blocks, to stitch images that have been perspective-corrected or otherwise geometrically adjusted
- Before detection or analysis blocks that benefit from a panoramic view, e.g., detecting objects across a wide stitched scene
- Before visualization blocks, to display the stitched panoramic image
- In multi-stage workflows where images must be stitched before further processing
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/stitch_images@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| max_allowed_reprojection_error | float | Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to cv.findHomography's ransacReprojThreshold parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings. | ✅ |
| count_of_best_matches_per_query_descriptor | int | Number of best matches to find per query descriptor during keypoint matching. This corresponds to cv.BFMatcher.knnMatch's k parameter. Must be greater than 0. The block finds the k nearest-neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best-match distance with the second-best-match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds the 2 best matches per descriptor). Typical values range from 2-5. Use higher values if you need more match candidates for difficult images. | ✅ |
The Refs column indicates whether the property can be parametrized with dynamic values available at workflow runtime. See Bindings for more info.
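As a sketch of what that parametrization looks like, a ✅ property can be bound to a workflow input instead of a literal value. The input name reproj_error below is hypothetical and would need to be declared among the workflow's inputs (e.g., as a WorkflowParameter):

```json
{
    "name": "stitcher",
    "type": "roboflow_core/stitch_images@v1",
    "image1": "$inputs.image1",
    "image2": "$inputs.image2",
    "max_allowed_reprojection_error": "$inputs.reproj_error",
    "count_of_best_matches_per_query_descriptor": 2
}
```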
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Stitch Images in version v1.
- inputs: Corner Visualization, Image Convert Grayscale, Label Visualization, Image Slicer, Image Blur, SIFT Comparison, Ellipse Visualization, Halo Visualization, Contrast Equalization, Stability AI Outpainting, Camera Focus, Model Comparison Visualization, Stitch Images, Line Counter, Distance Measurement, Polygon Visualization, Detection Event Log, Stability AI Inpainting, Reference Path Visualization, Circle Visualization, Background Subtraction, Stability AI Image Generation, Icon Visualization, Pixel Color Count, Color Visualization, Clip Comparison, Mask Visualization, Image Slicer, Template Matching, Line Counter, Pixelate Visualization, Image Contours, Text Display, Blur Visualization, Triangle Visualization, Identify Outliers, Relative Static Crop, Detections Consensus, Camera Focus, Classification Label Visualization, Image Threshold, Camera Calibration, Dot Visualization, Background Color Visualization, Polygon Zone Visualization, Keypoint Visualization, Grid Visualization, Dynamic Crop, Trace Visualization, Crop Visualization, Absolute Static Crop, Line Counter Visualization, Image Preprocessing, Identify Changes, SIFT, Perspective Correction, Halo Visualization, SIFT Comparison, Depth Estimation, Morphological Transformation, Polygon Visualization, QR Code Generator, Bounding Box Visualization
- outputs: Corner Visualization, Image Convert Grayscale, Label Visualization, Clip Comparison, Image Slicer, SmolVLM2, Image Blur, Florence-2 Model, SIFT Comparison, Google Gemini, OCR Model, Ellipse Visualization, Halo Visualization, Single-Label Classification Model, Stability AI Outpainting, Contrast Equalization, Perception Encoder Embedding Model, Qwen3-VL, Camera Focus, Stitch Images, Model Comparison Visualization, Polygon Visualization, Object Detection Model, Stability AI Inpainting, Reference Path Visualization, OpenAI, Detections Stabilizer, OpenAI, Circle Visualization, Background Subtraction, Roboflow Dataset Upload, Stability AI Image Generation, Icon Visualization, LMM For Classification, YOLO-World Model, VLM as Classifier, Pixel Color Count, Twilio SMS/MMS Notification, Object Detection Model, Multi-Label Classification Model, Color Visualization, Clip Comparison, SAM 3, Barcode Detection, Mask Visualization, Roboflow Dataset Upload, Anthropic Claude, Image Slicer, Template Matching, Buffer, Pixelate Visualization, OpenAI, Byte Tracker, CLIP Embedding Model, Instance Segmentation Model, Keypoint Detection Model, Email Notification, Image Contours, Google Gemini, Text Display, Blur Visualization, Triangle Visualization, Google Vision OCR, VLM as Detector, Relative Static Crop, Llama 3.2 Vision, Camera Focus, SAM 3, Classification Label Visualization, Multi-Label Classification Model, Image Threshold, LMM, Dot Visualization, Anthropic Claude, Camera Calibration, Background Color Visualization, Qwen2.5-VL, Dominant Color, Seg Preview, Polygon Zone Visualization, Keypoint Visualization, Anthropic Claude, Dynamic Crop, Keypoint Detection Model, Trace Visualization, SAM 3, Crop Visualization, Absolute Static Crop, Line Counter Visualization, Florence-2 Model, Google Gemini, Time in Zone, Detections Stitch, Segment Anything 2 Model, Moondream2, Image Preprocessing, Gaze Detection, Instance Segmentation Model, SIFT, Perspective Correction, Motion Detection, Halo Visualization, VLM as Classifier, EasyOCR, Depth Estimation, CogVLM, Morphological Transformation, OpenAI, Polygon Visualization, Single-Label Classification Model, QR Code Detection, VLM as Detector, Bounding Box Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Stitch Images in version v1 has.
Bindings
- input
    - image1 (image): First input image to stitch. Should contain overlapping regions with image2 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will determine the optimal positioning and alignment of this image relative to image2 during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    - image2 (image): Second input image to stitch. Should contain overlapping regions with image1 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will warp and align this image to match image1's perspective during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    - max_allowed_reprojection_error (float_zero_to_one): Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to cv.findHomography's ransacReprojThreshold parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings.
    - count_of_best_matches_per_query_descriptor (integer): Number of best matches to find per query descriptor during keypoint matching. This corresponds to cv.BFMatcher.knnMatch's k parameter. Must be greater than 0. The block finds the k nearest-neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best-match distance with the second-best-match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds the 2 best matches per descriptor). Typical values range from 2-5. Use higher values if you need more match candidates for difficult images.
- output
    - stitched_image (image): Image in workflows.
Example JSON definition of step Stitch Images in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/stitch_images@v1",
    "image1": "$inputs.image1",
    "image2": "$inputs.image2",
    "max_allowed_reprojection_error": 3,
    "count_of_best_matches_per_query_descriptor": 2
}
```
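Downstream steps consume the result through a step output selector. A minimal sketch follows, with the consuming step's type deliberately elided, since any image-accepting block will do:

```json
{
    "name": "downstream_step",
    "type": "roboflow_core/...",
    "image": "$steps.<your_step_name_here>.stitched_image"
}
```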