Stitch Images¶
Class: StitchImagesBlockV1
Source: inference.core.workflows.core_steps.transformations.stitch_images.v1.StitchImagesBlockV1
Stitch two overlapping images into a single panoramic image using SIFT (Scale Invariant Feature Transform) feature matching and homography-based image alignment. The block automatically detects common features, calculates the geometric transformation between the images, and blends them into a seamless panoramic composition of the overlapping scene.
How This Block Works¶
This block stitches two overlapping images together by detecting common features, calculating geometric transformations, and aligning the images into a single panoramic result. The block:
- Receives two input images (image1 and image2) that contain overlapping regions with sufficient detail for feature matching
- Detects keypoints and computes descriptors using SIFT (Scale Invariant Feature Transform) for both images:
    - Identifies distinctive feature points (keypoints) in each image that are invariant to scale and rotation
    - Computes feature descriptors (128-dimensional vectors) describing the visual characteristics around each keypoint
- Matches keypoints between the two images using brute-force matching:
    - Finds the best matching descriptors for each keypoint in image1 among all keypoints in image2
    - Uses k-nearest-neighbor matching (configurable via `count_of_best_matches_per_query_descriptor`) to find multiple potential matches per query keypoint
- Filters good matches using Lowe's ratio test:
    - Compares the distance to the best match with the distance to the second-best match
    - Keeps matches where the best-match distance is less than 0.75 times the second-best-match distance, which reduces false matches
- Determines image ordering based on keypoint positions (identifies which image should be placed first based on the spatial distribution of matched features)
- Calculates the homography transformation matrix using RANSAC (Random Sample Consensus):
    - Finds a perspective transformation matrix that maps points from one image to the other
    - Uses RANSAC to robustly estimate the transformation while filtering out outlier matches
    - A configurable maximum reprojection error (`max_allowed_reprojection_error`) controls which point pairs are considered inliers
- Calculates canvas size and translation:
    - Determines the canvas size needed to contain both images after transformation
    - Calculates the translation needed to keep both images within the canvas boundaries
- Warps the second image using the homography transformation:
    - Applies the perspective transformation to align the second image with the first
    - Combines the homography matrix with the translation matrix for correct positioning
- Stitches the images together:
    - Places the first image onto the warped second image's canvas
    - Creates the final stitched panoramic image containing both input images, aligned and blended
- Returns the stitched image, or None if stitching fails (e.g., insufficient matches, transformation calculation failure)
The block uses SIFT for robust feature detection, which works well with images containing sufficient detail and texture. The RANSAC-based homography calculation handles perspective distortions and ensures robust alignment even when some matches are incorrect. The reprojection error threshold controls the sensitivity of the alignment: lower values require more precise matches, while higher values (useful for low-detail images) allow more tolerance for matching variations.
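Conceptually, this pipeline maps onto a handful of standard OpenCV calls. The sketch below is a minimal, self-contained illustration of the same technique, assuming OpenCV (`cv2`) and NumPy; it is not the block's actual source (which lives in `inference.core.workflows.core_steps.transformations.stitch_images.v1`), and the function and variable names are our own:

```python
import cv2
import numpy as np


def stitch(image1: np.ndarray, image2: np.ndarray,
           max_reprojection_error: float = 3.0, k: int = 2):
    """Sketch of SIFT + RANSAC-homography stitching (illustrative only)."""
    sift = cv2.SIFT_create()
    kp1, desc1 = sift.detectAndCompute(image1, None)  # keypoints + 128-dim descriptors
    kp2, desc2 = sift.detectAndCompute(image2, None)
    if desc1 is None or desc2 is None:
        return None  # not enough texture for SIFT to find features

    # Brute-force k-NN matching, then Lowe's ratio test on the two best candidates.
    matches = cv2.BFMatcher().knnMatch(desc1, desc2, k=k)
    good = [pair[0] for pair in matches
            if len(pair) >= 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < 4:  # a homography needs at least 4 point correspondences
        return None

    # Estimate the homography mapping image2's points into image1's frame.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, max_reprojection_error)
    if H is None:
        return None

    # Project image2's corners to size the canvas, then translate so both
    # images land at non-negative pixel coordinates.
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((corners1, cv2.perspectiveTransform(corners2, H)))
    x_min, y_min = np.floor(all_corners.min(axis=(0, 1))).astype(int)
    x_max, y_max = np.ceil(all_corners.max(axis=(0, 1))).astype(int)
    translation = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)

    # Warp image2 onto the canvas, then paste image1 at its translated origin.
    canvas = cv2.warpPerspective(image2, translation @ H, (x_max - x_min, y_max - y_min))
    canvas[-y_min:h1 - y_min, -x_min:w1 - x_min] = image1
    return canvas
```

The 0.75 ratio-test threshold and the returned `None` on failure mirror the behaviour described above; the translation matrix exists solely to keep the warped result inside the canvas.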
Common Use Cases¶
- Panoramic Image Creation: Stitch overlapping images together to create wide panoramic views (e.g., create panoramic photos from overlapping camera shots, stitch together images from rotating cameras, combine multiple overlapping images into panoramas), enabling panoramic image generation workflows
- Wide-Area Scene Reconstruction: Combine multiple overlapping views of a scene into a single comprehensive image (e.g., reconstruct wide scenes from multiple camera angles, combine overlapping surveillance camera views, stitch together images from multiple viewpoints), enabling wide-area scene visualization
- Multi-Image Mosaicking: Create image mosaics from overlapping image tiles or sections (e.g., stitch together image tiles for large-scale mapping, combine overlapping satellite image sections, create mosaics from overlapping image captures), enabling image mosaic creation workflows
- Scene Documentation: Combine multiple overlapping images to document large scenes or areas (e.g., document large spaces with multiple overlapping photos, combine overlapping views for scene documentation, stitch together images for comprehensive scene capture), enabling comprehensive scene documentation
- Video Frame Stitching: Stitch together overlapping frames from video sequences (e.g., create panoramic views from video frames, combine overlapping frames from moving cameras, stitch together consecutive video frames), enabling video-based panoramic workflows
- Multi-Camera View Combination: Combine overlapping views from multiple cameras into a single unified view (e.g., stitch together overlapping camera feeds, combine multi-camera views for monitoring, merge overlapping camera perspectives), enabling multi-camera view integration workflows
Connecting to Other Blocks¶
This block receives two images and produces a single stitched image:
- After image input blocks or image preprocessing blocks to stitch preprocessed images together (e.g., stitch images after preprocessing, combine images after enhancement, merge images after filtering), enabling image stitching workflows
- After crop blocks to stitch together cropped image regions from different sources (e.g., stitch cropped regions from different images, combine cropped sections from multiple sources, merge cropped regions into panoramas), enabling cropped region stitching workflows
- After transformation blocks to stitch images that have been transformed or adjusted (e.g., stitch images after perspective correction, combine images after geometric transformations, merge images after adjustments), enabling transformed image stitching workflows
- Before detection or analysis blocks that benefit from panoramic views (e.g., detect objects in stitched panoramic images, analyze wide-area stitched scenes, process comprehensive stitched views), enabling panoramic analysis workflows (see the sketch after this list)
- Before visualization blocks to display stitched panoramic images (e.g., visualize stitched panoramas, display wide-area stitched views, show comprehensive stitched scenes), enabling panoramic visualization outputs
- In multi-stage image processing workflows where images need to be stitched before further processing (e.g., stitch images before detection, combine images before analysis, merge images for comprehensive processing), enabling multi-stage panoramic processing workflows
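As an illustration of the stitch-then-detect pattern referenced above, here is a hedged sketch of the `steps` fragment of such a workflow. The detection block type and `model_id` shown are placeholders for whatever model you actually use, not part of this block's contract:

```json
"steps": [
    {
        "name": "stitch",
        "type": "roboflow_core/stitch_images@v1",
        "image1": "$inputs.image1",
        "image2": "$inputs.image2"
    },
    {
        "name": "detect",
        "type": "roboflow_core/roboflow_object_detection_model@v2",
        "images": "$steps.stitch.stitched_image",
        "model_id": "your-project/1"
    }
]
```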
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/stitch_images@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `max_allowed_reprojection_error` | `float` | Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to `cv.findHomography`'s `ransacReprojThreshold` parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings. | ✅ |
| `count_of_best_matches_per_query_descriptor` | `int` | Number of best matches to find per query descriptor during keypoint matching. This corresponds to `cv.BFMatcher.knnMatch`'s `k` parameter. Must be greater than 0. The block finds the k nearest neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best match distance with the second-best match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds 2 best matches per descriptor). Typical values range from 2 to 5. Use higher values if you need more match candidates for difficult images. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
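For example, both ✅ properties can be bound to workflow inputs instead of hard-coded values. A minimal sketch, assuming inputs named `reprojection_error` and `matches_per_descriptor` have been declared in the workflow definition (these names are our own):

```json
{
    "name": "stitch",
    "type": "roboflow_core/stitch_images@v1",
    "image1": "$inputs.image1",
    "image2": "$inputs.image2",
    "max_allowed_reprojection_error": "$inputs.reprojection_error",
    "count_of_best_matches_per_query_descriptor": "$inputs.matches_per_descriptor"
}
```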
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Stitch Images in version v1.
- inputs:
Clip Comparison,Morphological Transformation,Polygon Zone Visualization,Keypoint Visualization,Camera Focus,Pixel Color Count,Image Threshold,Reference Path Visualization,Camera Focus,Image Slicer,Stability AI Image Generation,Stability AI Outpainting,Stitch Images,Blur Visualization,Detection Event Log,Depth Estimation,Image Preprocessing,Identify Outliers,Image Convert Grayscale,Dynamic Crop,Dot Visualization,Triangle Visualization,Crop Visualization,Perspective Correction,Grid Visualization,Line Counter,Trace Visualization,QR Code Generator,Pixelate Visualization,Detections Consensus,Camera Calibration,Background Subtraction,SIFT Comparison,Bounding Box Visualization,Contrast Equalization,Halo Visualization,Model Comparison Visualization,Label Visualization,Circle Visualization,Image Contours,Background Color Visualization,Image Blur,Mask Visualization,Color Visualization,Corner Visualization,Classification Label Visualization,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Line Counter,Image Slicer,Absolute Static Crop,Polygon Visualization,SIFT Comparison,Stability AI Inpainting,Identify Changes,Distance Measurement,Relative Static Crop,SIFT,Text Display
- outputs:
Instance Segmentation Model,Clip Comparison,Florence-2 Model,Morphological Transformation,Google Gemini,LMM,Instance Segmentation Model,Motion Detection,Email Notification,Detections Stitch,Polygon Zone Visualization,Keypoint Visualization,Camera Focus,Anthropic Claude,Multi-Label Classification Model,Pixel Color Count,Image Threshold,LMM For Classification,Keypoint Detection Model,Anthropic Claude,Gaze Detection,Reference Path Visualization,Camera Focus,Stability AI Image Generation,Stitch Images,Stability AI Outpainting,Image Slicer,SmolVLM2,OpenAI,Roboflow Dataset Upload,Depth Estimation,YOLO-World Model,Google Gemini,CogVLM,Image Preprocessing,VLM as Detector,Florence-2 Model,Image Convert Grayscale,SAM 3,Byte Tracker,Dynamic Crop,Time in Zone,Perception Encoder Embedding Model,Moondream2,Triangle Visualization,Dot Visualization,OCR Model,Seg Preview,Crop Visualization,Twilio SMS/MMS Notification,Perspective Correction,EasyOCR,SAM 3,Google Gemini,Object Detection Model,Text Display,Trace Visualization,Pixelate Visualization,OpenAI,CLIP Embedding Model,Camera Calibration,Roboflow Dataset Upload,Buffer,Barcode Detection,Object Detection Model,Single-Label Classification Model,QR Code Detection,VLM as Detector,Background Subtraction,Bounding Box Visualization,Contrast Equalization,Model Comparison Visualization,Halo Visualization,Label Visualization,OpenAI,Circle Visualization,Qwen2.5-VL,Image Contours,Image Blur,Background Color Visualization,Mask Visualization,Dominant Color,VLM as Classifier,Google Vision OCR,Llama 3.2 Vision,Color Visualization,Corner Visualization,Classification Label Visualization,Single-Label Classification Model,OpenAI,Segment Anything 2 Model,Clip Comparison,Template Matching,Line Counter Visualization,Icon Visualization,Ellipse Visualization,Image Slicer,Detections Stabilizer,Absolute Static Crop,VLM as Classifier,Polygon Visualization,SIFT Comparison,Stability AI Inpainting,Qwen3-VL,SAM 3,Keypoint Detection Model,Relative Static Crop,SIFT,Blur Visualization,Multi-Label Classification Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Stitch Images in version v1 has.
Bindings
- input
    - `image1` (image): First input image to stitch. Should contain overlapping regions with image2 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will determine the optimal positioning and alignment of this image relative to image2 during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    - `image2` (image): Second input image to stitch. Should contain overlapping regions with image1 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will warp and align this image to match image1's perspective during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    - `max_allowed_reprojection_error` (float_zero_to_one): Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to `cv.findHomography`'s `ransacReprojThreshold` parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings.
    - `count_of_best_matches_per_query_descriptor` (integer): Number of best matches to find per query descriptor during keypoint matching. This corresponds to `cv.BFMatcher.knnMatch`'s `k` parameter. Must be greater than 0. The block finds the k nearest neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best match distance with the second-best match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds 2 best matches per descriptor). Typical values range from 2 to 5. Use higher values if you need more match candidates for difficult images.
- output
    - `stitched_image` (image): Image in workflows.
Example JSON definition of step Stitch Images in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/stitch_images@v1",
    "image1": "$inputs.image1",
    "image2": "$inputs.image2",
    "max_allowed_reprojection_error": 3,
    "count_of_best_matches_per_query_descriptor": 2
}
```
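To run the step end to end, this definition has to sit inside a complete workflow specification with declared inputs and outputs. A minimal sketch, assuming the standard workflows schema (`version`, `inputs`, `steps`, `outputs`); the output field name `stitched` is our own choice:

```json
{
    "version": "1.0",
    "inputs": [
        { "type": "WorkflowImage", "name": "image1" },
        { "type": "WorkflowImage", "name": "image2" }
    ],
    "steps": [
        {
            "name": "stitch",
            "type": "roboflow_core/stitch_images@v1",
            "image1": "$inputs.image1",
            "image2": "$inputs.image2",
            "max_allowed_reprojection_error": 3,
            "count_of_best_matches_per_query_descriptor": 2
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "stitched",
            "selector": "$steps.stitch.stitched_image"
        }
    ]
}
```

If stitching fails (for example, too few good matches), the `stitched` output will be empty, per the failure behaviour described in How This Block Works.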