Stitch Images

Class: StitchImagesBlockV1

Source: inference.core.workflows.core_steps.transformations.stitch_images.v1.StitchImagesBlockV1

Stitch two overlapping images together into a single panoramic image using SIFT (Scale Invariant Feature Transform) feature matching and homography-based image alignment. The block automatically detects common features, calculates the geometric transformation between the images, and blends them into a seamless panoramic composition of the overlapping scene.

How This Block Works

This block stitches two overlapping images together by detecting common features, calculating geometric transformations, and aligning the images into a single panoramic result (see the code sketch after this list). The block:

  1. Receives two input images (image1 and image2) that contain overlapping regions with sufficient detail for feature matching
  2. Detects keypoints and computes descriptors using SIFT (Scale Invariant Feature Transform) for both images:
     • Identifies distinctive feature points (keypoints) in each image that are invariant to scale and rotation
     • Computes feature descriptors (128-dimensional vectors) describing the visual characteristics around each keypoint
  3. Matches keypoints between the two images using brute-force matching:
     • Finds the best matching descriptors for each keypoint in image1 among all keypoints in image2
     • Uses k-nearest-neighbor matching (configurable via count_of_best_matches_per_query_descriptor) to find multiple potential matches per query keypoint
  4. Filters good matches using Lowe's ratio test:
     • Compares the distance to the best match with the distance to the second-best match
     • Keeps matches where the best-match distance is less than 0.75 times the second-best-match distance (reduces false matches)
  5. Determines image ordering based on keypoint positions (identifies which image should be placed first based on the spatial distribution of matched features)
  6. Calculates the homography transformation matrix using RANSAC (Random Sample Consensus):
     • Finds a perspective transformation matrix that maps points from one image to the other
     • Uses RANSAC to robustly estimate the transformation while filtering out outlier matches
     • The configurable maximum reprojection error (max_allowed_reprojection_error) controls which point pairs are considered inliers
  7. Calculates canvas size and translation:
     • Determines the size needed to contain both images after transformation
     • Calculates the translation needed to ensure both images fit within the canvas boundaries
  8. Warps the second image using the homography transformation:
     • Applies the perspective transformation to align the second image with the first
     • Combines the homography matrix with the translation matrix for correct positioning
  9. Stitches the images together:
     • Places the first image onto the warped second image canvas
     • Creates the final stitched panoramic image containing both input images, aligned and blended
  10. Returns the stitched image, or None if stitching fails (e.g., insufficient matches, transformation calculation failure)
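
To make these steps concrete, here is a minimal sketch of the same pipeline using OpenCV's Python API. The function name, guard conditions, and the simple overlay used for blending are illustrative assumptions, not the block's actual implementation; the image-ordering step (5) is omitted for brevity.

import cv2
import numpy as np

def stitch_pair(image1, image2, reproj_error=3.0, k_best=2):
    # Step 2: SIFT keypoints + 128-dimensional descriptors for both images
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(image1, None)
    kp2, des2 = sift.detectAndCompute(image2, None)
    if des1 is None or des2 is None:
        return None  # not enough texture for SIFT

    # Step 3: brute-force k-NN matching; k = count_of_best_matches_per_query_descriptor
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=k_best)

    # Step 4: Lowe's ratio test - keep a match only if it clearly beats the runner-up
    good = [m[0] for m in matches if len(m) >= 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:  # a homography needs at least 4 point pairs
        return None

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Step 6: RANSAC homography; reproj_error = max_allowed_reprojection_error
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, ransacReprojThreshold=reproj_error)
    if H is None:
        return None

    # Step 7: canvas size and translation so both images land at non-negative coordinates
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((corners1, cv2.perspectiveTransform(corners2, H)))
    x_min, y_min = np.int32(all_corners.min(axis=0).ravel() - 0.5)
    x_max, y_max = np.int32(all_corners.max(axis=0).ravel() + 0.5)
    translation = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)

    # Steps 8-9: warp image2 with translation x homography, then overlay image1
    canvas = cv2.warpPerspective(image2, translation @ H, (x_max - x_min, y_max - y_min))
    canvas[-y_min : h1 - y_min, -x_min : w1 - x_min] = image1
    return canvas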

The block uses SIFT for robust feature detection, which works well with images containing sufficient detail and texture. The RANSAC-based homography calculation handles perspective distortions and ensures robust alignment even when some matches are incorrect. The reprojection error threshold controls the strictness of the alignment: lower values require more precise matches, while higher values (useful for low-detail images) allow more tolerance for matching variations.
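
As a rough illustration of this trade-off, one could compare inlier counts at different thresholds (reusing cv2, pts1, and pts2 from the sketch above; the counts printed will vary with image content):

for thresh in (1.0, 3.0, 10.0):
    # Higher ransacReprojThreshold tolerates noisier matches as inliers
    H, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, ransacReprojThreshold=thresh)
    inliers = int(mask.sum()) if mask is not None else 0
    print(f"ransacReprojThreshold={thresh}px -> {inliers}/{len(pts1)} matches kept as inliers")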

Common Use Cases

  • Panoramic Image Creation: Stitch overlapping images together to create wide panoramic views (e.g., create panoramic photos from overlapping camera shots, stitch together images from rotating cameras, combine multiple overlapping images into panoramas), enabling panoramic image generation workflows
  • Wide-Area Scene Reconstruction: Combine multiple overlapping views of a scene into a single comprehensive image (e.g., reconstruct wide scenes from multiple camera angles, combine overlapping surveillance camera views, stitch together images from multiple viewpoints), enabling wide-area scene visualization
  • Multi-Image Mosaicking: Create image mosaics from overlapping image tiles or sections (e.g., stitch together image tiles for large-scale mapping, combine overlapping satellite image sections, create mosaics from overlapping image captures), enabling image mosaic creation workflows
  • Scene Documentation: Combine multiple overlapping images to document large scenes or areas (e.g., document large spaces with multiple overlapping photos, combine overlapping views for scene documentation, stitch together images for comprehensive scene capture), enabling comprehensive scene documentation
  • Video Frame Stitching: Stitch together overlapping frames from video sequences (e.g., create panoramic views from video frames, combine overlapping frames from moving cameras, stitch together consecutive video frames), enabling video-based panoramic workflows
  • Multi-Camera View Combination: Combine overlapping views from multiple cameras into a single unified view (e.g., stitch together overlapping camera feeds, combine multi-camera views for monitoring, merge overlapping camera perspectives), enabling multi-camera view integration workflows

Connecting to Other Blocks

This block receives two images and produces a single stitched image:

  • After image input blocks or image preprocessing blocks to stitch preprocessed images together (e.g., stitch images after preprocessing, combine images after enhancement, merge images after filtering), enabling image stitching workflows
  • After crop blocks to stitch together cropped image regions from different sources (e.g., stitch cropped regions from different images, combine cropped sections from multiple sources, merge cropped regions into panoramas), enabling cropped region stitching workflows
  • After transformation blocks to stitch images that have been transformed or adjusted (e.g., stitch images after perspective correction, combine images after geometric transformations, merge images after adjustments), enabling transformed image stitching workflows
  • Before detection or analysis blocks that benefit from panoramic views (e.g., detect objects in stitched panoramic images, analyze wide-area stitched scenes, process comprehensive stitched views), enabling panoramic analysis workflows
  • Before visualization blocks to display stitched panoramic images (e.g., visualize stitched panoramas, display wide-area stitched views, show comprehensive stitched scenes), enabling panoramic visualization outputs
  • In multi-stage image processing workflows where images need to be stitched before further processing (e.g., stitch images before detection, combine images before analysis, merge images for comprehensive processing), enabling multi-stage panoramic processing workflows

Type identifier

Use the following identifier in the step "type" field: roboflow_core/stitch_images@v1 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
|------|------|-------------|------|
| name | str | Enter a unique identifier for this step. | |
| max_allowed_reprojection_error | float | Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to cv.findHomography's ransacReprojThreshold parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings. | ✅ |
| count_of_best_matches_per_query_descriptor | int | Number of best matches to find per query descriptor during keypoint matching. This corresponds to cv.BFMatcher.knnMatch's k parameter. Must be greater than 0. The block finds the k nearest-neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best-match distance with the second-best-match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds the 2 best matches per descriptor). Typical values range from 2 to 5. Use higher values if you need more match candidates for difficult images. | ✅ |

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Stitch Images in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Stitch Images in version v1 has.

Bindings
  • input

    • image1 (image): First input image to stitch. Should contain regions that overlap with image2 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will determine the optimal positioning and alignment of this image relative to image2 during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    • image2 (image): Second input image to stitch. Should contain regions that overlap with image1 and sufficient detail/texture for SIFT feature detection. The images must have overlapping content for successful stitching. The block will warp and align this image to match image1's perspective during stitching. Images with rich texture and detail work best for SIFT-based feature matching.
    • max_allowed_reprojection_error (float_zero_to_one): Maximum allowed reprojection error (in pixels) to treat a point pair as an inlier during RANSAC homography calculation. This corresponds to cv.findHomography's ransacReprojThreshold parameter. Lower values require more precise matches (stricter alignment) but may fail with noisy matches. Higher values allow more tolerance for matching variations (more lenient alignment) and can improve results for low-detail images or images with imperfect feature matches. Default is 3 pixels. Increase this value (e.g., 5-10) for images with less detail or when stitching fails with default settings.
    • count_of_best_matches_per_query_descriptor (integer): Number of best matches to find per query descriptor during keypoint matching. This corresponds to cv.BFMatcher.knnMatch's k parameter. Must be greater than 0. The block finds the k nearest-neighbor matches for each keypoint descriptor in image1 among all descriptors in image2, then uses Lowe's ratio test to filter good matches (comparing the best-match distance with the second-best-match distance). Higher values provide more candidate matches but increase computation. Default is 2 (finds the 2 best matches per descriptor). Typical values range from 2 to 5. Use higher values if you need more match candidates for difficult images.
  • output

    • stitched_image (image): The stitched panoramic image, or None if stitching fails.

Example JSON definition of step Stitch Images in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/stitch_images@v1",
    "image1": "$inputs.image1",
    "image2": "$inputs.image2",
    "max_allowed_reprojection_error": 3,
    "count_of_best_matches_per_query_descriptor": 2
}
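
For context, a complete workflow definition using this step might look like the following sketch. The input and output names, and the use of a WorkflowParameter to parametrise max_allowed_reprojection_error (as described under Bindings), are illustrative assumptions, not requirements:

{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image1"},
        {"type": "WorkflowImage", "name": "image2"},
        {"type": "WorkflowParameter", "name": "reproj_error", "default_value": 3}
    ],
    "steps": [
        {
            "name": "stitcher",
            "type": "roboflow_core/stitch_images@v1",
            "image1": "$inputs.image1",
            "image2": "$inputs.image2",
            "max_allowed_reprojection_error": "$inputs.reproj_error",
            "count_of_best_matches_per_query_descriptor": 2
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "stitched_image",
            "selector": "$steps.stitcher.stitched_image"
        }
    ]
}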