SIFT Comparison¶
v2¶
Class: SIFTComparisonBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v2.SIFTComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare two images or their SIFT descriptors using a configurable matcher algorithm (FLANN or brute force). SIFT features are computed automatically when images are provided, matches are filtered with Lowe's ratio test, and visualizations of keypoints and matches can optionally be generated, supporting image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares two images or their SIFT descriptors to determine if they match by finding corresponding features and counting good matches. The block:
- Receives two inputs (input_1 and input_2) that can be either images or pre-computed SIFT descriptors
- Processes each input based on its type:
- If input is an image: Automatically computes SIFT keypoints and descriptors using OpenCV's SIFT detector
- Converts image to grayscale
- Detects keypoints and computes 128-dimensional SIFT descriptors
- Optionally creates keypoint visualization if visualize=True
- Converts keypoints to dictionary format for output
- If input is descriptors: Uses the provided descriptors directly (skips SIFT computation)
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering)
- Selects matcher algorithm based on matcher parameter:
- FlannBasedMatcher (default): Uses FLANN for efficient approximate nearest neighbor search, faster for large descriptor sets
- BFMatcher: Uses brute force matching with L2 norm, exact matching but slower for large descriptor sets
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in input_1:
- For each descriptor in descriptors_1, finds the 2 most similar descriptors in descriptors_2
- Uses Euclidean distance (L2 norm) in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Optionally generates visualizations if visualize=True and images were provided:
- Creates keypoint visualizations for each image (images with keypoints drawn)
- Creates a matches visualization showing corresponding keypoints between the two images connected by lines
- Returns match results, keypoints, descriptors, and optional visualizations
The block provides flexibility by accepting either images (with automatic SIFT computation) or pre-computed descriptors. When images are provided, the block handles all SIFT processing internally, making it easier to use without requiring separate SIFT feature detection steps. The optional visualization feature helps debug and understand matching results by showing keypoints and matches visually. SIFT descriptors are scale and rotation invariant, making the block effective for matching images with different scales, rotations, or viewing angles.
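To make the steps above concrete, here is a minimal standalone Python sketch of the same logic using OpenCV directly. The function name, defaults, error handling, and the FLANN index parameters (borrowed from the v1 description below) are illustrative assumptions, not the block's actual implementation:
import cv2

def compare_images_sift(image_1, image_2, good_matches_threshold=50,
                        ratio_threshold=0.7, matcher="FlannBasedMatcher",
                        visualize=False):
    # Compute SIFT keypoints and 128-dimensional descriptors on grayscale images.
    sift = cv2.SIFT_create()
    gray_1 = cv2.cvtColor(image_1, cv2.COLOR_BGR2GRAY)
    gray_2 = cv2.cvtColor(image_2, cv2.COLOR_BGR2GRAY)
    keypoints_1, descriptors_1 = sift.detectAndCompute(gray_1, None)
    keypoints_2, descriptors_2 = sift.detectAndCompute(gray_2, None)

    # The ratio test needs at least 2 descriptors on each side (k=2 matching).
    if descriptors_1 is None or descriptors_2 is None \
            or len(descriptors_1) < 2 or len(descriptors_2) < 2:
        raise ValueError("each input must yield at least 2 SIFT descriptors")

    if matcher == "FlannBasedMatcher":
        # FLANN: approximate nearest neighbors, faster for large descriptor sets.
        matcher_obj = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    else:
        # Brute force with L2 norm: exact, but slower for large descriptor sets.
        matcher_obj = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher_obj.knnMatch(descriptors_1, descriptors_2, k=2)

    # Lowe's ratio test: keep a match only if the best candidate is clearly
    # closer than the second-best candidate.
    good_matches = [
        pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < ratio_threshold * pair[1].distance
    ]
    images_match = len(good_matches) >= good_matches_threshold

    visualization_matches = None
    if visualize:
        # Corresponding keypoints drawn between the images, connected by lines.
        visualization_matches = cv2.drawMatches(
            image_1, keypoints_1, image_2, keypoints_2, good_matches, None
        )
    return images_match, len(good_matches), visualization_matches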
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives images or SIFT descriptors and produces match results with optional visualizations:
- After image input blocks to compare images directly (e.g., compare input images, match images from camera feeds, analyze image similarities), enabling direct image comparison workflows
- After SIFT feature detection blocks to compare pre-computed SIFT descriptors (e.g., compare descriptors from different images, match images using existing SIFT features, analyze image similarity with pre-computed descriptors), enabling descriptor-based comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results and visualizations (e.g., store image match results, save similarity scores, record comparison data with visualizations), enabling match result storage workflows
- Before visualization blocks to further process or display visualizations (e.g., display match visualizations, show keypoint images, render comparison results), enabling visualization workflow outputs
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
Version Differences¶
This version (v2) includes several enhancements over v1:
- Flexible Input Types: Accepts both images and pre-computed SIFT descriptors as input (v1 only accepted descriptors), allowing direct image comparison without requiring separate SIFT feature detection steps
- Automatic SIFT Computation: Automatically computes SIFT keypoints and descriptors when images are provided, eliminating the need for separate SIFT feature detection blocks in simple workflows
- Matcher Selection: Added configurable matcher parameter to choose between FlannBasedMatcher (default, faster) and BFMatcher (exact, slower), providing flexibility for different performance requirements
- Visualization Support: Added optional visualization feature that generates keypoint visualizations and match visualizations when images are provided, helping debug and understand matching results
- Enhanced Outputs: Returns keypoints and descriptors for both images, plus optional visualizations (keypoint images and match visualization), providing more comprehensive output data for downstream processing
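As a rough illustration of the flexible input handling, the sketch below shows one plausible way to dispatch on input type. The 128-column heuristic is an assumption made for this example; the block itself discriminates inputs via workflow kinds, not array shapes:
import cv2
import numpy as np

def resolve_sift_input(value):
    # Hypothetical heuristic: a 2-D array with 128 columns is treated as
    # pre-computed SIFT descriptors; anything else is treated as an image.
    arr = np.asarray(value)
    if arr.ndim == 2 and arr.shape[1] == 128:
        return None, arr.astype(np.float32)  # descriptors used directly, no keypoints
    gray = cv2.cvtColor(arr, cv2.COLOR_BGR2GRAY) if arr.ndim == 3 else arr
    return cv2.SIFT_create().detectAndCompute(gray, None)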
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
| matcher | str | Matcher algorithm to use for comparing SIFT descriptors. 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search - faster for large descriptor sets and suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm - exact, but slower for large descriptor sets. Choose BFMatcher only if you need exact matching or your descriptor sets are small. | ✅ |
| visualize | bool | Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v2.
- inputs:
Triangle Visualization,Morphological Transformation,Roboflow Dataset Upload,Ellipse Visualization,LMM,Florence-2 Model,Blur Visualization,Halo Visualization,Anthropic Claude,Google Gemini,Camera Focus,Llama 3.2 Vision,Motion Detection,Model Comparison Visualization,VLM As Detector,Keypoint Visualization,Pixelate Visualization,Image Slicer,SIFT Comparison,Line Counter,Line Counter Visualization,Roboflow Dataset Upload,Label Visualization,SIFT Comparison,QR Code Generator,Stitch OCR Detections,Dynamic Zone,Distance Measurement,Email Notification,Slack Notification,Corner Visualization,Image Slicer,Florence-2 Model,CSV Formatter,EasyOCR,Object Detection Model,Anthropic Claude,OpenAI,Google Gemini,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Background Subtraction,Pixel Color Count,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Polygon Visualization,Image Blur,VLM As Classifier,Relative Static Crop,Clip Comparison,Heatmap Visualization,CogVLM,Mask Visualization,Image Preprocessing,Twilio SMS Notification,VLM As Detector,PTZ Tracking (ONVIF),OpenAI,OCR Model,SIFT,Stitch Images,Stability AI Outpainting,Stitch OCR Detections,Dynamic Crop,Model Monitoring Inference Aggregator,Circle Visualization,Color Visualization,Trace Visualization,OpenAI,Icon Visualization,Dot Visualization,Email Notification,Instance Segmentation Model,Camera Focus,Twilio SMS/MMS Notification,Depth Estimation,Contrast Equalization,LMM For Classification,Roboflow Custom Metadata,Grid Visualization,Text Display,Reference Path Visualization,Image Threshold,Perspective Correction,Image Contours,Polygon Zone Visualization,Multi-Label Classification Model,Detection Event Log,Polygon Visualization,Identify Changes,Local File Sink,Halo Visualization,JSON Parser,Google Vision OCR,Stability AI Inpainting,Identify Outliers,Crop Visualization,Detections Consensus,Template Matching,Google Gemini,Webhook Sink,Absolute Static Crop,Classification Label Visualization,OpenAI,VLM As Classifier,Line Counter,Single-Label Classification Model,Stability AI Image Generation
- outputs:
Triangle Visualization,Detections Stitch,Ellipse Visualization,Detections Classes Replacement,Florence-2 Model,Blur Visualization,Anthropic Claude,Google Gemini,Motion Detection,Keypoint Visualization,Qwen3-VL,Pixelate Visualization,SIFT Comparison,Image Slicer,Keypoint Detection Model,Dynamic Zone,Clip Comparison,SAM 3,Image Slicer,EasyOCR,Object Detection Model,Anthropic Claude,Google Gemini,Perception Encoder Embedding Model,Pixel Color Count,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Time in Zone,Image Preprocessing,Byte Tracker,VLM As Detector,PTZ Tracking (ONVIF),SIFT,Single-Label Classification Model,Stability AI Outpainting,Moondream2,Stitch OCR Detections,Detection Offset,Trace Visualization,OpenAI,Qwen2.5-VL,Icon Visualization,YOLO-World Model,Dot Visualization,Time in Zone,Email Notification,Instance Segmentation Model,Camera Focus,Contrast Equalization,Segment Anything 2 Model,Text Display,Reference Path Visualization,Image Threshold,Multi-Label Classification Model,Perspective Correction,Image Contours,Identify Changes,Byte Tracker,Google Vision OCR,Crop Visualization,Google Gemini,Barcode Detection,Webhook Sink,Classification Label Visualization,VLM As Classifier,Seg Preview,OpenAI,Gaze Detection,Stability AI Image Generation,Roboflow Dataset Upload,Morphological Transformation,LMM,CLIP Embedding Model,Halo Visualization,Camera Focus,Llama 3.2 Vision,Model Comparison Visualization,VLM As Detector,Stitch OCR Detections,Roboflow Dataset Upload,Line Counter Visualization,SmolVLM2,Label Visualization,SIFT Comparison,QR Code Generator,Email Notification,Buffer,Slack Notification,Detections Stabilizer,Object Detection Model,Corner Visualization,Florence-2 Model,SAM 3,QR Code Detection,OpenAI,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Background Subtraction,Polygon Visualization,Image Blur,VLM As Classifier,Relative Static Crop,Clip Comparison,Heatmap Visualization,CogVLM,Mask Visualization,Twilio SMS Notification,Instance Segmentation Model,OpenAI,OCR Model,Stitch Images,Dynamic Crop,Model Monitoring Inference Aggregator,Circle Visualization,Byte Tracker,Color Visualization,Twilio SMS/MMS Notification,Depth Estimation,Roboflow Custom Metadata,LMM For Classification,Grid Visualization,Dominant Color,Polygon Zone Visualization,Polygon Visualization,Halo Visualization,SAM 3,Stability AI Inpainting,Time in Zone,Identify Outliers,Detections Consensus,Template Matching,Absolute Static Crop,Multi-Label Classification Model,Single-Label Classification Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds SIFT Comparison in version v2 has.
Bindings
-
input
- input_1 (Union[numpy_array, image]): First input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format; descriptors should be numpy arrays of 128-dimensional SIFT descriptors.
- input_2 (Union[numpy_array, image]): Second input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format; descriptors should be numpy arrays of 128-dimensional SIFT descriptors. This input will be matched against input_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
- matcher (string): Matcher algorithm to use for comparing SIFT descriptors. 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search - faster for large descriptor sets and suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm - exact, but slower for large descriptor sets. Choose BFMatcher only if you need exact matching or your descriptor sets are small.
- visualize (boolean): Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
- keypoints_1 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_1 (numpy_array): Numpy array.
- keypoints_2 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_2 (numpy_array): Numpy array.
- visualization_1 (image): Image in workflows.
- visualization_2 (image): Image in workflows.
- visualization_matches (image): Image in workflows.
Example JSON definition of step SIFT Comparison in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v2",
"input_1": "$inputs.image1",
"input_2": "$inputs.image2",
"good_matches_threshold": 50,
"ratio_threshold": 0.7,
"matcher": "FlannBasedMatcher",
"visualize": true
}
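A workflow containing this step can then be run with the inference_sdk HTTP client roughly as follows; the server URL, API key, and image paths are placeholders:
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<YOUR_API_KEY>")

workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image1"},
        {"type": "WorkflowImage", "name": "image2"},
    ],
    "steps": [
        {
            "name": "sift_comparison",
            "type": "roboflow_core/sift_comparison@v2",
            "input_1": "$inputs.image1",
            "input_2": "$inputs.image2",
            "good_matches_threshold": 50,
            "ratio_threshold": 0.7,
            "matcher": "FlannBasedMatcher",
            "visualize": True,
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "images_match",
         "selector": "$steps.sift_comparison.images_match"},
        {"type": "JsonField", "name": "good_matches_count",
         "selector": "$steps.sift_comparison.good_matches_count"},
    ],
}

result = client.run_workflow(
    specification=workflow_specification,
    images={"image1": "first.jpg", "image2": "second.jpg"},
)
print(result[0]["images_match"], result[0]["good_matches_count"])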
v1¶
Class: SIFTComparisonBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v1.SIFTComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare SIFT (Scale Invariant Feature Transform) descriptors from two images using FLANN-based matching and Lowe's ratio test. Image similarity is determined by counting feature matches and returning a boolean match result based on a configurable threshold, supporting image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares SIFT descriptors from two images to determine if they match by finding corresponding features and counting good matches. The block:
- Receives SIFT descriptors from two images (descriptor_1 and descriptor_2) - these descriptors should come from a SIFT feature detection step that has already extracted keypoints and computed descriptors for both images
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering - needs at least 2 nearest neighbors)
- Creates a FLANN (Fast Library for Approximate Nearest Neighbors) based matcher:
- Uses FLANN algorithm for efficient approximate nearest neighbor search in high-dimensional descriptor space
- Configures FLANN with algorithm parameters optimized for SIFT descriptors (algorithm=1, trees=5, checks=50)
- FLANN is faster than brute force matching for large descriptor sets while maintaining good accuracy
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in image 1:
- For each descriptor in descriptor_1, finds the 2 most similar descriptors in descriptor_2
- Uses Euclidean distance in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Returns the count of good matches and the boolean match result
The block uses SIFT descriptors which are scale and rotation invariant, making it effective for matching images with different scales, rotations, or viewing angles. FLANN matching provides efficient approximate nearest neighbor search for fast comparison of large descriptor sets. Lowe's ratio test improves match quality by filtering ambiguous matches where the best match isn't significantly better than alternatives. The threshold-based matching allows configurable sensitivity - lower thresholds require fewer matches (more lenient), higher thresholds require more matches (stricter).
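The core of this logic can be sketched in a few lines of Python with OpenCV. The function below is an illustrative reconstruction from the description above (including the stated FLANN parameters), not the block's source code:
import cv2
import numpy as np

def compare_sift_descriptors(descriptor_1, descriptor_2,
                             good_matches_threshold=50, ratio_threshold=0.7):
    # The ratio test needs at least 2 descriptors on each side.
    if len(descriptor_1) < 2 or len(descriptor_2) < 2:
        raise ValueError("at least 2 descriptors per image are required")

    # FLANN configured as described above: KD-tree index (algorithm=1)
    # with 5 trees and 50 checks at search time.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(
        np.asarray(descriptor_1, dtype=np.float32),
        np.asarray(descriptor_2, dtype=np.float32),
        k=2,
    )

    # Lowe's ratio test: keep a match only if the best candidate is clearly
    # closer than the second-best candidate.
    good_matches_count = sum(
        1 for pair in matches
        if len(pair) == 2 and pair[0].distance < ratio_threshold * pair[1].distance
    )
    return good_matches_count >= good_matches_threshold, good_matches_count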
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives SIFT descriptors from two images and produces match results:
- After SIFT feature detection blocks to compare SIFT descriptors from different images (e.g., compare descriptors from multiple images, match images using SIFT features, analyze image similarity with SIFT), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results (e.g., store image match results, save similarity scores, record comparison data), enabling match result storage workflows
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
- Before visualization blocks to visualize match results (e.g., display match results, visualize similar images, show comparison outcomes), enabling match visualization workflows
- In duplicate detection workflows where images need to be checked for duplicates (e.g., detect duplicates in image collections, find repeated images, identify identical images), enabling duplicate detection workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v1.
- inputs:
Depth Estimation,Image Contours,SIFT,Perspective Correction,Detection Event Log,Pixel Color Count,SIFT Comparison,Line Counter,Template Matching,SIFT Comparison,Distance Measurement,Line Counter
- outputs:
Triangle Visualization,Morphological Transformation,Roboflow Dataset Upload,Ellipse Visualization,Detections Classes Replacement,Blur Visualization,Halo Visualization,Anthropic Claude,Motion Detection,Model Comparison Visualization,Keypoint Visualization,Pixelate Visualization,SIFT Comparison,Stitch OCR Detections,Image Slicer,Line Counter Visualization,Keypoint Detection Model,Label Visualization,QR Code Generator,SIFT Comparison,Roboflow Dataset Upload,Dynamic Zone,Email Notification,SAM 3,Slack Notification,Detections Stabilizer,Object Detection Model,Corner Visualization,Image Slicer,SAM 3,Object Detection Model,Anthropic Claude,Google Gemini,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Background Subtraction,Pixel Color Count,Background Color Visualization,Camera Calibration,Polygon Visualization,Image Blur,Time in Zone,Heatmap Visualization,Mask Visualization,Image Preprocessing,Byte Tracker,Twilio SMS Notification,PTZ Tracking (ONVIF),Instance Segmentation Model,Stitch Images,Stability AI Outpainting,Single-Label Classification Model,Stitch OCR Detections,Model Monitoring Inference Aggregator,Circle Visualization,Detection Offset,Byte Tracker,Trace Visualization,Color Visualization,Icon Visualization,Dot Visualization,Time in Zone,Email Notification,Instance Segmentation Model,Twilio SMS/MMS Notification,Roboflow Custom Metadata,Segment Anything 2 Model,Grid Visualization,Dominant Color,Text Display,Reference Path Visualization,Image Threshold,Perspective Correction,Image Contours,Polygon Zone Visualization,Multi-Label Classification Model,Identify Changes,Polygon Visualization,Byte Tracker,Halo Visualization,Stability AI Inpainting,Time in Zone,Identify Outliers,Crop Visualization,Detections Consensus,Template Matching,Webhook Sink,Absolute Static Crop,Classification Label Visualization,Multi-Label Classification Model,Single-Label Classification Model,Gaze Detection
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds SIFT Comparison in version v1 has.
Bindings
-
input
- descriptor_1 (numpy_array): SIFT descriptors from the first image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the first image.
- descriptor_2 (numpy_array): SIFT descriptors from the second image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the second image. These descriptors will be matched against descriptor_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
Example JSON definition of step SIFT Comparison in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v1",
"descriptor_1": "$steps.sift.descriptors",
"descriptor_2": "$steps.sift.descriptors",
"good_matches_threshold": 50,
"ratio_threshold": 0.7
}
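In a full workflow definition, v1 is typically wired after two SIFT feature detection steps, one per image. The sketch below assumes the SIFT block follows the same identifier convention (roboflow_core/sift@v1) and exposes a descriptors output; treat the step names and selectors as placeholders:
{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image1"},
        {"type": "WorkflowImage", "name": "image2"}
    ],
    "steps": [
        {"name": "sift_1", "type": "roboflow_core/sift@v1", "image": "$inputs.image1"},
        {"name": "sift_2", "type": "roboflow_core/sift@v1", "image": "$inputs.image2"},
        {
            "name": "comparison",
            "type": "roboflow_core/sift_comparison@v1",
            "descriptor_1": "$steps.sift_1.descriptors",
            "descriptor_2": "$steps.sift_2.descriptors",
            "good_matches_threshold": 50,
            "ratio_threshold": 0.7
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "images_match", "selector": "$steps.comparison.images_match"}
    ]
}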