SIFT Comparison¶
v2¶
Class: SIFTComparisonBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v2.SIFTComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare two images or their SIFT descriptors using configurable matcher algorithms (FLANN or brute force), automatically computing SIFT features when images are provided, applying Lowe's ratio test filtering, and optionally generating visualizations of keypoints and matches for image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares two images or their SIFT descriptors to determine if they match by finding corresponding features and counting good matches. The block:
- Receives two inputs (input_1 and input_2) that can be either images or pre-computed SIFT descriptors
- Processes each input based on its type:
- If input is an image: Automatically computes SIFT keypoints and descriptors using OpenCV's SIFT detector
- Converts image to grayscale
- Detects keypoints and computes 128-dimensional SIFT descriptors
- Optionally creates keypoint visualization if visualize=True
- Converts keypoints to dictionary format for output
- If input is descriptors: Uses the provided descriptors directly (skips SIFT computation)
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering)
- Selects matcher algorithm based on matcher parameter:
- FlannBasedMatcher (default): Uses FLANN for efficient approximate nearest neighbor search, faster for large descriptor sets
- BFMatcher: Uses brute force matching with L2 norm, exact matching but slower for large descriptor sets
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in input_1:
- For each descriptor in descriptors_1, finds the 2 most similar descriptors in descriptors_2
- Uses Euclidean distance (L2 norm) in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Optionally generates visualizations if visualize=True and images were provided:
- Creates keypoint visualizations for each image (images with keypoints drawn)
- Creates a matches visualization showing corresponding keypoints between the two images connected by lines
- Returns match results, keypoints, descriptors, and optional visualizations
The block provides flexibility by accepting either images (with automatic SIFT computation) or pre-computed descriptors. When images are provided, the block handles all SIFT processing internally, making it easier to use without requiring separate SIFT feature detection steps. The optional visualization feature helps debug and understand matching results by showing keypoints and matches visually. SIFT descriptors are scale and rotation invariant, making the block effective for matching images with different scales, rotations, or viewing angles.
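The matching and decision steps above (k=2 nearest-neighbour search, Lowe's ratio test, threshold comparison) can be sketched in plain Python. This is an illustrative re-implementation operating on plain descriptor lists, not the block's actual code; `knn2` and `compare` are hypothetical helper names:

```python
import math

def knn2(descriptors_1, descriptors_2):
    """For each descriptor in descriptors_1, return the distances to its two
    nearest neighbours in descriptors_2 (brute-force L2, mirroring k=2 matching)."""
    pairs = []
    for d1 in descriptors_1:
        dists = sorted(math.dist(d1, d2) for d2 in descriptors_2)
        pairs.append((dists[0], dists[1]))  # (best, second-best)
    return pairs

def compare(descriptors_1, descriptors_2, ratio_threshold=0.7,
            good_matches_threshold=50):
    if len(descriptors_1) < 2 or len(descriptors_2) < 2:
        raise ValueError("at least 2 descriptors per input are required")
    # Lowe's ratio test: keep a match only when the best neighbour is
    # clearly closer than the runner-up.
    good = [m for m, n in knn2(descriptors_1, descriptors_2)
            if m < ratio_threshold * n]
    return len(good), len(good) >= good_matches_threshold
```

With two descriptor sets that share most features, the good-match count reaches the threshold and the comparison returns True; when the two nearest neighbours sit at nearly the same distance, the ratio test discards the match as ambiguous.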
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives images or SIFT descriptors and produces match results with optional visualizations:
- After image input blocks to compare images directly (e.g., compare input images, match images from camera feeds, analyze image similarities), enabling direct image comparison workflows
- After SIFT feature detection blocks to compare pre-computed SIFT descriptors (e.g., compare descriptors from different images, match images using existing SIFT features, analyze image similarity with pre-computed descriptors), enabling descriptor-based comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results and visualizations (e.g., store image match results, save similarity scores, record comparison data with visualizations), enabling match result storage workflows
- Before visualization blocks to further process or display visualizations (e.g., display match visualizations, show keypoint images, render comparison results), enabling visualization workflow outputs
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
Version Differences¶
This version (v2) includes several enhancements over v1:
- Flexible Input Types: Accepts both images and pre-computed SIFT descriptors as input (v1 only accepted descriptors), allowing direct image comparison without requiring separate SIFT feature detection steps
- Automatic SIFT Computation: Automatically computes SIFT keypoints and descriptors when images are provided, eliminating the need for separate SIFT feature detection blocks in simple workflows
- Matcher Selection: Added configurable matcher parameter to choose between FlannBasedMatcher (default, faster) and BFMatcher (exact, slower), providing flexibility for different performance requirements
- Visualization Support: Added optional visualization feature that generates keypoint visualizations and match visualizations when images are provided, helping debug and understand matching results
- Enhanced Outputs: Returns keypoints and descriptors for both images, plus optional visualizations (keypoint images and match visualization), providing more comprehensive output data for downstream processing
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
good_matches_threshold |
int |
Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
ratio_threshold |
float |
Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
matcher |
str |
Matcher algorithm to use for comparing SIFT descriptors: 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search - faster for large descriptor sets, suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm - exact matching but slower for large descriptor sets, useful when you need exact results or have small descriptor sets. Default is 'FlannBasedMatcher' for optimal performance. Choose BFMatcher only if you need exact matching or have performance constraints that favor brute force. | ✅ |
visualize |
bool |
Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
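To make ratio_threshold concrete, here is a small worked example with hypothetical match distances (the numbers are illustrative only, not produced by the block):

```python
ratio_threshold = 0.7

# Distinct match: the best neighbour is much closer than the runner-up.
keep_distinct = 100.0 < ratio_threshold * 160.0    # 100 < 112 -> kept

# Ambiguous match: two neighbours at nearly the same distance.
keep_ambiguous = 95.0 < ratio_threshold * 100.0    # 95 < 70 -> rejected
```

The ambiguous pair (ratio 0.95) would only survive with a threshold above 0.95, while lowering the threshold to 0.6 would reject even the distinct match (100 < 96 is false), illustrating why 0.6-0.8 is the practical range.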
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v2.
- inputs:
Email Notification,Contrast Equalization,Instance Segmentation Model,Google Vision OCR,PTZ Tracking (ONVIF),Grid Visualization,S3 Sink,Stability AI Image Generation,Model Comparison Visualization,Identify Changes,Absolute Static Crop,Keypoint Visualization,SIFT,Trace Visualization,Roboflow Dataset Upload,Twilio SMS/MMS Notification,QR Code Generator,Model Monitoring Inference Aggregator,GLM-OCR,Reference Path Visualization,Halo Visualization,OCR Model,SIFT Comparison,VLM As Classifier,VLM As Detector,Image Preprocessing,Crop Visualization,OpenAI,Identify Outliers,Label Visualization,Classification Label Visualization,Pixelate Visualization,OpenAI,Local File Sink,Twilio SMS Notification,Email Notification,Qwen3.5-VL,Stitch OCR Detections,Corner Visualization,Stitch Images,Background Subtraction,Stitch OCR Detections,LMM For Classification,EasyOCR,Morphological Transformation,CSV Formatter,Line Counter,OpenAI,Clip Comparison,Image Threshold,Background Color Visualization,Anthropic Claude,Google Gemini,Camera Calibration,Halo Visualization,Stability AI Outpainting,Roboflow Custom Metadata,CogVLM,OpenAI,Single-Label Classification Model,Ellipse Visualization,Dynamic Zone,VLM As Classifier,Heatmap Visualization,Image Convert Grayscale,Detections Consensus,Triangle Visualization,Image Blur,Depth Estimation,Color Visualization,SIFT Comparison,Camera Focus,Text Display,Anthropic Claude,Dot Visualization,Image Slicer,Template Matching,Keypoint Detection Model,Polygon Visualization,Florence-2 Model,Motion Detection,Circle Visualization,Blur Visualization,Multi-Label Classification Model,Google Gemini,LMM,Slack Notification,Icon Visualization,Camera Focus,Stability AI Inpainting,Polygon Visualization,Detection Event Log,Webhook Sink,Polygon Zone Visualization,Perspective Correction,Florence-2 Model,Anthropic Claude,Mask Visualization,Google Gemini,Image Contours,Dynamic Crop,Distance Measurement,Roboflow Dataset Upload,Llama 3.2 Vision,JSON Parser,Pixel Color Count,VLM As 
Detector,Object Detection Model,Image Slicer,Line Counter Visualization,Relative Static Crop,Line Counter,Bounding Box Visualization
- outputs:
Detections Classes Replacement,Dominant Color,Instance Segmentation Model,PTZ Tracking (ONVIF),ByteTrack Tracker,Google Vision OCR,Contrast Equalization,Grid Visualization,Identify Changes,Trace Visualization,SIFT,Twilio SMS/MMS Notification,QR Code Generator,SAM 3,GLM-OCR,SORT Tracker,Perception Encoder Embedding Model,OpenAI,Identify Outliers,Label Visualization,Pixelate Visualization,Twilio SMS Notification,OpenAI,Email Notification,Corner Visualization,Background Subtraction,Time in Zone,Morphological Transformation,Qwen2.5-VL,SAM 3,OpenAI,Clip Comparison,Background Color Visualization,Camera Calibration,Stability AI Outpainting,Ellipse Visualization,Detections Stitch,Dynamic Zone,VLM As Classifier,Heatmap Visualization,Detections Consensus,Image Convert Grayscale,Barcode Detection,Color Visualization,Camera Focus,Image Slicer,SmolVLM2,SAM 3,Polygon Visualization,Motion Detection,Blur Visualization,Qwen3-VL,LMM,Slack Notification,Stability AI Inpainting,Perspective Correction,Gaze Detection,Instance Segmentation Model,Anthropic Claude,Florence-2 Model,Mask Visualization,Moondream2,QR Code Detection,Image Contours,Clip Comparison,Byte Tracker,YOLO-World Model,Keypoint Detection Model,Object Detection Model,Pixel Color Count,VLM As Detector,Dynamic Crop,Relative Static Crop,Byte Tracker,Email Notification,Stability AI Image Generation,Model Comparison Visualization,Absolute Static Crop,Roboflow Dataset Upload,Keypoint Visualization,CLIP Embedding Model,Buffer,Model Monitoring Inference Aggregator,Reference Path Visualization,Halo Visualization,OCR Model,SIFT Comparison,VLM As Classifier,Time in Zone,Detections Stabilizer,VLM As Detector,Image Preprocessing,Crop Visualization,Classification Label Visualization,Qwen3.5-VL,Stitch OCR Detections,Stitch Images,Time in Zone,Stitch OCR Detections,LMM For Classification,EasyOCR,Single-Label Classification Model,Image Threshold,Anthropic Claude,Google Gemini,Roboflow Custom Metadata,Halo Visualization,CogVLM,Single-Label 
Classification Model,OpenAI,Triangle Visualization,Semantic Segmentation Model,Image Blur,Depth Estimation,SIFT Comparison,Text Display,Anthropic Claude,Dot Visualization,Template Matching,Keypoint Detection Model,Florence-2 Model,Circle Visualization,Multi-Label Classification Model,Google Gemini,Icon Visualization,Camera Focus,Detection Offset,Polygon Visualization,Webhook Sink,Polygon Zone Visualization,Google Gemini,Roboflow Dataset Upload,Llama 3.2 Vision,Seg Preview,Object Detection Model,Image Slicer,Byte Tracker,Line Counter Visualization,Multi-Label Classification Model,OC-SORT Tracker,Bounding Box Visualization,Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
SIFT Comparison in version v2 has.
Bindings
-
input
- input_1 (Union[numpy_array, image]): First input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format, descriptors should be numpy arrays of 128-dimensional SIFT descriptors.
- input_2 (Union[numpy_array, image]): Second input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format, descriptors should be numpy arrays of 128-dimensional SIFT descriptors. This input will be matched against input_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
- matcher (string): Matcher algorithm to use for comparing SIFT descriptors: 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search - faster for large descriptor sets, suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm - exact matching but slower for large descriptor sets, useful when you need exact results or have small descriptor sets. Default is 'FlannBasedMatcher' for optimal performance. Choose BFMatcher only if you need exact matching or have performance constraints that favor brute force.
- visualize (boolean): Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
- keypoints_1 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_1 (numpy_array): Numpy array.
- keypoints_2 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_2 (numpy_array): Numpy array.
- visualization_1 (image): Image in workflows.
- visualization_2 (image): Image in workflows.
- visualization_matches (image): Image in workflows.
Example JSON definition of step SIFT Comparison in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v2",
"input_1": "$inputs.image1",
"input_2": "$inputs.image2",
"good_matches_threshold": 50,
"ratio_threshold": 0.7,
"matcher": "FlannBasedMatcher",
"visualize": true
}
v1¶
Class: SIFTComparisonBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v1.SIFTComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare SIFT (Scale Invariant Feature Transform) descriptors from two images using FLANN-based matching and Lowe's ratio test, determining image similarity by counting feature matches and returning a boolean match result based on a configurable threshold for image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares SIFT descriptors from two images to determine if they match by finding corresponding features and counting good matches. The block:
- Receives SIFT descriptors from two images (descriptor_1 and descriptor_2) - these descriptors should come from a SIFT feature detection step that has already extracted keypoints and computed descriptors for both images
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering - needs at least 2 nearest neighbors)
- Creates a FLANN (Fast Library for Approximate Nearest Neighbors) based matcher:
- Uses FLANN algorithm for efficient approximate nearest neighbor search in high-dimensional descriptor space
- Configures FLANN with algorithm parameters optimized for SIFT descriptors (algorithm=1, trees=5, checks=50)
- FLANN is faster than brute force matching for large descriptor sets while maintaining good accuracy
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in image 1:
- For each descriptor in descriptor_1, finds the 2 most similar descriptors in descriptor_2
- Uses Euclidean distance in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Returns the count of good matches and the boolean match result
The block uses SIFT descriptors which are scale and rotation invariant, making it effective for matching images with different scales, rotations, or viewing angles. FLANN matching provides efficient approximate nearest neighbor search for fast comparison of large descriptor sets. Lowe's ratio test improves match quality by filtering ambiguous matches where the best match isn't significantly better than alternatives. The threshold-based matching allows configurable sensitivity - lower thresholds require fewer matches (more lenient), higher thresholds require more matches (stricter).
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives SIFT descriptors from two images and produces match results:
- After SIFT feature detection blocks to compare SIFT descriptors from different images (e.g., compare descriptors from multiple images, match images using SIFT features, analyze image similarity with SIFT), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results (e.g., store image match results, save similarity scores, record comparison data), enabling match result storage workflows
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
- Before visualization blocks to visualize match results (e.g., display match results, visualize similar images, show comparison outcomes), enabling match visualization workflows
- In duplicate detection workflows where images need to be checked for duplicates (e.g., detect duplicates in image collections, find repeated images, identify identical images), enabling duplicate detection workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
good_matches_threshold |
int |
Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
ratio_threshold |
float |
Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v1.
- inputs:
Detection Event Log,Depth Estimation,Perspective Correction,SIFT Comparison,SIFT,Image Contours,Distance Measurement,Template Matching,Line Counter,Pixel Color Count,SIFT Comparison,Line Counter
- outputs:
Detections Classes Replacement,Dominant Color,Byte Tracker,Email Notification,Instance Segmentation Model,PTZ Tracking (ONVIF),ByteTrack Tracker,Grid Visualization,Model Comparison Visualization,Identify Changes,Absolute Static Crop,Keypoint Visualization,Trace Visualization,Roboflow Dataset Upload,Twilio SMS/MMS Notification,QR Code Generator,Model Monitoring Inference Aggregator,Reference Path Visualization,Halo Visualization,SIFT Comparison,Time in Zone,Detections Stabilizer,Image Preprocessing,SORT Tracker,Crop Visualization,Identify Outliers,Label Visualization,Pixelate Visualization,Twilio SMS Notification,Classification Label Visualization,Email Notification,Stitch OCR Detections,Corner Visualization,Stitch Images,Background Subtraction,Time in Zone,Time in Zone,Stitch OCR Detections,Morphological Transformation,SAM 3,Image Threshold,Anthropic Claude,Single-Label Classification Model,Background Color Visualization,Camera Calibration,Stability AI Outpainting,Halo Visualization,Roboflow Custom Metadata,Single-Label Classification Model,Ellipse Visualization,Dynamic Zone,Heatmap Visualization,Detections Consensus,Triangle Visualization,Image Blur,Color Visualization,SIFT Comparison,Text Display,Anthropic Claude,Dot Visualization,Image Slicer,Template Matching,SAM 3,Keypoint Detection Model,Polygon Visualization,Motion Detection,Circle Visualization,Blur Visualization,Multi-Label Classification Model,Google Gemini,Slack Notification,Icon Visualization,Stability AI Inpainting,Detection Offset,Polygon Visualization,Webhook Sink,Polygon Zone Visualization,Instance Segmentation Model,Perspective Correction,Gaze Detection,Anthropic Claude,Mask Visualization,Image Contours,Roboflow Dataset Upload,Byte Tracker,Keypoint Detection Model,Object Detection Model,Pixel Color Count,Object Detection Model,Image Slicer,Byte Tracker,Line Counter Visualization,Multi-Label Classification Model,OC-SORT Tracker,Bounding Box Visualization,Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
SIFT Comparison in version v1 has.
Bindings
-
input
- descriptor_1 (numpy_array): SIFT descriptors from the first image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the first image.
- descriptor_2 (numpy_array): SIFT descriptors from the second image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the second image. These descriptors will be matched against descriptor_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
Example JSON definition of step SIFT Comparison in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v1",
"descriptor_1": "$steps.sift.descriptors",
"descriptor_2": "$steps.sift.descriptors",
"good_matches_threshold": 50,
"ratio_threshold": 0.7
}