SIFT Comparison¶
v2¶
Class: SIFTComparisonBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v2.SIFTComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare two images or their SIFT descriptors using configurable matcher algorithms (FLANN or brute force). The block automatically computes SIFT features when images are provided, filters matches with Lowe's ratio test, and can optionally generate visualizations of keypoints and matches, supporting image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares two images or their SIFT descriptors to determine if they match by finding corresponding features and counting good matches. The block:
- Receives two inputs (input_1 and input_2) that can be either images or pre-computed SIFT descriptors
- Processes each input based on its type:
    - If the input is an image, automatically computes SIFT keypoints and descriptors using OpenCV's SIFT detector:
        - Converts the image to grayscale
        - Detects keypoints and computes 128-dimensional SIFT descriptors
        - Optionally creates a keypoint visualization if visualize=True
        - Converts keypoints to dictionary format for output
    - If the input is descriptors, uses the provided descriptors directly (skips SIFT computation)
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering)
- Selects the matcher algorithm based on the matcher parameter:
    - FlannBasedMatcher (default): uses FLANN for efficient approximate nearest neighbor search; faster for large descriptor sets
    - BFMatcher: uses brute force matching with the L2 norm; exact matching but slower for large descriptor sets
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in input_1:
    - For each descriptor in descriptors_1, finds the 2 most similar descriptors in descriptors_2
    - Uses Euclidean distance (L2 norm) in descriptor space to measure similarity
    - Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
    - For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
    - Keeps matches where m.distance < ratio_threshold * n.distance
    - This ratio test filters out ambiguous matches where multiple descriptors are similarly close
    - Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering); higher values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines whether the images match by comparing good_matches_count to good_matches_threshold:
    - If good_matches_count >= good_matches_threshold, images_match = True; otherwise images_match = False
- Optionally generates visualizations if visualize=True and images were provided:
    - Keypoint visualizations for each image (images with keypoints drawn)
    - A matches visualization showing corresponding keypoints between the two images connected by lines
- Returns match results, keypoints, descriptors, and optional visualizations
The block provides flexibility by accepting either images (with automatic SIFT computation) or pre-computed descriptors. When images are provided, the block handles all SIFT processing internally, making it easier to use without requiring separate SIFT feature detection steps. The optional visualization feature helps debug and understand matching results by showing keypoints and matches visually. SIFT descriptors are scale and rotation invariant, making the block effective for matching images with different scales, rotations, or viewing angles.
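To make the steps above concrete, here is a minimal sketch of the same pipeline in plain OpenCV. It mirrors the block's parameters and logic but is illustrative rather than the block's actual source; the function name and file paths are placeholders.

```python
# Minimal sketch of the v2 comparison pipeline using OpenCV.
# Illustrative only - not the block's actual source. The paths and
# function name are placeholders.
import cv2

def compare_images(path_1, path_2, good_matches_threshold=50,
                   ratio_threshold=0.7, matcher="FlannBasedMatcher"):
    # SIFT features are computed on grayscale versions of the inputs
    gray_1 = cv2.imread(path_1, cv2.IMREAD_GRAYSCALE)
    gray_2 = cv2.imread(path_2, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_1, desc_1 = sift.detectAndCompute(gray_1, None)
    kp_2, desc_2 = sift.detectAndCompute(gray_2, None)

    # At least 2 descriptors per image are needed for the k=2 ratio test
    if desc_1 is None or desc_2 is None or len(desc_1) < 2 or len(desc_2) < 2:
        return False, 0

    # Matcher selection mirrors the block's matcher parameter
    if matcher == "FlannBasedMatcher":
        index_params = dict(algorithm=1, trees=5)  # KD-tree index, suited to SIFT
        search_params = dict(checks=50)
        m = cv2.FlannBasedMatcher(index_params, search_params)
    else:
        m = cv2.BFMatcher(cv2.NORM_L2)  # exact L2-norm brute force

    # k-nearest neighbor matching: the 2 closest candidates per descriptor
    matches = m.knnMatch(desc_1, desc_2, k=2)

    # Lowe's ratio test: keep a match only when the best candidate is
    # clearly closer than the second-best one
    good = [pair[0] for pair in matches
            if len(pair) == 2
            and pair[0].distance < ratio_threshold * pair[1].distance]

    # Optional visualizations (visualize=True) would use, for example:
    #   cv2.drawKeypoints(gray_1, kp_1, None)
    #   cv2.drawMatches(gray_1, kp_1, gray_2, kp_2, good, None)

    return len(good) >= good_matches_threshold, len(good)
```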
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives images or SIFT descriptors and produces match results with optional visualizations:
- After image input blocks to compare images directly (e.g., compare input images, match images from camera feeds, analyze image similarities), enabling direct image comparison workflows
- After SIFT feature detection blocks to compare pre-computed SIFT descriptors (e.g., compare descriptors from different images, match images using existing SIFT features, analyze image similarity with pre-computed descriptors), enabling descriptor-based comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results and visualizations (e.g., store image match results, save similarity scores, record comparison data with visualizations), enabling match result storage workflows
- Before visualization blocks to further process or display visualizations (e.g., display match visualizations, show keypoint images, render comparison results), enabling visualization workflow outputs
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
Version Differences¶
This version (v2) includes several enhancements over v1:
- Flexible Input Types: Accepts both images and pre-computed SIFT descriptors as input (v1 only accepted descriptors), allowing direct image comparison without requiring separate SIFT feature detection steps
- Automatic SIFT Computation: Automatically computes SIFT keypoints and descriptors when images are provided, eliminating the need for separate SIFT feature detection blocks in simple workflows
- Matcher Selection: Added configurable matcher parameter to choose between FlannBasedMatcher (default, faster) and BFMatcher (exact, slower), providing flexibility for different performance requirements
- Visualization Support: Added optional visualization feature that generates keypoint visualizations and match visualizations when images are provided, helping debug and understand matching results
- Enhanced Outputs: Returns keypoints and descriptors for both images, plus optional visualizations (keypoint images and match visualization), providing more comprehensive output data for downstream processing
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
| matcher | str | Matcher algorithm to use for comparing SIFT descriptors: 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search - faster for large descriptor sets and suitable for most use cases. 'BFMatcher' uses brute force matching with the L2 norm - exact matching but slower for large descriptor sets. Default is 'FlannBasedMatcher' for optimal performance. Choose BFMatcher only if you need exact matching or your descriptor sets are small enough that brute force is not a bottleneck. | ✅ |
| visualize | bool | Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
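For example, any property marked ✅ can reference a runtime value instead of a literal. A minimal sketch, assuming a workflow input named sift_ratio (a hypothetical name used only for illustration):

```json
{
  "name": "sift_comparison",
  "type": "roboflow_core/sift_comparison@v2",
  "input_1": "$inputs.image1",
  "input_2": "$inputs.image2",
  "ratio_threshold": "$inputs.sift_ratio"
}
```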
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v2.
- inputs: Contrast Equalization, Llama 3.2 Vision, Clip Comparison, SIFT Comparison, Anthropic Claude, VLM as Detector, Local File Sink, Polygon Visualization, QR Code Generator, Image Blur, SIFT Comparison, Email Notification, Roboflow Dataset Upload, Text Display, Motion Detection, Model Comparison Visualization, Camera Focus, SIFT, PTZ Tracking (ONVIF), LMM, VLM as Detector, Google Vision OCR, Mask Visualization, Anthropic Claude, Relative Static Crop, Circle Visualization, EasyOCR, Pixelate Visualization, Stability AI Inpainting, Reference Path Visualization, VLM as Classifier, Instance Segmentation Model, Perspective Correction, Ellipse Visualization, Crop Visualization, Halo Visualization, Image Threshold, Keypoint Detection Model, CSV Formatter, Florence-2 Model, Twilio SMS Notification, Image Convert Grayscale, Corner Visualization, Image Preprocessing, Line Counter, Dynamic Zone, Identify Changes, Icon Visualization, Background Subtraction, Image Contours, Image Slicer, Detections Consensus, Depth Estimation, Multi-Label Classification Model, Pixel Color Count, Stitch Images, Dynamic Crop, Bounding Box Visualization, VLM as Classifier, Model Monitoring Inference Aggregator, Detection Event Log, Line Counter Visualization, Blur Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Line Counter, Single-Label Classification Model, Email Notification, Keypoint Visualization, Distance Measurement, OCR Model, Roboflow Custom Metadata, Google Gemini, OpenAI, Camera Focus, Trace Visualization, OpenAI, CogVLM, Color Visualization, Absolute Static Crop, Image Slicer, Dot Visualization, Identify Outliers, Label Visualization, Slack Notification, Florence-2 Model, Google Gemini, JSON Parser, Google Gemini, Grid Visualization, Object Detection Model, LMM For Classification, Template Matching, OpenAI, Stitch OCR Detections, OpenAI, Classification Label Visualization, Background Color Visualization, Stability AI Outpainting, Stitch OCR Detections, Roboflow Dataset Upload, Twilio SMS/MMS Notification, Anthropic Claude, Triangle Visualization, Stability AI Image Generation, Webhook Sink
- outputs: Contrast Equalization, Clip Comparison, VLM as Detector, Polygon Visualization, Image Blur, SIFT Comparison, Text Display, SIFT, Moondream2, Qwen3-VL, Google Vision OCR, Pixelate Visualization, Time in Zone, VLM as Classifier, Detection Offset, Instance Segmentation Model, Perspective Correction, Halo Visualization, Image Threshold, Keypoint Detection Model, Florence-2 Model, Twilio SMS Notification, Detections Stabilizer, Image Convert Grayscale, Perception Encoder Embedding Model, Corner Visualization, Dynamic Zone, Identify Changes, Icon Visualization, SAM 3, Qwen2.5-VL, Detections Consensus, Multi-Label Classification Model, Detections Stitch, QR Code Detection, Dynamic Crop, Bounding Box Visualization, YOLO-World Model, Detections Classes Replacement, Blur Visualization, Camera Calibration, Dominant Color, OpenAI, Camera Focus, Trace Visualization, CogVLM, Image Slicer, Absolute Static Crop, Dot Visualization, Label Visualization, Slack Notification, Google Gemini, Object Detection Model, LMM For Classification, Stitch OCR Detections, OpenAI, Classification Label Visualization, Stitch OCR Detections, Byte Tracker, Twilio SMS/MMS Notification, Gaze Detection, Anthropic Claude, Clip Comparison, VLM as Detector, Webhook Sink, Llama 3.2 Vision, SIFT Comparison, Anthropic Claude, Time in Zone, QR Code Generator, SmolVLM2, Email Notification, CLIP Embedding Model, Roboflow Dataset Upload, Motion Detection, Model Comparison Visualization, Camera Focus, PTZ Tracking (ONVIF), LMM, Byte Tracker, Single-Label Classification Model, Mask Visualization, Anthropic Claude, SAM 3, Relative Static Crop, Object Detection Model, Keypoint Detection Model, Circle Visualization, Seg Preview, EasyOCR, Stability AI Inpainting, Multi-Label Classification Model, Reference Path Visualization, Time in Zone, Ellipse Visualization, Crop Visualization, Image Preprocessing, Barcode Detection, Segment Anything 2 Model, Background Subtraction, Image Slicer, Image Contours, Depth Estimation, Pixel Color Count, Stitch Images, VLM as Classifier, Model Monitoring Inference Aggregator, Instance Segmentation Model, Line Counter Visualization, Morphological Transformation, Single-Label Classification Model, Polygon Zone Visualization, Email Notification, Keypoint Visualization, OCR Model, Roboflow Custom Metadata, Google Gemini, OpenAI, Color Visualization, Byte Tracker, Identify Outliers, Buffer, Florence-2 Model, Google Gemini, Grid Visualization, Template Matching, OpenAI, Background Color Visualization, SAM 3, Roboflow Dataset Upload, Stability AI Outpainting, Triangle Visualization, Stability AI Image Generation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds SIFT Comparison in version v2 has.
Bindings
- input:
    - input_1 (Union[image, numpy_array]): First input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed; if descriptors are provided, they are used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format; descriptors should be numpy arrays of 128-dimensional SIFT descriptors.
    - input_2 (Union[image, numpy_array]): Second input to compare - accepts the same types as input_1, with the same handling. This input will be matched against input_1 to determine image similarity.
    - good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching; if the count of good matches after ratio test filtering is greater than or equal to this threshold, images_match will be True. Default is 50. See the good_matches_threshold property above for tuning guidance.
    - ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches; matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Default is 0.7. See the ratio_threshold property above for tuning guidance.
    - matcher (string): Matcher algorithm to use for comparing SIFT descriptors: 'FlannBasedMatcher' (default, fast approximate matching) or 'BFMatcher' (exact but slower for large descriptor sets). See the matcher property above for details.
    - visualize (boolean): Whether to generate visualizations of keypoints and matches. Visualizations are only generated when images (not descriptors) are provided. Default is False. See the visualize property above for details.
- output:
    - images_match (boolean): Boolean flag.
    - good_matches_count (integer): Integer value.
    - keypoints_1 (image_keypoints): Image keypoints detected by classical Computer Vision method.
    - descriptors_1 (numpy_array): Numpy array.
    - keypoints_2 (image_keypoints): Image keypoints detected by classical Computer Vision method.
    - descriptors_2 (numpy_array): Numpy array.
    - visualization_1 (image): Image in workflows.
    - visualization_2 (image): Image in workflows.
    - visualization_matches (image): Image in workflows.
Example JSON definition of step SIFT Comparison in version v2
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/sift_comparison@v2",
  "input_1": "$inputs.image1",
  "input_2": "$inputs.image2",
  "good_matches_threshold": 50,
  "ratio_threshold": 0.7,
  "matcher": "FlannBasedMatcher",
  "visualize": true
}
```
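For context, a step definition like the one above typically sits inside a full workflow specification. A minimal sketch, assuming the standard workflow definition envelope (version, inputs, steps, outputs) described in the Workflows documentation; the input and output names here are illustrative, only the step definition itself comes from this page:

```json
{
  "version": "1.0",
  "inputs": [
    {"type": "WorkflowImage", "name": "image1"},
    {"type": "WorkflowImage", "name": "image2"},
    {"type": "WorkflowParameter", "name": "match_threshold", "default_value": 50}
  ],
  "steps": [
    {
      "name": "sift_comparison",
      "type": "roboflow_core/sift_comparison@v2",
      "input_1": "$inputs.image1",
      "input_2": "$inputs.image2",
      "good_matches_threshold": "$inputs.match_threshold",
      "ratio_threshold": 0.7,
      "matcher": "FlannBasedMatcher",
      "visualize": true
    }
  ],
  "outputs": [
    {"type": "JsonField", "name": "images_match", "selector": "$steps.sift_comparison.images_match"},
    {"type": "JsonField", "name": "good_matches_count", "selector": "$steps.sift_comparison.good_matches_count"},
    {"type": "JsonField", "name": "visualization_matches", "selector": "$steps.sift_comparison.visualization_matches"}
  ]
}
```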
v1¶
Class: SIFTComparisonBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v1.SIFTComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare SIFT (Scale Invariant Feature Transform) descriptors from two images using FLANN-based matching and Lowe's ratio test. The block determines image similarity by counting feature matches and returns a boolean match result based on a configurable threshold, supporting image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares SIFT descriptors from two images to determine if they match by finding corresponding features and counting good matches. The block:
- Receives SIFT descriptors from two images (descriptor_1 and descriptor_2); these descriptors should come from a SIFT feature detection step that has already extracted keypoints and computed descriptors for both images
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering, which needs at least 2 nearest neighbors)
- Creates a FLANN (Fast Library for Approximate Nearest Neighbors) based matcher:
    - Uses the FLANN algorithm for efficient approximate nearest neighbor search in high-dimensional descriptor space
    - Configures FLANN with parameters optimized for SIFT descriptors (algorithm=1, trees=5, checks=50)
    - FLANN is faster than brute force matching for large descriptor sets while maintaining good accuracy
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in image 1:
    - For each descriptor in descriptor_1, finds the 2 most similar descriptors in descriptor_2
    - Uses Euclidean distance in descriptor space to measure similarity
    - Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
    - For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
    - Keeps matches where m.distance < ratio_threshold * n.distance
    - This ratio test filters out ambiguous matches where multiple descriptors are similarly close
    - Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering); higher values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines whether the images match by comparing good_matches_count to good_matches_threshold:
    - If good_matches_count >= good_matches_threshold, images_match = True; otherwise images_match = False
- Returns the count of good matches and the boolean match result
The block uses SIFT descriptors which are scale and rotation invariant, making it effective for matching images with different scales, rotations, or viewing angles. FLANN matching provides efficient approximate nearest neighbor search for fast comparison of large descriptor sets. Lowe's ratio test improves match quality by filtering ambiguous matches where the best match isn't significantly better than alternatives. The threshold-based matching allows configurable sensitivity - lower thresholds require fewer matches (more lenient), higher thresholds require more matches (stricter).
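The FLANN configuration described above corresponds to the following OpenCV calls. This is a minimal sketch of the matching stage only, not the block's actual source; the descriptor arguments are placeholders for arrays produced by a SIFT step.

```python
# Illustrative sketch of v1's FLANN matching stage. descriptor_1 and
# descriptor_2 are placeholders for numpy arrays of 128-d SIFT descriptors.
import cv2
import numpy as np

def count_good_matches(descriptor_1, descriptor_2, ratio_threshold=0.7):
    # FLANN parameters cited in the description:
    # algorithm=1 (KD-tree index), trees=5, checks=50
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=1, trees=5),  # index parameters
        dict(checks=50),             # search parameters
    )
    # FLANN expects float32 descriptors
    d1 = np.asarray(descriptor_1, dtype=np.float32)
    d2 = np.asarray(descriptor_2, dtype=np.float32)

    # k=2: the two nearest neighbors are needed for Lowe's ratio test
    matches = flann.knnMatch(d1, d2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2
            and pair[0].distance < ratio_threshold * pair[1].distance]
    return len(good)

# The block's final decision mirrors the threshold comparison above:
# images_match = count_good_matches(d1, d2) >= good_matches_threshold
```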
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives SIFT descriptors from two images and produces match results:
- After SIFT feature detection blocks to compare SIFT descriptors from different images (e.g., compare descriptors from multiple images, match images using SIFT features, analyze image similarity with SIFT), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results (e.g., store image match results, save similarity scores, record comparison data), enabling match result storage workflows
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
- Before visualization blocks to visualize match results (e.g., display match results, visualize similar images, show comparison outcomes), enabling match visualization workflows
- In duplicate detection workflows where images need to be checked for duplicates (e.g., detect duplicates in image collections, find repeated images, identify identical images), enabling duplicate detection workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v1.
- inputs: Image Contours, Depth Estimation, SIFT Comparison, Pixel Color Count, SIFT Comparison, Perspective Correction, Detection Event Log, SIFT, Template Matching, Line Counter, Line Counter, Distance Measurement
- outputs: SIFT Comparison, Anthropic Claude, Time in Zone, Polygon Visualization, QR Code Generator, Image Blur, SIFT Comparison, Email Notification, Roboflow Dataset Upload, Motion Detection, Text Display, Model Comparison Visualization, PTZ Tracking (ONVIF), Byte Tracker, Single-Label Classification Model, Mask Visualization, Anthropic Claude, Object Detection Model, Keypoint Detection Model, Circle Visualization, Pixelate Visualization, Stability AI Inpainting, Multi-Label Classification Model, Reference Path Visualization, Time in Zone, Detection Offset, Time in Zone, Instance Segmentation Model, Perspective Correction, Crop Visualization, Halo Visualization, Ellipse Visualization, Image Threshold, Keypoint Detection Model, Twilio SMS Notification, Detections Stabilizer, Corner Visualization, Image Preprocessing, Dynamic Zone, Identify Changes, Icon Visualization, SAM 3, Background Subtraction, Segment Anything 2 Model, Image Slicer, Detections Consensus, Image Contours, Multi-Label Classification Model, Pixel Color Count, Stitch Images, Bounding Box Visualization, Model Monitoring Inference Aggregator, Instance Segmentation Model, Detections Classes Replacement, Line Counter Visualization, Blur Visualization, Morphological Transformation, Single-Label Classification Model, Polygon Zone Visualization, Email Notification, Keypoint Visualization, Dominant Color, Roboflow Custom Metadata, Trace Visualization, Image Slicer, Absolute Static Crop, Color Visualization, Byte Tracker, Dot Visualization, Identify Outliers, Label Visualization, Slack Notification, Grid Visualization, Object Detection Model, Template Matching, Stitch OCR Detections, Gaze Detection, Classification Label Visualization, Stitch OCR Detections, Stability AI Outpainting, Byte Tracker, SAM 3, Twilio SMS/MMS Notification, Roboflow Dataset Upload, Anthropic Claude, Background Color Visualization, Triangle Visualization, Webhook Sink
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds SIFT Comparison in version v1 has.
Bindings
- input:
    - descriptor_1 (numpy_array): SIFT descriptors from the first image to compare. Should be a numpy array of SIFT descriptors, typically from a SIFT feature detection block; each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images, and at least 2 descriptors are required for the ratio test to work.
    - descriptor_2 (numpy_array): SIFT descriptors from the second image to compare, with the same format and requirements as descriptor_1. These descriptors will be matched against descriptor_1 to determine image similarity.
    - good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching; if the count of good matches after ratio test filtering is greater than or equal to this threshold, images_match will be True. Default is 50. See the good_matches_threshold property above for tuning guidance.
    - ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches; matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Default is 0.7. See the ratio_threshold property above for tuning guidance.
- output:
    - images_match (boolean): Boolean flag.
    - good_matches_count (integer): Integer value.
Example JSON definition of step SIFT Comparison in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/sift_comparison@v1",
  "descriptor_1": "$steps.sift.descriptors",
  "descriptor_2": "$steps.sift.descriptors",
  "good_matches_threshold": 50,
  "ratio_threshold": 0.7
}
```
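Because v1 accepts only descriptors, a typical pipeline computes them first with two separate SIFT steps, one per image. A minimal sketch of that wiring is below; the SIFT step's type identifier (roboflow_core/sift@v1) and its image property name are assumptions based on the SIFT block, so verify them against that block's documentation before use:

```json
{
  "steps": [
    {
      "name": "sift_1",
      "type": "roboflow_core/sift@v1",
      "image": "$inputs.image_1"
    },
    {
      "name": "sift_2",
      "type": "roboflow_core/sift@v1",
      "image": "$inputs.image_2"
    },
    {
      "name": "sift_comparison",
      "type": "roboflow_core/sift_comparison@v1",
      "descriptor_1": "$steps.sift_1.descriptors",
      "descriptor_2": "$steps.sift_2.descriptors",
      "good_matches_threshold": 50,
      "ratio_threshold": 0.7
    }
  ]
}
```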