SIFT Comparison¶
v2¶
Class: SIFTComparisonBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v2.SIFTComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare two images or their SIFT descriptors using a configurable matcher algorithm (FLANN-based or brute force). SIFT features are computed automatically when images are provided, Lowe's ratio test filters out ambiguous matches, and visualizations of keypoints and matches can optionally be generated. Useful for image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares two images or their SIFT descriptors to determine if they match by finding corresponding features and counting good matches. The block:
- Receives two inputs (input_1 and input_2) that can be either images or pre-computed SIFT descriptors
- Processes each input based on its type:
- If input is an image: Automatically computes SIFT keypoints and descriptors using OpenCV's SIFT detector
- Converts image to grayscale
- Detects keypoints and computes 128-dimensional SIFT descriptors
- Optionally creates keypoint visualization if visualize=True
- Converts keypoints to dictionary format for output
- If input is descriptors: Uses the provided descriptors directly (skips SIFT computation)
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering)
- Selects matcher algorithm based on matcher parameter:
- FlannBasedMatcher (default): Uses FLANN for efficient approximate nearest neighbor search, faster for large descriptor sets
- BFMatcher: Uses brute force matching with L2 norm, exact matching but slower for large descriptor sets
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in input_1:
- For each descriptor in descriptors_1, finds the 2 most similar descriptors in descriptors_2
- Uses Euclidean distance (L2 norm) in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Optionally generates visualizations if visualize=True and images were provided:
- Creates keypoint visualizations for each image (images with keypoints drawn)
- Creates a matches visualization showing corresponding keypoints between the two images connected by lines
- Returns match results, keypoints, descriptors, and optional visualizations
The block provides flexibility by accepting either images (with automatic SIFT computation) or pre-computed descriptors. When images are provided, the block handles all SIFT processing internally, making it easier to use without requiring separate SIFT feature detection steps. The optional visualization feature helps debug and understand matching results by showing keypoints and matches visually. SIFT descriptors are scale and rotation invariant, making the block effective for matching images with different scales, rotations, or viewing angles.
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives images or SIFT descriptors and produces match results with optional visualizations:
- After image input blocks to compare images directly (e.g., compare input images, match images from camera feeds, analyze image similarities), enabling direct image comparison workflows
- After SIFT feature detection blocks to compare pre-computed SIFT descriptors (e.g., compare descriptors from different images, match images using existing SIFT features, analyze image similarity with pre-computed descriptors), enabling descriptor-based comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results and visualizations (e.g., store image match results, save similarity scores, record comparison data with visualizations), enabling match result storage workflows
- Before visualization blocks to further process or display visualizations (e.g., display match visualizations, show keypoint images, render comparison results), enabling visualization workflow outputs
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
Version Differences¶
This version (v2) includes several enhancements over v1:
- Flexible Input Types: Accepts both images and pre-computed SIFT descriptors as input (v1 only accepted descriptors), allowing direct image comparison without requiring separate SIFT feature detection steps
- Automatic SIFT Computation: Automatically computes SIFT keypoints and descriptors when images are provided, eliminating the need for separate SIFT feature detection blocks in simple workflows
- Matcher Selection: Added configurable matcher parameter to choose between FlannBasedMatcher (default, faster) and BFMatcher (exact, slower), providing flexibility for different performance requirements
- Visualization Support: Added optional visualization feature that generates keypoint visualizations and match visualizations when images are provided, helping debug and understand matching results
- Enhanced Outputs: Returns keypoints and descriptors for both images, plus optional visualizations (keypoint images and match visualization), providing more comprehensive output data for downstream processing
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
| matcher | str | Matcher algorithm to use for comparing SIFT descriptors. 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search: faster for large descriptor sets and suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm: exact but slower for large descriptor sets. Choose BFMatcher only when you need exact matching or have small descriptor sets. | ✅ |
| visualize | bool | Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
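To make the effect of ratio_threshold concrete, here is a tiny illustrative-only calculation. The distance values are made up, not taken from any real image pair; they just show how the same candidate matches survive or fail at different thresholds.

```python
# Illustrative-only distances: each pair is (best_match_distance,
# second_best_match_distance) for one query descriptor.
candidate_pairs = [
    (100.0, 300.0),  # distinct best match
    (180.0, 200.0),  # ambiguous: best barely beats second best
    (150.0, 230.0),  # borderline
]

def good_matches(pairs, ratio_threshold):
    # Lowe's ratio test: keep a match only if the best distance is clearly
    # smaller than the second-best distance
    return [p for p in pairs if p[0] < ratio_threshold * p[1]]

print(len(good_matches(candidate_pairs, 0.7)))  # 2: the borderline pair survives
print(len(good_matches(candidate_pairs, 0.6)))  # 1: stricter, borderline rejected
```

At 0.7 the borderline pair passes (150 < 0.7 × 230 = 161); at 0.6 it is rejected (150 ≥ 0.6 × 230 = 138), which is why lowering the threshold yields fewer but more confident matches.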
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v2.
- inputs:
Clip Comparison,Florence-2 Model,Morphological Transformation,Google Gemini,LMM,Instance Segmentation Model,Polygon Zone Visualization,Email Notification,Keypoint Visualization,Roboflow Custom Metadata,Motion Detection,Camera Focus,Anthropic Claude,Multi-Label Classification Model,Pixel Color Count,Image Threshold,LMM For Classification,Keypoint Detection Model,Anthropic Claude,Email Notification,Reference Path Visualization,Stitch OCR Detections,Camera Focus,Image Slicer,Stability AI Image Generation,Stability AI Outpainting,Stitch Images,Blur Visualization,OpenAI,Detection Event Log,Roboflow Dataset Upload,Depth Estimation,Google Gemini,CogVLM,Image Preprocessing,Identify Outliers,Local File Sink,Florence-2 Model,Image Convert Grayscale,JSON Parser,VLM as Detector,Dynamic Crop,Dot Visualization,Triangle Visualization,OCR Model,Crop Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Perspective Correction,Twilio SMS/MMS Notification,EasyOCR,Grid Visualization,Google Gemini,Line Counter,Trace Visualization,QR Code Generator,Pixelate Visualization,Detections Consensus,OpenAI,Camera Calibration,Roboflow Dataset Upload,Webhook Sink,Single-Label Classification Model,Object Detection Model,VLM as Detector,Background Subtraction,SIFT Comparison,Bounding Box Visualization,Contrast Equalization,Halo Visualization,Model Comparison Visualization,Label Visualization,Slack Notification,OpenAI,Dynamic Zone,Circle Visualization,Image Contours,Background Color Visualization,Image Blur,Mask Visualization,VLM as Classifier,Google Vision OCR,Llama 3.2 Vision,Color Visualization,Corner Visualization,Classification Label Visualization,OpenAI,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Model Monitoring Inference Aggregator,Line Counter,Image Slicer,Absolute Static Crop,VLM as Classifier,Polygon Visualization,SIFT Comparison,Stability AI Inpainting,Distance Measurement,Identify Changes,Relative Static Crop,SIFT,CSV Formatter,Text Display
- outputs:
Clip Comparison,Morphological Transformation,Motion Detection,Email Notification,Detections Stitch,Anthropic Claude,Pixel Color Count,Keypoint Detection Model,Reference Path Visualization,Stitch OCR Detections,Camera Focus,Stability AI Image Generation,Stitch Images,Stability AI Outpainting,Time in Zone,Roboflow Dataset Upload,Depth Estimation,CogVLM,Identify Outliers,SAM 3,Dynamic Crop,Time in Zone,Perception Encoder Embedding Model,Moondream2,Triangle Visualization,Dot Visualization,Crop Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Perspective Correction,Twilio SMS/MMS Notification,EasyOCR,Pixelate Visualization,Detections Consensus,OpenAI,Roboflow Dataset Upload,Buffer,Object Detection Model,Single-Label Classification Model,Barcode Detection,SIFT Comparison,Contrast Equalization,Byte Tracker,Halo Visualization,Model Comparison Visualization,Slack Notification,Byte Tracker,Dynamic Zone,Qwen2.5-VL,Image Contours,Background Color Visualization,Image Blur,Mask Visualization,Google Vision OCR,Color Visualization,Corner Visualization,Clip Comparison,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Image Slicer,Detections Stabilizer,Absolute Static Crop,Stability AI Inpainting,SAM 3,Relative Static Crop,SIFT,Blur Visualization,Multi-Label Classification Model,Instance Segmentation Model,Florence-2 Model,Google Gemini,LMM,Instance Segmentation Model,Polygon Zone Visualization,Keypoint Visualization,Roboflow Custom Metadata,Camera Focus,Multi-Label Classification Model,Detection Offset,Image Threshold,LMM For Classification,Anthropic Claude,Email Notification,Gaze Detection,Image Slicer,SmolVLM2,OpenAI,YOLO-World Model,Google Gemini,Image Preprocessing,VLM as Detector,Florence-2 Model,Image Convert Grayscale,Time in Zone,Byte Tracker,OCR Model,Seg Preview,SAM 3,Grid Visualization,Google Gemini,Object Detection Model,Trace Visualization,QR Code Generator,CLIP Embedding Model,Camera Calibration,Webhook Sink,QR Code Detection,VLM as Detector,Background Subtraction,Bounding Box Visualization,Label Visualization,OpenAI,Circle Visualization,Dominant Color,VLM as Classifier,Llama 3.2 Vision,Single-Label Classification Model,Classification Label Visualization,Segment Anything 2 Model,OpenAI,Detections Classes Replacement,Model Monitoring Inference Aggregator,VLM as Classifier,Polygon Visualization,SIFT Comparison,Keypoint Detection Model,Qwen3-VL,Identify Changes,Text Display
Input and Output Bindings¶
The available connections depend on this block's binding kinds. Check what binding kinds
SIFT Comparison in version v2 has.
Bindings
-
input
- input_1 (Union[image, numpy_array]): First input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format; descriptors should be numpy arrays of 128-dimensional SIFT descriptors.
- input_2 (Union[image, numpy_array]): Second input to compare - can be either an image or pre-computed SIFT descriptors (numpy array). If an image is provided, SIFT keypoints and descriptors will be automatically computed. If descriptors are provided, they will be used directly. Supports images from inputs or workflow steps, or descriptors from SIFT feature detection blocks. Images should be in standard image format; descriptors should be numpy arrays of 128-dimensional SIFT descriptors. This input will be matched against input_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
- matcher (string): Matcher algorithm to use for comparing SIFT descriptors. 'FlannBasedMatcher' (default) uses FLANN for efficient approximate nearest neighbor search: faster for large descriptor sets and suitable for most use cases. 'BFMatcher' uses brute force matching with L2 norm: exact but slower for large descriptor sets. Choose BFMatcher only when you need exact matching or have small descriptor sets.
- visualize (boolean): Whether to generate visualizations of keypoints and matches. When True and images are provided as input, the block generates: (1) visualization_1 and visualization_2 showing keypoints drawn on each image, (2) visualization_matches showing corresponding keypoints between the two images connected by lines. Visualizations are only generated when images (not descriptors) are provided. Default is False. Set to True when you need to debug matching results, understand why images match or don't match, or want visual output for display or analysis purposes.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
- keypoints_1 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_1 (numpy_array): Numpy array.
- keypoints_2 (image_keypoints): Image keypoints detected by classical Computer Vision method.
- descriptors_2 (numpy_array): Numpy array.
- visualization_1 (image): Image in workflows.
- visualization_2 (image): Image in workflows.
- visualization_matches (image): Image in workflows.
Example JSON definition of step SIFT Comparison in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v2",
"input_1": "$inputs.image1",
"input_2": "$inputs.image2",
"good_matches_threshold": 50,
"ratio_threshold": 0.7,
"matcher": "FlannBasedMatcher",
"visualize": true
}
v1¶
Class: SIFTComparisonBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.sift_comparison.v1.SIFTComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Compare SIFT (Scale Invariant Feature Transform) descriptors from two images using FLANN-based matching and Lowe's ratio test. The block determines image similarity by counting feature matches and returns a boolean match result based on a configurable threshold, for image matching, similarity detection, duplicate detection, and feature-based image comparison workflows.
How This Block Works¶
This block compares SIFT descriptors from two images to determine if they match by finding corresponding features and counting good matches. The block:
- Receives SIFT descriptors from two images (descriptor_1 and descriptor_2) - these descriptors should come from a SIFT feature detection step that has already extracted keypoints and computed descriptors for both images
- Validates that both descriptor arrays have at least 2 descriptors (required for ratio test filtering - needs at least 2 nearest neighbors)
- Creates a FLANN (Fast Library for Approximate Nearest Neighbors) based matcher:
- Uses FLANN algorithm for efficient approximate nearest neighbor search in high-dimensional descriptor space
- Configures FLANN with algorithm parameters optimized for SIFT descriptors (algorithm=1, trees=5, checks=50)
- FLANN is faster than brute force matching for large descriptor sets while maintaining good accuracy
- Performs k-nearest neighbor matching (k=2) to find the 2 closest descriptor matches for each descriptor in image 1:
- For each descriptor in descriptor_1, finds the 2 most similar descriptors in descriptor_2
- Uses Euclidean distance in descriptor space to measure similarity
- Returns matches with distance values indicating how similar the descriptors are
- Filters good matches using Lowe's ratio test:
- For each match, compares the distance to the best match (m.distance) with the distance to the second-best match (n.distance)
- Keeps matches where m.distance < ratio_threshold * n.distance
- This ratio test filters out ambiguous matches where multiple descriptors are similarly close
- Lower ratio_threshold values (e.g., 0.6) require more distinct matches (stricter filtering)
- Higher ratio_threshold values (e.g., 0.8) allow more matches (more lenient filtering)
- Counts the number of good matches after ratio test filtering
- Determines if images match by comparing good_matches_count to good_matches_threshold:
- If good_matches_count >= good_matches_threshold, images_match = True
- If good_matches_count < good_matches_threshold, images_match = False
- Returns the count of good matches and the boolean match result
The block uses SIFT descriptors which are scale and rotation invariant, making it effective for matching images with different scales, rotations, or viewing angles. FLANN matching provides efficient approximate nearest neighbor search for fast comparison of large descriptor sets. Lowe's ratio test improves match quality by filtering ambiguous matches where the best match isn't significantly better than alternatives. The threshold-based matching allows configurable sensitivity - lower thresholds require fewer matches (more lenient), higher thresholds require more matches (stricter).
Common Use Cases¶
- Image Similarity Detection: Determine if two images are similar or match each other (e.g., detect similar images in collections, find matching images in databases, identify duplicate images), enabling image similarity workflows
- Duplicate Image Detection: Identify duplicate or near-duplicate images in image collections (e.g., find duplicate images in photo libraries, detect repeated images in datasets, identify identical images with different scales or orientations), enabling duplicate detection workflows
- Feature-Based Image Matching: Match images based on visual features and keypoints (e.g., match images with similar content, find corresponding images across different views, identify matching images in image sequences), enabling feature-based matching workflows
- Image Verification: Verify if images match expected patterns or references (e.g., verify image authenticity, check if images match reference images, validate image content against templates), enabling image verification workflows
- Image Comparison and Analysis: Compare images to analyze similarities and differences (e.g., compare images for quality control, analyze image variations, measure image similarity scores), enabling image comparison analysis workflows
- Content-Based Image Retrieval: Use feature matching for content-based image search and retrieval (e.g., find similar images in databases, retrieve images by visual similarity, search images by content matching), enabling content-based retrieval workflows
Connecting to Other Blocks¶
This block receives SIFT descriptors from two images and produces match results:
- After SIFT feature detection blocks to compare SIFT descriptors from different images (e.g., compare descriptors from multiple images, match images using SIFT features, analyze image similarity with SIFT), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use match results for decision-making (e.g., filter based on image matches, make decisions based on similarity, apply logic based on match results), enabling match-based conditional workflows
- Before data storage blocks to store match results (e.g., store image match results, save similarity scores, record comparison data), enabling match result storage workflows
- In image comparison pipelines where multiple images need to be compared (e.g., compare images in sequences, analyze image similarities in workflows, process image comparisons in pipelines), enabling image comparison pipeline workflows
- Before visualization blocks to visualize match results (e.g., display match results, visualize similar images, show comparison outcomes), enabling match visualization workflows
- In duplicate detection workflows where images need to be checked for duplicates (e.g., detect duplicates in image collections, find repeated images, identify identical images), enabling duplicate detection workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift_comparison@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| good_matches_threshold | int | Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features. | ✅ |
| ratio_threshold | float | Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT Comparison in version v1.
- inputs:
Template Matching,SIFT Comparison,Line Counter,Pixel Color Count,SIFT Comparison,Distance Measurement,Image Contours,Perspective Correction,SIFT,Detection Event Log,Line Counter,Depth Estimation
- outputs:
Instance Segmentation Model,Morphological Transformation,Instance Segmentation Model,Email Notification,Motion Detection,Keypoint Visualization,Polygon Zone Visualization,Roboflow Custom Metadata,Anthropic Claude,Multi-Label Classification Model,Pixel Color Count,Detection Offset,Image Threshold,Keypoint Detection Model,Anthropic Claude,Email Notification,Gaze Detection,Reference Path Visualization,Stitch OCR Detections,Image Slicer,Stitch Images,Stability AI Outpainting,Blur Visualization,Time in Zone,Roboflow Dataset Upload,Image Preprocessing,Identify Outliers,SAM 3,Byte Tracker,Time in Zone,Time in Zone,Triangle Visualization,Dot Visualization,Crop Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Twilio SMS/MMS Notification,Perspective Correction,Grid Visualization,Object Detection Model,Trace Visualization,QR Code Generator,Pixelate Visualization,Detections Consensus,Webhook Sink,Roboflow Dataset Upload,Object Detection Model,Single-Label Classification Model,SIFT Comparison,Background Subtraction,Bounding Box Visualization,Byte Tracker,Halo Visualization,Model Comparison Visualization,Label Visualization,Byte Tracker,Dynamic Zone,Slack Notification,Circle Visualization,Dominant Color,Image Blur,Image Contours,Mask Visualization,Background Color Visualization,Color Visualization,Corner Visualization,Classification Label Visualization,Single-Label Classification Model,Segment Anything 2 Model,Detections Classes Replacement,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Model Monitoring Inference Aggregator,Image Slicer,Detections Stabilizer,Absolute Static Crop,Polygon Visualization,SIFT Comparison,Stability AI Inpainting,Keypoint Detection Model,SAM 3,Identify Changes,Text Display,Multi-Label Classification Model
Input and Output Bindings¶
The available connections depend on this block's binding kinds. Check what binding kinds
SIFT Comparison in version v1 has.
Bindings
-
input
- descriptor_1 (numpy_array): SIFT descriptors from the first image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the first image.
- descriptor_2 (numpy_array): SIFT descriptors from the second image to compare. Should be a numpy array of SIFT descriptors (typically from a SIFT feature detection block). Each descriptor is a 128-dimensional vector describing the visual characteristics around a keypoint. The descriptors should be computed using the same SIFT parameters for both images. At least 2 descriptors are required for the ratio test to work. Use descriptors from a SIFT feature detection step that has processed the second image. These descriptors will be matched against descriptor_1 to determine image similarity.
- good_matches_threshold (integer): Minimum number of good matches required to consider the images as matching. Must be a positive integer. If the number of good matches (after ratio test filtering) is greater than or equal to this threshold, images_match will be True. Lower values (e.g., 20-30) are more lenient and will match images with fewer feature correspondences. Higher values (e.g., 80-100) are stricter and require more feature matches. Default is 50, which provides a good balance. Adjust based on image content, expected similarity level, and false positive/negative tolerance. Use lower thresholds for images with few features, higher thresholds for images with rich texture and many features.
- ratio_threshold (float_zero_to_one): Ratio threshold for Lowe's ratio test used to filter ambiguous matches. The ratio test compares the distance to the best match with the distance to the second-best match. Matches are kept only if best_match_distance < ratio_threshold * second_best_match_distance. Lower values (e.g., 0.6) require more distinct matches and are stricter (filter out more matches, leaving only high-confidence matches). Higher values (e.g., 0.8) are more lenient (allow more matches, including some ambiguous ones). Default is 0.7, which provides a good balance between match quality and quantity. Typical range is 0.6-0.8. Use lower values when you need high-confidence matches only, higher values when you want more matches or have images with sparse features.
-
output
- images_match (boolean): Boolean flag.
- good_matches_count (integer): Integer value.
Example JSON definition of step SIFT Comparison in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/sift_comparison@v1",
"descriptor_1": "$steps.sift.descriptors",
"descriptor_2": "$steps.sift.descriptors",
"good_matches_threshold": 50,
"ratio_threshold": 0.7
}