SIFT¶
Class: SIFTBlockV1
Source: inference.core.workflows.core_steps.classical_cv.sift.v1.SIFTBlockV1
Detect and describe distinctive visual features in images using SIFT (Scale-Invariant Feature Transform). The block extracts keypoints (interest points) and computes 128-dimensional feature descriptors that are invariant to scale and rotation and robust to lighting changes, enabling feature-based image matching, object recognition, and image similarity workflows.
How This Block Works¶
This block detects distinctive visual features in an image using SIFT and computes feature descriptors for each detected keypoint. The block:
- Receives an input image to analyze for feature detection
- Converts the image to grayscale (SIFT operates on grayscale images for efficiency and robustness)
- Creates a SIFT detector using OpenCV's SIFT implementation
- Detects keypoints and computes descriptors simultaneously using detectAndCompute:
    - Keypoint Detection: Identifies distinctive interest points (keypoints) in the image that are stable across different viewing conditions
        - Keypoints are detected at multiple scales (a pyramid of scale-space images) to handle scale variations
        - Each keypoint is assigned an orientation to handle rotation variations
        - Each keypoint has properties: position (x, y coordinates), size (scale at which it was detected), angle (orientation), response (strength), octave (scale level), and class_id
    - Descriptor Computation: Computes a 128-dimensional feature descriptor for each keypoint that describes the local image region around it
        - Descriptors encode gradient information in the local region, making them distinctive and robust to lighting changes
        - Descriptors are normalized to be partially invariant to illumination changes
- Draws keypoints on the original image for visualization:
    - Uses OpenCV's drawKeypoints to overlay keypoint markers on the image, visualizing keypoint locations, orientations, and scales
- Converts keypoints to dictionary format:
    - Extracts keypoint properties (position, size, angle, response, octave, class_id) into dictionaries, making keypoint data accessible for downstream processing and analysis
- Returns the image with keypoints drawn, the keypoints data (as dictionaries), and the descriptors (as numpy array)
SIFT features are scale-invariant (work at different zoom levels), rotation-invariant (handle rotated images), and partially lighting-invariant (robust to illumination changes). This makes them highly effective for matching the same object or scene across different images taken from different viewpoints, distances, angles, or lighting conditions. The 128-dimensional descriptors provide rich information about local image regions, enabling robust feature matching and comparison.
Common Use Cases¶
- Feature-Based Image Matching: Detect features for matching objects or scenes across different images (e.g., match objects in multiple images, find corresponding features across viewpoints, identify matching regions in image pairs), enabling feature-based matching workflows
- Object Recognition: Use SIFT features for object recognition and identification (e.g., recognize objects using feature matching, identify objects by their distinctive features, match object features for classification), enabling feature-based object recognition workflows
- Image Similarity Detection: Detect similar images by comparing SIFT features (e.g., find similar images in databases, detect duplicate images, identify matching scenes), enabling image similarity workflows
- Feature Extraction for Analysis: Extract distinctive features from images for further analysis (e.g., extract features for processing, analyze image characteristics, identify interesting regions), enabling feature extraction workflows
- Visual Localization: Use SIFT features for visual localization and mapping (e.g., localize objects in scenes, track features across frames, map feature correspondences), enabling visual localization workflows
- Image Registration: Align images using SIFT feature correspondences (e.g., register images for stitching, align images from different viewpoints, match images for alignment), enabling image registration workflows
Connecting to Other Blocks¶
This block receives an image and produces SIFT keypoints and descriptors:
- After image input blocks to extract SIFT features from input images (e.g., detect features in camera feeds, extract features from image inputs, analyze features in images), enabling SIFT feature extraction workflows
- After preprocessing blocks to extract features from preprocessed images (e.g., detect features after filtering, extract features from enhanced images, analyze features after preprocessing), enabling preprocessed feature extraction workflows
- Before SIFT Comparison blocks to provide SIFT descriptors for image comparison (e.g., provide descriptors for matching, prepare features for comparison, supply descriptors for similarity detection), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use feature counts or properties for decision-making (e.g., filter based on feature count, make decisions based on detected features, apply logic based on feature properties), enabling feature-based conditional workflows
- Before data storage blocks to store feature data (e.g., store keypoints and descriptors, save feature information, record feature data for analysis), enabling feature data storage workflows
- Before visualization blocks to display detected features (e.g., visualize keypoints, display feature locations, show feature analysis results), enabling feature visualization workflows
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/sift@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SIFT in version v1.
- inputs: QR Code Generator, Pixelate Visualization, Morphological Transformation, Camera Calibration, Polygon Zone Visualization, Keypoint Visualization, Camera Focus, Background Subtraction, Text Display, Bounding Box Visualization, Image Threshold, Contrast Equalization, Halo Visualization, Model Comparison Visualization, Label Visualization, Circle Visualization, Reference Path Visualization, Image Contours, Background Color Visualization, Image Blur, Mask Visualization, Image Slicer, Stability AI Image Generation, Stability AI Outpainting, Stitch Images, Color Visualization, Corner Visualization, Classification Label Visualization, Depth Estimation, Image Preprocessing, Image Convert Grayscale, Line Counter Visualization, Ellipse Visualization, Icon Visualization, Dynamic Crop, Dot Visualization, Absolute Static Crop, Triangle Visualization, Polygon Visualization, SIFT Comparison, Stability AI Inpainting, Crop Visualization, Perspective Correction, Relative Static Crop, SIFT, Grid Visualization, Blur Visualization, Trace Visualization
- outputs: Instance Segmentation Model, Clip Comparison, Florence-2 Model, Morphological Transformation, Google Gemini, LMM, Motion Detection, Email Notification, Detections Stitch, Polygon Zone Visualization, Keypoint Visualization, Camera Focus, Anthropic Claude, Multi-Label Classification Model, Pixel Color Count, Image Threshold, LMM For Classification, Keypoint Detection Model, Gaze Detection, Reference Path Visualization, Stability AI Image Generation, Stitch Images, Stability AI Outpainting, Image Slicer, SmolVLM2, OpenAI, Roboflow Dataset Upload, Depth Estimation, YOLO-World Model, CogVLM, Image Preprocessing, VLM as Detector, Image Convert Grayscale, SAM 3, Byte Tracker, Dynamic Crop, Time in Zone, Perception Encoder Embedding Model, Moondream2, Triangle Visualization, Dot Visualization, OCR Model, Seg Preview, Crop Visualization, Twilio SMS/MMS Notification, Perspective Correction, EasyOCR, Object Detection Model, Text Display, Trace Visualization, Pixelate Visualization, CLIP Embedding Model, Camera Calibration, Buffer, Barcode Detection, Single-Label Classification Model, QR Code Detection, Background Subtraction, SIFT Comparison, Bounding Box Visualization, Contrast Equalization, Model Comparison Visualization, Halo Visualization, Label Visualization, Circle Visualization, Qwen2.5-VL, Image Contours, Image Blur, Background Color Visualization, Mask Visualization, Dominant Color, VLM as Classifier, Google Vision OCR, Llama 3.2 Vision, Color Visualization, Corner Visualization, Classification Label Visualization, Segment Anything 2 Model, Template Matching, Line Counter Visualization, Icon Visualization, Ellipse Visualization, Detections Stabilizer, Absolute Static Crop, Polygon Visualization, Stability AI Inpainting, Qwen3-VL, Relative Static Crop, SIFT, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds SIFT in version v1 has.
Bindings
- input
    - image (image): Input image to analyze for SIFT feature detection. The image will be converted to grayscale internally for SIFT processing. SIFT works best on images with good texture and detail: images with rich visual content (edges, corners, patterns) produce more keypoints than uniform or smooth images. Each detected keypoint will have a 128-dimensional descriptor computed. SIFT features are scale and rotation invariant, making them effective for matching across different viewpoints and conditions.
- output
    - image (image): Image with keypoints drawn for visualization.
    - keypoints (image_keypoints): Image keypoints detected by a classical Computer Vision method, as dictionaries of keypoint properties (position, size, angle, response, octave).
    - descriptors (numpy_array): Numpy array of descriptors (one 128-dimensional row per keypoint).
Example JSON definition of step SIFT in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/sift@v1",
    "image": "$inputs.image"
}
```