SIFT¶
Class: SIFTBlockV1
Source: inference.core.workflows.core_steps.classical_cv.sift.v1.SIFTBlockV1
Detect and describe distinctive visual features in images using SIFT (Scale-Invariant Feature Transform). The block extracts keypoints (interest points) and computes 128-dimensional feature descriptors that are invariant to scale and rotation and robust to lighting changes, enabling feature-based image matching, object recognition, and image similarity detection workflows.
How This Block Works¶
This block detects distinctive visual features in an image using SIFT and computes feature descriptors for each detected keypoint. The block:
- Receives an input image to analyze for feature detection
- Converts the image to grayscale (SIFT operates on grayscale images for efficiency and robustness)
- Creates a SIFT detector using OpenCV's SIFT implementation
- Detects keypoints and computes descriptors simultaneously using detectAndCompute:
    - Keypoint Detection: Identifies distinctive interest points (keypoints) in the image that are stable across different viewing conditions
        - Keypoints are detected at multiple scales (a pyramid of scale-space images) to handle scale variations
        - Each keypoint is assigned an orientation to handle rotation variations
        - Each keypoint has properties: position (x, y coordinates), size (scale at which it was detected), angle (orientation), response (strength), octave (scale level), and class_id
    - Descriptor Computation: Computes a 128-dimensional feature descriptor for each keypoint that describes the local image region around it
        - Descriptors encode gradient information in the local region, making them distinctive and robust to lighting changes
        - Descriptors are normalized to be partially invariant to illumination changes
- Draws keypoints on the original image for visualization:
    - Uses OpenCV's drawKeypoints to overlay keypoint markers on the image
    - Visualizes keypoint locations, orientations, and scales
    - Creates a visual representation showing where features were detected
- Converts keypoints to dictionary format:
    - Extracts keypoint properties (position, size, angle, response, octave, class_id) into dictionaries
    - Makes keypoint data accessible for downstream processing and analysis
- Returns the image with keypoints drawn, the keypoints data (as dictionaries), and the descriptors (as a numpy array)
SIFT features are scale-invariant (work at different zoom levels), rotation-invariant (handle rotated images), and partially lighting-invariant (robust to illumination changes). This makes them highly effective for matching the same object or scene across different images taken from different viewpoints, distances, angles, or lighting conditions. The 128-dimensional descriptors provide rich information about local image regions, enabling robust feature matching and comparison.
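The listing below is a minimal OpenCV sketch of the processing described above (grayscale conversion, detectAndCompute, drawKeypoints, and conversion of keypoints to dictionaries). It approximates the block's behaviour rather than reproducing its internal code; the file path, variable names, and dictionary key names are illustrative.

```python
# Minimal sketch of the processing described above, using OpenCV directly.
# It approximates the block's behaviour; it is not the block's internal code.
import cv2

image = cv2.imread("example.jpg")                  # illustrative input path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # SIFT operates on grayscale

sift = cv2.SIFT_create()                           # OpenCV's SIFT implementation
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Overlay keypoint markers showing location, scale, and orientation.
visualization = cv2.drawKeypoints(
    image,
    keypoints,
    None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS,
)

# Convert keypoints to plain dictionaries (key names here are illustrative).
keypoints_data = [
    {
        "x": kp.pt[0],
        "y": kp.pt[1],
        "size": kp.size,
        "angle": kp.angle,
        "response": kp.response,
        "octave": kp.octave,
        "class_id": kp.class_id,
    }
    for kp in keypoints
]

# descriptors is None when no keypoints are found; otherwise its shape is (num_keypoints, 128).
print(len(keypoints_data), None if descriptors is None else descriptors.shape)
```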
Common Use Cases¶
- Feature-Based Image Matching: Detect features for matching objects or scenes across different images (e.g., match objects in multiple images, find corresponding features across viewpoints, identify matching regions in image pairs), enabling feature-based matching workflows (see the matching sketch after this list)
- Object Recognition: Use SIFT features for object recognition and identification (e.g., recognize objects using feature matching, identify objects by their distinctive features, match object features for classification), enabling feature-based object recognition workflows
- Image Similarity Detection: Detect similar images by comparing SIFT features (e.g., find similar images in databases, detect duplicate images, identify matching scenes), enabling image similarity workflows
- Feature Extraction for Analysis: Extract distinctive features from images for further analysis (e.g., extract features for processing, analyze image characteristics, identify interesting regions), enabling feature extraction workflows
- Visual Localization: Use SIFT features for visual localization and mapping (e.g., localize objects in scenes, track features across frames, map feature correspondences), enabling visual localization workflows
- Image Registration: Align images using SIFT feature correspondences (e.g., register images for stitching, align images from different viewpoints, match images for alignment), enabling image registration workflows
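For the matching and similarity use cases above, the 128-dimensional descriptors produced by this block can be compared with a standard matcher. Below is a minimal sketch using OpenCV's brute-force matcher with Lowe's ratio test; the image paths and the 0.75 ratio threshold are illustrative choices, not values used by this block.

```python
# Sketch: match SIFT descriptors from two images with Lowe's ratio test.
import cv2

sift = cv2.SIFT_create()
img_a = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)   # illustrative paths
img_b = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)

kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Brute-force matcher with L2 norm, which suits SIFT's float descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(desc_a, desc_b, k=2)

# Keep matches whose best distance is clearly better than the second best;
# 0.75 is a common, illustrative threshold.
good_matches = []
for pair in knn_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good_matches.append(pair[0])

print(f"{len(good_matches)} good matches out of {len(knn_matches)} candidates")
```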
Connecting to Other Blocks¶
This block receives an image and produces SIFT keypoints and descriptors:
- After image input blocks to extract SIFT features from input images (e.g., detect features in camera feeds, extract features from image inputs, analyze features in images), enabling SIFT feature extraction workflows
- After preprocessing blocks to extract features from preprocessed images (e.g., detect features after filtering, extract features from enhanced images, analyze features after preprocessing), enabling preprocessed feature extraction workflows
- Before SIFT Comparison blocks to provide SIFT descriptors for image comparison (e.g., provide descriptors for matching, prepare features for comparison, supply descriptors for similarity detection), enabling SIFT-based image comparison workflows
- Before filtering or logic blocks that use feature counts or properties for decision-making (e.g., filter based on feature count, make decisions based on detected features, apply logic based on feature properties), enabling feature-based conditional workflows
- Before data storage blocks to store feature data (e.g., store keypoints and descriptors, save feature information, record feature data for analysis), enabling feature data storage workflows
- Before visualization blocks to display detected features (e.g., visualize keypoints, display feature locations, show feature analysis results), enabling feature visualization workflows
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/sift@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to SIFT in version v1.
- inputs:
Mask Visualization,SIFT,Model Comparison Visualization,Stability AI Outpainting,Contrast Equalization,Absolute Static Crop,Polygon Visualization,Image Slicer,Dynamic Crop,Background Subtraction,Image Threshold,Icon Visualization,Reference Path Visualization,Depth Estimation,Circle Visualization,Stitch Images,Trace Visualization,Stability AI Image Generation,Pixelate Visualization,Triangle Visualization,Line Counter Visualization,Label Visualization,Camera Calibration,Camera Focus,SIFT Comparison,Halo Visualization,Text Display,Blur Visualization,Relative Static Crop,QR Code Generator,Ellipse Visualization,Dot Visualization,Keypoint Visualization,Crop Visualization,Polygon Zone Visualization,Perspective Correction,Image Convert Grayscale,Color Visualization,Grid Visualization,Stability AI Inpainting,Background Color Visualization,Image Slicer,Image Contours,Image Preprocessing,Classification Label Visualization,Bounding Box Visualization,Morphological Transformation,Camera Focus,Image Blur,Corner Visualization
- outputs:
Byte Tracker,Google Vision OCR,VLM as Detector,Roboflow Dataset Upload,Single-Label Classification Model,SIFT,Qwen3-VL,Model Comparison Visualization,Google Gemini,Polygon Visualization,EasyOCR,Image Slicer,Dynamic Crop,Image Threshold,Icon Visualization,Reference Path Visualization,OpenAI,Circle Visualization,Trace Visualization,Email Notification,Pixelate Visualization,Twilio SMS/MMS Notification,Line Counter Visualization,Camera Calibration,Time in Zone,Halo Visualization,Blur Visualization,Relative Static Crop,OpenAI,QR Code Detection,Ellipse Visualization,Dominant Color,Seg Preview,Crop Visualization,Polygon Zone Visualization,Buffer,Multi-Label Classification Model,Barcode Detection,Perception Encoder Embedding Model,Detections Stabilizer,Moondream2,OpenAI,Stability AI Inpainting,Image Slicer,CLIP Embedding Model,Keypoint Detection Model,Florence-2 Model,VLM as Classifier,SAM 3,Classification Label Visualization,Instance Segmentation Model,Camera Focus,Image Blur,Corner Visualization,Anthropic Claude,Gaze Detection,OCR Model,LMM For Classification,Object Detection Model,Mask Visualization,Anthropic Claude,Stability AI Outpainting,Contrast Equalization,SAM 3,Absolute Static Crop,SAM 3,Qwen2.5-VL,Clip Comparison,SIFT Comparison,VLM as Detector,Background Subtraction,Detections Stitch,Instance Segmentation Model,SmolVLM2,Depth Estimation,Google Gemini,Segment Anything 2 Model,Stitch Images,Keypoint Detection Model,Template Matching,Stability AI Image Generation,YOLO-World Model,Florence-2 Model,Triangle Visualization,Label Visualization,LMM,Camera Focus,SIFT Comparison,Text Display,Clip Comparison,Keypoint Visualization,Dot Visualization,Pixel Color Count,CogVLM,Perspective Correction,Image Convert Grayscale,VLM as Classifier,Color Visualization,Background Color Visualization,Image Contours,Motion Detection,Object Detection Model,Image Preprocessing,Roboflow Dataset Upload,Anthropic Claude,Bounding Box Visualization,Google Gemini,Morphological Transformation,OpenAI,Llama 3.2 Vision,Multi-Label Classification Model,Single-Label Classification Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds SIFT in version v1 has.
Bindings
- input
    - image (image): Input image to analyze for SIFT feature detection. The image will be converted to grayscale internally for SIFT processing. SIFT works best on images with good texture and detail; images with rich visual content (edges, corners, patterns) produce more keypoints than uniform or smooth images. Each detected keypoint will have a 128-dimensional descriptor computed. The output includes an image with keypoints drawn for visualization, keypoint data (position, size, angle, response, octave), and descriptor arrays for matching and comparison. SIFT features are scale and rotation invariant, making them effective for matching across different viewpoints and conditions.
- output
    - image (image): Image in workflows.
    - keypoints (image_keypoints): Image keypoints detected by a classical computer vision method.
    - descriptors (numpy_array): Numpy array of SIFT descriptors, one 128-dimensional row per keypoint.
Example JSON definition of step SIFT in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/sift@v1",
    "image": "$inputs.image"
}
```
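Downstream steps reference this block's outputs with step selectors, following the same selector style as $inputs.image above. The fragment below is only a sketch: the downstream step's type and field names are placeholders rather than a real block schema, so check the documentation of the block you connect (for example SIFT Comparison) for its actual fields.

```json
{
    "name": "<your_downstream_step_name>",
    "type": "<downstream_block_type>",
    "image": "$steps.<your_step_name_here>.image",
    "descriptors": "$steps.<your_step_name_here>.descriptors"
}
```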