Segment Anything 2 Model¶
Class: SegmentAnything2BlockV1
Source: inference.core.workflows.core_steps.models.foundation.segment_anything2.v1.SegmentAnything2BlockV1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass boxes/predictions from other models into Segment Anything 2 to use as prompts. If you pass in box detections from another model, the class names of those boxes are forwarded to the predicted masks. If the model is used unprompted, it assigns integers as class names / ids.
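For instance, the unprompted mode needs nothing beyond an image. The snippet below is a minimal sketch, not taken from this page's reference example: the step name sam2_unprompted is arbitrary, and omitting the boxes field is what leaves the model unprompted.

```json
{
  "name": "sam2_unprompted",
  "type": "roboflow_core/segment_anything@v1",
  "images": "$inputs.image",
  "version": "hiera_small"
}
```

Adding a boxes selector such as "$steps.object_detection_model.predictions" switches the block to prompted mode and forwards the upstream class names onto the masks, as shown in the example JSON at the end of this page.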
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/segment_anything@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| version | str | Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus. | ✅ |
| threshold | float | Threshold for predicted mask scores. | ✅ |
| multimask_output | bool | Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting it to True is recommended for ambiguous prompts. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
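For example, a ✅-marked property such as threshold can be bound to a workflow input instead of a hard-coded value. The partial specification below is a hedged sketch: it assumes the generic Workflows input types (WorkflowImage, WorkflowParameter) and uses an arbitrary input name, sam_threshold, neither of which is defined by this block.

```json
{
  "inputs": [
    { "type": "WorkflowImage", "name": "image" },
    { "type": "WorkflowParameter", "name": "sam_threshold", "default_value": 0.3 }
  ],
  "steps": [
    {
      "name": "segment_anything_2",
      "type": "roboflow_core/segment_anything@v1",
      "images": "$inputs.image",
      "threshold": "$inputs.sam_threshold"
    }
  ]
}
```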
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs:
Keypoint Detection Model
,Gaze Detection
,Line Counter
,Halo Visualization
,OpenAI
,Stability AI Image Generation
,Model Comparison Visualization
,Keypoint Visualization
,Crop Visualization
,Image Blur
,Bounding Box Visualization
,Roboflow Dataset Upload
,Template Matching
,Mask Visualization
,Image Slicer
,Detections Classes Replacement
,Instance Segmentation Model
,Instance Segmentation Model
,VLM as Classifier
,JSON Parser
,Ellipse Visualization
,Email Notification
,Bounding Rectangle
,OpenAI
,Dynamic Zone
,Slack Notification
,VLM as Detector
,Twilio SMS Notification
,Byte Tracker
,Webhook Sink
,Label Visualization
,Google Vision OCR
,Stability AI Outpainting
,CSV Formatter
,Single-Label Classification Model
,Polygon Visualization
,Velocity
,Identify Changes
,Model Monitoring Inference Aggregator
,Roboflow Custom Metadata
,Reference Path Visualization
,VLM as Detector
,Florence-2 Model
,Detections Transformation
,OCR Model
,Anthropic Claude
,Camera Calibration
,Image Preprocessing
,Image Contours
,Line Counter Visualization
,Florence-2 Model
,LMM For Classification
,Corner Visualization
,Detections Merge
,Clip Comparison
,Cosine Similarity
,Stitch Images
,Depth Estimation
,SIFT
,Time in Zone
,Blur Visualization
,Image Convert Grayscale
,Background Color Visualization
,Image Slicer
,Dynamic Crop
,Local File Sink
,Perspective Correction
,Stitch OCR Detections
,Circle Visualization
,Triangle Visualization
,Dot Visualization
,Byte Tracker
,Detections Filter
,SIFT Comparison
,CogVLM
,Path Deviation
,Object Detection Model
,Segment Anything 2 Model
,VLM as Classifier
,LMM
,Color Visualization
,Stability AI Inpainting
,Classification Label Visualization
,OpenAI
,Byte Tracker
,Moondream2
,Detection Offset
,Absolute Static Crop
,Image Threshold
,Detections Consensus
,Time in Zone
,Keypoint Detection Model
,Llama 3.2 Vision
,Pixelate Visualization
,Trace Visualization
,Detections Stabilizer
,Camera Focus
,Roboflow Dataset Upload
,Grid Visualization
,YOLO-World Model
,Overlap Filter
,Google Gemini
,PTZ Tracking (ONVIF)
,SIFT Comparison
,Multi-Label Classification Model
,Relative Static Crop
,Detections Stitch
,Object Detection Model
,Path Deviation
,Polygon Zone Visualization
,Identify Outliers
- outputs:
Line Counter
,Florence-2 Model
,Halo Visualization
,Corner Visualization
,Detections Merge
,Model Comparison Visualization
,Crop Visualization
,Time in Zone
,Background Color Visualization
,Blur Visualization
,Bounding Box Visualization
,Dynamic Crop
,Line Counter
,Distance Measurement
,Perspective Correction
,Circle Visualization
,Triangle Visualization
,Byte Tracker
,Dot Visualization
,Detections Filter
,Size Measurement
,Roboflow Dataset Upload
,Mask Visualization
,Path Deviation
,Detections Classes Replacement
,Segment Anything 2 Model
,Ellipse Visualization
,Color Visualization
,Stability AI Inpainting
,Bounding Rectangle
,Byte Tracker
,Dynamic Zone
,Detection Offset
,Byte Tracker
,Detections Consensus
,Time in Zone
,Pixelate Visualization
,Trace Visualization
,Detections Stabilizer
,Label Visualization
,Roboflow Dataset Upload
,Overlap Filter
,Polygon Visualization
,Velocity
,PTZ Tracking (ONVIF)
,Model Monitoring Inference Aggregator
,Detections Stitch
,Roboflow Custom Metadata
,Path Deviation
,Florence-2 Model
,Detections Transformation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Segment Anything 2 Model in version v1 has.
Bindings
- input
  - images (image): The image to infer on.
  - boxes (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Bounding boxes (from another model) to convert to polygons.
  - version (string): Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus.
  - threshold (float): Threshold for predicted mask scores.
  - multimask_output (boolean): Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting it to True is recommended for ambiguous prompts.
- output
  - predictions (instance_segmentation_prediction): Prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object.
Example JSON definition of step Segment Anything 2 Model in version v1
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/segment_anything@v1",
  "images": "$inputs.image",
  "boxes": "$steps.object_detection_model.predictions",
  "version": "hiera_large",
  "threshold": 0.3,
  "multimask_output": true
}
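For context, a step definition like the one above lives inside a full workflow specification. The following is a minimal, illustrative sketch rather than an official template: the object detection block identifier, the model_id placeholder, the step name segment_anything_2 (standing in for <your_step_name_here>), and the input/output names are assumptions used only to show how the $inputs.* and $steps.* selectors resolve.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "name": "object_detection_model",
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "images": "$inputs.image",
      "model_id": "<your_project>/<version>"
    },
    {
      "name": "segment_anything_2",
      "type": "roboflow_core/segment_anything@v1",
      "images": "$inputs.image",
      "boxes": "$steps.object_detection_model.predictions",
      "version": "hiera_large",
      "threshold": 0.3,
      "multimask_output": true
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "predictions",
      "selector": "$steps.segment_anything_2.predictions"
    }
  ]
}
```

A specification of this shape is what you would submit to a dedicated inference server (see the note at the top of this page) to run the block.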