Segment Anything 2 Model¶
Class: SegmentAnything2BlockV1
Source: inference.core.workflows.core_steps.models.foundation.segment_anything2.v1.SegmentAnything2BlockV1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass boxes/predictions from other models into Segment Anything 2 to use as prompts for the model. If you pass in box detections from another model, the class names of the boxes will be forwarded to the predicted masks. If the model is used unprompted, it will assign integers as class names / ids.
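As a rough illustration of that class-name forwarding rule, here is a minimal sketch using plain Python data structures. Neither function exists in the inference library; they only model the behaviour described above.

```python
# Hypothetical sketch of SAM 2 class-name assignment (not the block's actual code).

def masks_from_prompted_run(box_detections):
    """Each predicted mask inherits the class name of the box that prompted it."""
    return [
        {"mask_id": i, "class_name": box["class_name"]}
        for i, box in enumerate(box_detections)
    ]

def masks_from_unprompted_run(num_masks):
    """Without prompts, masks get sequential integers as class names / ids."""
    return [{"mask_id": i, "class_name": str(i)} for i in range(num_masks)]

boxes = [{"class_name": "dog"}, {"class_name": "cat"}]
prompted = masks_from_prompted_run(boxes)      # masks named "dog", "cat"
unprompted = masks_from_unprompted_run(2)      # masks named "0", "1"
```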
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/segment_anything@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `version` | `str` | Model to be used. One of `hiera_large`, `hiera_small`, `hiera_tiny`, `hiera_b_plus`. | ✅ |
| `threshold` | `float` | Threshold for predicted mask scores. | ✅ |
| `multimask_output` | `bool` | Flag to determine whether to use SAM 2's internal multimask or single-mask mode. Setting it to `True` is recommended for ambiguous prompts. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
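To make the `threshold` and `multimask_output` properties concrete, here is a hedged sketch with hypothetical data (not the block's internals): in multimask mode SAM 2 proposes several candidate masks per prompt, the highest-scoring candidate is kept, and any mask scoring below `threshold` is discarded.

```python
def select_masks(candidates_per_prompt, threshold=0.3, multimask_output=True):
    """Pick one mask per prompt and drop low-confidence results (illustrative only).

    candidates_per_prompt: list of lists of (mask_label, score) tuples.
    In single-mask mode, only the first candidate of each prompt is considered.
    """
    kept = []
    for candidates in candidates_per_prompt:
        pool = candidates if multimask_output else candidates[:1]
        label, score = max(pool, key=lambda c: c[1])  # best candidate for this prompt
        if score >= threshold:                        # apply the score threshold
            kept.append((label, score))
    return kept

candidates = [
    [("m0", 0.92), ("m1", 0.40), ("m2", 0.10)],  # clear prompt
    [("m0", 0.20), ("m1", 0.25), ("m2", 0.15)],  # ambiguous prompt, all low-scoring
]
select_masks(candidates, threshold=0.3)  # keeps only the 0.92 mask
```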
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs: Detections Consensus, Path Deviation, Llama 3.2 Vision, Cosine Similarity, Blur Visualization, SAM 3, Perspective Correction, Polygon Zone Visualization, Bounding Box Visualization, QR Code Generator, Pixelate Visualization, Trace Visualization, Velocity, Roboflow Custom Metadata, Detections Transformation, Segment Anything 2 Model, Image Threshold, Polygon Visualization, Dynamic Crop, Icon Visualization, Image Slicer, Identify Outliers, Stability AI Outpainting, Model Comparison Visualization, Dynamic Zone, LMM, OpenAI, Classification Label Visualization, Stitch Images, Florence-2 Model, Mask Visualization, Single-Label Classification Model, Relative Static Crop, Absolute Static Crop, SIFT Comparison, SAM 3, Time in Zone, Moondream2, Google Gemini, Circle Visualization, Florence-2 Model, LMM For Classification, Ellipse Visualization, Image Convert Grayscale, Time in Zone, Object Detection Model, OCR Model, Image Preprocessing, Color Visualization, Image Blur, Stability AI Image Generation, Google Vision OCR, Anthropic Claude, Keypoint Visualization, Camera Calibration, VLM as Detector, Keypoint Detection Model, EasyOCR, Image Slicer, Line Counter, VLM as Detector, Local File Sink, Detections Combine, Email Notification, Detections Filter, Byte Tracker, Overlap Filter, Background Color Visualization, Triangle Visualization, Gaze Detection, Keypoint Detection Model, Roboflow Dataset Upload, Slack Notification, Halo Visualization, Object Detection Model, Corner Visualization, Detections Stabilizer, Google Gemini, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Dot Visualization, Image Contours, Detections Merge, Multi-Label Classification Model, Twilio SMS Notification, Byte Tracker, Instance Segmentation Model, Seg Preview, VLM as Classifier, CSV Formatter, Reference Path Visualization, Morphological Transformation, Motion Detection, OpenAI, Byte Tracker, Webhook Sink, PTZ Tracking (ONVIF), Detections Classes Replacement, Instance Segmentation Model, Detections Stitch, Contrast Equalization, Camera Focus, YOLO-World Model, Stitch OCR Detections, Stability AI Inpainting, JSON Parser, CogVLM, Clip Comparison, Line Counter Visualization, Identify Changes, Template Matching, Path Deviation, Email Notification, Crop Visualization, Grid Visualization, VLM as Classifier, OpenAI, Bounding Rectangle, SIFT, Depth Estimation, Background Subtraction, Label Visualization, SAM 3, Anthropic Claude, Time in Zone, Detection Offset, SIFT Comparison, OpenAI
- outputs: Line Counter, Detections Consensus, Path Deviation, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, Detections Merge, Perspective Correction, Byte Tracker, Bounding Box Visualization, Distance Measurement, Pixelate Visualization, Trace Visualization, Roboflow Custom Metadata, Velocity, Byte Tracker, Detections Transformation, PTZ Tracking (ONVIF), Segment Anything 2 Model, Polygon Visualization, Dynamic Crop, Icon Visualization, Detections Classes Replacement, Model Comparison Visualization, Dynamic Zone, Detections Stitch, Size Measurement, Mask Visualization, Florence-2 Model, Stability AI Inpainting, Time in Zone, Circle Visualization, Florence-2 Model, Path Deviation, Ellipse Visualization, Time in Zone, Crop Visualization, Color Visualization, Bounding Rectangle, Line Counter, Label Visualization, Detections Combine, Byte Tracker, Roboflow Dataset Upload, Overlap Filter, Triangle Visualization, Background Color Visualization, Detections Filter, Detection Offset, Time in Zone, Halo Visualization, Detections Stabilizer, Corner Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Segment Anything 2 Model in version v1 has.
Bindings
- input
    - `images` (`image`): The image to infer on.
    - `boxes` (`Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]`): Bounding boxes (from another model) to convert to polygons.
    - `version` (`string`): Model to be used. One of `hiera_large`, `hiera_small`, `hiera_tiny`, `hiera_b_plus`.
    - `threshold` (`float`): Threshold for predicted mask scores.
    - `multimask_output` (`boolean`): Flag to determine whether to use SAM 2's internal multimask or single-mask mode. Setting it to `True` is recommended for ambiguous prompts.
- output
    - `predictions` (`instance_segmentation_prediction`): Prediction with detected bounding boxes and segmentation masks in the form of an `sv.Detections(...)` object.
Example JSON definition of step Segment Anything 2 Model in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/segment_anything@v1",
    "images": "$inputs.image",
    "boxes": "$steps.object_detection_model.predictions",
    "version": "hiera_large",
    "threshold": 0.3,
    "multimask_output": true
}
```
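If you assemble workflow definitions in Python, the step above can be built and serialised with the standard library alone. A minimal sketch follows; the `sam2_step` helper is our own illustration, not part of the inference API.

```python
import json

def sam2_step(name, images, boxes=None, version="hiera_large",
              threshold=0.3, multimask_output=True):
    """Build a dict matching the JSON step definition shown above (hypothetical helper)."""
    step = {
        "name": name,
        "type": "roboflow_core/segment_anything@v1",
        "images": images,
        "version": version,
        "threshold": threshold,
        "multimask_output": multimask_output,
    }
    if boxes is not None:  # prompting with another model's detections is optional
        step["boxes"] = boxes
    return step

step = sam2_step(
    "sam2",
    images="$inputs.image",
    boxes="$steps.object_detection_model.predictions",
)
print(json.dumps(step, indent=2))
```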