Segment Anything 2 Model¶
Class: SegmentAnything2BlockV1
Source: inference.core.workflows.core_steps.models.foundation.segment_anything2.v1.SegmentAnything2BlockV1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass boxes/predictions from other models into Segment Anything 2 to use as prompts for the model. If you pass in box detections from another model, the class names of the boxes are forwarded to the predicted masks. If the model is used unprompted, it assigns integers as class names / IDs.
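As an illustration of the prompted mode, the sketch below shows a minimal workflow specification (as a Python dict) in which an object detection step feeds its predictions into this block's `boxes` input. Only the `roboflow_core/segment_anything@v1` step mirrors the definition documented on this page; the detector block type, the `model_id`, and the step names are assumptions made for the example.

```python
# Minimal, illustrative workflow specification chaining a detector into SAM 2.
# The detector block type and model_id are assumptions, not part of this page.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            # Assumed object detection block; any block producing
            # object_detection_prediction outputs could be used here.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model ID
        },
        {
            "type": "roboflow_core/segment_anything@v1",
            "name": "segment_anything",
            "images": "$inputs.image",
            # Boxes from the detector act as prompts; their class names are
            # forwarded to the predicted masks.
            "boxes": "$steps.object_detection_model.predictions",
            "version": "hiera_small",
            "threshold": 0.3,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.segment_anything.predictions",
        },
    ],
}
```

Omitting the `boxes` field runs the block unprompted, in which case masks receive integer class names / IDs as described above.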
Type identifier¶
Use the following identifier in the step's "type" field to add the block as a step in your workflow:
roboflow_core/segment_anything@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
version | str | Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus. | ✅ |
threshold | float | Threshold for predicted masks scores. | ✅ |
multimask_output | bool | Flag to determine whether to use SAM 2's internal multimask or single-mask mode; setting it to True is recommended for ambiguous prompts. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
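For example, a property marked ✅ can be bound to a workflow input instead of a literal value. The sketch below assumes a WorkflowParameter input named `sam_threshold`; that name, and the `default_value` field, are illustrative assumptions rather than part of this page.

```python
# Illustrative sketch: parametrising "threshold" with a runtime workflow input.
# The input name "sam_threshold" is an assumption made for this example.
inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "sam_threshold", "default_value": 0.3},
]

sam2_step = {
    "type": "roboflow_core/segment_anything@v1",
    "name": "segment_anything",
    "images": "$inputs.image",
    # "$inputs.sam_threshold" binds the property to the runtime parameter
    # instead of hard-coding a float.
    "threshold": "$inputs.sam_threshold",
}
```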
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs: Circle Visualization, Background Color Visualization, Corner Visualization, Twilio SMS Notification, Slack Notification, VLM as Detector, VLM as Classifier, LMM, Polygon Zone Visualization, Camera Focus, Image Slicer, Image Blur, Dot Visualization, Path Deviation, Detections Merge, Detection Offset, Google Gemini, Roboflow Dataset Upload, Stability AI Inpainting, Pixelate Visualization, Line Counter, OpenAI, Detections Consensus, Gaze Detection, Image Convert Grayscale, Absolute Static Crop, Stability AI Image Generation, Webhook Sink, Color Visualization, Image Threshold, Halo Visualization, Polygon Visualization, Detections Classes Replacement, VLM as Classifier, Dynamic Zone, Instance Segmentation Model, CogVLM, Camera Calibration, Email Notification, Object Detection Model, Classification Label Visualization, Cosine Similarity, Single-Label Classification Model, Llama 3.2 Vision, Google Vision OCR, Roboflow Dataset Upload, Byte Tracker, Ellipse Visualization, Bounding Box Visualization, JSON Parser, Object Detection Model, Line Counter Visualization, Image Preprocessing, Keypoint Detection Model, Trace Visualization, Label Visualization, Local File Sink, Image Slicer, Detections Transformation, Anthropic Claude, Crop Visualization, Detections Stitch, OCR Model, Identify Outliers, YOLO-World Model, Relative Static Crop, Model Comparison Visualization, Stitch OCR Detections, Perspective Correction, Byte Tracker, OpenAI, Path Deviation, Mask Visualization, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, CSV Formatter, Byte Tracker, Florence-2 Model, Instance Segmentation Model, Keypoint Detection Model, Image Contours, SIFT, SIFT Comparison, Reference Path Visualization, Florence-2 Model, Triangle Visualization, Bounding Rectangle, Model Monitoring Inference Aggregator, Velocity, SIFT Comparison, VLM as Detector, Keypoint Visualization, Identify Changes, Multi-Label Classification Model, LMM For Classification, Roboflow Custom Metadata, Grid Visualization, Segment Anything 2 Model, Detections Stabilizer, Stitch Images, Blur Visualization
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Model Comparison Visualization, Roboflow Dataset Upload, Stability AI Inpainting, Pixelate Visualization, Line Counter, Perspective Correction, Byte Tracker, Detections Stabilizer, Path Deviation, Detections Consensus, Distance Measurement, Mask Visualization, Time in Zone, Detections Filter, Color Visualization, Time in Zone, Dynamic Crop, Halo Visualization, Polygon Visualization, Byte Tracker, Detections Classes Replacement, Florence-2 Model, Dynamic Zone, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Bounding Rectangle, Velocity, Roboflow Dataset Upload, Roboflow Custom Metadata, Byte Tracker, Line Counter, Segment Anything 2 Model, Ellipse Visualization, Size Measurement, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Segment Anything 2 Model in version v1 has.
Bindings
- input
    - images (image): The image to infer on.
    - boxes (Union[keypoint_detection_prediction, instance_segmentation_prediction, object_detection_prediction]): Bounding boxes (from another model) to convert to polygons.
    - version (string): Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus.
    - threshold (float): Threshold for predicted masks scores.
    - multimask_output (boolean): Flag to determine whether to use SAM 2's internal multimask or single-mask mode; setting it to True is recommended for ambiguous prompts.
- output
    - predictions (instance_segmentation_prediction): Prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object.
Example JSON definition of step Segment Anything 2 Model in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/segment_anything@v1",
    "images": "$inputs.image",
    "boxes": "$steps.object_detection_model.predictions",
    "version": "hiera_large",
    "threshold": 0.3,
    "multimask_output": true
}
```
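Because this block requires a dedicated inference server, workflows containing it are typically executed remotely. The minimal sketch below assumes a server running locally on port 9001 and uses the inference_sdk HTTP client; the URL, API key, image path, and the `workflow_specification` variable (a dict such as the one sketched earlier on this page) are placeholders, and the exact client call signature may differ between inference_sdk versions.

```python
# Minimal sketch: executing a workflow containing this step against a
# dedicated inference server. URL, API key and workflow_specification are
# placeholders / assumptions.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # dedicated inference server (GPU recommended)
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

result = client.run_workflow(
    specification=workflow_specification,  # dict mirroring the JSON definition above
    images={"image": "path/to/image.jpg"},
)

# The step's "predictions" output carries the detected bounding boxes and
# segmentation masks (serialised from an sv.Detections object).
print(result)
```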