Segment Anything 2 Model
Version v1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass in boxes/predictions from other models to Segment Anything 2 to use as prompts for the model. If you pass in box detections from another model, the class names of the boxes will be forwarded to the predicted masks. If the model is used unprompted, it will assign integers as class names / IDs.
Type identifier
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/segment_anything@v1
Properties

Name | Type | Description | Refs
---|---|---|---
name | str | The unique name of this step. | ❌
version | str | Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus. | ✅
threshold | float | Threshold for predicted masks scores. | ✅
multimask_output | bool | Flag to determine whether to use SAM2 internal multimask or single mask mode. For ambiguous prompts, setting to True is recommended. | ✅
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
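For example, a ✅-marked property such as threshold can be bound to a workflow input instead of a literal value. A minimal sketch of such a binding is shown below; the input name sam_threshold is a hypothetical example, not part of the block definition:

```
{
  "name": "segment_anything",
  "type": "roboflow_core/segment_anything@v1",
  "images": "$inputs.image",
  "threshold": "$inputs.sam_threshold"
}
```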
Available Connections
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs: Detection Offset, Template Matching, Detections Stitch, Segment Anything 2 Model, Blur Visualization, Image Convert Grayscale, Camera Focus, Dot Visualization, VLM as Detector, Corner Visualization, Circle Visualization, Triangle Visualization, Relative Static Crop, Absolute Static Crop, Background Color Visualization, YOLO-World Model, Detections Transformation, Pixelate Visualization, Polygon Visualization, Image Threshold, Detections Consensus, Label Visualization, Crop Visualization, Mask Visualization, Detections Classes Replacement, Image Contours, Bounding Box Visualization, Color Visualization, Object Detection Model, Perspective Correction, Keypoint Detection Model, Detections Filter, Instance Segmentation Model, Halo Visualization, SIFT, Dynamic Crop, Image Blur, Ellipse Visualization, Image Slicer
- outputs: Detection Offset, Dynamic Zone, Detections Consensus, Label Visualization, Crop Visualization, Roboflow Dataset Upload, Detections Stitch, Segment Anything 2 Model, Blur Visualization, Mask Visualization, Detections Classes Replacement, Property Definition, Bounding Box Visualization, Dot Visualization, Color Visualization, Circle Visualization, Corner Visualization, Roboflow Custom Metadata, Perspective Correction, Triangle Visualization, Detections Filter, Halo Visualization, Background Color Visualization, Detections Transformation, Pixelate Visualization, Polygon Visualization, Dynamic Crop, Ellipse Visualization
The available connections depend on the block's binding kinds. Check what binding kinds Segment Anything 2 Model in version v1 has.
Bindings

- input
  - images (image): The image to infer on.
  - boxes (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Boxes (from other model predictions) to ground SAM2.
  - version (string): Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus.
  - threshold (float): Threshold for predicted masks scores.
  - multimask_output (boolean): Flag to determine whether to use SAM2 internal multimask or single mask mode. For ambiguous prompts, setting to True is recommended.
- output
  - predictions (instance_segmentation_prediction): Prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object.
Example JSON definition of step Segment Anything 2 Model in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/segment_anything@v1",
"images": "$inputs.image",
"boxes": "$steps.object_detection_model.predictions",
"version": "hiera_large",
"threshold": 0.3,
"multimask_output": true
}
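To use box prompts in practice, the step can be embedded in a complete workflow specification where an upstream detector supplies the boxes input, so the detector's class names are forwarded to the predicted masks. The sketch below assumes the standard workflow definition layout (version, inputs, steps, outputs); the detector's type identifier and model_id are illustrative assumptions, not taken from this page:

```
{
  "version": "1.0",
  "inputs": [
    {"type": "WorkflowImage", "name": "image"}
  ],
  "steps": [
    {
      "name": "object_detection_model",
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "images": "$inputs.image",
      "model_id": "yolov8n-640"
    },
    {
      "name": "segment_anything",
      "type": "roboflow_core/segment_anything@v1",
      "images": "$inputs.image",
      "boxes": "$steps.object_detection_model.predictions",
      "version": "hiera_large",
      "threshold": 0.3,
      "multimask_output": true
    }
  ],
  "outputs": [
    {"type": "JsonField", "name": "predictions", "selector": "$steps.segment_anything.predictions"}
  ]
}
```

The "$steps.object_detection_model.predictions" selector wires the detector's output into this block's boxes binding, matching the step-level example above.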