Segment Anything 2 Model¶
Class: SegmentAnything2BlockV1
Source: inference.core.workflows.core_steps.models.foundation.segment_anything2.v1.SegmentAnything2BlockV1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass boxes/predictions from other models to Segment Anything 2 to use as prompts for the model. If you pass in box detections from another model, the class names of those boxes will be forwarded to the predicted masks. If the model is used unprompted, it will assign integers as class names / ids.
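As an illustration, a minimal workflow that chains a detector into this block could look like the sketch below, written as a Python dict for readability. The detector block identifier and model ID are illustrative assumptions, not taken from this page, so substitute the detector you actually use.

```python
# Sketch of a workflow specification that prompts SAM2 with boxes from a detector.
# Assumptions not taken from this page: the detector block type
# "roboflow_core/roboflow_object_detection_model@v1" and the model ID "yolov8n-640"
# are placeholders - swap in the detector block and model you actually use.
WORKFLOW_DEFINITION = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            # Detector whose bounding boxes become SAM2 prompts; its class
            # names are forwarded onto the predicted masks.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            # The Segment Anything 2 block documented on this page.
            "type": "roboflow_core/segment_anything@v1",
            "name": "segment_anything",
            "images": "$inputs.image",
            "boxes": "$steps.object_detection_model.predictions",
            "version": "hiera_small",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.segment_anything.predictions",
        },
    ],
}
```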
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/segment_anything@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
version | str | Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus. | ✅ |
threshold | float | Threshold for predicted mask scores. | ✅ |
multimask_output | bool | Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting to True is recommended for ambiguous prompts. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
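For example, the ✅-marked properties can be bound to workflow inputs instead of hard-coded values. A minimal sketch follows; the parameter names sam_version and mask_threshold are made up for illustration, and the default_value field is assumed from the general Workflows input spec rather than this page.

```python
# Sketch of parametrising the selector-enabled properties with workflow parameters.
# The parameter names below are illustrative placeholders.
PARAMETRISED_STEP = {
    "type": "roboflow_core/segment_anything@v1",
    "name": "segment_anything",
    "images": "$inputs.image",
    "version": "$inputs.sam_version",       # e.g. "hiera_tiny" supplied at runtime
    "threshold": "$inputs.mask_threshold",  # e.g. 0.3 supplied at runtime
}

# Matching entries in the workflow's "inputs" section. The "default_value"
# field is an assumption to verify against the Workflows input documentation.
PARAMETER_INPUTS = [
    {"type": "WorkflowParameter", "name": "sam_version", "default_value": "hiera_small"},
    {"type": "WorkflowParameter", "name": "mask_threshold", "default_value": 0.3},
]
```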
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs: Detection Offset, LMM, Roboflow Custom Metadata, Image Convert Grayscale, VLM as Detector, Absolute Static Crop, Multi-Label Classification Model, Relative Static Crop, Line Counter Visualization, Detections Classes Replacement, Gaze Detection, Background Color Visualization, OCR Model, Camera Focus, Image Contours, Image Slicer, Reference Path Visualization, Keypoint Detection Model, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Depth Estimation, Google Vision OCR, Roboflow Dataset Upload, Dynamic Zone, Llama 3.2 Vision, Clip Comparison, Perspective Correction, Object Detection Model, Crop Visualization, Webhook Sink, Identify Changes, Dot Visualization, Detections Filter, Model Comparison Visualization, Email Notification, Classification Label Visualization, Camera Calibration, Instance Segmentation Model, Slack Notification, Stability AI Image Generation, Detections Merge, Time in Zone, Trace Visualization, Time in Zone, Corner Visualization, Image Threshold, Blur Visualization, Local File Sink, CogVLM, Stability AI Inpainting, Keypoint Detection Model, SIFT, Cosine Similarity, Circle Visualization, Overlap Filter, JSON Parser, OpenAI, Moondream2, Path Deviation, Florence-2 Model, Twilio SMS Notification, Label Visualization, Stitch Images, Image Preprocessing, Bounding Rectangle, Detections Stitch, Template Matching, Byte Tracker, Grid Visualization, Polygon Zone Visualization, Keypoint Visualization, LMM For Classification, Stitch OCR Detections, Bounding Box Visualization, Image Blur, OpenAI, Halo Visualization, Google Gemini, Ellipse Visualization, Color Visualization, Pixelate Visualization, SIFT Comparison, YOLO-World Model, VLM as Detector, Velocity, Roboflow Dataset Upload, Polygon Visualization, Segment Anything 2 Model, Single-Label Classification Model, VLM as Classifier, CSV Formatter, Image Slicer, Model Monitoring Inference Aggregator, Mask Visualization, Anthropic Claude, Florence-2 Model, VLM as Classifier, Identify Outliers, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Bounding Rectangle, Detections Stitch, Byte Tracker, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Roboflow Dataset Upload, Dynamic Zone, Bounding Box Visualization, Perspective Correction, Halo Visualization, Ellipse Visualization, Color Visualization, Crop Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Overlap Filter, Roboflow Dataset Upload, Segment Anything 2 Model, Polygon Visualization, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Stability AI Inpainting, Mask Visualization, Florence-2 Model, Circle Visualization, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Segment Anything 2 Model in version v1 has.
Bindings
- input
  - images (image): The image to infer on.
  - boxes (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Bounding boxes (from another model) to convert to polygons.
  - version (string): Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus.
  - threshold (float): Threshold for predicted mask scores.
  - multimask_output (boolean): Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting to True is recommended for ambiguous prompts.
- output
  - predictions (instance_segmentation_prediction): Prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object.
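As an illustration of consuming that output, the sketch below routes the predictions into a Mask Visualization step (one of the compatible output blocks listed above) and exposes both fields as workflow outputs. The visualization block's identifier and parameter names are assumptions, so verify them against that block's own page.

```python
# Sketch of wiring the SAM2 predictions into a downstream visualization step.
# Assumption: "roboflow_core/mask_visualization@v1" and its "image"/"predictions"
# parameters are not documented on this page - verify them before use.
MASK_VISUALIZATION_STEP = {
    "type": "roboflow_core/mask_visualization@v1",
    "name": "mask_visualization",
    "image": "$inputs.image",
    "predictions": "$steps.segment_anything.predictions",
}

WORKFLOW_OUTPUTS = [
    # Raw instance segmentation predictions from SAM2.
    {"type": "JsonField", "name": "predictions", "selector": "$steps.segment_anything.predictions"},
    # Rendered mask overlay produced by the visualization step.
    {"type": "JsonField", "name": "overlay", "selector": "$steps.mask_visualization.image"},
]
```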
Example JSON definition of step Segment Anything 2 Model in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/segment_anything@v1",
    "images": "$inputs.image",
    "boxes": "$steps.object_detection_model.predictions",
    "version": "hiera_large",
    "threshold": 0.3,
    "multimask_output": true
}
```
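Because this block requires a dedicated inference server, a workflow containing it is typically executed remotely. A minimal sketch using the inference_sdk HTTP client follows; the server URL, workspace name, and workflow ID are placeholders, and run_workflow's exact signature should be confirmed against the inference_sdk documentation.

```python
# Sketch of executing a workflow that contains this step against a dedicated
# inference server (GPU recommended). All identifiers below are placeholders.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",        # your dedicated inference server
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

result = client.run_workflow(
    workspace_name="<your_workspace>",      # placeholder
    workflow_id="<your_workflow_id>",       # placeholder
    images={"image": "path/to/image.jpg"},  # matches the "image" workflow input
)
print(result)
```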