Segment Anything 2 Model¶
Class: SegmentAnything2BlockV1
Source: inference.core.workflows.core_steps.models.foundation.segment_anything2.v1.SegmentAnything2BlockV1
Run Segment Anything 2, a zero-shot instance segmentation model, on an image.
**Dedicated inference server required (GPU recommended)**
You can pass in boxes/predictions from other models to Segment Anything 2 to use as prompts for the model. If you pass in box detections from another model, the class names of those boxes will be forwarded to the predicted masks. If the model is used unprompted, it will assign integers as class names / ids.
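As an illustrative sketch (not taken from this page), the two step configurations below contrast prompted and unprompted use; the object detection block identifier and model_id are assumptions:

```python
# Illustrative sketch of two ways to configure the block. The detection block
# identifier and model_id below are assumptions, not taken from this page.
prompted_steps = [
    {
        # Assumed object detection block that produces box prompts.
        "type": "roboflow_core/roboflow_object_detection_model@v1",
        "name": "object_detection_model",
        "images": "$inputs.image",
        "model_id": "yolov8n-640",
    },
    {
        "type": "roboflow_core/segment_anything@v1",
        "name": "segment_anything",
        "images": "$inputs.image",
        # Box prompts: class names from these detections are forwarded to the masks.
        "boxes": "$steps.object_detection_model.predictions",
    },
]

# Unprompted variant: omit "boxes" and the model assigns integer class names / ids.
unprompted_step = {
    "type": "roboflow_core/segment_anything@v1",
    "name": "segment_anything",
    "images": "$inputs.image",
}
```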
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/segment_anything@v1` to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
version | str | Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus. | ✅ |
threshold | float | Threshold for predicted mask scores. | ✅ |
multimask_output | bool | Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting to True is recommended for ambiguous prompts. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
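A minimal sketch of that parametrisation, using hypothetical workflow input names:

```python
# Properties marked ✅ can reference workflow inputs instead of literal values.
# The input names "sam_version" and "sam_threshold" are hypothetical.
step = {
    "type": "roboflow_core/segment_anything@v1",
    "name": "segment_anything",
    "images": "$inputs.image",
    "version": "$inputs.sam_version",      # e.g. resolves to "hiera_small" at runtime
    "threshold": "$inputs.sam_threshold",  # e.g. resolves to 0.3 at runtime
    "multimask_output": True,
}
```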
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Segment Anything 2 Model in version v1.
- inputs: Identify Changes, Detection Offset, Camera Focus, Line Counter, Polygon Zone Visualization, Detections Filter, Slack Notification, Time in Zone, Local File Sink, Grid Visualization, YOLO-World Model, Image Convert Grayscale, Trace Visualization, Instance Segmentation Model, Absolute Static Crop, Roboflow Custom Metadata, Perspective Correction, OpenAI, Circle Visualization, Clip Comparison, Image Slicer, OpenAI, Cosine Similarity, Triangle Visualization, Halo Visualization, Gaze Detection, Byte Tracker, Byte Tracker, Corner Visualization, Email Notification, Detections Classes Replacement, Object Detection Model, Template Matching, LMM, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, VLM as Classifier, Depth Estimation, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Stitch Images, Segment Anything 2 Model, Object Detection Model, Llama 3.2 Vision, Model Comparison Visualization, Anthropic Claude, Keypoint Detection Model, Crop Visualization, Blur Visualization, SIFT Comparison, CogVLM, Image Threshold, Stability AI Inpainting, VLM as Detector, Relative Static Crop, Image Preprocessing, Keypoint Visualization, Background Color Visualization, Path Deviation, Color Visualization, Moondream2, Twilio SMS Notification, OCR Model, Multi-Label Classification Model, Classification Label Visualization, Google Vision OCR, Camera Calibration, Pixelate Visualization, Stitch OCR Detections, Label Visualization, Image Slicer, Time in Zone, Reference Path Visualization, Single-Label Classification Model, Webhook Sink, Google Gemini, Roboflow Dataset Upload, Line Counter Visualization, JSON Parser, Identify Outliers, Byte Tracker, Image Blur, Detections Transformation, Florence-2 Model, SIFT Comparison, Detections Stabilizer, LMM For Classification, Image Contours, Polygon Visualization, Instance Segmentation Model, CSV Formatter, SIFT, Florence-2 Model, Detections Merge, Ellipse Visualization, Mask Visualization, Keypoint Detection Model, Bounding Rectangle, Bounding Box Visualization, Dot Visualization, Detections Stitch, Stability AI Image Generation, VLM as Detector, Path Deviation, VLM as Classifier
- outputs: Detection Offset, Blur Visualization, Line Counter, Stability AI Inpainting, Detections Filter, Time in Zone, Dot Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Roboflow Custom Metadata, Color Visualization, Perspective Correction, Distance Measurement, Circle Visualization, Pixelate Visualization, Label Visualization, Halo Visualization, Time in Zone, Triangle Visualization, Roboflow Dataset Upload, Byte Tracker, Line Counter, Size Measurement, Detections Transformation, Corner Visualization, Byte Tracker, Florence-2 Model, Detections Stabilizer, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Polygon Visualization, Florence-2 Model, Detections Merge, Ellipse Visualization, Mask Visualization, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Bounding Rectangle, Detections Stitch, Byte Tracker, Segment Anything 2 Model, Bounding Box Visualization, Model Comparison Visualization, Path Deviation, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Segment Anything 2 Model in version v1 has.
Bindings

- input
    - images (image): The image to infer on.
    - boxes (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Bounding boxes (from another model) to convert to polygons.
    - version (string): Model to be used. One of hiera_large, hiera_small, hiera_tiny, hiera_b_plus.
    - threshold (float): Threshold for predicted mask scores.
    - multimask_output (boolean): Flag to determine whether to use SAM2's internal multimask or single-mask mode. Setting to True is recommended for ambiguous prompts.
- output
    - predictions (instance_segmentation_prediction): Prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object.
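For example, a downstream block can consume the predictions output via a step selector. The sketch below wires it into a Mask Visualization step; the block identifier and field names there are assumptions based on the compatible-blocks list above, not definitions from this page.

```python
# Hypothetical downstream step consuming this block's output. The identifier and
# field names of the Mask Visualization block are assumptions.
mask_visualization_step = {
    "type": "roboflow_core/mask_visualization@v1",
    "name": "mask_visualization",
    "image": "$inputs.image",
    # instance_segmentation_prediction output of the Segment Anything 2 step.
    "predictions": "$steps.segment_anything.predictions",
}
```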
Example JSON definition of step Segment Anything 2 Model in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/segment_anything@v1",
    "images": "$inputs.image",
    "boxes": "$steps.object_detection_model.predictions",
    "version": "hiera_large",
    "threshold": 0.3,
    "multimask_output": true
}
```
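For context, this step runs inside a full workflow definition, typically against a dedicated inference server. The sketch below is hedged: the top-level workflow schema (version, inputs, outputs) and the inference_sdk run_workflow call are assumed from general Workflows usage rather than this page, and the boxes prompt is omitted so the model runs unprompted.

```python
from inference_sdk import InferenceHTTPClient

# Full workflow definition embedding the example step. The top-level schema
# (version / inputs / outputs) is assumed from general Workflows usage.
workflow_definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "segment_anything",
            "type": "roboflow_core/segment_anything@v1",
            "images": "$inputs.image",
            "version": "hiera_large",
            "threshold": 0.3,
            "multimask_output": True,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.segment_anything.predictions",
        }
    ],
}

# Execute against a self-hosted inference server (GPU recommended).
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_API_KEY>",
)
result = client.run_workflow(
    specification=workflow_definition,
    images={"image": "path/to/image.jpg"},
)
print(result)
```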