SAM 3

Class: SegmentAnything3BlockV1

Source: inference.core.workflows.core_steps.models.foundation.segment_anything3.v1.SegmentAnything3BlockV1

Run Segment Anything 3, a zero-shot instance segmentation model, on an image.

You can pass in boxes/predictions from other models as prompts, or use a text prompt for open-vocabulary segmentation. If you pass in box detections from another model, the class names of the boxes will be forwarded to the predicted masks.
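For orientation, here is a minimal sketch of a complete workflow specification that embeds this block with a text prompt. The "WorkflowImage" input type, the "JsonField" output type, and the assumption that the block exposes its masks under a field named predictions follow general Roboflow workflow conventions rather than anything stated on this page, so treat those names as illustrative.

# A minimal workflow specification embedding the SAM 3 step with a text prompt.
# Assumed (not confirmed by this page): the "WorkflowImage" input type, the
# "JsonField" output type, and the "predictions" output field of the step.
specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/sam3@v1",
            "name": "sam3",
            "images": "$inputs.image",
            "model_id": "sam3/sam3_final",
            "class_names": ["car", "person"],
            "threshold": 0.3,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "segmentation",
            "selector": "$steps.sam3.predictions",  # assumed output field name
        }
    ],
}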

Type identifier

Use the following identifier in the step "type" field: roboflow_core/sam3@v1 to add the block as a step in your workflow.

Properties

Name          Type                              Description                                                                Refs
name          str                               Enter a unique identifier for this step.
model_id      str                               Model version. You only need to change this for fine-tuned SAM 3 models.
class_names   Optional[Union[List[str], str]]   List of classes to recognise.
threshold     float                             Threshold for predicted mask scores.

The Refs column indicates which properties can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
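To illustrate runtime parametrisation, the sketch below binds class_names and threshold to workflow inputs instead of hard-coding them. The "WorkflowParameter" input type and its "default_value" field are standard Roboflow workflow conventions assumed here, not something this page documents.

# Binding class_names and threshold to runtime inputs via $inputs selectors.
# The "WorkflowParameter" input type and "default_value" field are assumptions.
parametrised_inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "classes", "default_value": ["car", "person"]},
    {"type": "WorkflowParameter", "name": "confidence", "default_value": 0.3},
]

parametrised_step = {
    "type": "roboflow_core/sam3@v1",
    "name": "sam3",
    "images": "$inputs.image",
    "class_names": "$inputs.classes",    # resolved from runtime parameters
    "threshold": "$inputs.confidence",   # resolved from runtime parameters
}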

Available Connections

Compatible Blocks

Check what blocks you can connect to SAM 3 in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check which binding kinds SAM 3 in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • model_id (roboflow_model_id): Model version. You only need to change this for fine-tuned SAM 3 models.
    • class_names (Union[string, list_of_values]): List of classes to recognise.
    • threshold (float): Threshold for predicted mask scores.
  • output

Example JSON definition of step SAM 3 in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/sam3@v1",
    "images": "$inputs.image",
    "model_id": "sam3/sam3_final",
    "class_names": [
        "car",
        "person"
    ],
    "threshold": 0.3
}
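
A workflow containing this step can be executed against an inference server with the inference_sdk HTTP client. The sketch below is a hedged example assuming a workflow saved in your Roboflow workspace whose inputs match the parametrised sketch in the Properties section; the run_workflow call and its parameters argument reflect the commonly documented inference_sdk API, but verify the exact signature for your installed version.

# A sketch of running a saved workflow that contains the SAM 3 step via inference_sdk.
# Workspace name, workflow id, image path, and parameter names are placeholders.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # a local inference server; a hosted Roboflow endpoint also works
    api_key="<YOUR_API_KEY>",
)

result = client.run_workflow(
    workspace_name="<your_workspace>",
    workflow_id="<your_workflow_id>",
    images={"image": "path/to/image.jpg"},               # bound to the workflow's image input
    parameters={"classes": ["car"], "confidence": 0.4},  # bound to WorkflowParameter inputs, if defined
)

print(result[0])  # one result entry per input image, keyed by the workflow's output names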