Qwen2.5-VL

Class: Qwen25VLBlockV1

Source: inference.core.workflows.core_steps.models.foundation.qwen.v1.Qwen25VLBlockV1

This workflow block runs Qwen2.5-VL, a vision-language model that accepts an image and an optional text prompt, and returns a text answer based on a conversation template.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/qwen25vl@v1 to add the block as a step in your workflow.

Properties

Name            Type  Description                                                                                       Refs
name            str   A unique identifier for this step.
prompt          str   Optional text prompt to provide additional context to Qwen2.5-VL; defaults to None when omitted.
model_version   str   The Qwen2.5-VL model to be used for inference.
system_prompt   str   Optional system prompt to provide additional context to Qwen2.5-VL.

The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
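
For example, a parametrisable property can reference a workflow input instead of a literal value. The snippet below is a minimal sketch: the input name user_prompt is an arbitrary choice for illustration and must be declared as a parameter in the workflow's inputs section (a full specification is shown at the end of this page).

{
    "name": "qwen_step",
    "type": "roboflow_core/qwen25vl@v1",
    "images": "$inputs.image",
    "prompt": "$inputs.user_prompt",
    "model_version": "qwen25-vl-7b-peft"
}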

Available Connections

Compatible Blocks

Check what blocks you can connect to Qwen2.5-VL in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Qwen2.5-VL in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • model_version (roboflow_model_id): The Qwen2.5-VL model to be used for inference.
  • output

Example JSON definition of step Qwen2.5-VL in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/qwen25vl@v1",
    "images": "$inputs.image",
    "prompt": "What is in this image?",
    "model_version": "qwen25-vl-7b-peft",
    "system_prompt": "You are a helpful assistant."
}
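
To run the step, it has to be wired into a complete workflow specification with declared inputs and outputs. The definition below is a minimal sketch assuming the standard Workflows schema (version, inputs, steps, outputs); the input names, the step name, and the output field model_answer are arbitrary choices made for this example, and the selector $steps.qwen_step.output assumes the block exposes its answer under an output field.

{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "user_prompt"}
    ],
    "steps": [
        {
            "name": "qwen_step",
            "type": "roboflow_core/qwen25vl@v1",
            "images": "$inputs.image",
            "prompt": "$inputs.user_prompt",
            "model_version": "qwen25-vl-7b-peft",
            "system_prompt": "You are a helpful assistant."
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "model_answer", "selector": "$steps.qwen_step.output"}
    ]
}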