Qwen-VL

Class: QwenVlmBlockV1

Source: inference.core.workflows.core_steps.models.foundation.qwen_vlm.v1.QwenVlmBlockV1

Run any Qwen vision-language model, either natively on Roboflow infrastructure or via OpenRouter.

You can supply an arbitrary text prompt or choose a predefined one. The block supports the following prompt types:

  • Open Prompt (unconstrained) - Use any prompt to generate a raw response

  • Text Recognition (OCR) (ocr) - Model recognizes text in the image

  • Visual Question Answering (visual-question-answering) - Model answers the question you submit in the prompt

  • Captioning (short) (caption) - Model provides a short description of the image

  • Captioning (detailed-caption) - Model provides a long description of the image

  • Single-Label Classification (classification) - Model classifies the image content as one of the provided classes

  • Multi-Label Classification (multi-label-classification) - Model classifies the image content as one or more of the provided classes

  • Unprompted Object Detection (object-detection) - Model detects and returns the bounding boxes for prominent objects in the image

  • Structured Output Generation (structured-answering) - Model returns a JSON response with the specified fields
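To make the task types concrete, here is a sketch of a step definition using the structured-answering task. The field names follow this block's documented properties; the step name and the `output_structure` keys are made up for illustration.

```python
import json

# Hypothetical step definition for the "structured-answering" task type.
# The field names mirror this block's documented properties; the concrete
# values (step name, output_structure entries) are invented for this sketch.
step = {
    "name": "qwen_structured",
    "type": "roboflow_core/qwen_vlm@v1",
    "images": "$inputs.image",
    "task_type": "structured-answering",
    "output_structure": {
        "vehicle_color": "color of the vehicle in the image",
        "license_plate": "text on the license plate, if visible",
    },
}

# The step must round-trip cleanly as JSON to be embedded in a workflow.
print(json.dumps(step, indent=2))
```

With `structured-answering`, the model is asked to return a JSON object whose keys match `output_structure`, so downstream steps can consume the fields directly.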

๐Ÿ› ๏ธ Backend selection

  • Native (Roboflow) - small Qwen-VL models (0.8B-7B) run on the same infrastructure as your other Roboflow models. Lower latency. Recommended for tasks like OCR, captioning, and visual question answering.

  • OpenRouter - large hosted Qwen models (9B-397B) reached via OpenRouter. Defaults to a Roboflow-managed API key and bills your Roboflow credits. Paste your own sk-or-... key in the api_key field to bypass Roboflow billing. Recommended for structured tasks that benefit from larger models (classification, object-detection, structured-answering).

The model_version dropdown lists every supported variant; each is bound to one backend. A validator catches mismatches between your selected backend and model.
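The backend/model pairing check can be pictured with a toy validator like the one below. The two model names are taken from the examples on this page; the mapping and function are illustrative, not the block's actual implementation.

```python
# Illustrative sketch of the backend/model mismatch check described above.
# The model-to-backend mapping is a toy example, not the real supported list.
MODEL_BACKENDS = {
    "Qwen 3.5 VL 2B": "native",      # small model, runs on Roboflow infra
    "Qwen 3.6 27B": "openrouter",    # large model, hosted via OpenRouter
}

def validate_backend(model_version: str, backend: str) -> None:
    expected = MODEL_BACKENDS.get(model_version)
    if expected is None:
        raise ValueError(f"Unknown model version: {model_version!r}")
    if expected != backend:
        raise ValueError(
            f"{model_version!r} is bound to the {expected!r} backend, "
            f"but backend={backend!r} was selected"
        )

validate_backend("Qwen 3.5 VL 2B", "native")  # OK: no exception raised
```

Selecting a large OpenRouter-only model while `backend` is set to native would fail this kind of check before the workflow runs.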

🔒 Privacy filter (OpenRouter only)

  • No data collection (default) - providers may not train on your inputs.
  • Allow data collection - broader provider pool.
  • Zero data retention - strictest; restricts to providers that retain nothing.

Type identifier

Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/qwen_vlm@v1.

Properties

| Name | Type | Description | Refs |
|------|------|-------------|------|
| name | str | Enter a unique identifier for this step. | ❌ |
| api_key | str | OpenRouter API key (only used when backend=openrouter). Defaults to Roboflow's managed key. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing. | ✅ |
| privacy_level | str | Provider privacy filter (only used when backend=openrouter). Stricter levels reduce the pool of providers and may increase per-call cost on the managed key. | ❌ |
| max_tokens | int | Maximum number of tokens the model can generate in its response. | ❌ |
| temperature | float | Sampling temperature (only used when backend=openrouter); the native Qwen-VL runtime does not accept a temperature knob. Range 0.0-2.0; higher values give more random ("creative") generations. | ✅ |
| max_concurrent_requests | int | Maximum number of OpenRouter requests to run in parallel for a batch of images (only used when backend=openrouter); the native backend processes images sequentially. If unset, falls back to the global Workflows Execution Engine default. Restrict this if you hit OpenRouter rate limits. | ❌ |
| backend | str | Where to run inference. Native = Roboflow infrastructure; OpenRouter = large hosted Qwen models via OpenRouter. | ❌ |
| model_version | str | Native Qwen-VL variant. Pick a pre-trained model, or select Fine-tuned model to use a Qwen3 fine-tune from your workspace. | ✅ |
| fine_tuned_model_id | str | Fine-tuned Qwen3-VL model from your workspace, in workspace/version form. | ✅ |
| openrouter_model_version | str | OpenRouter-hosted Qwen variant. | ✅ |
| task_type | str | Task type to be performed by the model. The value determines the required parameters and the output response. | ❌ |
| prompt | str | Text prompt to the Qwen model. | ✅ |
| enable_thinking | bool | Enable Qwen3.5-VL's reasoning mode, where the model emits thinking tokens before its answer; the reasoning trace is returned in the thinking output. Only the Qwen 3.5 VL 2B checkpoint (and Qwen3-VL fine-tunes derived from it) supports this; ignored elsewhere. | ❌ |
| output_structure | Dict[str, str] | Dictionary with the structure of the expected JSON response. | ❌ |
| classes | List[str] | List of classes to be used. | ✅ |

The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
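For example, the properties marked as parametrisable in the Refs column can be wired to workflow inputs with selector strings. A hypothetical classification step parametrised this way might look like the sketch below; the input names (`user_prompt`, `label_set`) are made up for illustration.

```python
# Hypothetical step using runtime-bound values for prompt and classes.
# "$inputs.*" selectors are the Workflows convention for dynamic values;
# the input names (user_prompt, label_set) are invented for this sketch.
step = {
    "name": "qwen_classifier",
    "type": "roboflow_core/qwen_vlm@v1",
    "images": "$inputs.image",
    "task_type": "classification",
    "prompt": "$inputs.user_prompt",   # parametrisable (Refs: yes)
    "classes": "$inputs.label_set",    # parametrisable (Refs: yes)
    "max_tokens": 128,                 # not parametrisable (Refs: no)
}

# Collect the properties that will be resolved at workflow runtime.
dynamic = {k for k, v in step.items()
           if isinstance(v, str) and v.startswith("$inputs.")}
print(sorted(dynamic))  # prints ['classes', 'images', 'prompt']
```

Properties without a Refs checkmark, such as `max_tokens`, must be given literal values in the step definition.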

Available Connections

Compatible Blocks

Check what blocks you can connect to Qwen-VL in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check which binding kinds Qwen-VL in version v1 has.

Bindings
  • input

    • api_key (Union[string, ROBOFLOW_MANAGED_KEY, secret]): OpenRouter API key (only used when backend=openrouter). Defaults to Roboflow's managed key. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing.
    • temperature (float): Sampling temperature (only used when backend=openrouter); the native Qwen-VL runtime does not accept a temperature knob. Range 0.0-2.0; higher values give more random ("creative") generations.
    • images (image): The image to infer on.
    • model_version (string): Native Qwen-VL variant. Pick a pre-trained model, or select Fine-tuned model to use a Qwen3 fine-tune from your workspace.
    • fine_tuned_model_id (Union[string, roboflow_model_id]): Fine-tuned Qwen3-VL model from your workspace, in workspace/version form.
    • openrouter_model_version (string): OpenRouter-hosted Qwen variant.
    • prompt (string): Text prompt to the Qwen model.
    • classes (list_of_values): List of classes to be used.
  • output

Example JSON definition of step Qwen-VL in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/qwen_vlm@v1",
    "api_key": "rf_key:account",
    "privacy_level": "<block_does_not_provide_example>",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>",
    "images": "$inputs.image",
    "backend": "<block_does_not_provide_example>",
    "model_version": "Qwen 3.5 VL 2B",
    "fine_tuned_model_id": "your-workspace/3",
    "openrouter_model_version": "Qwen 3.6 27B",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "enable_thinking": "<block_does_not_provide_example>",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ]
}