Qwen-VL¶
Class: QwenVlmBlockV1
Source: inference.core.workflows.core_steps.models.foundation.qwen_vlm.v1.QwenVlmBlockV1
Run any Qwen vision-language model, either natively on Roboflow infrastructure or via OpenRouter.
You can specify arbitrary text prompts or use predefined ones. The block supports the following prompt types:
- Open Prompt (`unconstrained`) - Use any prompt to generate a raw response
- Text Recognition (OCR) (`ocr`) - Model recognizes text in the image
- Visual Question Answering (`visual-question-answering`) - Model answers the question you submit in the prompt
- Captioning (short) (`caption`) - Model provides a short description of the image
- Captioning (`detailed-caption`) - Model provides a long description of the image
- Single-Label Classification (`classification`) - Model classifies the image content as one of the provided classes
- Multi-Label Classification (`multi-label-classification`) - Model classifies the image content as one or more of the provided classes
- Unprompted Object Detection (`object-detection`) - Model detects and returns the bounding boxes for prominent objects in the image
- Structured Output Generation (`structured-answering`) - Model returns a JSON response with the specified fields
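For the `structured-answering` task, the `output_structure` dictionary maps field names to descriptions, and the model is asked to return JSON with exactly those keys. A minimal sketch of validating such a response (the field names and the raw response string are illustrative, not produced by the block):

```python
import json

# Illustrative output_structure: field name -> description of what to extract.
output_structure = {
    "color": "dominant color of the main object",
    "count": "number of objects visible",
}

# A plausible raw JSON response from the model (made up for this sketch).
raw_response = '{"color": "red", "count": "3"}'

parsed = json.loads(raw_response)

# Basic sanity check: the response should contain every requested key.
missing = set(output_structure) - set(parsed)
print(parsed, missing)
```

In practice, VLM JSON output can occasionally be malformed, so wrapping the `json.loads` call in a `try`/`except` is a sensible precaution.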
🛠️ Backend selection¶
- Native (Roboflow): small Qwen-VL models (0.8B-7B) run on the same infrastructure as your other Roboflow models. Lower latency. Recommended for tasks like OCR, captioning, and visual question answering.
- OpenRouter: large hosted Qwen models (9B-397B) reached via OpenRouter. Defaults to a Roboflow-managed API key and bills your Roboflow credits. Paste your own `sk-or-...` key in the `api_key` field to bypass Roboflow billing. Recommended for structured tasks that benefit from larger models (`classification`, `object-detection`, `structured-answering`).
The `model_version` dropdown lists every supported variant; each is bound to one backend.
A validator catches mismatches between your selected backend and model.
🔒 Privacy filter (OpenRouter only)¶
- No data collection (default): providers may not train on your inputs.
- Allow data collection: broader provider pool.
- Zero data retention: strictest; restricts to providers that retain nothing.
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/qwen_vlm@v1` to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `api_key` | `str` | OpenRouter API key (only used when `backend=openrouter`). Defaults to Roboflow's managed key. Provide your own `sk-or-...` key to call OpenRouter directly without Roboflow billing. | ✅ |
| `privacy_level` | `str` | Provider privacy filter (only used when `backend=openrouter`). Stricter levels reduce the pool of providers and may increase per-call cost on the managed key. | ❌ |
| `max_tokens` | `int` | Maximum number of tokens the model can generate in its response. | ❌ |
| `temperature` | `float` | Sampling temperature (only used when `backend=openrouter`). The native Qwen-VL runtime doesn't accept a temperature knob. Range 0.0-2.0; higher values produce more random, "creative" generations. | ✅ |
| `max_concurrent_requests` | `int` | Maximum number of OpenRouter requests to run in parallel for a batch of images (only used when `backend=openrouter`). The native backend processes images sequentially. If unset, falls back to the global Workflows Execution Engine default. Restrict this if you hit OpenRouter rate limits. | ❌ |
| `backend` | `str` | Where to run inference. Native = Roboflow infrastructure; OpenRouter = large hosted Qwen models via OpenRouter. | ❌ |
| `model_version` | `str` | Native Qwen-VL variant. Pick a pre-trained model, or Fine-tuned model to use a Qwen3 fine-tune from your workspace. | ✅ |
| `fine_tuned_model_id` | `str` | Fine-tuned Qwen3-VL model from your workspace, in `workspace/version` form. | ✅ |
| `openrouter_model_version` | `str` | OpenRouter-hosted Qwen variant. | ✅ |
| `task_type` | `str` | Task type to be performed by the model. The value determines the required parameters and the output response. | ❌ |
| `prompt` | `str` | Text prompt to the Qwen model. | ✅ |
| `enable_thinking` | `bool` | Enable Qwen3.5-VL's reasoning mode, where the model emits thinking tokens before its answer. The reasoning trace is returned in the `thinking` output. Only the Qwen 3.5 VL 2B checkpoint (and Qwen3-VL fine-tunes derived from it) supports this; ignored elsewhere. | ❌ |
| `output_structure` | `Dict[str, str]` | Dictionary with the structure of the expected JSON response. | ❌ |
| `classes` | `List[str]` | List of classes to be used. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
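For example, a parametrisable property can take a workflow selector instead of a literal value. An illustrative step fragment (the input name `my_prompt` is assumed here, not defined by the block):

```json
{
  "type": "roboflow_core/qwen_vlm@v1",
  "name": "qwen_step",
  "images": "$inputs.image",
  "task_type": "unconstrained",
  "prompt": "$inputs.my_prompt"
}
```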
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Qwen-VL in version v1.
- inputs:
S3 Sink,Email Notification,Clip Comparison,Morphological Transformation,VLM As Detector,Qwen-VL,Keypoint Detection Model,Twilio SMS/MMS Notification,MoonshotAI Kimi,Polygon Zone Visualization,Stitch OCR Detections,OpenAI-Compatible LLM,OpenAI,Heatmap Visualization,Email Notification,Keypoint Visualization,Llama 3.2 Vision,Anthropic Claude,Stability AI Image Generation,Google Vision OCR,Camera Focus,Label Visualization,Instance Segmentation Model,Local File Sink,Multi-Label Classification Model,Google Gemini,Motion Detection,Background Color Visualization,Instance Segmentation Model,Qwen 3.5 API,Google Gemini,Polygon Visualization,SIFT Comparison,Grid Visualization,Florence-2 Model,OCR Model,Single-Label Classification Model,VLM As Classifier,LMM For Classification,Keypoint Detection Model,Image Preprocessing,Roboflow Dataset Upload,SIFT,Dynamic Zone,Corner Visualization,Stability AI Outpainting,Multi-Label Classification Model,Halo Visualization,Qwen3.5-VL,Semantic Segmentation Model,Blur Visualization,Detections List Roll-Up,Morphological Transformation,Trace Visualization,Stitch OCR Detections,Gaze Detection,Reference Path Visualization,Halo Visualization,Model Comparison Visualization,Dot Visualization,Background Subtraction,Text Display,Absolute Static Crop,CSV Formatter,Florence-2 Model,Icon Visualization,Object Detection Model,Perspective Correction,Stability AI Inpainting,Image Convert Grayscale,Object Detection Model,QR Code Generator,OpenRouter,Model Monitoring Inference Aggregator,OpenAI,Llama 3.2 Vision,Image Threshold,Anthropic Claude,Dynamic Crop,Size Measurement,Clip Comparison,Contrast Enhancement,Bounding Box Visualization,Depth Estimation,Keypoint Detection Model,Image Contours,EasyOCR,Relative Static Crop,Multi-Label Classification Model,Polygon Visualization,Google Gemma API,Qwen 3.6 API,Single-Label Classification Model,Image Blur,Anthropic Claude,Object Detection Model,Triangle Visualization,Roboflow Custom Metadata,OpenAI,Slack 
Notification,Image Stack,Pixelate Visualization,Single-Label Classification Model,OpenAI,Stitch Images,Instance Segmentation Model,Buffer,Image Slicer,Line Counter Visualization,Image Slicer,Cosine Similarity,Semantic Segmentation Model,LMM,Roboflow Dataset Upload,Color Visualization,Google Gemini,Classification Label Visualization,Camera Focus,Camera Calibration,Ellipse Visualization,Identify Changes,Mask Visualization,GLM-OCR,Crop Visualization,CogVLM,Circle Visualization,Dimension Collapse,Contrast Equalization,Roboflow Vision Events,Webhook Sink,Twilio SMS Notification,MoonshotAI Kimi,Google Gemma
- outputs:
S3 Sink,Email Notification,Keypoint Detection Model,Morphological Transformation,Path Deviation,Qwen-VL,Clip Comparison,SAM 3,VLM As Detector,Twilio SMS/MMS Notification,YOLO-World Model,Line Counter,Time in Zone,MoonshotAI Kimi,Stitch OCR Detections,Polygon Zone Visualization,OpenAI-Compatible LLM,OpenAI,VLM As Detector,Heatmap Visualization,Email Notification,Keypoint Visualization,Llama 3.2 Vision,Anthropic Claude,Stability AI Image Generation,Seg Preview,Google Vision OCR,Label Visualization,SAM 3,Instance Segmentation Model,Path Deviation,Local File Sink,Google Gemini,Motion Detection,Background Color Visualization,Instance Segmentation Model,Qwen 3.5 API,Google Gemini,Polygon Visualization,Moondream2,SIFT Comparison,Grid Visualization,Florence-2 Model,Time in Zone,Single-Label Classification Model,VLM As Classifier,LMM For Classification,Keypoint Detection Model,Image Preprocessing,Roboflow Dataset Upload,Segment Anything 2 Model,Stability AI Outpainting,Corner Visualization,Halo Visualization,Time in Zone,Semantic Segmentation Model,Detections List Roll-Up,Perception Encoder Embedding Model,Distance Measurement,Morphological Transformation,Trace Visualization,VLM As Classifier,Stitch OCR Detections,Reference Path Visualization,Halo Visualization,Model Comparison Visualization,Dot Visualization,Pixel Color Count,JSON Parser,Text Display,Florence-2 Model,Icon Visualization,Object Detection Model,Perspective Correction,SAM 3,Stability AI Inpainting,Object Detection Model,Line Counter,QR Code Generator,OpenRouter,Model Monitoring Inference Aggregator,OpenAI,Llama 3.2 Vision,Image Threshold,Anthropic Claude,Dynamic Crop,Size Measurement,Detections Consensus,Clip Comparison,Cache Set,Bounding Box Visualization,Depth Estimation,Keypoint Detection Model,CLIP Embedding Model,Multi-Label Classification Model,Polygon Visualization,Google Gemma API,Qwen 3.6 API,Image Blur,Anthropic Claude,Triangle Visualization,Object Detection Model,Roboflow Custom 
Metadata,OpenAI,Slack Notification,OpenAI,Instance Segmentation Model,Buffer,Line Counter Visualization,Detections Classes Replacement,Cache Get,LMM,Roboflow Dataset Upload,Color Visualization,Google Gemini,Classification Label Visualization,Detections Stitch,Ellipse Visualization,PTZ Tracking (ONVIF),Mask Visualization,GLM-OCR,Crop Visualization,CogVLM,Circle Visualization,Contrast Equalization,Roboflow Vision Events,Webhook Sink,Twilio SMS Notification,MoonshotAI Kimi,Google Gemma
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Qwen-VL in version v1 has.
Bindings
- input
  - `api_key` (Union[string, ROBOFLOW_MANAGED_KEY, secret]): OpenRouter API key (only used when `backend=openrouter`). Defaults to Roboflow's managed key. Provide your own `sk-or-...` key to call OpenRouter directly without Roboflow billing.
  - `temperature` (float): Sampling temperature (only used when `backend=openrouter`). The native Qwen-VL runtime doesn't accept a temperature knob. Range 0.0-2.0; higher values produce more random, "creative" generations.
  - `images` (image): The image to infer on.
  - `model_version` (string): Native Qwen-VL variant. Pick a pre-trained model, or Fine-tuned model to use a Qwen3 fine-tune from your workspace.
  - `fine_tuned_model_id` (Union[string, roboflow_model_id]): Fine-tuned Qwen3-VL model from your workspace, in `workspace/version` form.
  - `openrouter_model_version` (string): OpenRouter-hosted Qwen variant.
  - `prompt` (string): Text prompt to the Qwen model.
  - `classes` (list_of_values): List of classes to be used.
- output
  - `output` (Union[string, language_model_output]): String value if `string`, or LLM / VLM output if `language_model_output`.
  - `classes` (list_of_values): List of values of any type.
  - `thinking` (string): String value.
Example JSON definition of step Qwen-VL in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/qwen_vlm@v1",
"api_key": "rf_key:account",
"privacy_level": "<block_does_not_provide_example>",
"max_tokens": "<block_does_not_provide_example>",
"temperature": "<block_does_not_provide_example>",
"max_concurrent_requests": "<block_does_not_provide_example>",
"images": "$inputs.image",
"backend": "<block_does_not_provide_example>",
"model_version": "Qwen 3.5 VL 2B",
"fine_tuned_model_id": "your-workspace/3",
"openrouter_model_version": "Qwen 3.6 27B",
"task_type": "<block_does_not_provide_example>",
"prompt": "my prompt",
"enable_thinking": "<block_does_not_provide_example>",
"output_structure": {
"my_key": "description"
},
"classes": [
"class-a",
"class-b"
]
}
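To tie the pieces together, the step above can be embedded in a full workflow specification. A minimal sketch assuming a single image input named `workflow_image` and the `classification` task (the input name, step name, and class list are illustrative):

```python
import json

# Hypothetical minimal workflow specification using the Qwen-VL block
# for single-label classification. Field names follow the block's
# documented properties; "workflow_image" is an assumed input name.
specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "workflow_image"}],
    "steps": [
        {
            "type": "roboflow_core/qwen_vlm@v1",
            "name": "qwen",
            "images": "$inputs.workflow_image",
            "task_type": "classification",
            "classes": ["cat", "dog"],
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.qwen.output",
        }
    ],
}

print(json.dumps(specification, indent=2))
```

A specification like this can then be submitted to a Workflows execution endpoint (for example via the `inference_sdk` HTTP client); consult the Workflows documentation for the exact invocation, as the surrounding spec shape here is a sketch rather than a guaranteed schema.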