OpenRouter¶
Class: OpenRouterBlockV1
Source: inference.core.workflows.core_steps.models.foundation.openrouter.v1.OpenRouterBlockV1
Run any vision-language model available on OpenRouter by
pasting its model slug into the `model_id` field, e.g.
`openai/gpt-4o-mini`, `anthropic/claude-3.5-sonnet`, `google/gemini-2.5-pro`,
`qwen/qwen3.6-27b`.
This is the generic escape hatch for OpenRouter: use it when you want to try a model that doesn't have a dedicated block (Qwen-VL, Kimi, Gemma, Llama Vision) without waiting for a new block to be added.
The block supports the standard VLM task-type surface (a configuration sketch follows the list):
- Open Prompt (`unconstrained`) - Use any prompt to generate a raw response
- Text Recognition (OCR) (`ocr`) - Model recognizes text in the image
- Visual Question Answering (`visual-question-answering`) - Model answers the question you submit in the prompt
- Captioning (short) (`caption`) - Model provides a short description of the image
- Captioning (`detailed-caption`) - Model provides a long description of the image
- Single-Label Classification (`classification`) - Model classifies the image content as one of the provided classes
- Multi-Label Classification (`multi-label-classification`) - Model classifies the image content as one or more of the provided classes
- Unprompted Object Detection (`object-detection`) - Model detects and returns the bounding boxes for prominent objects in the image
- Structured Output Generation (`structured-answering`) - Model returns a JSON response with the specified fields
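For instance, a single-label classification task needs only a task type, a class list, and a model slug. A minimal sketch of such a step as a Python dict (the step name, slug, and classes are illustrative, not prescriptive):

```python
# Minimal sketch of a classification step for this block; the step name,
# model slug, and class list are illustrative.
classification_step = {
    "type": "roboflow_core/openrouter@v1",
    "name": "openrouter_classifier",
    "images": "$inputs.image",           # bound to the workflow's image input
    "model_id": "openai/gpt-4o-mini",    # any vision-capable OpenRouter slug
    "task_type": "classification",       # single-label classification
    "classes": ["cat", "dog", "other"],  # the model picks exactly one
}
```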
API key¶
By default the block uses the Roboflow-managed OpenRouter key and bills your
Roboflow credits, so no extra setup is needed. To bypass Roboflow billing, paste your
own `sk-or-...` key into the `api_key` field.
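If you'd rather not hardcode the key, it can also be bound to a workflow input. A sketch, assuming a workflow input of kind `secret` named `openrouter_api_key` (the input name is hypothetical):

```python
# Sketch: supplying your own OpenRouter key via a workflow input instead of
# hardcoding it. The input name `openrouter_api_key` is hypothetical.
step_with_own_key = {
    "type": "roboflow_core/openrouter@v1",
    "name": "openrouter_vlm",
    "images": "$inputs.image",
    "model_id": "anthropic/claude-3.5-sonnet",
    "task_type": "unconstrained",
    "prompt": "Describe this image.",
    "api_key": "$inputs.openrouter_api_key",  # your sk-or-... key, passed at runtime
}
```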
Privacy filter¶
- No data collection (default): providers may not train on your inputs.
- Allow data collection: broader provider pool.
- Zero data retention: strictest; restricts to providers that retain nothing.
Model availability
OpenRouter exposes hundreds of models with different capabilities. Not every
model supports image inputs, and some are text-only or reasoning-only. If
the model can't return a visible response (e.g. a reasoning model that
burns all of `max_tokens` on internal thinking), increase
`max_tokens` or pick a different model.
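A sketch of setting that headroom on the step itself (the slug and values are illustrative):

```python
# Sketch: leaving token headroom for models that spend part of the budget on
# hidden reasoning. The slug and values below are illustrative.
step_with_headroom = {
    "type": "roboflow_core/openrouter@v1",
    "name": "openrouter_vlm",
    "images": "$inputs.image",
    "model_id": "google/gemini-2.5-pro",
    "task_type": "unconstrained",
    "prompt": "Describe this image.",
    "max_tokens": 4096,  # raise if reasoning consumes the budget before the answer
    "temperature": 0.2,  # 0.0-2.0; lower is more deterministic
}
```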
Type identifier¶
Use the following identifier in the step `"type"` field to add the block
as a step in your workflow: `roboflow_core/openrouter@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `api_key` | `str` | OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own `sk-or-...` key to call OpenRouter directly without Roboflow billing. | ✅ |
| `privacy_level` | `str` | Provider privacy filter. Stricter levels reduce the pool of providers and may increase per-call cost on the managed key. | ❌ |
| `max_tokens` | `int` | Maximum number of tokens the model can generate in its response. | ❌ |
| `temperature` | `float` | Temperature to sample from the model - value in range 0.0-2.0; the higher the value, the more random / "creative" the generations. | ✅ |
| `max_concurrent_requests` | `int` | Number of concurrent requests for batches of images. If not given, the block defaults to the value configured globally in the Workflows Execution Engine. Restrict if you hit rate limits. | ❌ |
| `model_id` | `str` | OpenRouter model slug, e.g. `openai/gpt-4o-mini`, `anthropic/claude-3.5-sonnet`, `qwen/qwen3.6-27b`. See https://openrouter.ai/models for the full list. | ✅ |
| `task_type` | `str` | Task type to be performed by model. Value determines required parameters and output response. | ❌ |
| `prompt` | `str` | Text prompt to send to the model. | ✅ |
| `output_structure` | `Dict[str, str]` | Dictionary with structure of expected JSON response. | ❌ |
| `classes` | `List[str]` | List of classes to be used. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to OpenRouter in version v1.
- inputs:
S3 Sink,Email Notification,Clip Comparison,Morphological Transformation,VLM As Detector,Qwen-VL,Twilio SMS/MMS Notification,MoonshotAI Kimi,Polygon Zone Visualization,Stitch OCR Detections,OpenAI-Compatible LLM,OpenAI,Heatmap Visualization,Email Notification,Keypoint Visualization,Llama 3.2 Vision,Anthropic Claude,Stability AI Image Generation,Google Vision OCR,Camera Focus,Label Visualization,Instance Segmentation Model,Local File Sink,Google Gemini,Motion Detection,Background Color Visualization,Qwen 3.5 API,Google Gemini,Polygon Visualization,SIFT Comparison,Grid Visualization,Florence-2 Model,OCR Model,VLM As Classifier,LMM For Classification,Keypoint Detection Model,Image Preprocessing,Roboflow Dataset Upload,SIFT,Dynamic Zone,Corner Visualization,Stability AI Outpainting,Multi-Label Classification Model,Halo Visualization,Qwen3.5-VL,Detections List Roll-Up,Blur Visualization,Morphological Transformation,Trace Visualization,Stitch OCR Detections,Gaze Detection,Reference Path Visualization,Halo Visualization,Model Comparison Visualization,Dot Visualization,Background Subtraction,Text Display,Absolute Static Crop,CSV Formatter,Florence-2 Model,Icon Visualization,Perspective Correction,Stability AI Inpainting,Image Convert Grayscale,QR Code Generator,OpenRouter,Model Monitoring Inference Aggregator,OpenAI,Llama 3.2 Vision,Image Threshold,Anthropic Claude,Dynamic Crop,Size Measurement,Clip Comparison,Contrast Enhancement,Bounding Box Visualization,Depth Estimation,Image Contours,EasyOCR,Relative Static Crop,Polygon Visualization,Google Gemma API,Qwen 3.6 API,Image Blur,Anthropic Claude,Object Detection Model,Triangle Visualization,Roboflow Custom Metadata,OpenAI,Slack Notification,Image Stack,Pixelate Visualization,Single-Label Classification Model,OpenAI,Stitch Images,Buffer,Image Slicer,Line Counter Visualization,Image Slicer,Cosine Similarity,LMM,Roboflow Dataset Upload,Color Visualization,Google Gemini,Classification Label Visualization,Camera Focus,Camera Calibration,Ellipse Visualization,Identify Changes,Mask Visualization,GLM-OCR,Crop Visualization,CogVLM,Circle Visualization,Dimension Collapse,Contrast Equalization,Roboflow Vision Events,Webhook Sink,Twilio SMS Notification,MoonshotAI Kimi,Google Gemma
- outputs:
S3 Sink,Email Notification,Keypoint Detection Model,Morphological Transformation,Path Deviation,Qwen-VL,Clip Comparison,SAM 3,VLM As Detector,Twilio SMS/MMS Notification,YOLO-World Model,Line Counter,Time in Zone,MoonshotAI Kimi,Stitch OCR Detections,Polygon Zone Visualization,OpenAI-Compatible LLM,OpenAI,VLM As Detector,Heatmap Visualization,Email Notification,Keypoint Visualization,Llama 3.2 Vision,Anthropic Claude,Stability AI Image Generation,Seg Preview,Google Vision OCR,Label Visualization,SAM 3,Instance Segmentation Model,Path Deviation,Local File Sink,Google Gemini,Motion Detection,Background Color Visualization,Instance Segmentation Model,Qwen 3.5 API,Google Gemini,Polygon Visualization,Moondream2,SIFT Comparison,Grid Visualization,Florence-2 Model,Time in Zone,Single-Label Classification Model,VLM As Classifier,LMM For Classification,Keypoint Detection Model,Image Preprocessing,Roboflow Dataset Upload,Segment Anything 2 Model,Stability AI Outpainting,Corner Visualization,Halo Visualization,Time in Zone,Semantic Segmentation Model,Detections List Roll-Up,Perception Encoder Embedding Model,Distance Measurement,Morphological Transformation,Trace Visualization,VLM As Classifier,Stitch OCR Detections,Reference Path Visualization,Halo Visualization,Model Comparison Visualization,Dot Visualization,Pixel Color Count,JSON Parser,Text Display,Florence-2 Model,Icon Visualization,Object Detection Model,Perspective Correction,SAM 3,Stability AI Inpainting,Object Detection Model,Line Counter,QR Code Generator,OpenRouter,Model Monitoring Inference Aggregator,OpenAI,Llama 3.2 Vision,Image Threshold,Anthropic Claude,Dynamic Crop,Size Measurement,Detections Consensus,Clip Comparison,Cache Set,Bounding Box Visualization,Depth Estimation,Keypoint Detection Model,CLIP Embedding Model,Multi-Label Classification Model,Polygon Visualization,Google Gemma API,Qwen 3.6 API,Image Blur,Anthropic Claude,Triangle Visualization,Object Detection Model,Roboflow Custom Metadata,OpenAI,Slack Notification,OpenAI,Instance Segmentation Model,Buffer,Line Counter Visualization,Detections Classes Replacement,Cache Get,LMM,Roboflow Dataset Upload,Color Visualization,Google Gemini,Classification Label Visualization,Detections Stitch,Ellipse Visualization,PTZ Tracking (ONVIF),Mask Visualization,GLM-OCR,Crop Visualization,CogVLM,Circle Visualization,Contrast Equalization,Roboflow Vision Events,Webhook Sink,Twilio SMS Notification,MoonshotAI Kimi,Google Gemma
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
OpenRouter in version v1 has.
Bindings
- input
    - `api_key` (`Union[string, ROBOFLOW_MANAGED_KEY, secret]`): OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own `sk-or-...` key to call OpenRouter directly without Roboflow billing.
    - `temperature` (`float`): Temperature to sample from the model - value in range 0.0-2.0; the higher the value, the more random / "creative" the generations.
    - `images` (`image`): The image to infer on.
    - `model_id` (`string`): OpenRouter model slug, e.g. `openai/gpt-4o-mini`, `anthropic/claude-3.5-sonnet`, `qwen/qwen3.6-27b`. See https://openrouter.ai/models for the full list.
    - `prompt` (`string`): Text prompt to send to the model.
    - `classes` (`list_of_values`): List of classes to be used.
- output
    - `output` (`Union[string, language_model_output]`): String value if `string` or LLM / VLM output if `language_model_output`.
    - `classes` (`list_of_values`): List of values of any type.
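Because the `output` binding has kind `language_model_output`, it can feed downstream consumers such as the JSON Parser block listed in the connections above. A sketch of a structured-answering step wired into one; note the parser's type identifier and property names are assumptions, not confirmed by this page:

```python
# Sketch: structured-answering output piped into a JSON Parser step.
# The parser's type identifier and property names are assumptions.
steps = [
    {
        "type": "roboflow_core/openrouter@v1",
        "name": "openrouter_vlm",
        "images": "$inputs.image",
        "model_id": "openai/gpt-4o-mini",
        "task_type": "structured-answering",
        "output_structure": {"vehicle_count": "number of vehicles visible"},
    },
    {
        "type": "roboflow_core/json_parser@v1",      # assumed identifier for JSON Parser
        "name": "parser",
        "raw_json": "$steps.openrouter_vlm.output",  # the language_model_output kind
        "expected_fields": ["vehicle_count"],
    },
]
```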
Example JSON definition of step OpenRouter in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/openrouter@v1",
    "api_key": "rf_key:account",
    "privacy_level": "<block_does_not_provide_example>",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>",
    "images": "$inputs.image",
    "model_id": "openai/gpt-4o-mini",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ]
}
```
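To try a definition like the one above end to end, here is a minimal sketch using the `inference_sdk` HTTP client; the API URL, key placeholder, and image path are illustrative:

```python
# Sketch: running the OpenRouter step as an ad-hoc workflow via inference_sdk.
# The API URL, key placeholder, and image path are illustrative.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",  # or your self-hosted inference server
    api_key="<ROBOFLOW_API_KEY>",
)

specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/openrouter@v1",
            "name": "openrouter_vlm",
            "images": "$inputs.image",
            "model_id": "openai/gpt-4o-mini",
            "task_type": "unconstrained",
            "prompt": "Describe this image in one sentence.",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.openrouter_vlm.output",
        }
    ],
}

result = client.run_workflow(
    specification=specification,
    images={"image": "path/to/image.jpg"},
)
print(result)
```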