OpenRouter

Class: OpenRouterBlockV1

Source: inference.core.workflows.core_steps.models.foundation.openrouter.v1.OpenRouterBlockV1

Run any vision-language model available on OpenRouter by pasting its model slug into the model_id field, e.g. openai/gpt-4o-mini, anthropic/claude-3.5-sonnet, google/gemini-2.5-pro, qwen/qwen3.6-27b.

This is the generic escape hatch for OpenRouter: use it when you want a model that doesn't have a dedicated block (Qwen-VL, Kimi, Gemma, Llama Vision) and you want to try it out without waiting for a new block to be added.

The block supports the standard VLM task-type surface:

  • Open Prompt (unconstrained) - Use any prompt to generate a raw response

  • Text Recognition (OCR) (ocr) - Model recognizes text in the image

  • Visual Question Answering (visual-question-answering) - Model answers the question you submit in the prompt

  • Captioning (short) (caption) - Model provides a short description of the image

  • Captioning (detailed-caption) - Model provides a long description of the image

  • Single-Label Classification (classification) - Model classifies the image content as one of the provided classes

  • Multi-Label Classification (multi-label-classification) - Model classifies the image content as one or more of the provided classes

  • Unprompted Object Detection (object-detection) - Model detects and returns the bounding boxes for prominent objects in the image

  • Structured Output Generation (structured-answering) - Model returns a JSON response with the specified fields
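
As a minimal sketch, a single-label classification step using this surface could be defined as below (the step name and the `$inputs.image` reference are placeholders; field names follow the Properties table in this page):

```json
{
    "name": "openrouter_classifier",
    "type": "roboflow_core/openrouter@v1",
    "images": "$inputs.image",
    "model_id": "openai/gpt-4o-mini",
    "task_type": "classification",
    "classes": ["cat", "dog", "other"]
}
```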

๐Ÿ› ๏ธ API key

By default the block uses the Roboflow-managed OpenRouter key and bills your Roboflow credits, so no extra setup is needed. To bypass Roboflow billing, paste your own sk-or-... key into the api_key field.
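
For example, a step can take your own key from a workflow input instead of hard-coding it; this is a sketch and `$inputs.openrouter_api_key` is a hypothetical input name:

```json
{
    "name": "openrouter_step",
    "type": "roboflow_core/openrouter@v1",
    "images": "$inputs.image",
    "model_id": "openai/gpt-4o-mini",
    "task_type": "unconstrained",
    "prompt": "Describe this image",
    "api_key": "$inputs.openrouter_api_key"
}
```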

🔒 Privacy filter

  • No data collection (default) - providers may not train on your inputs.
  • Allow data collection - broader provider pool.
  • Zero data retention - strictest, restricts to providers that retain nothing.

Model availability

OpenRouter exposes hundreds of models with different capabilities. Not every model supports image inputs, and some are text-only or reasoning-only. If the model can't return a visible response (e.g. a reasoning model that burns all of max_tokens on internal thinking), try increasing max_tokens or pick a different model.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/openrouter@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step. ❌
api_key str OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing. ✅
privacy_level str Provider privacy filter. Stricter levels reduce the pool of providers and may increase per-call cost on the managed key. ❌
max_tokens int Maximum number of tokens the model can generate in its response. ❌
temperature float Temperature to sample from the model - value in range 0.0-2.0; the higher, the more random / "creative" the generations are. ✅
max_concurrent_requests int Number of concurrent requests for batches of images. If not given, the block defaults to the value configured globally in the Workflows Execution Engine. Restrict if you hit rate limits. ❌
model_id str OpenRouter model slug, e.g. openai/gpt-4o-mini, anthropic/claude-3.5-sonnet, qwen/qwen3.6-27b. See https://openrouter.ai/models for the full list. ✅
task_type str Task type to be performed by the model. Its value determines the required parameters and the output response. ❌
prompt str Text prompt to send to the model. ✅
output_structure Dict[str, str] Dictionary with the structure of the expected JSON response. ❌
classes List[str] List of classes to be used. ✅

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
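
As a sketch of such parametrisation, the model slug and prompt can be supplied at runtime via `$inputs` references rather than hard-coded values (the input names here are hypothetical):

```json
{
    "name": "openrouter_vqa",
    "type": "roboflow_core/openrouter@v1",
    "images": "$inputs.image",
    "model_id": "$inputs.model_id",
    "task_type": "visual-question-answering",
    "prompt": "$inputs.question"
}
```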

Available Connections

Compatible Blocks

Check what blocks you can connect to OpenRouter in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds OpenRouter in version v1 has.

Bindings
  • input

    • api_key (Union[string, ROBOFLOW_MANAGED_KEY, secret]): OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing.
    • temperature (float): Temperature to sample from the model - value in range 0.0-2.0; the higher, the more random / "creative" the generations are.
    • images (image): The image to infer on.
    • model_id (string): OpenRouter model slug, e.g. openai/gpt-4o-mini, anthropic/claude-3.5-sonnet, qwen/qwen3.6-27b. See https://openrouter.ai/models for the full list.
    • prompt (string): Text prompt to send to the model.
    • classes (list_of_values): List of classes to be used.
  • output

Example JSON definition of step OpenRouter in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/openrouter@v1",
    "api_key": "rf_key:account",
    "privacy_level": "<block_does_not_provide_example>",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>",
    "images": "$inputs.image",
    "model_id": "openai/gpt-4o-mini",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ]
}