
Google Gemma

Class: GoogleGemmaBlockV2

Source: inference.core.workflows.core_steps.models.foundation.google_gemma.v2.GoogleGemmaBlockV2

Ask a question to Google's Gemma model with vision capabilities.

You can specify arbitrary text prompts or predefined ones; the block supports the following prompt types:

  • Open Prompt (unconstrained) - Use any prompt to generate a raw response

  • Text Recognition (OCR) (ocr) - Model recognizes text in the image

  • Visual Question Answering (visual-question-answering) - Model answers the question you submit in the prompt

  • Captioning (short) (caption) - Model provides a short description of the image

  • Captioning (detailed-caption) - Model provides a long description of the image

  • Single-Label Classification (classification) - Model classifies the image content as one of the provided classes

  • Multi-Label Classification (multi-label-classification) - Model classifies the image content as one or more of the provided classes

  • Unprompted Object Detection (object-detection) - Model detects and returns the bounding boxes for prominent objects in the image

  • Structured Output Generation (structured-answering) - Model returns a JSON response with the specified fields
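
For instance, a single-label classification step combines the classification task type from the list above with the classes field described under Properties; the step name and class values below are illustrative:

```json
{
    "name": "gemma_classifier",
    "type": "roboflow_core/google_gemma@v2",
    "images": "$inputs.image",
    "task_type": "classification",
    "classes": ["cat", "dog"],
    "model_version": "Gemma 4 31B - OpenRouter"
}
```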

๐Ÿ› ๏ธ API providers and model variants

Gemma is exposed via OpenRouter. By default this block uses the Roboflow-managed OpenRouter key and bills your Roboflow credits; no extra setup is needed. To bypass Roboflow billing, paste your own sk-or-... key into the api_key field.

The privacy_level field controls which OpenRouter providers may serve the request:

  • No data collection (default) – providers may not train on your inputs.
  • Allow data collection – broader provider pool, including providers that train on inputs.
  • Zero data retention – strictest; restricts to providers that retain nothing.

💡 Further reading and Acceptable Use Policy

Model license

Check the Gemma Terms of Use before use.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/google_gemma@v2 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
| --- | --- | --- | --- |
| name | str | Enter a unique identifier for this step. | ❌ |
| api_key | str | OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing. | ✅ |
| privacy_level | str | Provider privacy filter. Stricter levels reduce the pool of providers and may increase per-call cost on the managed key. | ❌ |
| max_tokens | int | Maximum number of tokens the model can generate in its response. | ❌ |
| temperature | float | Temperature to sample from the model. Value in range 0.0-2.0; the higher the value, the more random / "creative" the generations are. | ✅ |
| max_concurrent_requests | int | Number of concurrent requests for batches of images. If not given, the block defaults to the value configured globally in the Workflows Execution Engine. Restrict if you hit rate limits. | ❌ |
| task_type | str | Task type to be performed by the model. The value determines the required parameters and output response. | ❌ |
| prompt | str | Text prompt to the Gemma model. | ✅ |
| output_structure | Dict[str, str] | Dictionary with the structure of the expected JSON response. | ❌ |
| classes | List[str] | List of classes to be used. | ✅ |
| model_version | str | Model to be used. | ✅ |

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to Google Gemma in version v2.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Google Gemma in version v2 has.

Bindings
  • input

    • api_key (Union[string, ROBOFLOW_MANAGED_KEY, secret]): OpenRouter API key. Defaults to Roboflow's managed key, billed in credits via Roboflow. Provide your own sk-or-... key to call OpenRouter directly without Roboflow billing.
    • temperature (float): Temperature to sample from the model. Value in range 0.0-2.0; the higher the value, the more random / "creative" the generations are.
    • images (image): The image to infer on.
    • prompt (string): Text prompt to the Gemma model.
    • classes (list_of_values): List of classes to be used.
    • model_version (string): Model to be used.
  • output

Example JSON definition of step Google Gemma in version v2
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/google_gemma@v2",
    "api_key": "rf_key:account",
    "privacy_level": "<block_does_not_provide_example>",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>",
    "images": "$inputs.image",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ],
    "model_version": "Gemma 4 31B - OpenRouter"
}
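
If you assemble workflow definitions programmatically, the step is just a plain dictionary matching the schema above. The helper below is a minimal illustrative sketch, not part of the block's API; the Workflows Execution Engine performs the authoritative validation of which fields each task type requires.

```python
# Sketch: assemble a roboflow_core/google_gemma@v2 step definition in Python.
# Only illustrative convenience code; real validation happens when the
# Workflows Execution Engine compiles the workflow.

def make_gemma_step(name, task_type, prompt=None, classes=None, output_structure=None):
    """Build a step dict for the Google Gemma v2 block (hypothetical helper)."""
    step = {
        "name": name,
        "type": "roboflow_core/google_gemma@v2",
        "images": "$inputs.image",
        "task_type": task_type,
    }
    # Optional fields: which ones are required depends on the task type,
    # e.g. classification tasks need "classes", structured-answering
    # needs "output_structure".
    if prompt is not None:
        step["prompt"] = prompt
    if classes is not None:
        step["classes"] = classes
    if output_structure is not None:
        step["output_structure"] = output_structure
    return step

step = make_gemma_step(
    name="gemma_vqa",
    task_type="visual-question-answering",
    prompt="What color is the car?",
)
```

The resulting dict can be dropped into the "steps" list of a workflow specification alongside the other blocks it connects to.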