GLM-OCR

Class: GLMOCRBlockV1

Source: inference.core.workflows.core_steps.models.foundation.glm_ocr.v1.GLMOCRBlockV1

Recognize text in images using GLM-OCR, a vision language model by Zhipu AI specialized for optical character recognition.

GLM-OCR supports three built-in recognition modes:

  • Text Recognition — General-purpose text recognition for serial numbers, labels, scene text, and documents.
  • Formula Recognition — Recognizes mathematical formulas and equations.
  • Table Recognition — Recognizes table structures and content.

You can also select Custom Prompt to provide your own prompt for specialized recognition tasks.

This block pairs well with detection models and DynamicCropBlock to isolate regions of interest before running OCR. For example, use an object detection model to find labels or text regions, crop them, then pass the crops to GLM-OCR.
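A sketch of that detect-then-crop-then-read pipeline as a workflow "steps" fragment. The GLM-OCR step matches this page; the detection and crop step type identifiers, the model_id, and the output reference names are illustrative assumptions and may differ in your workspace:

```json
{
    "steps": [
        {
            "name": "label_detector",
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "image": "$inputs.image",
            "model_id": "your-workspace/your-label-model/1"
        },
        {
            "name": "label_crops",
            "type": "roboflow_core/dynamic_crop@v1",
            "images": "$inputs.image",
            "predictions": "$steps.label_detector.predictions"
        },
        {
            "name": "read_labels",
            "type": "roboflow_core/glm_ocr@v1",
            "images": "$steps.label_crops.crops",
            "task_type": "<task_type_here>",
            "model_version": "glm-ocr"
        }
    ]
}
```

Each cropped region is passed to GLM-OCR independently, so parsed_output is produced per crop rather than once for the full image.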

Note: GLM-OCR requires a GPU for inference.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/glm_ocr@v1 to add the block as a step in your workflow.

Properties

Name Type Description Refs
name str Enter a unique identifier for this step.
task_type str Recognition task to perform. Determines the prompt sent to GLM-OCR.
prompt str Custom text prompt for GLM-OCR. Only used when task_type is 'custom'.
model_version str The GLM-OCR model to be used for inference.

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to GLM-OCR in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds GLM-OCR in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • prompt (string): Custom text prompt for GLM-OCR. Only used when task_type is 'custom'.
    • model_version (roboflow_model_id): The GLM-OCR model to be used for inference.
  • output

    • parsed_output (string): String value.
Example JSON definition of step GLM-OCR in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/glm_ocr@v1",
    "images": "$inputs.image",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "Describe the text in the image.",
    "model_version": "glm-ocr"
}
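
When Custom Prompt mode is used, the prompt property (or its input binding) supplies the instruction sent to the model instead of a built-in recognition prompt. A minimal sketch, assuming 'custom' is the task_type value (as stated in the prompt property description) and reusing the model_version from the example above:

```json
{
    "name": "read_serial_number",
    "type": "roboflow_core/glm_ocr@v1",
    "images": "$inputs.image",
    "task_type": "custom",
    "prompt": "Extract only the serial number printed on the label.",
    "model_version": "glm-ocr"
}
```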