GLM-OCR¶
Class: GLMOCRBlockV1
Source: inference.core.workflows.core_steps.models.foundation.glm_ocr.v1.GLMOCRBlockV1
Recognize text in images using GLM-OCR, a vision language model by Zhipu AI specialized for optical character recognition.
GLM-OCR supports three built-in recognition modes:
- Text Recognition — General-purpose text recognition for serial numbers, labels, scene text, and documents.
- Formula Recognition — Recognizes mathematical formulas and equations.
- Table Recognition — Recognizes table structures and content.
You can also select Custom Prompt to provide your own prompt for specialized recognition tasks.
This block pairs well with detection models and DynamicCropBlock to isolate regions of interest before running OCR. For example, use an object detection model to find labels or text regions, crop them, then pass the crops to GLM-OCR.
Note: GLM-OCR requires a GPU for inference.
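The detection → crop → OCR pattern described above can be sketched as a chain of workflow steps. This is a minimal, illustrative fragment: only the `roboflow_core/glm_ocr@v1` identifier is confirmed by this page; the detector and crop step types, the `model_id`, and the `crops` output field name are assumptions based on common workflow conventions.

```json
{
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "detector",
      "images": "$inputs.image",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/dynamic_crop@v1",
      "name": "crops",
      "images": "$inputs.image",
      "predictions": "$steps.detector.predictions"
    },
    {
      "type": "roboflow_core/glm_ocr@v1",
      "name": "ocr",
      "images": "$steps.crops.crops"
    }
  ]
}
```

Each crop produced by the detector is passed to GLM-OCR individually, so `parsed_output` is produced per region of interest.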
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/glm_ocr@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| task_type | str | Recognition task to perform. Determines the prompt sent to GLM-OCR. | ❌ |
| prompt | str | Custom text prompt for GLM-OCR. Only used when task_type is 'custom'. | ✅ |
| model_version | str | The GLM-OCR model to be used for inference. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
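As an example of such parametrisation, the prompt property (marked ✅ above) can be bound to a workflow input so the prompt is supplied at runtime. A minimal sketch, assuming a workflow input named prompt; the task_type value 'custom' is taken from the property description above:

```json
{
  "type": "roboflow_core/glm_ocr@v1",
  "name": "glm_ocr",
  "images": "$inputs.image",
  "task_type": "custom",
  "prompt": "$inputs.prompt"
}
```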
Available Connections¶
Compatible Blocks
Check what blocks you can connect to GLM-OCR in version v1.
- inputs:
Local File Sink,Background Color Visualization,OCR Model,Bounding Box Visualization,Roboflow Dataset Upload,Polygon Zone Visualization,Line Counter Visualization,Ellipse Visualization,Image Preprocessing,Triangle Visualization,Keypoint Detection Model,Instance Segmentation Model,Florence-2 Model,SIFT Comparison,Llama 3.2 Vision,Absolute Static Crop,Qwen3.5-VL,LMM,Camera Focus,Keypoint Detection Model,Clip Comparison,Stability AI Outpainting,Heatmap Visualization,Anthropic Claude,EasyOCR,Mask Visualization,Text Display,Image Slicer,Label Visualization,Stitch Images,Florence-2 Model,Roboflow Custom Metadata,Image Contours,Email Notification,OpenAI,Blur Visualization,Icon Visualization,Stability AI Image Generation,Anthropic Claude,Multi-Label Classification Model,GLM-OCR,Image Convert Grayscale,Google Gemini,Background Subtraction,QR Code Generator,Google Gemini,Multi-Label Classification Model,Crop Visualization,Object Detection Model,Webhook Sink,Corner Visualization,OpenAI,Stitch OCR Detections,Stitch OCR Detections,Semantic Segmentation Model,Twilio SMS Notification,CogVLM,S3 Sink,CSV Formatter,Polygon Visualization,Model Monitoring Inference Aggregator,SIFT,Single-Label Classification Model,Circle Visualization,Relative Static Crop,Polygon Visualization,Slack Notification,Trace Visualization,Camera Calibration,Camera Focus,Pixelate Visualization,OpenAI,LMM For Classification,Object Detection Model,Stability AI Inpainting,Reference Path Visualization,Keypoint Visualization,Contrast Equalization,Twilio SMS/MMS Notification,Classification Label Visualization,Dot Visualization,Color Visualization,Dynamic Crop,Halo Visualization,Anthropic Claude,Image Threshold,Perspective Correction,Grid Visualization,Morphological Transformation,Email Notification,Depth Estimation,VLM As Classifier,Single-Label Classification Model,Image Slicer,Model Comparison Visualization,Instance Segmentation Model,Image Blur,Google Gemini,VLM As Detector,OpenAI,Halo Visualization,Roboflow 
Dataset Upload,Google Vision OCR
- outputs:
Local File Sink,Background Color Visualization,Cache Set,Bounding Box Visualization,Roboflow Dataset Upload,Polygon Zone Visualization,Path Deviation,Line Counter Visualization,Ellipse Visualization,Size Measurement,Image Preprocessing,Triangle Visualization,SAM 3,Instance Segmentation Model,Florence-2 Model,SIFT Comparison,Segment Anything 2 Model,Llama 3.2 Vision,Detections Stitch,LMM,Clip Comparison,YOLO-World Model,Stability AI Outpainting,Heatmap Visualization,Anthropic Claude,Text Display,Mask Visualization,Label Visualization,Florence-2 Model,Roboflow Custom Metadata,Time in Zone,Time in Zone,PTZ Tracking (ONVIF),Email Notification,OpenAI,Icon Visualization,Stability AI Image Generation,Anthropic Claude,SAM 3,GLM-OCR,Cache Get,Time in Zone,Perception Encoder Embedding Model,Google Gemini,QR Code Generator,Distance Measurement,Google Gemini,Moondream2,Crop Visualization,Webhook Sink,OpenAI,Corner Visualization,Stitch OCR Detections,SAM 3,Stitch OCR Detections,Twilio SMS Notification,CogVLM,S3 Sink,Line Counter,Polygon Visualization,Model Monitoring Inference Aggregator,Circle Visualization,Polygon Visualization,Slack Notification,Trace Visualization,Pixel Color Count,OpenAI,LMM For Classification,Reference Path Visualization,Line Counter,Stability AI Inpainting,Contrast Equalization,Keypoint Visualization,Twilio SMS/MMS Notification,Classification Label Visualization,Dot Visualization,Color Visualization,CLIP Embedding Model,Dynamic Crop,Halo Visualization,Anthropic Claude,Image Threshold,Detections Classes Replacement,Perspective Correction,Morphological Transformation,Email Notification,Depth Estimation,Model Comparison Visualization,Instance Segmentation Model,Image Blur,Google Gemini,Seg Preview,OpenAI,Halo Visualization,Path Deviation,Roboflow Dataset Upload,Google Vision OCR
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
GLM-OCR in version v1 has.
Bindings
- input
    - images (image): The image to infer on.
    - prompt (string): Custom text prompt for GLM-OCR. Only used when task_type is 'custom'.
    - model_version (roboflow_model_id): The GLM-OCR model to be used for inference.
- output
    - parsed_output (string): String value.
Example JSON definition of step GLM-OCR in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/glm_ocr@v1",
    "images": "$inputs.image",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "Describe the text in the image.",
    "model_version": "glm-ocr"
}
```
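The step definition above is embedded in a full workflow specification before being run. The sketch below builds such a specification in Python; the surrounding version/inputs/outputs structure follows the usual Roboflow workflow schema, and the task_type value "text_recognition" is an assumed identifier for the built-in text mode (the page does not provide an example value).

```python
# Minimal workflow specification embedding the GLM-OCR step shown above.
# Input/output names ("image", "ocr_text") are illustrative choices, and
# "text_recognition" is an assumed task_type identifier.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/glm_ocr@v1",
            "name": "glm_ocr",
            "images": "$inputs.image",
            "task_type": "text_recognition",  # assumption: not confirmed by this page
            "model_version": "glm-ocr",
        },
    ],
    "outputs": [
        {
            # Expose the block's parsed_output binding as the workflow result.
            "type": "JsonField",
            "name": "ocr_text",
            "selector": "$steps.glm_ocr.parsed_output",
        },
    ],
}
```

This dictionary can then be submitted to an inference server (for example via the inference SDK's workflow-running client) along with the image input. Remember that GLM-OCR requires a GPU on the serving side.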