OCR Model¶
Class: OCRModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.ocr.v1.OCRModelBlockV1
Retrieve the characters in an image using Optical Character Recognition (OCR).
This block returns the text within an image.
You may want to use this block in combination with a detection block (e.g. ObjectDetectionBlock). An object detection model can isolate specific regions of an image (e.g. a shipping container ID in a logistics use case) for further processing. You can then use a DynamicCropBlock to crop the region of interest before running OCR.
Running a detection model and then cropping its detections lets you focus your analysis on particular regions of an image, as sketched below.
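For illustration, a workflow specification chaining these blocks could look like the sketch below. The step names, the `model_id`, and the workflow input name are placeholders, and the type identifiers for the object detection and dynamic crop steps (as well as the `crops` output of the crop step) are assumed here rather than taken from this page; JSON cannot carry comments, so those assumptions are noted only in this paragraph.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "detection",
      "images": "$inputs.image",
      "model_id": "your-project/1"
    },
    {
      "type": "roboflow_core/dynamic_crop@v1",
      "name": "cropping",
      "images": "$inputs.image",
      "predictions": "$steps.detection.predictions"
    },
    {
      "type": "roboflow_core/ocr_model@v1",
      "name": "ocr",
      "images": "$steps.cropping.crops"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "extracted_text",
      "selector": "$steps.ocr.result"
    }
  ]
}
```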
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/ocr_model@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to OCR Model in version v1.
- inputs: Reference Path Visualization, Blur Visualization, Pixelate Visualization, Classification Label Visualization, Background Color Visualization, Dynamic Crop, Keypoint Visualization, Camera Focus, Mask Visualization, Image Slicer, Absolute Static Crop, Stability AI Image Generation, Image Blur, Circle Visualization, Grid Visualization, Crop Visualization, Image Convert Grayscale, Image Threshold, Trace Visualization, Polygon Visualization, Triangle Visualization, Stability AI Inpainting, Halo Visualization, Dot Visualization, Polygon Zone Visualization, Camera Calibration, SIFT, Corner Visualization, Image Contours, Model Comparison Visualization, Stitch Images, Bounding Box Visualization, Line Counter Visualization, Image Slicer, Perspective Correction, Image Preprocessing, SIFT Comparison, Label Visualization, Relative Static Crop, Color Visualization, Ellipse Visualization
- outputs: Classification Label Visualization, Webhook Sink, Background Color Visualization, Dynamic Crop, Cache Get, Mask Visualization, Clip Comparison, Twilio SMS Notification, Google Vision OCR, Segment Anything 2 Model, Model Monitoring Inference Aggregator, Stability AI Image Generation, LMM For Classification, Florence-2 Model, Image Blur, Cache Set, Roboflow Dataset Upload, CogVLM, Circle Visualization, Crop Visualization, Path Deviation, OpenAI, Detections Stitch, OpenAI, Pixel Color Count, Label Visualization, Path Deviation, Line Counter, Time in Zone, Model Comparison Visualization, Bounding Box Visualization, Perspective Correction, SIFT Comparison, Slack Notification, Color Visualization, Ellipse Visualization, Reference Path Visualization, Anthropic Claude, Email Notification, LMM, Llama 3.2 Vision, Instance Segmentation Model, Keypoint Visualization, Time in Zone, Florence-2 Model, YOLO-World Model, Trace Visualization, Image Threshold, Triangle Visualization, Polygon Visualization, Stability AI Inpainting, Halo Visualization, Dot Visualization, Google Gemini, Polygon Zone Visualization, CLIP Embedding Model, Local File Sink, Size Measurement, Instance Segmentation Model, Roboflow Custom Metadata, Corner Visualization, Roboflow Dataset Upload, Line Counter Visualization, Image Preprocessing, Line Counter, Distance Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds OCR Model in version v1 has.
Bindings
- input
  - images (image): The image to infer on.
- output
  - result (string): String value.
  - parent_id (parent_id): Identifier of parent for step output.
  - root_parent_id (parent_id): Identifier of parent for step output.
  - prediction_type (prediction_type): String value with type of prediction.
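As an illustration of these bindings, the `result` output can be exposed as a workflow output (or passed to a downstream step) through a selector of the form `$steps.<step_name>.result`. The fragment below is a sketch that assumes a step named `ocr` and an arbitrary output field name.

```json
"outputs": [
  {
    "type": "JsonField",
    "name": "ocr_text",
    "selector": "$steps.ocr.result"
  }
]
```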
Example JSON definition of step OCR Model in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/ocr_model@v1",
"images": "$inputs.image"
}