JSON Parser¶
Class: JSONParserBlockV1
Source: inference.core.workflows.core_steps.formatters.json_parser.v1.JSONParserBlockV1
Parses JSON strings (raw JSON or JSON wrapped in Markdown code blocks) into structured data by extracting specified fields and exposing them as individual outputs. This is useful for LLM/VLM output processing, structured data extraction, and configuration parsing, wherever JSON strings need to be converted into usable workflow data.
How This Block Works¶
This block parses JSON strings and extracts specified fields as individual outputs. The block:
1. Receives a JSON string input (typically from LLM/VLM blocks or workflow inputs).
2. Detects and extracts JSON content:
    - Markdown-wrapped JSON: searches for JSON wrapped in Markdown code blocks (```json ... ```), a format common in LLM/VLM responses (e.g., GPT responses). If multiple Markdown JSON blocks are found, only the first block is parsed; the JSON content is extracted from within the fences.
    - Raw JSON strings: if no Markdown blocks are found, attempts to parse the entire string as standard JSON.
3. Parses the JSON content:
    - Uses Python's JSON parser to convert the string into a dictionary.
    - Handles parsing errors gracefully (returns None for all fields if parsing fails).
4. Extracts expected fields:
    - Looks up the key for each field listed in the expected_fields parameter in the parsed JSON.
    - Returns the field value, or None if the field is missing.
5. Sets error status:
    - error_status is set to True if at least one expected field cannot be retrieved from the parsed JSON.
    - error_status is set to False if all expected fields are found (only the first Markdown block is considered, even when several are present).
    - error_status is always included as an output, allowing downstream blocks to check parsing success.
6. Exposes fields as outputs:
    - Each field in expected_fields becomes a separate output named after the field.
    - Missing fields are set to None.
    - All outputs can be referenced using $steps.block_name.field_name syntax.
7. Returns parsed data:
    - Outputs include error_status (boolean) and all expected fields, containing the extracted values (or None if missing).
    - Outputs can be used in subsequent workflow steps.
The block is particularly useful for processing LLM/VLM outputs that return JSON, extracting structured configuration from JSON strings, and parsing JSON responses into workflow-usable data. It handles the common case where LLMs wrap JSON in markdown code blocks.
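A minimal Python sketch of this behavior, for illustration only: the function name parse_json_string and the regular expression used for fence detection are assumptions, not the actual JSONParserBlockV1 implementation.

```python
import json
import re
from typing import Any, Dict, List


def parse_json_string(raw_json: str, expected_fields: List[str]) -> Dict[str, Any]:
    # Look for JSON wrapped in a Markdown code fence (```json ... ```),
    # which is common in LLM/VLM responses; only the first match is used.
    match = re.search(r"```json\s*(.*?)\s*```", raw_json, flags=re.DOTALL)
    content = match.group(1) if match else raw_json
    try:
        parsed = json.loads(content)
    except json.JSONDecodeError:
        parsed = {}
    if not isinstance(parsed, dict):
        parsed = {}
    # Missing fields become None; error_status is True if any field is missing.
    result: Dict[str, Any] = {field: parsed.get(field) for field in expected_fields}
    result["error_status"] = any(field not in parsed for field in expected_fields)
    return result


print(parse_json_string('```json\n{"field_a": 1}\n```', ["field_a", "field_b"]))
# -> {'field_a': 1, 'field_b': None, 'error_status': True}
```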
Common Use Cases¶
- LLM/VLM Output Processing: Parse JSON outputs from Large Language Models and Visual Language Models (e.g., parse GPT JSON responses, extract structured data from LLM outputs, process VLM JSON responses)
- Structured Data Extraction: Extract structured data from JSON strings for use in workflows (e.g., extract configuration parameters, parse JSON responses, extract structured fields)
- Configuration Parsing: Parse JSON configuration strings into workflow parameters (e.g., parse model configuration, extract workflow parameters, parse JSON configs)
- JSON Response Processing: Process JSON responses from APIs or models (e.g., parse API JSON responses, extract fields from JSON, process JSON data)
- Dynamic Parameter Extraction: Extract dynamic parameters from JSON strings for use in workflow steps (e.g., extract model IDs from JSON, parse dynamic configs, extract parameters on the fly)
- Data Format Conversion: Convert JSON strings into structured workflow data (e.g., convert JSON into workflow inputs, parse JSON for workflow use, extract JSON fields)
Connecting to Other Blocks¶
This block receives JSON strings and produces parsed field outputs:
- After LLM/VLM blocks to parse JSON outputs into structured data (e.g., parse LLM JSON outputs, extract VLM JSON fields, process model JSON responses)
- After workflow inputs to parse JSON input parameters (e.g., parse JSON config inputs, extract JSON parameters, process JSON workflow inputs)
- Before model blocks to use parsed fields as model parameters (e.g., use a parsed model_id, use parsed configs for model setup, provide parsed parameters to models)
- Before logic blocks to use parsed fields in conditions (e.g., use parsed values in Continue If, filter based on parsed fields, make decisions using parsed data)
- Before data storage blocks to store parsed field values (e.g., store parsed JSON fields, log parsed values, save parsed data)
- In workflow outputs to provide parsed fields as final output (e.g., JSON parsing outputs, structured data outputs, parsed field outputs)
Requirements¶
This block requires a JSON string input (raw JSON or JSON wrapped in Markdown code blocks). The expected_fields parameter specifies which JSON fields to extract as outputs (field names must be valid JSON keys). The error_status field name is reserved and cannot be used in expected_fields. The block supports both raw JSON strings and JSON wrapped in Markdown code blocks (```json ... ```). If multiple Markdown blocks are found, only the first is parsed. If parsing fails or expected fields are missing, fields are set to None and error_status is set to True. All expected fields become separate outputs that can be referenced in subsequent workflow steps.
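As an illustration (hypothetical values, not output of the actual implementation), assuming expected_fields is ["field_a", "field_b"], the step's outputs would look roughly like this for three different inputs:

```python
# Hypothetical outputs for expected_fields = ["field_a", "field_b"]

# raw_json = '{"field_a": 1, "field_b": "x"}'  -> all fields found
{"error_status": False, "field_a": 1, "field_b": "x"}

# raw_json = '{"field_a": 1}'                  -> field_b missing
{"error_status": True, "field_a": 1, "field_b": None}

# raw_json = "not valid json"                  -> parsing fails entirely
{"error_status": True, "field_a": None, "field_b": None}
```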
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/json_parser@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| raw_json | str | JSON string to parse. Can be a raw JSON string (e.g., '{"key": "value"}') or JSON wrapped in Markdown code blocks (e.g., ```json {"key": "value"} ```). Markdown-wrapped JSON is common in LLM/VLM responses. If multiple Markdown JSON blocks are present, only the first block is parsed. The string is parsed using Python's JSON parser, and specified fields are extracted as outputs. | ✅ |
| expected_fields | List[str] | List of JSON field names to extract from the parsed JSON. Each field becomes a separate output that can be referenced in subsequent workflow steps (e.g., $steps.block_name.field_name). Fields that exist in the JSON are extracted with their values; missing fields are set to None. The 'error_status' field name is reserved (always included as an output) and cannot be used in this list. Field names must match JSON keys exactly. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
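For instance, raw_json (marked ✅) can be bound to a dynamic selector instead of a literal string: either an upstream step output (as in the example JSON definition further below) or a workflow input. In the hedged sketch below, the step name parser, the workflow input name raw_json_payload, and the field model_id are all hypothetical:

```json
{
  "name": "parser",
  "type": "roboflow_core/json_parser@v1",
  "raw_json": "$inputs.raw_json_payload",
  "expected_fields": ["model_id"]
}
```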
Available Connections¶
Compatible Blocks
Check what blocks you can connect to JSON Parser in version v1.
- inputs:
Florence-2 Model, Google Gemini, OpenAI, Llama 3.2 Vision, Anthropic Claude
- outputs:
Contrast Equalization,Clip Comparison,VLM as Detector,Detections Transformation,Polygon Visualization,Image Blur,SIFT Comparison,First Non Empty Or Default,Text Display,SIFT,Moondream2,Qwen3-VL,Google Vision OCR,Pixelate Visualization,Time in Zone,VLM as Classifier,Detection Offset,Detections Filter,Instance Segmentation Model,Perspective Correction,Halo Visualization,Image Threshold,Path Deviation,Keypoint Detection Model,CSV Formatter,Florence-2 Model,Twilio SMS Notification,Detections Stabilizer,Image Convert Grayscale,Perception Encoder Embedding Model,Corner Visualization,Dynamic Zone,Identify Changes,Icon Visualization,Expression,SAM 3,Qwen2.5-VL,Detections Consensus,Multi-Label Classification Model,Detections Stitch,QR Code Detection,Dynamic Crop,Continue If,Bounding Box Visualization,YOLO-World Model,Detection Event Log,Detections Classes Replacement,Blur Visualization,Camera Calibration,Line Counter,Dominant Color,Path Deviation,OpenAI,Camera Focus,Trace Visualization,CogVLM,Image Slicer,Absolute Static Crop,Dot Visualization,Label Visualization,Slack Notification,Google Gemini,Object Detection Model,LMM For Classification,Stitch OCR Detections,OpenAI,Classification Label Visualization,Stitch OCR Detections,Byte Tracker,Twilio SMS/MMS Notification,Velocity,Gaze Detection,Anthropic Claude,Clip Comparison,VLM as Detector,Webhook Sink,Llama 3.2 Vision,SIFT Comparison,Anthropic Claude,Delta Filter,Time in Zone,Local File Sink,QR Code Generator,SmolVLM2,Email Notification,CLIP Embedding Model,Roboflow Dataset Upload,Motion Detection,Model Comparison Visualization,Camera Focus,PTZ Tracking (ONVIF),LMM,Byte Tracker,Single-Label Classification Model,Mask Visualization,Anthropic Claude,Relative Static Crop,Cosine Similarity,Object Detection Model,SAM 3,Detections Merge,Keypoint Detection Model,Circle Visualization,Seg Preview,Property Definition,EasyOCR,Stability AI Inpainting,Multi-Label Classification Model,Reference Path Visualization,Time in Zone,Detections Combine,Ellipse Visualization,Crop Visualization,Overlap Filter,Line Counter,Image Preprocessing,Barcode Detection,Detections List Roll-Up,Segment Anything 2 Model,Background Subtraction,Image Slicer,Image Contours,Cache Set,Depth Estimation,Pixel Color Count,Stitch Images,VLM as Classifier,Model Monitoring Inference Aggregator,Cache Get,Instance Segmentation Model,Line Counter Visualization,Morphological Transformation,Single-Label Classification Model,Polygon Zone Visualization,Email Notification,Keypoint Visualization,OCR Model,Roboflow Custom Metadata,Google Gemini,Distance Measurement,OpenAI,Color Visualization,Size Measurement,Data Aggregator,Byte Tracker,Identify Outliers,Buffer,Florence-2 Model,Google Gemini,JSON Parser,Grid Visualization,Rate Limiter,Template Matching,OpenAI,Dimension Collapse,Bounding Rectangle,Background Color Visualization,SAM 3,Roboflow Dataset Upload,Stability AI Outpainting,Triangle Visualization,Stability AI Image Generation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
JSON Parser in version v1 has.
Bindings
- input
    - raw_json (language_model_output): JSON string to parse. Can be a raw JSON string (e.g., '{"key": "value"}') or JSON wrapped in Markdown code blocks (e.g., ```json {"key": "value"} ```). Markdown-wrapped JSON is common in LLM/VLM responses. If multiple Markdown JSON blocks are present, only the first block is parsed. The string is parsed using Python's JSON parser, and specified fields are extracted as outputs.
- output
    - error_status, plus one output per field listed in expected_fields.
Example JSON definition of step JSON Parser in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/json_parser@v1",
  "raw_json": "$steps.lmm.output",
  "expected_fields": [
    "field_a",
    "field_b"
  ]
}
```
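After the step runs, each expected field can be referenced downstream via $steps.<step_name>.<field_name>. Assuming the common JsonField format for workflow output entries (an assumption about the surrounding workflow definition, not something defined by this block), exposing field_a as a workflow output might look like this, with the output name parsed_field_a chosen purely for illustration:

```json
{
  "type": "JsonField",
  "name": "parsed_field_a",
  "selector": "$steps.<your_step_name_here>.field_a"
}
```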