CSV Formatter¶
Class: CSVFormatterBlockV1
Source: inference.core.workflows.core_steps.formatters.csv.v1.CSVFormatterBlockV1
Convert workflow data into structured CSV documents by defining custom columns, applying data transformations, and aggregating batch data, with automatic timestamp tracking for logging, reporting, and data export workflows.
How This Block Works¶
This block formats workflow data into CSV (Comma-Separated Values) format by organizing data from multiple sources into structured columns. The block:
- Takes data references from a `columns_data` dictionary that maps column names to workflow data sources (selectors, static values, or workflow inputs)
- Optionally applies data transformation operations using `columns_operations`, which uses the Query Language (UQL) to transform column data (e.g., extract properties from detections, perform calculations, format values)
- Automatically adds a `timestamp` column with the current UTC time in ISO format (e.g., `2024-10-18T14:09:57.622297+00:00`) to each row - note that "timestamp" is a reserved column name
- Handles batch inputs by aggregating multiple data points into rows:
  - For single input (`batch_size=1`): creates a CSV with a header row and one data row
  - For batch inputs (`batch_size>1`): creates a CSV with a header row and one row per input, aggregating all rows into a single CSV document that is output only in the last batch element (earlier elements return empty CSV content)
- Aligns batch parameters when multiple batch inputs are provided, broadcasting non-batch parameters to match the maximum batch size
- Converts the structured data dictionary into CSV format using pandas DataFrame serialization
- Returns `csv_content` as a string containing the complete CSV document (header and data rows)
The block supports flexible column definition where each column can reference different workflow data sources (detection predictions, classification results, workflow inputs, computed values, etc.) and optionally apply transformations to extract specific properties or format data. The automatic timestamp column enables temporal tracking of when each CSV row was generated, useful for logging and time-series data collection. Batch aggregation allows the block to collect data from multiple workflow executions and combine them into a single CSV document, which is particularly useful for batch processing workflows where you want to log multiple detections, images, or analysis results into one CSV file.
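Conceptually, the steps above can be sketched with Python's standard library (a rough illustration only; the real block serializes with pandas and handles batch alignment inside the workflow runtime):

```python
import csv
import io
from datetime import datetime, timezone

def format_csv(rows):
    """Sketch of the block's behavior: reject the reserved 'timestamp'
    column, stamp every row with the current UTC time, and serialize
    a header plus data rows into a single CSV string."""
    if any("timestamp" in row for row in rows):
        raise ValueError("'timestamp' is a reserved column name")
    ts = datetime.now(timezone.utc).isoformat()  # e.g. 2024-10-18T14:09:57.622297+00:00
    stamped = [{**row, "timestamp": ts} for row in rows]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(stamped[0].keys()), lineterminator="\n")
    writer.writeheader()
    writer.writerows(stamped)
    return buffer.getvalue()

# batch_size > 1: all inputs are aggregated as rows under one header
content = format_csv([
    {"class_name": "car", "count": 2},
    {"class_name": "person", "count": 5},
])
print(content)
```

For batch inputs, the real block emits this aggregated document only in the last batch element; earlier elements return empty CSV content.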
Common Use Cases¶
- Detection Logging and Reporting: Create CSV logs of detection results (e.g., log class names, confidence scores, bounding box coordinates from object detection models), enabling structured logging of inference results for analysis, debugging, or audit trails
- Time-Series Data Collection: Aggregate workflow metrics, counts, or analysis results over time into CSV format (e.g., log line counter counts, zone occupancy, detection frequencies), creating time-stamped datasets for trend analysis or reporting
- Batch Data Export: Collect and aggregate data from batch processing workflows into CSV files (e.g., export all detections from a batch of images, collect metrics from multiple workflow runs), enabling efficient bulk data export and reporting
- Structured Data Transformation: Extract and format specific properties from complex workflow outputs (e.g., extract class names from detections, convert nested data structures into flat CSV columns), enabling data transformation for downstream analysis or external systems
- Integration with External Systems: Format workflow data for compatibility with external tools (e.g., create CSV files for spreadsheet analysis, database import, or business intelligence tools), enabling seamless data export and integration workflows
- Data Aggregation and Analysis: Combine data from multiple workflow sources into structured CSV format (e.g., merge detection results with metadata, combine model outputs with reference data), enabling comprehensive data collection and analysis workflows
Connecting to Other Blocks¶
The CSV content from this block can be connected to:
- Detection or analysis blocks (e.g., Object Detection Model, Instance Segmentation Model, Classification Model, Keypoint Detection Model, Line Counter, Time in Zone) to format their outputs into CSV columns, enabling structured logging and export of inference results and analytics data
- Data storage blocks (e.g., Local File Sink) to save CSV files to disk, enabling persistent storage of formatted workflow data for later analysis or reporting
- Notification blocks (e.g., Email Notification, Slack Notification) to attach or include CSV content in notifications, enabling CSV reports to be sent as email attachments or included in message bodies
- Webhook blocks (e.g., Webhook Sink) to send CSV content to external APIs or services, enabling integration with external systems that consume CSV data
- Other formatter blocks (e.g., JSON Parser, Expression) to further process CSV content or convert it to other formats, enabling multi-stage data transformation workflows
- Batch processing workflows where multiple data points need to be aggregated into a single CSV document, allowing comprehensive logging and export of batch processing results
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/csv_formatter@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `columns_data` | `Dict[str, Union[bool, float, int, str]]` | Dictionary mapping column names to data sources for constructing CSV columns. Keys are column names (note: 'timestamp' is reserved and cannot be used). Values can be selectors referencing workflow data (e.g., '$steps.model.predictions', '$inputs.data'), static values (strings, numbers, booleans), or a mix of both. Each key-value pair creates one CSV column. Supports batch inputs - if values are batches, the CSV will aggregate all batch elements into rows. Example: {'predictions': '$steps.object_detection.predictions', 'count': '$steps.line_counter.count_in'} creates CSV columns named 'predictions' and 'count'. | ✅ |
| `columns_operations` | `Dict[str, List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]]]` | Optional dictionary mapping column names to Query Language (UQL) operation definitions for transforming column data before CSV formatting. Keys must match column names defined in columns_data. Values are lists of UQL operations (e.g., DetectionsPropertyExtract to extract class names from detections, string operations, calculations) that transform the raw column data. Operations are applied in sequence to each column's data. If a column name is not in this dictionary, the data is used as-is without transformation. Example: {'predictions': [{'type': 'DetectionsPropertyExtract', 'property_name': 'class_name'}]} extracts class names from detection predictions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
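For example, `columns_data` can mix selectors, workflow inputs, and static values in a single step (the selector names and input names below are hypothetical):

```json
{
  "name": "csv_log",
  "type": "roboflow_core/csv_formatter@v1",
  "columns_data": {
    "predictions": "$steps.object_detection.predictions",
    "camera_id": "$inputs.camera_id",
    "site": "warehouse-3"
  }
}
```

Here `predictions` is filled from a step output, `camera_id` is bound to a workflow input at runtime, and `site` is a static string repeated in every row.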
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CSV Formatter in version v1.
- inputs:
Moondream2,Image Threshold,Stitch Images,Byte Tracker,Size Measurement,Multi-Label Classification Model,Keypoint Detection Model,Mask Visualization,Instance Segmentation Model,Path Deviation,Crop Visualization,QR Code Generator,Detections Stabilizer,Continue If,Clip Comparison,Property Definition,Segment Anything 2 Model,Stability AI Image Generation,VLM As Detector,VLM As Classifier,Google Gemini,Overlap Filter,Qwen3.5-VL,Object Detection Model,Slack Notification,Velocity,Dot Visualization,OpenAI,Motion Detection,Email Notification,Rate Limiter,Detections List Roll-Up,Instance Segmentation Model,Roboflow Dataset Upload,Depth Estimation,Contrast Equalization,Cache Get,Label Visualization,Stitch OCR Detections,Llama 3.2 Vision,Camera Focus,Polygon Zone Visualization,Detections Filter,Color Visualization,OpenAI,Dimension Collapse,Template Matching,Florence-2 Model,Model Monitoring Inference Aggregator,JSON Parser,Dynamic Crop,Background Color Visualization,Object Detection Model,Clip Comparison,Line Counter Visualization,SIFT Comparison,Image Preprocessing,PTZ Tracking (ONVIF),Blur Visualization,CSV Formatter,Triangle Visualization,Gaze Detection,OCR Model,Trace Visualization,Email Notification,Twilio SMS/MMS Notification,CLIP Embedding Model,Byte Tracker,Image Convert Grayscale,First Non Empty Or Default,Reference Path Visualization,Expression,Single-Label Classification Model,YOLO-World Model,LMM For Classification,Florence-2 Model,Perspective Correction,Stitch OCR Detections,OpenAI,Time in Zone,Circle Visualization,EasyOCR,Detections Consensus,Seg Preview,Multi-Label Classification Model,Detections Transformation,SAM 3,Stability AI Outpainting,Text Display,Anthropic Claude,Line Counter,Path Deviation,QR Code Detection,Relative Static Crop,OpenAI,Detections Combine,Local File Sink,Google Gemini,Image Slicer,Keypoint Detection Model,Bounding Rectangle,Distance Measurement,Ellipse Visualization,Byte Tracker,Halo Visualization,Anthropic Claude,Model Comparison Visualization,Corner Visualization,Buffer,Environment Secrets Store,Identify Outliers,Absolute Static Crop,Image Contours,Classification Label Visualization,Dominant Color,Image Slicer,Detections Stitch,Camera Focus,Barcode Detection,Time in Zone,Grid Visualization,Cosine Similarity,Background Subtraction,Qwen2.5-VL,SAM 3,Heatmap Visualization,SIFT,CogVLM,Identify Changes,Line Counter,Cache Set,Roboflow Dataset Upload,Bounding Box Visualization,Polygon Visualization,Pixelate Visualization,Roboflow Custom Metadata,Pixel Color Count,Image Blur,SIFT Comparison,Detections Classes Replacement,Morphological Transformation,Stability AI Inpainting,Webhook Sink,Perception Encoder Embedding Model,LMM,Detection Offset,Detection Event Log,Icon Visualization,VLM As Classifier,Qwen3-VL,SmolVLM2,Twilio SMS Notification,Google Vision OCR,Data Aggregator,Polygon Visualization,Google Gemini,Anthropic Claude,Mask Area Measurement,SAM 3,Time in Zone,Single-Label Classification Model,Detections Merge,Delta Filter,Dynamic Zone,Halo Visualization,Camera Calibration,VLM As Detector,Keypoint Visualization
- outputs:
Moondream2,Stitch OCR Detections,Image Threshold,OpenAI,Size Measurement,Mask Visualization,Time in Zone,Instance Segmentation Model,Circle Visualization,Path Deviation,Seg Preview,Crop Visualization,SAM 3,Stability AI Outpainting,QR Code Generator,Text Display,Anthropic Claude,Line Counter,Path Deviation,Clip Comparison,OpenAI,Segment Anything 2 Model,Stability AI Image Generation,Local File Sink,Google Gemini,Halo Visualization,Google Gemini,Slack Notification,Distance Measurement,Ellipse Visualization,Dot Visualization,Halo Visualization,Anthropic Claude,Model Comparison Visualization,OpenAI,Corner Visualization,Email Notification,Classification Label Visualization,Instance Segmentation Model,Roboflow Dataset Upload,Depth Estimation,Contrast Equalization,Cache Get,Detections Stitch,Label Visualization,Stitch OCR Detections,Llama 3.2 Vision,Time in Zone,Polygon Zone Visualization,Color Visualization,SAM 3,Heatmap Visualization,OpenAI,CogVLM,Florence-2 Model,Model Monitoring Inference Aggregator,Line Counter,Cache Set,Roboflow Dataset Upload,Polygon Visualization,Bounding Box Visualization,Roboflow Custom Metadata,Pixel Color Count,Image Blur,SIFT Comparison,Detections Classes Replacement,Webhook Sink,Dynamic Crop,Stability AI Inpainting,Background Color Visualization,Morphological Transformation,Perception Encoder Embedding Model,LMM,Line Counter Visualization,Image Preprocessing,Icon Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Triangle Visualization,Google Vision OCR,Polygon Visualization,Google Gemini,Anthropic Claude,Trace Visualization,Email Notification,Twilio SMS/MMS Notification,CLIP Embedding Model,SAM 3,Time in Zone,Reference Path Visualization,YOLO-World Model,LMM For Classification,Florence-2 Model,Perspective Correction,Keypoint Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for CSV Formatter in version v1 are listed below.
Bindings
- input
  - `columns_data` (*): Dictionary mapping column names to data sources for constructing CSV columns. Keys are column names (note: 'timestamp' is reserved and cannot be used). Values can be selectors referencing workflow data (e.g., '$steps.model.predictions', '$inputs.data'), static values (strings, numbers, booleans), or a mix of both. Each key-value pair creates one CSV column. Supports batch inputs - if values are batches, the CSV will aggregate all batch elements into rows. Example: {'predictions': '$steps.object_detection.predictions', 'count': '$steps.line_counter.count_in'} creates CSV columns named 'predictions' and 'count'.
- output
  - `csv_content` (string): String value.
Example JSON definition of step CSV Formatter in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/csv_formatter@v1",
"columns_data": {
"predictions": "$steps.model.predictions",
"reference": "$inputs.reference_class_names"
},
"columns_operations": {
"predictions": [
{
"property_name": "class_name",
"type": "DetectionsPropertyExtract"
}
]
}
}
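For intuition, the `columns_operations` entry in the example behaves roughly like extracting the `class_name` property from each detection before the row is written. The sketch below is a standard-library analogue, not the actual UQL engine; the detection dicts and timestamp value are hypothetical stand-ins for real predictions:

```python
import csv
import io

# Hypothetical detection objects; the real block receives detection data
# from the upstream model step referenced by "$steps.model.predictions".
predictions = [{"class_name": "car"}, {"class_name": "person"}]

# Analogue of DetectionsPropertyExtract with property_name="class_name"
extracted = [p["class_name"] for p in predictions]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["predictions", "reference", "timestamp"])
writer.writerow([str(extracted), "car;person", "2024-10-18T14:09:57+00:00"])
csv_content = buffer.getvalue()
print(csv_content)
```

The `predictions` cell contains commas, so the CSV writer quotes it; a downstream consumer reading the document with `csv.DictReader` recovers the original string.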