S3 Sink¶
Class: S3SinkBlockV1
Source: inference.core.workflows.core_steps.sinks.s3.v1.S3SinkBlockV1
Save workflow data directly to an AWS S3 bucket, supporting CSV, JSON, and text file formats with configurable output modes for aggregating multiple entries into single objects or saving each entry as a separate S3 object.
How This Block Works¶
This block uploads string content from workflow steps to S3 objects. The block:
- Takes string content (from formatters, predictions, or other string-producing blocks) and S3 configuration as input
- Connects to AWS S3 using the provided credentials (or the default AWS credential chain if none are supplied)
- Selects the appropriate upload strategy based on output_mode:
  - Separate Files Mode: Creates a new S3 object for each input, generating unique keys with timestamps
  - Append Log Mode: Buffers content in memory, uploading a complete object when max_entries_per_file is reached or when the block is destroyed
- For separate files mode: Generates a unique S3 key from the prefix, file name prefix, file type, and a timestamp, then uploads the content directly
- For append log mode:
- Buffers content entries in memory under a single S3 key
- Applies format-specific handling for appending:
- CSV: Removes the header row from subsequent appends (CSV content must include headers on first write)
- JSON: Converts to JSONL (JSON Lines) format, parsing and re-serializing each JSON document to fit on a single line
- TXT: Appends content directly with newlines
- Tracks entry count and uploads the full buffer as a complete S3 object when max_entries_per_file is reached, then starts a fresh buffer with a new key
- Uploads any remaining buffered data when the block is destroyed
- Returns error status and messages indicating save success or failure
The block supports two storage strategies: separate files mode creates individual timestamped S3 objects per input (useful for organizing outputs by execution), while append log mode accumulates entries in memory and writes them as complete S3 objects on rotation (useful for time-series logging with controlled upload frequency). S3 key names include timestamps (format: YYYY_MM_DD_HH_MM_SS_microseconds) for unique keys and chronological ordering.
AWS Credentials¶
Credentials can be supplied in two ways:
1. Workflow inputs — declare aws_access_key_id and aws_secret_access_key as workflow inputs of kind parameter and connect them to the corresponding fields. This keeps credentials out of the workflow definition and allows them to be supplied at runtime.
2. Secrets provider block — connect the credential fields to the output of an Environment Secrets Store block, which reads values from server-side environment variables without embedding them in the workflow. Note: this is only available on self-hosted inference servers and cannot be used on the Roboflow hosted platform.
S3 Key Structure¶
The final S3 key is composed of:
{s3_prefix}/{file_name_prefix}_{timestamp}.{extension}
For example, with s3_prefix="logs/detections", file_name_prefix="run", and file_type="csv", the resulting key is:
logs/detections/run_2024_10_18_14_09_57_622297.csv
If s3_prefix is empty, the key starts directly with the file name.
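Key composition can be sketched as below. This is an illustrative sketch, not the block's actual code; in particular, stripping slashes with `strip("/")` is an assumption about how prefix normalization works:

```python
from datetime import datetime


def build_s3_key(s3_prefix: str, file_name_prefix: str, file_type: str) -> str:
    # Timestamp format used for unique, chronologically sortable keys:
    # YYYY_MM_DD_HH_MM_SS_microseconds
    timestamp = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
    file_name = f"{file_name_prefix}_{timestamp}.{file_type}"
    # Assumption: slashes around the prefix are normalized away; an empty
    # prefix yields a key that starts directly with the file name.
    prefix = s3_prefix.strip("/")
    return f"{prefix}/{file_name}" if prefix else file_name
```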
Note on Append Log Mode¶
In append log mode, data is buffered in memory and only uploaded to S3 when:
- The max_entries_per_file limit is reached (object rotation), or
- The block instance is destroyed at workflow teardown
This means data may not be immediately visible in S3 after each step execution. Use separate_files mode if immediate S3 visibility is required.
Common Use Cases¶
- Cloud Data Logging: Upload detection results, metrics, or workflow outputs directly to S3 for durable cloud storage and downstream processing
- Data Pipeline Integration: Export formatted CSV or JSONL files to S3 for consumption by data pipelines, analytics tools, or ML training jobs
- Batch Result Archival: Store individual inference results as separate S3 objects organized by timestamp and prefix
- Time-Series Collection: Aggregate workflow outputs into batched JSONL or CSV files in S3 for cost-efficient log storage
- Cross-Service Integration: Write data to S3 to trigger Lambda functions, feed SQS queues, or integrate with other AWS services
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/s3_sink@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| file_type | str | Type of file to create: 'csv' (CSV format), 'json' (JSON format, or JSONL in append_log mode), or 'txt' (plain text). In append_log mode, JSON files are stored in .jsonl (JSON Lines) format with one JSON object per line. | ❌ |
| output_mode | str | Upload strategy: 'append_log' buffers multiple entries and uploads them as a single S3 object when the entry limit is reached (useful for batched logging); 'separate_files' uploads each input as a new S3 object with a unique timestamp-based key (useful for per-execution outputs). | ❌ |
| bucket_name | str | Name of the target S3 bucket. Can be a static string or a selector resolving to a string at runtime. | ✅ |
| s3_prefix | str | S3 key prefix (folder path) where objects will be stored. Trailing slashes are normalized automatically. Combined with file_name_prefix and a timestamp to form the full object key. Example: 'logs/detections' produces keys like 'logs/detections/workflow_output_2024_10_18_14_09_57_622297.csv'. | ✅ |
| file_name_prefix | str | Prefix used to generate S3 object names. Combined with a timestamp (format: YYYY_MM_DD_HH_MM_SS_microseconds) and file extension to create unique keys like 'workflow_output_2024_10_18_14_09_57_622297.csv'. | ✅ |
| max_entries_per_file | int | Maximum number of buffered entries before uploading to S3 and starting a new object in append_log mode. When this limit is reached, the accumulated buffer is uploaded as a complete S3 object and a new buffer starts with a fresh key. Only applies when output_mode is 'append_log'. Must be at least 1. | ✅ |
| aws_access_key_id | str | AWS access key ID for authentication. If not provided, boto3's default credential chain is used (environment variables, ~/.aws/credentials, or IAM role). Recommended: connect this to an Environment Secrets Store block rather than hardcoding. | ✅ |
| aws_secret_access_key | str | AWS secret access key for authentication. If not provided, boto3's default credential chain is used. Recommended: connect this to an Environment Secrets Store block rather than hardcoding. | ✅ |
| aws_region | str | AWS region where the bucket is located (e.g., 'us-east-1'). If not provided, boto3's default region is used (AWS_DEFAULT_REGION environment variable or ~/.aws/config). | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to S3 Sink in version v1.
- inputs:
OCR Model, Email Notification, OpenAI, Google Vision OCR, Google Gemini, Instance Segmentation Model, Local File Sink, Single-Label Classification Model, Model Monitoring Inference Aggregator, Anthropic Claude, Multi-Label Classification Model, Keypoint Detection Model, Email Notification, Slack Notification, Twilio SMS/MMS Notification, Florence-2 Model, Roboflow Dataset Upload, CSV Formatter, Stitch OCR Detections, OpenAI, Qwen3.5-VL, Google Gemini, VLM As Detector, LMM, CogVLM, VLM As Classifier, Stitch OCR Detections, Llama 3.2 Vision, OpenAI, Clip Comparison, Webhook Sink, Florence-2 Model, Roboflow Custom Metadata, LMM For Classification, Object Detection Model, Anthropic Claude, Google Gemini, EasyOCR, S3 Sink, Anthropic Claude, Twilio SMS Notification, OpenAI, Roboflow Dataset Upload
- outputs:
Dynamic Crop, Image Blur, Google Vision OCR, Google Gemini, Image Preprocessing, Object Detection Model, Local File Sink, Single-Label Classification Model, Model Monitoring Inference Aggregator, Multi-Label Classification Model, Bounding Box Visualization, Keypoint Detection Model, Gaze Detection, Dot Visualization, Florence-2 Model, Roboflow Dataset Upload, Depth Estimation, Polygon Visualization, OpenAI, Line Counter, Line Counter Visualization, Heatmap Visualization, Google Gemini, Stability AI Image Generation, Morphological Transformation, Distance Measurement, Keypoint Visualization, Keypoint Detection Model, Background Color Visualization, Label Visualization, Polygon Visualization, LMM, CogVLM, Time in Zone, Single-Label Classification Model, Triangle Visualization, Stability AI Outpainting, Mask Visualization, Color Visualization, Text Display, Reference Path Visualization, OpenAI, Llama 3.2 Vision, Image Threshold, Clip Comparison, Classification Label Visualization, Polygon Zone Visualization, Roboflow Custom Metadata, Dynamic Zone, LMM For Classification, Halo Visualization, Blur Visualization, Path Deviation, Anthropic Claude, SAM 3, Ellipse Visualization, Crop Visualization, Path Deviation, Trace Visualization, Twilio SMS Notification, Size Measurement, Time in Zone, Motion Detection, Email Notification, SIFT Comparison, OpenAI, Seg Preview, Time in Zone, Instance Segmentation Model, Multi-Label Classification Model, Anthropic Claude, Email Notification, Slack Notification, Twilio SMS/MMS Notification, Detections Stitch, Cache Set, SAM 3, Stitch OCR Detections, Perspective Correction, PTZ Tracking (ONVIF), Moondream2, Icon Visualization, Corner Visualization, Camera Calibration, Halo Visualization, Pixelate Visualization, Contrast Equalization, Instance Segmentation Model, Detections Classes Replacement, Line Counter, Stitch OCR Detections, Webhook Sink, Circle Visualization, Florence-2 Model, SAM 3, Perception Encoder Embedding Model, Cache Get, YOLO-World Model, Template Matching, Object Detection Model, Detections Consensus, Anthropic Claude, Google Gemini, Model Comparison Visualization, QR Code Generator, S3 Sink, CLIP Embedding Model, Stability AI Inpainting, Segment Anything 2 Model, OpenAI, Pixel Color Count, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds S3 Sink in version v1 has.
Bindings
- input
  - content (string): String content to upload to S3. This should be formatted data from other workflow blocks (e.g., CSV content from CSV Formatter, JSON strings, or plain text). The content format should match the specified file_type. For CSV files in append_log mode, content must include header rows on the first write.
  - bucket_name (string): Name of the target S3 bucket. Can be a static string or a selector resolving to a string at runtime.
  - s3_prefix (string): S3 key prefix (folder path) where objects will be stored. Trailing slashes are normalized automatically. Combined with file_name_prefix and a timestamp to form the full object key. Example: 'logs/detections' produces keys like 'logs/detections/workflow_output_2024_10_18_14_09_57_622297.csv'.
  - file_name_prefix (string): Prefix used to generate S3 object names. Combined with a timestamp (format: YYYY_MM_DD_HH_MM_SS_microseconds) and file extension to create unique keys like 'workflow_output_2024_10_18_14_09_57_622297.csv'.
  - max_entries_per_file (string): Maximum number of buffered entries before uploading to S3 and starting a new object in append_log mode. When this limit is reached, the accumulated buffer is uploaded as a complete S3 object and a new buffer starts with a fresh key. Only applies when output_mode is 'append_log'. Must be at least 1.
  - aws_access_key_id (Union[string, secret]): AWS access key ID for authentication. If not provided, boto3's default credential chain is used (environment variables, ~/.aws/credentials, or IAM role). Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
  - aws_secret_access_key (Union[string, secret]): AWS secret access key for authentication. If not provided, boto3's default credential chain is used. Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
  - aws_region (string): AWS region where the bucket is located (e.g., 'us-east-1'). If not provided, boto3's default region is used (AWS_DEFAULT_REGION environment variable or ~/.aws/config).
- output
Example JSON definition of step S3 Sink in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/s3_sink@v1",
    "content": "$steps.csv_formatter.csv_content",
    "file_type": "csv",
    "output_mode": "append_log",
    "bucket_name": "my-inference-results",
    "s3_prefix": "logs/detections",
    "file_name_prefix": "my_output",
    "max_entries_per_file": 1024,
    "aws_access_key_id": "$steps.secrets.aws_access_key_id",
    "aws_secret_access_key": "$steps.secrets.aws_secret_access_key",
    "aws_region": "us-east-1"
}