
S3 Sink

Class: S3SinkBlockV1

Source: inference.core.workflows.core_steps.sinks.s3.v1.S3SinkBlockV1

Save workflow data directly to an AWS S3 bucket. Supports CSV, JSON, and text file formats, with configurable output modes that either aggregate multiple entries into a single object or save each entry as a separate S3 object.

How This Block Works

This block uploads string content from workflow steps to S3 objects. The block:

  1. Takes string content (from formatters, predictions, or other string-producing blocks) and S3 configuration as input
  2. Connects to AWS S3 using the provided credentials (or the default AWS credential chain if none are supplied)
  3. Selects the appropriate upload strategy based on output_mode:
    • Separate Files Mode: creates a new S3 object for each input, generating unique keys with timestamps
    • Append Log Mode: buffers content in memory, uploading a complete object when max_entries_per_file is reached or when the block is destroyed
  4. For separate files mode: generates a unique S3 key from the prefix, file name prefix, file type, and a timestamp, then uploads the content directly
  5. For append log mode:
    • Buffers content entries in memory under a single S3 key
    • Applies format-specific handling for appending:
      • CSV: removes the header row from subsequent appends (CSV content must include headers on first write)
      • JSON: converts to JSONL (JSON Lines) format, parsing and re-serializing each JSON document to fit on a single line
      • TXT: appends content directly with newlines
    • Tracks the entry count and uploads the full buffer as a complete S3 object when max_entries_per_file is reached, then starts a fresh buffer with a new key
    • Uploads any remaining buffered data when the block is destroyed
  6. Returns an error status and message indicating save success or failure

The block supports two storage strategies: separate files mode creates individual timestamped S3 objects per input (useful for organizing outputs by execution), while append log mode accumulates entries in memory and writes them as complete S3 objects on rotation (useful for time-series logging with controlled upload frequency). S3 key names include timestamps (format: YYYY_MM_DD_HH_MM_SS_microseconds) for unique keys and chronological ordering.
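
To make the append-log lifecycle concrete, here is a minimal Python sketch of the buffering, format handling, and rotation behavior described above. It is an illustrative approximation, not the block's actual implementation; the class and helper names are invented for this example.

import json

class AppendLogBuffer:
    """Illustrative sketch of append_log mode; the real block's internals may differ."""

    def __init__(self, s3_client, bucket, key_factory, file_type, max_entries_per_file):
        self._s3 = s3_client
        self._bucket = bucket
        self._new_key = key_factory          # callable returning a fresh timestamped key
        self._file_type = file_type          # "csv", "json", or "txt"
        self._max_entries = max_entries_per_file
        self._key = key_factory()
        self._entries = []

    def append(self, content: str) -> None:
        if self._file_type == "csv" and self._entries:
            # Drop the header row on appends after the first write
            content = content.split("\n", 1)[1] if "\n" in content else content
        elif self._file_type == "json":
            # Parse and re-serialize so each JSON document fits on one JSONL line
            content = json.dumps(json.loads(content))
        self._entries.append(content)
        if len(self._entries) >= self._max_entries:
            self.flush()
            self._key = self._new_key()      # rotation: the next object gets a fresh key

    def flush(self) -> None:
        # Called on rotation, and again at block teardown for any remaining buffer
        if not self._entries:
            return
        body = "\n".join(entry.rstrip("\n") for entry in self._entries) + "\n"
        self._s3.put_object(Bucket=self._bucket, Key=self._key, Body=body.encode("utf-8"))
        self._entries = []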

AWS Credentials

Credentials can be supplied in two ways:

  1. Workflow inputs — declare aws_access_key_id and aws_secret_access_key as workflow inputs of kind parameter and connect them to the corresponding fields. This keeps credentials out of the workflow definition and allows them to be supplied at runtime.
  2. Secrets provider block — connect the credential fields to the output of an Environment Secrets Store block, which reads values from server-side environment variables without embedding them in the workflow. Note: this is only available on self-hosted inference servers and cannot be used on the Roboflow hosted platform.
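
For reference, the fallback behavior described above matches boto3's standard credential resolution. A minimal sketch, assuming the block constructs its client roughly like this (the helper name is ours, not the block's):

import boto3

def build_s3_client(aws_access_key_id=None, aws_secret_access_key=None, aws_region=None):
    # Passing None for any of these makes boto3 fall back to its default
    # credential chain: environment variables, ~/.aws/credentials, or an IAM role.
    return boto3.client(
        "s3",
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
        region_name=aws_region,
    )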

S3 Key Structure

The final S3 key is composed of:

{s3_prefix}/{file_name_prefix}_{timestamp}.{extension}

For example, with s3_prefix="logs/detections", file_name_prefix="run", and file_type="csv":

logs/detections/run_2024_10_18_14_09_57_622297.csv

If s3_prefix is empty, the key starts directly with the file name.
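
The key composition can be reproduced in a few lines of Python. A sketch for illustration (the function name is ours):

from datetime import datetime

def compose_s3_key(s3_prefix: str, file_name_prefix: str, extension: str) -> str:
    # Matches the documented timestamp format: YYYY_MM_DD_HH_MM_SS_microseconds
    timestamp = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
    file_name = f"{file_name_prefix}_{timestamp}.{extension}"
    if not s3_prefix:
        # With an empty prefix, the key starts directly with the file name
        return file_name
    return f"{s3_prefix.rstrip('/')}/{file_name}"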

Note on Append Log Mode

In append log mode, data is buffered in memory and only uploaded to S3 when:

  • The max_entries_per_file limit is reached (object rotation), or
  • The block instance is destroyed at workflow teardown

This means data may not be immediately visible in S3 after each step execution. Use separate_files mode if immediate S3 visibility is required.

Common Use Cases

  • Cloud Data Logging: Upload detection results, metrics, or workflow outputs directly to S3 for durable cloud storage and downstream processing
  • Data Pipeline Integration: Export formatted CSV or JSONL files to S3 for consumption by data pipelines, analytics tools, or ML training jobs
  • Batch Result Archival: Store individual inference results as separate S3 objects organized by timestamp and prefix
  • Time-Series Collection: Aggregate workflow outputs into batched JSONL or CSV files in S3 for cost-efficient log storage
  • Cross-Service Integration: Write data to S3 to trigger Lambda functions, feed SQS queues, or integrate with other AWS services

Type identifier

Use the following identifier in the step "type" field: roboflow_core/s3_sink@v1 to add the block as a step in your workflow.

Properties

  • name (str): Enter a unique identifier for this step.
  • file_type (str): Type of file to create: 'csv' (CSV format), 'json' (JSON format, or JSONL in append_log mode), or 'txt' (plain text). In append_log mode, JSON files are stored as .jsonl (JSON Lines) format with one JSON object per line.
  • output_mode (str): Upload strategy: 'append_log' buffers multiple entries and uploads them as a single S3 object when the entry limit is reached (useful for batched logging), or 'separate_files' uploads each input as a new S3 object with a unique timestamp-based key (useful for per-execution outputs).
  • bucket_name (str): Name of the target S3 bucket. Can be a static string or a selector resolving to a string at runtime.
  • s3_prefix (str): S3 key prefix (folder path) where objects will be stored. Trailing slashes are normalized automatically. Combined with file_name_prefix and a timestamp to form the full object key. Example: 'logs/detections' produces keys like 'logs/detections/workflow_output_2024_10_18_14_09_57_622297.csv'.
  • file_name_prefix (str): Prefix used to generate S3 object names. Combined with a timestamp (format: YYYY_MM_DD_HH_MM_SS_microseconds) and file extension to create unique keys like 'workflow_output_2024_10_18_14_09_57_622297.csv'.
  • max_entries_per_file (int): Maximum number of buffered entries before uploading to S3 and starting a new object in append_log mode. When this limit is reached, the accumulated buffer is uploaded as a complete S3 object and a new buffer starts with a fresh key. Only applies when output_mode is 'append_log'. Must be at least 1.
  • aws_access_key_id (str): AWS access key ID for authentication. If not provided, boto3's default credential chain is used (environment variables, ~/.aws/credentials, or IAM role). Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
  • aws_secret_access_key (str): AWS secret access key for authentication. If not provided, boto3's default credential chain is used. Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
  • aws_region (str): AWS region where the bucket is located (e.g., 'us-east-1'). If not provided, boto3's default region is used (AWS_DEFAULT_REGION environment variable or ~/.aws/config).

Properties that accept selectors can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to S3 Sink in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds S3 Sink in version v1 has.

Bindings
  • input

    • content (string): String content to upload to S3. This should be formatted data from other workflow blocks (e.g., CSV content from CSV Formatter, JSON strings, or plain text). The content format should match the specified file_type. For CSV files in append_log mode, content must include header rows on the first write.
    • bucket_name (string): Name of the target S3 bucket. Can be a static string or a selector resolving to a string at runtime.
    • s3_prefix (string): S3 key prefix (folder path) where objects will be stored. Trailing slashes are normalized automatically. Combined with file_name_prefix and a timestamp to form the full object key. Example: 'logs/detections' produces keys like 'logs/detections/workflow_output_2024_10_18_14_09_57_622297.csv'.
    • file_name_prefix (string): Prefix used to generate S3 object names. Combined with a timestamp (format: YYYY_MM_DD_HH_MM_SS_microseconds) and file extension to create unique keys like 'workflow_output_2024_10_18_14_09_57_622297.csv'.
    • max_entries_per_file (integer): Maximum number of buffered entries before uploading to S3 and starting a new object in append_log mode. When this limit is reached, the accumulated buffer is uploaded as a complete S3 object and a new buffer starts with a fresh key. Only applies when output_mode is 'append_log'. Must be at least 1.
    • aws_access_key_id (Union[string, secret]): AWS access key ID for authentication. If not provided, boto3's default credential chain is used (environment variables, ~/.aws/credentials, or IAM role). Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
    • aws_secret_access_key (Union[string, secret]): AWS secret access key for authentication. If not provided, boto3's default credential chain is used. Recommended: connect this to an Environment Secrets Store block rather than hardcoding.
    • aws_region (string): AWS region where the bucket is located (e.g., 'us-east-1'). If not provided, boto3's default region is used (AWS_DEFAULT_REGION environment variable or ~/.aws/config).
  • output

    • error_status (boolean): Boolean flag indicating whether the save operation failed.
    • message (string): Status message describing the success or failure of the save.

Example JSON definition of step S3 Sink in version v1

{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/s3_sink@v1",
    "content": "$steps.csv_formatter.csv_content",
    "file_type": "csv",
    "output_mode": "append_log",
    "bucket_name": "my-inference-results",
    "s3_prefix": "logs/detections",
    "file_name_prefix": "my_output",
    "max_entries_per_file": 1024,
    "aws_access_key_id": "$steps.secrets.aws_access_key_id",
    "aws_secret_access_key": "$steps.secrets.aws_secret_access_key",
    "aws_region": "us-east-1"
}
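
As a usage sketch, a workflow containing this step can be run through the inference_sdk HTTP client. The workspace and workflow identifiers below are placeholders, and the exact invocation may vary with your setup:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # self-hosted inference server
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

result = client.run_workflow(
    workspace_name="<your_workspace>",    # placeholder
    workflow_id="<your_workflow_id>",     # placeholder
    images={"image": "path/to/image.jpg"},
    parameters={
        # If credentials are declared as workflow inputs, supply them at runtime
        "aws_access_key_id": "<AWS_ACCESS_KEY_ID>",
        "aws_secret_access_key": "<AWS_SECRET_ACCESS_KEY>",
    },
)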