Transfer data at scale from your warehouse to S3
To get started, you need an S3 bucket and AWS credentials. The "Bucket Name" should be just the name of the bucket, not a URL. The IAM user needs programmatic access enabled and permission to write to the S3 path you want to use.
See the guide for configuring AWS credentials.
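As a sketch, a minimal IAM policy granting write access looks like the following. The bucket name `my-bucket` and prefix `exports/` are placeholders; adjust them to match your own path.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/exports/*"
    }
  ]
}
```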
- All - All mode creates a CSV file containing every row in the query results each time the sync runs.
- Insert - Insert mode creates a CSV file containing only the rows added since the last sync.
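The difference between the two modes can be sketched in Python. This is illustrative only; `previous_ids` stands in for the set of primary keys seen in the last sync.

```python
def rows_to_export(mode, rows, previous_ids):
    """Return the rows written to the CSV for a given sync mode.

    "all" writes every row in the query results;
    "insert" writes only rows whose id was not present in the last sync.
    """
    if mode == "all":
        return rows
    return [row for row in rows if row["id"] not in previous_ids]

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
rows_to_export("all", rows, previous_ids={1, 2})     # every row
rows_to_export("insert", rows, previous_ids={1, 2})  # only {"id": 3}
```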
The object key field lets you specify the prefix and file name to use for your results. In the screenshot below, we've specified that the file should be called results.csv, with a custom prefix.
You can also include timestamp variables in the file name.
We currently support these timestamp variables: YYYY, MM, DD, HH, and mm. All dates and times are in UTC.
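As an illustration, the substitution can be mimicked with Python's `datetime`. This is a plain-substitution sketch that assumes each variable maps to the zero-padded UTC value shown; Hightouch's exact variable syntax may differ.

```python
from datetime import datetime, timezone

def render_key(template, now=None):
    """Replace the supported timestamp variables with zero-padded UTC values."""
    now = now or datetime.now(timezone.utc)
    replacements = {
        "YYYY": f"{now.year:04d}",
        "MM": f"{now.month:02d}",
        "DD": f"{now.day:02d}",
        "HH": f"{now.hour:02d}",
        "mm": f"{now.minute:02d}",
    }
    for var, value in replacements.items():
        template = template.replace(var, value)
    return template

ts = datetime(2024, 1, 5, 9, 30, tzinfo=timezone.utc)
render_key("exports/YYYY-MM-DD/results-HH-mm.csv", ts)
# "exports/2024-01-05/results-09-30.csv"
```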
For this destination, you can export all columns exactly as they're represented in your model.
If you need to remap the fields you're exporting, perhaps because you don't want to alter your model, you can map fields manually. Only the fields you map will be exported in this case. In this example, we're exporting id, email, name, and eventtype, which are mapped to new fields in the CSV as user_id, email, username, and event, respectively. All other columns from your results are ignored.
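Manual mapping behaves like a column projection: rename the mapped source columns and drop everything else. A hedged sketch, with the mapping dict mirroring the example above:

```python
def remap(rows, mapping):
    """Keep only mapped source fields, renaming them per the mapping."""
    return [{dest: row[src] for src, dest in mapping.items()} for row in rows]

mapping = {"id": "user_id", "email": "email", "name": "username", "eventtype": "event"}
rows = [{"id": 7, "email": "a@b.co", "name": "Ada", "eventtype": "click", "extra": "dropped"}]
remap(rows, mapping)
# [{"user_id": 7, "email": "a@b.co", "username": "Ada", "event": "click"}]
```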
You can optionally include a CSV header row with your exported results.
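In plain Python, this option corresponds to writing (or skipping) a header row, as with the standard `csv` module:

```python
import csv
import io

def to_csv(rows, fieldnames, include_header=True):
    """Serialize rows to CSV text, optionally prefixed with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    if include_header:
        writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

to_csv([{"user_id": 7, "event": "click"}], ["user_id", "event"])
# "user_id,event\r\n7,click\r\n"
```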
If a file at that path already exists at the time of a sync, Hightouch will overwrite it. To keep different versions of the same results file, you can enable versioning in your bucket, or your application can copy the data to another location.
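If you'd rather keep history yourself instead of using bucket versioning, one approach is to copy each sync's file to a timestamped archive key. This is a sketch of the key derivation only; the `archive/` prefix is an assumption, not a Hightouch feature.

```python
from datetime import datetime, timezone

def archive_key(object_key, when=None):
    """Derive a timestamped archive key so each sync's file is preserved."""
    when = when or datetime.now(timezone.utc)
    stamp = when.strftime("%Y%m%dT%H%M%SZ")
    prefix, _, filename = object_key.rpartition("/")
    return f"{prefix + '/' if prefix else ''}archive/{stamp}-{filename}"

archive_key("exports/results.csv", datetime(2024, 1, 5, 9, 30, 0, tzinfo=timezone.utc))
# "exports/archive/20240105T093000Z-results.csv"
```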
This destination does not respect any sorting defined in your model. It exports the results file sorted by ID.