Data Export

Use Bucket-enriched data to augment and improve your own insights and workflows.

You can export your app data manually, which gives you a CSV file with company data. To do so, navigate to Settings > Data export. Manual data export requires the Business plan or higher.

[Screenshot: the Bucket app navigated to the Data export section of Settings]

The downloaded file will be in CSV format and contain the Company data model.
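
If you want to process the export programmatically, any standard CSV tooling will do. Below is a minimal sketch in Python, assuming a downloaded file named bucket-export.csv (the actual filename of your download may differ):

```python
import csv

# Read the manual export; each row is one company/feature data point
# following the Company data model described below.
with open("bucket-export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Column names match the Company data model, e.g. "Company ID",
        # "Feature Name", "STARS State Label", ...
        print(row["Company ID"], row["Feature Name"], row["STARS State Label"])
```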

Historical data

The manual data export allows you to select between four different historical data aggregations in the resulting export:

  • Just today's snapshot: Download a snapshot of the current state of the data. The Date column will contain the current date.
  • 30 days, daily granularity: The file will contain data rolled up in daily increments for the last 30 days. The Date column will contain the date at the end of each rolled-up day.
  • 3 months, weekly granularity: The file will contain data rolled up in weekly increments for the last 3 months. A weekly increment is a full calendar week (Monday-Sunday). The Date column will contain the date at the end of each rolled-up interval (see the sketch after this list).
  • 6 months, weekly granularity: The file will contain data rolled up in weekly increments for the last 6 months. A weekly increment is a full calendar week (Monday-Sunday). The Date column will contain the date at the end of each rolled-up interval.
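
Because a weekly increment is a full Monday-Sunday calendar week, the Date values in a weekly export should fall on the Sunday that closes each week. A small Python sketch for pre-computing those dates, assuming the export window is roughly the last 13 weeks (the exact window Bucket uses may differ):

```python
from datetime import date, timedelta

def week_end(d: date) -> date:
    """The Sunday that closes the calendar week (Mon-Sun) containing d."""
    return d + timedelta(days=6 - d.weekday())  # weekday(): Mon=0 ... Sun=6

# Expected Date values for roughly 3 months of weekly rollups.
today = date.today()
current = week_end(today - timedelta(weeks=13))
while current <= today:
    print(current.isoformat())
    current += timedelta(days=7)
```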

Company data model

The company model contains information about each company's usage of all your features.

| Column | Data type | Format | Description |
| --- | --- | --- | --- |
| Company ID | string | | Company ID from your tracking calls |
| Company Name | string \| null | | The company's name, if sent through tracking |
| Feature Name | string | | Bucket feature name |
| Feature ID | string | | Bucket feature ID |
| Date | string | UTC Date (ISO-8601) | The date of the data point |
| First Seen | string | UTC Date (ISO-8601) | The company's first tracking date |
| Last Seen | string | UTC Date (ISO-8601) | The company's latest tracking date |
| First Used | string \| null | UTC Date (ISO-8601) | The company's first feature usage |
| Last Used | string \| null | UTC Date (ISO-8601) | The company's latest feature usage |
| STARS State Label | string \| null | One of: "never" \| "tried" \| "retained" \| "churned" | Human-readable value for the STARS step |
| STARS State | number | One of: 1 \| 2 \| 3 \| 4 | The company's STARS state for the feature |
| Frequency Label | string | | Human-readable label for the frequency value |
| Frequency | number | One of: 0 \| 1 \| 2 \| 3 | Company feature usage frequency. Higher means more frequent |
| Satisfaction Label | string | | Human-readable label for the satisfaction score |
| Satisfaction | number \| null | One of: 1 \| 2 \| 3 \| 4 \| 5 | Company feature satisfaction score. Higher is better |
| Feedback Count | number | | Total number of feedback entries recorded |
| Segment Names | string | Comma-separated list | Names of the segments the company is part of |
| Segment IDs | string | Comma-separated list | IDs of the segments the company is part of |
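
When loading the export into your own tooling, it can help to map each CSV row onto a typed record. The sketch below mirrors the table above as a Python dataclass; note that the CSV itself carries every value as text, so numeric and nullable fields need converting on your side:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CompanyRow:
    company_id: str
    company_name: Optional[str]
    feature_name: str
    feature_id: str
    date: str                         # UTC date, ISO-8601
    first_seen: str                   # UTC date, ISO-8601
    last_seen: str                    # UTC date, ISO-8601
    first_used: Optional[str]         # UTC date, ISO-8601
    last_used: Optional[str]          # UTC date, ISO-8601
    stars_state_label: Optional[str]  # "never" | "tried" | "retained" | "churned"
    stars_state: int                  # 1 | 2 | 3 | 4
    frequency_label: str
    frequency: int                    # 0-3; higher means more frequent
    satisfaction_label: str
    satisfaction: Optional[int]       # 1-5; higher is better
    feedback_count: int
    segment_names: list[str]          # split from the comma-separated cell
    segment_ids: list[str]            # split from the comma-separated cell
```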

Scheduled export

Bucket can also run a scheduled data export on a daily or weekly cadence. When enabled, this feature automatically generates an export CSV file for each day or week and saves it to your S3-compatible file storage provider (such as AWS S3, Google Cloud Storage, or Cloudflare R2).

The daily export runs every day after 00:00 UTC and exports a snapshot of the data as it was at 00:00 UTC. For weekly exports, the process runs every Monday after 00:00 UTC and exports a snapshot of the data as it was at 00:00 UTC on Monday.
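
If your pipeline waits for new files, the cadence above is easy to compute. A sketch, assuming "after 00:00 UTC" means shortly after midnight (Bucket does not document the exact run time):

```python
from datetime import datetime, timedelta, timezone

def next_export(cadence: str, now: datetime) -> datetime:
    """Earliest time the next scheduled export can run:
    daily after 00:00 UTC, or weekly after 00:00 UTC on Mondays."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if cadence == "daily":
        return midnight + timedelta(days=1)
    # Weekly: the next Monday strictly after today's midnight.
    days_ahead = (7 - midnight.weekday()) % 7 or 7
    return midnight + timedelta(days=days_ahead)

print(next_export("weekly", datetime.now(timezone.utc)))
```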

To enable this functionality, toggle Enable automatic export to ON and fill in the required configuration details in the fields below the toggle:

  • Periodic schedule: The cadence of the data export. Can be either daily or weekly.
  • S3 Provider: The S3-compatible cloud storage provider. Support is available for AWS, GCP, and Custom, which allows specifying a custom Endpoint value (useful for Cloudflare R2).
  • S3 bucket: The details of the S3 bucket that will store the exported data files.
    • For AWS, enter the S3 bucket's public URL in the following format: https://<bucket>.s3.<region>.amazonaws.com[/key]. The key component of the URL is optional and can be used to point to a specific directory within the bucket (see the parsing sketch after this list). For more details, see the AWS S3 setup guide.
    • For GCP and Custom, the Bucket, Region, and Key properties need to be entered separately in the input boxes presented on the settings page.
    • Finally, for Custom, another option becomes available: Endpoint. Fill it in with the URL of the cloud storage provider's API. For instance, Cloudflare R2 provides an individual endpoint value for each customer in the following format: https://<ACCOUNT_ID>.eu.r2.cloudflarestorage.com.
  • Access key: The access key obtained from the cloud storage provider.
  • Secret access key: The secret access key obtained from the cloud storage provider.
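
For the AWS provider, the bucket URL packs the bucket, region, and optional key into one string. The sketch below pulls them apart again, strictly following the documented https://<bucket>.s3.<region>.amazonaws.com[/key] shape; the helper and example names are ours, not part of Bucket:

```python
import re

# Matches the documented AWS URL shape:
# https://<bucket>.s3.<region>.amazonaws.com[/key]
AWS_URL = re.compile(
    r"^https://(?P<bucket>[a-z0-9.-]+)\.s3\.(?P<region>[a-z0-9-]+)"
    r"\.amazonaws\.com(?:/(?P<key>.+))?$"
)

def parse_aws_url(url: str) -> dict:
    m = AWS_URL.match(url)
    if not m:
        raise ValueError(f"not a bucket URL of the expected shape: {url}")
    return m.groupdict()

print(parse_aws_url("https://acme-exports.s3.eu-west-1.amazonaws.com/exports"))
# -> {'bucket': 'acme-exports', 'region': 'eu-west-1', 'key': 'exports'}
```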

Upon clicking Save, Bucket validates the supplied details by attempting to list the files in the S3 bucket. If there is an error (such as misconfigured credentials, missing privileges, or a misspelled name), it is displayed so you can correct the settings.
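
You can run the same check yourself before saving. A minimal sketch using boto3, with placeholder credentials and a hypothetical bucket name; for a Custom provider, you would additionally pass its API URL as endpoint_url:

```python
import boto3
from botocore.exceptions import ClientError

# Preflight the same validation Bucket performs on Save:
# can these credentials list the target bucket?
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLE",          # placeholder
    aws_secret_access_key="secret-example",   # placeholder
    # endpoint_url="https://<ACCOUNT_ID>.eu.r2.cloudflarestorage.com",  # Custom only
)
try:
    s3.list_objects_v2(Bucket="acme-exports", MaxKeys=1)  # hypothetical bucket
    print("Credentials and privileges look OK.")
except ClientError as err:
    print("Export destination is misconfigured:", err)
```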

Assuming the settings are correct, Bucket will run the data export on the next day or the next Monday (depending on the selected cadence) and will record the status of the operation.

The export status is displayed below the toggle and indicates:

  • when the last export ran,
  • the status of the last export, which can be one of never ran before, successful, or failed,
  • when the next export will run,
  • and finally, in case of an error, the error message itself.
