LaunchDarkly Collect Slice

Overview

LaunchDarkly is the feature management platform that software teams use to build better software, faster.

The Datacoral LaunchDarkly slice collects data from the LaunchDarkly API and writes it to S3 and Redshift.

Steps to add this slice to your installation

The steps to launch your slice are:

  1. Generate LaunchDarkly API keys
  2. Specify the slice config
  3. Add the LaunchDarkly slice

1. Generate LaunchDarkly API keys

Setup requirements

Before getting started, please make sure you have the following:

  • Access to an active LaunchDarkly account

Setup instructions

  1. Sign in to your LaunchDarkly account.
  2. In the left sidebar menu, navigate to Account settings > Authorization > Access tokens.
  3. If a key has never been generated for your account, click the 'New Token +' button.
  4. Fill in the Name field, select a Role, then click the 'Save Token' button.
  5. Your API token will be displayed on the page. Copy the API token.
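
Before moving on, you can optionally confirm the token works. The sketch below is a minimal check against the LaunchDarkly REST API (the v2 projects endpoint and plain Authorization header are standard for LaunchDarkly; YOUR_API_TOKEN is a placeholder for the token you just copied):

curl -s -H "Authorization: YOUR_API_TOKEN" \
https://app.launchdarkly.com/api/v2/projects

A 200 response listing your projects means the token is valid; a 401 means the token is invalid or lacks the required role.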

2. Specify the slice config

To get a template for the LaunchDarkly slice configuration, save the output of the describe --input-parameters command as follows:

datacoral collect describe --slice-type launchdarkly \
--input-parameters > launchdarkly_parameters_file.json

Necessary input parameters:

  • api_key - your LaunchDarkly API token

The resulting launchdarkly_parameters_file.json will look like:

{
"api_key": "YOUR_API_KEY"
}

Modify the launchdarkly_parameters_file.json file to add the API token generated from LaunchDarkly.
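
For example, assuming the file name from the describe step above, you could write the finished file directly as shown here (the token value is a placeholder; LaunchDarkly access tokens typically start with "api-"):

cat > launchdarkly_parameters_file.json <<'EOF'
{
"api_key": "api-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
EOF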

3. Add the Slice

datacoral collect add --slice-type launchdarkly --slice-name <slice-name> --parameters-file <params-file>
  • slice-name - the name of your slice. A schema with your slice-name is automatically created in your warehouse.
  • params-file - the file path to your input parameters file, e.g. launchdarkly_parameters_file.json
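
For example, with an illustrative slice name of ldslice and the parameters file generated above:

datacoral collect add --slice-type launchdarkly \
--slice-name ldslice \
--parameters-file launchdarkly_parameters_file.json

Once the command completes, a schema named ldslice is created in your warehouse.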

Supported load units

There is also a user load unit that does not produce any data on its own; it fans out users load units that call the API to fetch data for each user by id.
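
The fan-out can be pictured with the shell sketch below. It is purely illustrative: the slice performs these calls internally, and the exact LaunchDarkly endpoints, the project/environment keys, and the response fields used here are assumptions:

# list users for an assumed project/environment, then fetch flag settings per user
curl -s -H "Authorization: YOUR_API_TOKEN" \
"https://app.launchdarkly.com/api/v2/users/my-project/production" \
| jq -r '.items[].user.key' \
| while read -r user_key; do
  curl -s -H "Authorization: YOUR_API_TOKEN" \
  "https://app.launchdarkly.com/api/v2/users/my-project/production/$user_key/flags"
done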

Slice output

Output of this slice is stored in S3 and Redshift.

AWS S3: Data stored in AWS S3 is partitioned by date and time in the following bucket: s3://customer_installation.datacoral/<sliceName>
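
Once data starts landing, you can sanity-check the partitioned objects with the AWS CLI (the bucket and slice name follow the pattern above; the exact date/time folder layout may vary by installation):

aws s3 ls --recursive s3://customer_installation.datacoral/<sliceName>/ | head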

AWS Redshift: Schema - the schema name will be the same as the slice-name. Tables produced by the slice are:

- schema.projects
- schema.environments
- schema.users
- schema.users_flags
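
As an illustrative check of the loaded tables (assuming a slice name of ldslice and psql connectivity to your Redshift cluster; the host, database, and user values are placeholders):

psql "host=<redshift-endpoint> port=5439 dbname=<database> user=<dbuser>" \
-c "select count(*) from ldslice.users;"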

Questions? Interested?

If you have questions or feedback, feel free to reach out at hello@datacoral.co or request a demo.