Datadog is a monitoring service for cloud-scale applications that provides monitoring of servers, databases, tools, and services through a SaaS-based data analytics platform.
The Datacoral Datadog slice collects data from a Datadog account and loads metrics into a data warehouse, such as Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate Datadog API keys
- Specify the slice config
- Add the Datadog slice
1. Generate Datadog API keys
Before getting started, please make sure you have the following:
- Access to an active Datadog account
- Generate a new Application key:
  a. Go to https://app.datadoghq.com/account/settings#api
  b. Navigate to the "Application Keys" section
  c. Enter "Datacoral" as the App Key Name and click "Create Application Key"
  d. Copy the generated key
- Generate a new API key:
  a. Go to https://app.datadoghq.com/account/settings#api
  b. Navigate to the "API Keys" section
  c. Enter "Datacoral" as the API Key Name and click "Create API Key"
  d. Copy the generated key
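Before wiring the keys into the slice, you can sanity-check the API key against Datadog's key-validation endpoint. This is an optional sketch; <YOUR_DATADOG_API_KEY> is a placeholder for the key copied above.

```shell
# Validate the API key using Datadog's /api/v1/validate endpoint.
# Replace the placeholder with the key generated in the steps above.
curl -s -H "DD-API-KEY: <YOUR_DATADOG_API_KEY>" \
  "https://api.datadoghq.com/api/v1/validate"
# A valid key should return a response of the form {"valid": true}
```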
2. Specify the slice config
To get a template for the Datadog slice configuration, save the output of the describe --input-parameters command.
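Assuming the standard Datacoral CLI layout, fetching the template might look like the following sketch; the exact subcommand and flag names may differ in your CLI version.

```shell
# Save the input-parameter template for the Datadog slice to a local file
datacoral collect describe --slice-type datadog \
  --input-parameters > datadog_parameters_file.json
```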
Necessary input parameters:
- api_key - your Datadog API key
- app_key - your Datadog Application key
Optional input parameters:
- schedule - in cron format (note: you can specify different schedules for 'metric_list' and 'metadata_list' to query the metrics and metadata at different rates)
- filterByMetric - array of metric names that defines which metrics will be collected; if absent, all active metrics will be queried
  - omit it to collect all active metrics
  - list YOUR_METRIC_1 and YOUR_METRIC_2 to collect only those metrics
- datadog_parameters_file - file in which to add the authentication keys generated from Datadog
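Putting the parameters together, a filled-in datadog_parameters_file might look like the sketch below. The field names come from the list above; the exact JSON shape and the placeholder values are assumptions.

```json
{
  "api_key": "<YOUR_DATADOG_API_KEY>",
  "app_key": "<YOUR_DATADOG_APP_KEY>",
  "schedule": "0 0 * * *",
  "filterByMetric": ["YOUR_METRIC_1", "YOUR_METRIC_2"]
}
```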
3. Add the Slice
- slice-name - name of your slice. A schema with your slice-name is automatically created in your warehouse.
- params-file - file path to your input parameters file, e.g. datadog_parameters_file.json
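With the parameters file in place, adding the slice could look like the following sketch; the subcommand layout is an assumption based on the parameter names above, and "datadog" is a placeholder slice name.

```shell
# Add the Datadog slice; a schema named after slice-name is created in the warehouse
datacoral collect add --slice-type datadog \
  --slice-name datadog \
  --params-file datadog_parameters_file.json
```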
Supported load units
By default, the slice runs daily. If desired, you can change the slice configuration and specify different schedules for the metadata_list and metric_list load units. This way, updates to the metadata and metrics run at different rates.
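For example, to poll metrics hourly while refreshing metadata once a day, the schedule can be split per load unit. The per-load-unit object shape shown here is an assumption; the load unit names come from the section above.

```json
{
  "schedule": {
    "metric_list": "0 * * * *",
    "metadata_list": "0 0 * * *"
  }
}
```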
Output of this slice is stored in S3 and Redshift.
- AWS S3: data is partitioned by date and time in the following bucket
- AWS Redshift: the schema name will be the same as the slice-name. Tables produced by the slice are: