Datadog Collect Slice
Overview
Datadog is a monitoring service for cloud-scale applications, providing monitoring of servers, databases, tools, and services through a SaaS-based data analytics platform.
The Datacoral Datadog slice collects data from a Datadog account and loads metrics into a data warehouse, such as Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate Datadog API keys
- Specify the slice config
- Add the Datadog slice
1. Generate Datadog API keys
Setup requirements
Before getting started, please make sure you have the following:
- Access to an active Datadog account
Setup instructions
- Generate a new Application key
  a. Go to URL: https://app.datadoghq.com/account/settings#api
  b. Navigate to the "Application Keys" section
  c. Enter "Datacoral" as the App Key Name and click "Create Application Key"
  d. Copy the generated key
- Create and copy an API key
  a. Go to URL: https://app.datadoghq.com/account/settings#api
  b. Navigate to the "API Keys" section
  c. Enter "Datacoral" as the API Key Name and click "Create API Key"
  d. Copy the generated key
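Before wiring the keys into the slice, you can sanity-check them against Datadog's key-validation endpoint. The helper names below are illustrative sketches, not part of Datacoral; only the /api/v1/validate endpoint and the two authentication headers come from Datadog's public API:

```python
# Sketch: verify the generated Datadog keys before configuring the slice.
# Function names here are hypothetical helpers, not Datacoral APIs.
import json
import urllib.request

def datadog_auth_headers(api_key: str, app_key: str) -> dict:
    # Datadog's HTTP API authenticates requests with these two headers.
    return {"DD-API-KEY": api_key, "DD-APPLICATION-KEY": app_key}

def validate_api_key(api_key: str, app_key: str) -> bool:
    # GET /api/v1/validate returns {"valid": true} when the API key works.
    req = urllib.request.Request(
        "https://api.datadoghq.com/api/v1/validate",
        headers=datadog_auth_headers(api_key, app_key),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("valid", False)
```

Note that the validate endpoint only exercises the API key; the application key is first used when metrics are actually queried.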
2. Specify the slice config
To get a template for the Datadog slice configuration, save the output of the describe --input-parameters command as follows:
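A hypothetical invocation might look like the following; the subcommand and flag names here are assumptions, so check your Datacoral CLI's help output for the exact syntax:

```shell
# Illustrative only -- verify the exact subcommand against your CLI version
datacoral collect describe --slice-type datadog --input-parameters > datadog_parameters_file.json
```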
Necessary input parameters:
- api_key - your Datadog API key
- app_key - your Datadog Application key
Optional input parameters:
- schedule - in cron format (note: you can specify different schedules for 'metric_list' and 'metadata_list' to query the metrics and metadata at different rates)
- filterByMetric - array of metric names that defines which metrics will be collected; if absent, all active metrics will be queried
Example templates:
- collect all active metrics
- collect only YOUR_METRIC_1 and YOUR_METRIC_2
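As an illustration of the second case, a parameters file that collects only two named metrics might look like the sketch below. The parameter names follow the list above; the exact file shape is an assumption:

```json
{
  "api_key": "<YOUR_DATADOG_API_KEY>",
  "app_key": "<YOUR_DATADOG_APP_KEY>",
  "filterByMetric": ["YOUR_METRIC_1", "YOUR_METRIC_2"]
}
```

Omitting filterByMetric corresponds to the first template: all active metrics are collected.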
Modify the datadog_parameters_file file to add the API key and Application key generated from Datadog.
3. Add the Slice
- slice-name - Name of your slice. A schema with your slice-name is automatically created in your warehouse.
- params-file - File path to your input parameters file. Ex. datadog_parameters_file.json
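Putting the two parameters together, a hypothetical add command might look like this; the subcommand and flag spellings are assumptions to be checked against your Datacoral CLI:

```shell
# Illustrative only -- confirm the exact syntax with your CLI's help output
datacoral collect add --slice-type datadog \
    --slice-name <slice-name> \
    --params-file datadog_parameters_file.json
```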
Supported load units
metric_list
metadata_list
metric
metadata
Notes
By default, the slice runs daily. If desired, you can change the slice configuration to specify different schedules for the metadata_list and metric_list load units, so that metadata and metric updates run at different rates.
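For example, metrics could be pulled every six hours while metadata stays on a daily pull. The per-load-unit shape of the schedule parameter below is an assumption; only the load-unit names come from this document:

```json
{
  "schedule": {
    "metric_list": "0 */6 * * *",
    "metadata_list": "0 0 * * *"
  }
}
```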
Slice output
Output of this slice is stored in S3 and Redshift.
AWS S3
Data stored in AWS S3 is partitioned by date and time under the following bucket path:
s3://datacoral-data-bucket/<sliceName>
AWS Redshift: The schema name will be the same as the slice-name. Tables produced by the slice are:
Questions? Interested?
If you have questions or feedback, feel free to reach out at hello@datacoral.co or Request a demo