Pingdom Collect Slice


Pingdom offers a single-platform solution for monitoring the availability and performance of your websites, servers, and web applications.

Steps to add this slice to your installation

The steps to launch your slice are:

  1. Generate Pingdom application key
  2. Specify the slice config
  3. Add the Pingdom slice

1. Generate Pingdom application key

Setup requirements

Before getting started please make sure to have the following information:

  • Access to an active Pingdom account

Setup instructions

  1. Generate a new Pingdom application key:
     a. Open your Pingdom account
     b. Click the "register application" button
     c. Fill in the inputs and click "register"
     d. Copy the application key

2. Specify the slice config

To get a template for the Pingdom slice configuration, save the output of the describe --input-parameters command as follows:

datacoral collect describe --slice-type pingdom \
--input-parameters > pingdom_parameters_file.json

Necessary input parameters:

  • username - your Pingdom username
  • password - your Pingdom password
  • token - your Pingdom application key

Optional input parameters:

  • schedule - collection schedule in cron format (e.g. "0 * * * *" for hourly)

  • tags - Comma-separated list of tags. For example, "nginx,apache" would filter out all responses except those from checks tagged nginx or apache.
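The tag filter described above can be sketched as follows. This is a minimal illustration of the filtering semantics only; the check records and their "tags" field are hypothetical stand-ins, not the slice's actual response schema:

```python
# Sketch of the tag filter: keep only checks carrying at least one
# requested tag. The check records below are illustrative, not the
# slice's real schema.

def filter_by_tags(checks, tags_param):
    """Keep checks whose tags intersect the comma-separated tag list."""
    wanted = set(tags_param.split(","))  # "nginx,apache" -> {"nginx", "apache"}
    return [c for c in checks if wanted & set(c.get("tags", []))]

checks = [
    {"name": "web-1", "tags": ["nginx"]},
    {"name": "db-1", "tags": ["postgres"]},
    {"name": "web-2", "tags": ["apache", "nginx"]},
]

print([c["name"] for c in filter_by_tags(checks, "nginx,apache")])
# -> ['web-1', 'web-2']; db-1 is dropped because it has no matching tag
```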

    Example template (collect metrics for checks tagged nginx or apache):

{
  "username": "YOUR_PINGDOM_USERNAME",
  "password": "YOUR_PINGDOM_PASSWORD",
  "token": "YOUR_APPLICATION_KEY",
  "tags": ["nginx", "apache"]
}

Modify pingdom_parameters_file.json to add your Pingdom credentials and the application key generated in step 1.
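Filling in the parameters file can be sketched as below. The key names come from the input-parameters list above; the values are placeholders you would replace with your own credentials:

```python
# Sketch: write a filled-in parameters file for the slice.
# Key names come from "Necessary input parameters"; values are placeholders.
import json

params = {
    "username": "alice@example.com",   # your Pingdom username
    "password": "s3cret",              # your Pingdom password
    "token": "app-key-from-step-1",    # application key generated in step 1
    "tags": ["nginx", "apache"],       # optional tag filter
}

with open("pingdom_parameters_file.json", "w") as f:
    json.dump(params, f, indent=2)
```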

3. Add the Slice

datacoral collect add --slice-type pingdom --slice-name <slice-name> --parameters-file <params-file>
  • slice-name - Name of your slice. A schema named after your slice-name is automatically created in your warehouse.
  • params-file - File path to your input parameters file, e.g. pingdom_parameters_file.json
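Before running the add command, it can help to sanity-check the parameters file. A minimal sketch, where the required-key list is taken from step 2 (the demo file name is hypothetical):

```python
# Sketch: verify a parameters file has the required keys before running
# `datacoral collect add`. Required key names come from step 2.
import json

REQUIRED = {"username", "password", "token"}

def missing_params(path):
    """Return required parameter names absent from the JSON file at path."""
    with open(path) as f:
        params = json.load(f)
    return sorted(REQUIRED - params.keys())

# Demo with an incomplete file: the application key (token) is missing.
with open("demo_params.json", "w") as f:
    json.dump({"username": "alice", "password": "s3cret"}, f)

print(missing_params("demo_params.json"))
# -> ['token']
```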

Supported load units

  • actions
  • checkdetail
  • checks
  • probes
  • summaryaverage
  • summaryaveragebycountry
  • summaryaveragebyprobe
  • summaryoutage

Slice output

Output of this slice is stored in S3 and Redshift.

AWS S3: Data stored in AWS S3 is partitioned by date and time under the following bucket path: s3://datacoral-data-bucket/<sliceName>
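A date/time-partitioned key under that prefix could be built as sketched below. Only the bucket prefix s3://datacoral-data-bucket/<sliceName> comes from the docs; the y=/m=/d=/h= partition-key layout and the load-unit path segment are assumptions for illustration:

```python
# Sketch: build a date/time-partitioned S3 key for slice output.
# The y=/m=/d=/h= layout and load-unit segment are assumed, not documented.
from datetime import datetime, timezone

def partitioned_key(slice_name, load_unit, ts):
    """Return an illustrative partitioned S3 key for one load unit."""
    return (
        f"s3://datacoral-data-bucket/{slice_name}/{load_unit}/"
        f"y={ts.year:04d}/m={ts.month:02d}/d={ts.day:02d}/h={ts.hour:02d}/"
    )

ts = datetime(2024, 1, 5, 13, tzinfo=timezone.utc)
print(partitioned_key("pingdom", "checks", ts))
# -> s3://datacoral-data-bucket/pingdom/checks/y=2024/m=01/d=05/h=13/
```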

AWS Redshift: Schema - the schema name is the same as the slice-name. Tables produced by the slice are:

- schema.actions
- schema.checkdetail
- schema.checks
- schema.probes
- schema.summaryaverage
- schema.summaryaveragebycountry
- schema.summaryaveragebyprobe
- schema.summaryoutage

Questions? Interested?

If you have questions or feedback, feel free to reach out, or request a demo.