# Outreach Collect Slice

## Overview
Outreach is a sales engagement platform that gives lead generators, sales representatives, and sales managers an account-based sales communication solution for securing prospects and for identifying and growing sales opportunities.
This datasource slice collects data from Outreach and writes it to S3 and Redshift.
## Steps to add this slice to your installation
The steps to launch your slice are:
- Generate Outreach keys
- Specify the slice config
- Add the Outreach slice
### 1. Generate Outreach keys

#### Setup requirements

Before getting started, please make sure you have the following:
- Access to an active Outreach account
#### Prerequisites
Please refer to the documentation at https://api.outreach.io/api/v2/docs#api-reference to understand and obtain the tokens and URLs the slice needs to pull data from Outreach. You will need to obtain the `clientId`, `clientSecret`, and `refreshToken`.
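The refresh token is exchanged for short-lived access tokens against Outreach's OAuth token endpoint. A minimal sketch of that exchange is below; the endpoint path and payload shape should be verified against the Outreach API reference linked above, and this code is illustrative only, not part of the slice:

```python
# Sketch: exchanging an Outreach refresh token for an access token.
import json
import urllib.request


def build_token_request(client_id, client_secret, refresh_token):
    """Build the payload for an OAuth2 refresh_token grant."""
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }


def refresh_access_token(client_id, client_secret, refresh_token):
    """POST the refresh grant to Outreach and return the parsed response,
    which includes a new access_token (and usually a new refresh_token)."""
    payload = json.dumps(
        build_token_request(client_id, client_secret, refresh_token)
    ).encode("utf-8")
    req = urllib.request.Request(
        "https://api.outreach.io/oauth/token",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```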
### 2. Specify the slice config
To get a template for the Outreach slice configuration, save the output of the `describe --input-parameters` command.
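The exact invocation depends on your CLI version; a sketch, assuming the CLI binary is named `datacoral` (check `--help` in your installation for the exact flags):

```shell
# Hypothetical invocation -- verify flags against your CLI's help output.
datacoral collect describe --slice-type outreach --input-parameters > outreach_parameters_file.json
```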
Necessary input parameters:

- `clientId` - The Consumer Key from the connected app definition.
- `clientSecret` - The Consumer Secret from the connected app definition.
- `refreshToken` - The refresh token the client application already received.
Optional input parameters:

For the datasource:

- `schedule` - in cron format.

For a loadunit:

- `schedule` - in cron format. You can define a different schedule for the loadunit than the global schedule set above.
- `skipErrors` - boolean parameter; if set to `true`, all errors will be silently skipped.

Example templates:
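The authoritative template comes from the `describe --input-parameters` output above; the following is only a sketch of what a filled-in parameters file might look like, with placeholder values and an assumed flat key layout:

```json
{
  "clientId": "<your-consumer-key>",
  "clientSecret": "<your-consumer-secret>",
  "refreshToken": "<your-refresh-token>",
  "schedule": "0 */4 * * *"
}
```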
### 3. Add the Slice
- `slice-name` - Name of your slice. A schema with your slice-name is automatically created in your warehouse.
- `params-file` - File path to your input parameters file, e.g. `outreach_parameters_file.json`.
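Putting the two arguments together, an add command might look roughly like this (a hypothetical invocation; consult your Datacoral CLI help for the exact subcommand and flag names):

```shell
# Hypothetical invocation -- verify flags against your CLI's help output.
datacoral collect add --slice-type outreach \
    --slice-name <slice-name> \
    --parameters-file outreach_parameters_file.json
```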
## Supported load units

- accounts
- calls
- mailings
- personas
- phone_numbers
- prospects
- sequence_states
- sequence_steps
- sequences
- stages
- tasks
- user_duties
- users
## Slice output
Output of this slice is stored in S3 and Redshift.
### AWS S3

Data stored in AWS S3 is partitioned by date and time under the following bucket prefix:
`s3://datacoral-data-bucket/<sliceName>`
### AWS Redshift

The schema name will be the same as your slice-name, and the tables produced by the slice are created in that schema.
## Questions? Interested?
If you have questions or feedback, feel free to reach out at hello@datacoral.co or request a demo.