Fountain is a hiring automation platform for franchise operators and on-demand distributed workforces. It offers custom applicant workflows, a hiring dashboard, screening functionality, automated communications, a scheduling toolkit, video interviews, and more.
The Datacoral Fountain slice collects data from a Fountain account and enables data flow of jobs, locations, and applicants into a data warehouse, such as Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate Fountain API keys
- Specify the slice config
- Add the Fountain slice
1. Generate Fountain API keys
Before getting started, please make sure you have the following:
- Access to an active Fountain account
The Fountain slice requires an API key to collect data. An API key can be obtained from Fountain through the following steps:
- Log in to your Fountain account and go to the dashboard
- In the top-right corner, click your account name → "Company Settings"
- On the API page, click the "Show API Key" button to view your keys.
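Once you have a key, you can sanity-check it with a direct call to the Fountain API. A minimal sketch in Python; the base URL, the `/applicants` endpoint, and the `X-ACCESS-TOKEN` header are assumptions based on Fountain's public API conventions, not something the slice requires you to do:

```python
from urllib.request import Request
from urllib.parse import urlencode

FOUNTAIN_API = "https://api.fountain.com/v2"  # assumed base URL
API_KEY = "your-fountain-api-key"             # the key from Company Settings

# Build (but do not send) an authenticated request to list applicants.
query = urlencode({"per_page": 50})
req = Request(f"{FOUNTAIN_API}/applicants?{query}",
              headers={"X-ACCESS-TOKEN": API_KEY})  # assumed auth header name
print(req.full_url)
```

Sending the prepared request with `urllib.request.urlopen(req)` should return applicant data if the key is valid.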
2. Specify the slice config
To get a template for the Fountain slice configuration, save the output of the
describe --input-parameters command to a file. Then edit the
fountain_parameters_file to add the api_key generated from Fountain.
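A hypothetical example of what the edited fountain_parameters_file.json might contain; the exact field names come from the template output of the describe command, with api_key being the value named above:

```json
{
  "api_key": "<api-key-from-fountain-company-settings>"
}
```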
3. Add the Slice
- slice-name: Name of your slice. A schema with your slice-name is automatically created in your warehouse.
- params-file: File path to your input parameters file, e.g. fountain_parameters_file.json.
Supported load units
funnels: this loadunit fetches data for funnels
funnels_fields: this loadunit is used to store information about particular funnel fields
stages: this loadunit is used to store information about particular funnel stages. If the stages deploy param is specified, only the specified subset of stages data will be stored; otherwise, all stages data is stored.
stages_applicants: this loadunit does not store any data; it fetches the list of applicants in a specific stage and fans out the next loadunit with the list of applicant ids. This loadunit filters the API endpoint by transition date, and the datasource runs periodically. For example, when run once per day it requests
last_transitioned_at[gt]=2017-04-14T00:00:00&last_transitioned_at[lt]=2017-04-15T00:00:00 (where 2017-04-15 is the current date) to get information about recent changes.
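The daily transition-date window above can be sketched in Python. This is purely illustrative; the slice computes these bounds internally, and `transition_window_params` is a hypothetical helper name:

```python
from datetime import date, datetime, time, timedelta
from urllib.parse import urlencode

def transition_window_params(current_date: date) -> str:
    """Build the last_transitioned_at filter for a once-per-day run:
    everything that transitioned between yesterday and today."""
    upper = datetime.combine(current_date, time.min)   # today at 00:00:00
    lower = upper - timedelta(days=1)                  # yesterday at 00:00:00
    fmt = "%Y-%m-%dT%H:%M:%S"
    return urlencode({
        "last_transitioned_at[gt]": lower.strftime(fmt),
        "last_transitioned_at[lt]": upper.strftime(fmt),
    })

# For current date 2017-04-15 this yields the query string shown above
# (with [ ] and : percent-encoded by urlencode).
print(transition_window_params(date(2017, 4, 15)))
```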
applicants: this loadunit is used to store information about particular funnel stage applicants.
applicant_background_checks: this loadunit is used to store background_checks information about a particular applicant.
applicant_booked_slots: this loadunit is used to store information about a particular applicant's booked slots.
applicant_document_signatures: this loadunit is used to store document_signatures information about a particular applicant.
applicant_labels: this loadunit is used to store labels information about a particular applicant.
applicant_score_cards_results: this loadunit is used to store score_cards_results information about a particular applicant.
applicant_score_cards_results_answers: this loadunit is used to store answers information about a particular applicant score card.
applicant_transitions: this loadunit is used to store information about a particular applicant's transitions.
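The fan-out relationship between stages_applicants and the per-applicant loadunits can be sketched as follows. This is a simplified model, not the slice's actual internals; `fan_out` and `run_loadunit` are hypothetical names, and the loadunit names are taken from the list above:

```python
from typing import Callable, Iterable

# Per-applicant loadunits fanned out for each applicant id.
PER_APPLICANT_LOADUNITS = [
    "applicant_background_checks",
    "applicant_booked_slots",
    "applicant_document_signatures",
    "applicant_labels",
    "applicant_score_cards_results",
    "applicant_transitions",
]

def fan_out(applicant_ids: Iterable[str],
            run_loadunit: Callable[[str, str], None]) -> int:
    """stages_applicants stores nothing itself: it only produces the list
    of applicant ids and triggers each per-applicant loadunit per id."""
    runs = 0
    for applicant_id in applicant_ids:
        for loadunit in PER_APPLICANT_LOADUNITS:
            run_loadunit(loadunit, applicant_id)
            runs += 1
    return runs

# Example: record which (loadunit, applicant) pairs would run.
scheduled = []
fan_out(["a1", "a2"], lambda lu, aid: scheduled.append((lu, aid)))
print(len(scheduled))  # 12 runs: 2 applicants x 6 loadunits
```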
Output of this slice is stored in S3 and the destination warehouse.
Data stored in AWS S3 is partitioned by date and time in the following bucket
Destination Warehouse: the schema name will be the same as the slice-name. Tables produced by the slice are: