Fountain Collect Slice
Overview
Fountain is a hiring automation platform for franchise operators and on-demand distributed workforces. It offers custom applicant workflows, a hiring dashboard, screening functionality, automated communications, a scheduling toolkit, video interviews, and more.
The Datacoral Fountain slice collects data from a Fountain account and enables data flow of jobs, locations, and applicants into a data warehouse, such as Redshift.
Steps to add this slice to your installation
The steps to launch your slice are:
- Generate Fountain API keys
- Specify the slice config
- Add the Fountain slice
1. Generate Fountain API keys
Setup requirements
Before getting started, please make sure you have the following:
- Access to an active Fountain account
Setup instructions
The Fountain slice requires an API key to collect data. An API key can be obtained from Fountain through the following steps:
- Log in to your Fountain account and go to the dashboard
- In the top-right corner, click your account name → "Company Settings"
- On the API page, click the Show API Key button to view your keys.
2. Specify the slice config
To get a template for the Fountain slice configuration, save the output of the describe --input-parameters command as follows, then modify the fountain_parameters_file file to add the api_key generated from Fountain.
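As a rough sketch, assuming the Datacoral CLI's usual collect subcommand shape (the exact subcommand and flag names may differ in your installation; only describe --input-parameters is taken from the text above), the template can be generated and edited like this:

```
# Save the input-parameter template for the Fountain slice type.
# The subcommand and flag names here are assumptions; only
# "describe --input-parameters" is taken from the step above.
datacoral collect describe --slice-type fountain \
  --input-parameters > fountain_parameters_file.json

# After editing, fountain_parameters_file.json would carry your key,
# for example (the api_key field name comes from the step above):
#   {
#     "api_key": "<fountain-api-key>"
#   }
```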
3. Add the Slice
- slice-name: Name of your slice. A schema with your slice-name is automatically created in your warehouse.
- params-file: File path to your input parameters file. Ex. fountain_parameters_file.json
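A minimal sketch of this step, assuming the CLI exposes an add command that takes the two parameters above (the command and flag names are assumptions, not taken from this page):

```
# Add the Fountain slice using the parameters described above.
# Command and flag names are assumptions based on the parameter list.
datacoral collect add --slice-type fountain \
  --slice-name fountain \
  --parameters-file fountain_parameters_file.json
```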
Supported load units
- funnels: this load unit fetches data for funnels.
- funnels_fields: this load unit stores information about a particular funnel's fields.
- stages: this load unit stores information about a particular funnel's stages. If the stages deploy param is specified, only the specified subset of stages data is stored; otherwise, all stages data is stored.
- stages_applicants: this load unit does not store any data; it fetches the list of a specific stage's applicants and fans out the next load unit with the list of applicant ids. It uses the API endpoint's filtering by transition date, and the datasource runs periodically. For example, a once-per-day run requests last_transitioned_at[gt]=2017-04-14T00:00:00&last_transitioned_at[lt]=2017-04-15T00:00:00 (where 2017-04-15 is the current date) to pick up recent changes; a request sketch follows this list.
- applicants: this load unit stores information about a particular funnel stage's applicants.
- applicant_background_checks: this load unit stores background_checks information about a particular applicant.
- applicant_booked_slots: this load unit stores information about a particular applicant's booked slots.
- applicant_document_signatures: this load unit stores document_signatures information about a particular applicant.
- applicant_labels: this load unit stores labels information about a particular applicant.
- applicant_score_cards_results: this load unit stores score_cards_results information about a particular applicant.
- applicant_score_cards_results_answers: this load unit stores answers information about a particular applicant score card.
- applicant_transitions: this load unit stores information about a particular applicant's transitions.
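To make the stages_applicants windowing concrete, here is a hedged sketch of the request a once-per-day run would issue on 2017-04-15. Only the last_transitioned_at[gt]/[lt] query parameters come from the description above; the endpoint URL, stage_id parameter, and X-ACCESS-TOKEN header are assumptions about the Fountain API rather than something stated on this page.

```
# Hypothetical request for a daily stages_applicants run on 2017-04-15.
# Only the last_transitioned_at filters are documented above; the URL,
# stage_id parameter, and auth header are assumptions.
curl -G "https://api.fountain.com/v2/applicants" \
  -H "X-ACCESS-TOKEN: <fountain-api-key>" \
  --data-urlencode "stage_id=<stage-id>" \
  --data-urlencode "last_transitioned_at[gt]=2017-04-14T00:00:00" \
  --data-urlencode "last_transitioned_at[lt]=2017-04-15T00:00:00"
```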
Slice output
Output of this slice is stored in S3 and the destination warehouse.
AWS S3
Data stored in AWS S3 is partitioned by date and time in the following bucket:
s3://customer_installation.datacoral/<sliceName>
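For illustration only, a date- and time-partitioned object key under that prefix might look like the sketch below; the partition key names and ordering are hypothetical, since this page only states that data is partitioned by date and time.

```
# Hypothetical key layout; only the bucket/<sliceName> prefix and the
# date/time partitioning are taken from this page.
s3://customer_installation.datacoral/fountain/applicants/y=2017/m=04/d=15/h=00/
```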
Destination Warehouse: The schema name will be the same as the slice-name. Tables produced by the slice are:
Questions? Interested?
If you have questions or feedback, feel free to reach out at hello@datacoral.co or Request a demo