Greenhouse Connector Overview
Greenhouse is an applicant tracking system and recruiting software designed to help companies hire well and improve the process for everyone involved.
The Datacoral Greenhouse connector collects data from the Greenhouse Harvest API and enables data flow into a data warehouse, such as Redshift or Snowflake.
Features & Capabilities
- Backfill: Full historical sync of your data
- Data Extraction Modes: snapshot, incremental with pagination
- Data Load Modes: replace, append and merge
- Tables and Columns Selection: Choose which tables and columns to sync
- Customizations: Update the configurations easily using the UI
- Scheduling: Highly flexible scheduling system
- Capture Deletes: Set up webhooks to capture deleted records in Greenhouse objects
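To illustrate how the three load modes differ, the sketch below shows what a "merge" load might look like: incoming rows are upserted into existing warehouse rows, matched on Greenhouse's numeric `id` field. The function name and field choices are illustrative assumptions, not Datacoral's actual implementation (which runs inside the warehouse).

```python
# Hedged sketch of a "merge" load mode: upsert incoming records into
# existing rows, keyed by Greenhouse's numeric `id`. Illustrative only.

def merge_records(existing, incoming, key="id"):
    """Upsert `incoming` rows into `existing`, matching on `key`."""
    by_key = {row[key]: row for row in existing}
    for row in incoming:
        by_key[row[key]] = row  # matches are replaced, new rows are inserted
    return list(by_key.values())

existing = [{"id": 1, "status": "active"}, {"id": 2, "status": "rejected"}]
incoming = [{"id": 2, "status": "hired"}, {"id": 3, "status": "active"}]
merged = merge_records(existing, incoming)
```

By contrast, a "replace" load would discard `existing` entirely, and an "append" load would add the incoming rows without matching on a key.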
Supported load units
The Greenhouse connector automatically collects the following load units from the Greenhouse Harvest API and makes them available in your warehouse for analysis.
Loadunit | Endpoint | Description |
---|---|---|
application | /applications | Captures all the attributes for applications |
approvals | /jobs/{id}/approval_flows | Captures all the attributes for approvals |
candidates | /candidates | Captures all the first-level attributes for candidates |
demographic_answer_options | /demographics/answer_options | Captures all the attributes for demographic answer options |
demographic_answers | /demographics/answers | Captures all the attributes for demographic answers |
demographic_question_sets | /demographics/question_sets | Captures all the attributes for demographic question sets |
demographic_questions | /demographics/questions | Captures all the attributes for demographic questions |
departments | /departments | Captures all the attributes for departments |
eeoc | /eeoc | Captures all the attributes for EEOC data |
job_openings | /jobs/{job_id}/openings | Captures all the attributes for job openings |
job_posts | /job_posts | Captures all the attributes for job posts |
job_stages | /job_stages | Captures all the attributes for job stages |
jobs | /jobs | Captures all the first-level attributes for jobs |
offers | /offers | Captures all the attributes for offers |
offices | /offices | Captures all the attributes for offices |
rejection_reasons | /rejection_reasons | Captures all the attributes for rejection reasons |
scheduled_interviews | /scheduled_interviews | Captures all the attributes for scheduled interviews |
scorecards | /scorecards | Captures all the attributes for scorecards |
sources | /sources | Captures all the attributes for sources |
tags | /tags/candidate | Captures all the attributes for candidate tags |
user_permissions | /users/{id}/permissions/jobs | Captures all the attributes for user permissions |
user_roles | /user_roles | Captures all the attributes for user roles |
users | /users | Captures all the attributes for users |
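The Harvest API returns these endpoints in pages and advertises the next page via an HTTP `Link` header with `rel="next"`, which is how the connector's incremental-with-pagination extraction can walk a full result set. The sketch below parses such a header; it handles the header string only and does not make network calls.

```python
import re

def next_page_url(link_header):
    """Extract the rel="next" URL from a Harvest API Link header, if any."""
    if not link_header:
        return None
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None

header = ('<https://harvest.greenhouse.io/v1/candidates?page=2&per_page=100>; '
          'rel="next", <https://harvest.greenhouse.io/v1/candidates?page=9>; rel="last"')
next_url = next_page_url(header)
```

A paginated extraction loop would keep requesting `next_url` until `next_page_url` returns `None` (the last page carries no `rel="next"` link).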
Connector output
The output of this connector is stored in AWS S3 and in the destination warehouse (for example, Redshift).
AWS S3
Data stored in AWS S3 is partitioned by date and time:
s3://customer_installation.datacoral/<connector-name>
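The exact partition layout is not spelled out above, so the sketch below assumes a Hive-style date/time scheme (`y=/m=/d=/h=`) under the prefix shown; the bucket, connector, and load-unit names are taken from this page, but the actual Datacoral partition scheme may differ.

```python
from datetime import datetime, timezone

def partition_prefix(bucket, connector, loadunit, ts):
    """Build a date/time-partitioned S3 prefix (assumed layout, for
    illustration only; the real partition scheme may differ)."""
    return (f"s3://{bucket}/{connector}/{loadunit}/"
            f"y={ts.year:04d}/m={ts.month:02d}/d={ts.day:02d}/h={ts.hour:02d}/")

ts = datetime(2023, 5, 17, 9, 30, tzinfo=timezone.utc)
prefix = partition_prefix("customer_installation.datacoral",
                          "greenhouse", "candidates", ts)
```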
Destination warehouse: The schema name is the same as the connector name, and the connector produces one table per load unit listed above.
Next Steps
- Create a Greenhouse Connector through UI or CLI
- Schedule a Demo
Additional Information
Got a question?
Please contact Datacoral's Support Team; we'd be more than happy to answer any of your questions.