Mailgun is an email automation service provided by Rackspace. It offers a complete cloud-based email service for sending, receiving and tracking email sent through your websites and applications.
Table of contents
- Features and capabilities
- Supported load units
- Connector output
- Next Step
- Additional Information
Features and Capabilities
- Backfill: Full historical sync of your data
- Data Extraction Modes: snapshot, incremental with pagination
- Table and column selection: Ability to select the tables to sync
- Data layout: Change the data type of your columns
- Customizations: Update configurations easily through the UI
- Scheduling: Highly flexible scheduling system
Read more about our Features and Capabilities in the section below.
Supported load units
The Mailgun connector automatically collects the following seven loadunits from the Mailgun API and makes them available in your warehouse for analysis.
| Loadunit | Default mode | API Endpoint |
| --- | --- | --- |
domain_stats loadunits:
- domain_stats_hour - statistics are fetched at hour resolution; hourly statistics are preserved by Mailgun for 28 days
- domain_stats_day - statistics are fetched at day resolution; daily statistics are preserved by Mailgun for 365 days (1 year)
- domain_stats_month - statistics are fetched at month resolution; monthly statistics are preserved by Mailgun for the entire lifespan of a domain
- Note the valid timelabel ranges for the first two loadunits: domain_stats_hour (up to 28 days back from the current date) and domain_stats_day (up to 1 year back from the current date). For invalid timelabels in these two cases, a validation exception is surfaced in the UI.
- The resolution of stats data in domain_stats loadunits is set to hourly, and only daily syncs are supported (an example request is shown below)
- Pagination is supported for
- The supported data format is JSON
- The API rate limit for the connector is 300 requests/minute
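For illustration only, the request below shows the general shape of a Mailgun stats API call at hour resolution; the domain, event types, and date range are placeholders, and the requests the connector actually issues may differ.

```
# Hedged sketch: fetch hourly "accepted" and "delivered" stats for a domain.
# Mailgun uses HTTP basic auth with user "api" and your API key as the password.
# The domain and dates below are placeholders; hourly stats are only retained
# for 28 days, so a real request must use a recent window.
curl -s --user "api:$MAILGUN_API_KEY" -G \
  "https://api.mailgun.net/v3/example.com/stats/total" \
  --data-urlencode "event=accepted" \
  --data-urlencode "event=delivered" \
  --data-urlencode "start=Mon, 01 Mar 2021 00:00:00 +0000" \
  --data-urlencode "end=Mon, 08 Mar 2021 00:00:00 +0000" \
  --data-urlencode "resolution=hour"   # hour, day, or month, matching the three domain_stats loadunits
```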
Connector output
Output of this connector is stored in S3 and in the destination warehouse.
Data stored in AWS S3 is partitioned by date and time.
Warehouse: The schema name will be the same as the connector name. Tables produced by the connector are:
Got a question?
Please contact Datacoral's Support Team, we'd be more than happy to answer any of your questions.
Features and Capabilities
Datacoral is a cloud-based data pipeline platform. It provides an infrastructure for ingesting and integrating data from a variety of data sources as the data gets generated by different operational systems. Various transformations can be defined that will combine or aggregate the data from the different sources and publish it to different target systems that are dedicated to Analytics, Machine Learning, or Data Warehousing.
- Data Extract/Ingest
- Data Loading Modes
- Data Extract and Load Combinations
- Data Transformation
- Data Quality Checks
Datacoral’s connectors extract/ingest data in multiple ways: traditional polling (extracting data on a predefined schedule) or via webhooks (ingesting data on the fly as it is pushed to our connectors). For the Mailgun connector, the extraction configuration features are as follows.
- Selection: set dynamic rules for inclusion or exclusion at schema/table/column level
- Extraction modes:
- Extract full snapshots from source (snapshot mode)
- Extract only updated records from the source (incremental mode). There are two types of incremental mode: incrementalappend and incrementalupdate.
- Schedule extraction: Set and update the extraction frequency of each table. Historical sync is also supported.
- Data visibility: Complete visibility of source metadata and the data layout of all tables.
Data Loading Modes
- Replace - “wipe-and-load” operation, DELETE existing records and INSERT new ones
- Merge - updates are merged into the destination table. DELETE operations result in a SOFT DELETE (records are marked as deleted)
- Append - data is appended to the destination warehouse table, so that there is a full audit of all the changes
- Configure warehouse tables as regular or partitioned tables based on their size
- Easily add new tables at the destination and update existing ones through the UI/CLI
- Datacoral also supports movement of data from one warehouse to another
Data Extract and Load Combinations
Supported data extract/load combinations
- All loadunits (except domain_events and domain_stats) support snapshot mode by default.
- domain_events: supports incremental update by default on the timestamp column, which is the creation time of an event.
- domain_stats: supports incremental update by default through the request parameters start & end (see the example below).
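As an illustration of what incremental extraction over a time window can look like against the Mailgun Events API (not necessarily how the connector implements it internally), the sketch below pulls events between two placeholder epoch timestamps and notes how pagination is followed.

```
# Hedged sketch: list events created in a time window (placeholder epoch seconds
# for 2021-03-01 to 2021-03-08 UTC). A subsequent incremental run would move the
# window forward based on the last extracted event timestamp.
curl -s --user "api:$MAILGUN_API_KEY" -G \
  "https://api.mailgun.net/v3/example.com/events" \
  --data-urlencode "begin=1614556800" \
  --data-urlencode "end=1615161600" \
  --data-urlencode "ascending=yes" \
  --data-urlencode "limit=300"
# The JSON response contains an "items" array plus a "paging.next" URL;
# keep requesting "paging.next" until "items" comes back empty to page
# through the whole window.
```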
Data Transformation
- Transformation methods - create declarative transformations through SQL and highly customised transformations through Datacoral’s batch compute feature
- Materialised Views - stored transformations that can auto-detect data changes
- Automatically triggers multi-level (dependent) transformations upon data changes
- Data visibility - see the table lineage graph (directed acyclic graph) of data dependencies
- After the data has gone through the relevant transformations, the publishing phase will push the data to a target of your choice.
- Check our complete list of Supported Publishers
Data Quality Checks
- Quality checks: Built-in checks ensure the connectors copy data without missing or duplicating any records.
- Full timestamp visibility for every step of the data pipeline and for every batch of processed data, enabling freshness audits
- Datacoral detects schema changes and notifies you
- Datacoral can detect and collect deleted records in the connector
UI Installation Overview
- Step 1: Select Mailgun connector
- Step 2: Configure connection parameters
- Step 3: Configure source information
- Step 4: Configure loadunits information
- Step 5: Edit data layouts
- Step 6: Configure warehouse
- Step 7: Confirm the configurations
Before adding the connector, please get the API key from your Mailgun control panel.
Step 1: Select Mailgun connector
- From the main menu, click on Add connector
- In the drop down list, find and select Mailgun Ingest connector
Step 2: Configure connection parameters
- Input the connector name and warehouse and click Next
Please note that the connector name, once set, cannot be edited later.
- Fill in the API key to connect to your Mailgun account, then click on Check Connection and Next
Step 3: Configure source information
- Interval: Set the frequency of data extraction
- Sync Historical Data: Loads the entire data history as a one-time activity.
- Click on Fetch Source Metadata to see all the load units and click Next
Step 4: Configure load units information
The list of loadunits with extraction mode and schedule is displayed.
Extraction modes are pre-configured since these are static loadunits. Click on Edit to update the configuration for each loadunit.
- Extraction mode: Can be snapshot or incrementalpaginate
- Interval: The extraction frequency, available in discrete intervals starting from 5 minutes
- Timestampcol: Auto-detected for the incrementalupdate extraction mode
Step 5: Edit data layouts
Update data type as needed and click on Next to add the connector.
Other than increasing the size of a column, no updates to the data layout are allowed.
Step 6: Configure warehouse
- Primary Key: Mandatory for the incrementalappend extraction mode.
- Copy options: Add the copy options (For more information visit Redshift documentation and Snowflake documentation )
Please click on Next at the top right.
Please note that the configuration should not be changed, as these are static loadunits.
Step 7: Confirm the configuration
You will see a pop-up dialog box; click Next to confirm addition of the connector. The connector will be added once the tables are updated in the warehouse.
You have successfully added the connector once you land on the page below.
CLI Installation Overview
- Step 1: Download an existing configuration
- Step 2: Update the parameters file
- Step 3: Add the connector
- Next steps
Prerequisite: Use the UI setup guide to add a Mailgun connector. The CLI guide is for downloading and adding a connector using an existing configuration.
Step 1 : Download an existing configuration
To download an existing connector configuration, run the command below:
datacoral connector download --connector-name <connector-name> --download-directory <download-dir>
<connector-name> - Name of the existing connector whose configuration needs to be downloaded
<download-dir> - The input parameters file is downloaded into this folder
You can also copy the command directly from the UI by clicking on the download icon against the existing connector
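For example, with a hypothetical existing connector named mailgun_prod, the download could look like this:

```
# Placeholder names: replace mailgun_prod and ./mailgun_config with your own values
datacoral connector download \
  --connector-name mailgun_prod \
  --download-directory ./mailgun_config
```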
Step 2: Update the Parameters File
Within the download directory, update the parameters (JSON) file: set the "slicename" key to the name of the new connector.
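One way to make that edit from the command line, assuming jq is available and that the downloaded file is named params.json (editing the JSON by hand works just as well), is sketched below with placeholder names:

```
# Copy the downloaded parameters into a fresh directory and point "slicename"
# at the new connector. File and connector names below are placeholders.
mkdir -p ./mailgun_staging_config
jq '.slicename = "mailgun_staging"' ./mailgun_config/params.json \
  > ./mailgun_staging_config/params.json
```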
Step 3: Add the connector
Now create a new connector from the updated parameters file:
datacoral connector add --connector-name <connector-name> --config-directory <config-dir>
<connector-name> - Name of your new connector, as set in Step 2.
A schema with the connector name will be automatically created in your warehouse.
<config-dir> - The input parameters file should be in this folder.
Please note that there should be only one input parameters file (JSON) in the config directory.
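Continuing the placeholder names from the previous steps, adding the new connector would look like this:

```
# The config directory must contain exactly one parameters JSON file
datacoral connector add \
  --connector-name mailgun_staging \
  --config-directory ./mailgun_staging_config
```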
Please click here to view all the connector commands