Your new role
You will be responsible for developing and maintaining data pipelines from our various data sources to our cloud data warehouse, and for modeling that data in a variety of formats to support business intelligence and data science objectives.
This role provides the opportunity to perform end-to-end development of data solutions using a modern
data engineering technology stack, incorporating best practices from software development including
automated testing, continuous integration, and continuous deployment.
Your responsibilities
- Define, design, and implement data pipelines from ingestion to consumption using a varied toolset which might include Fivetran, AWS Glue, Python, Scala, and dbt
- Architect data models optimized for business intelligence consumption
- Identify and expose data pipeline risks, issues, and dependencies
- Build and follow modern data security and data governance principles
- Evaluate and identify new sources of data located throughout the organization
- Collaborate with and support analytics and data science teams to streamline how analytical data is organized and delivered
- Integrate disparate datasets into common data models
- Troubleshoot poorly performing data workflows and queries inside and outside the team
- Detect data quality issues, identify their root causes, implement fixes, and design data audits to capture issues
- Support business decisions with ad hoc analyses as needed
What you'll need to succeed
- Strong technical background
- Strong problem-solving skills
- Ability to multitask
- Bachelor’s degree in Computer Science or related technical field
- Strong foundational knowledge of SQL
- Knowledge of dimensional data modeling in analytic environments (Kimball methodology; star, snowflake, and galaxy schemas) a plus
- Familiarity with cloud data warehouses like Snowflake a plus
- Familiarity with Amazon Web Services, especially its data migration services
- Familiarity with modern ELT tools such as Fivetran or Stitch, and with dbt
- Experience with Agile development practices
If you are interested, apply and I will contact you as soon as possible. #1088610