Analytics Engineer II
We’re Spark, a mission-driven company helping independent Medicare brokers build the insurance business of their dreams. The vast majority of beneficiaries select benefits with the help of an independent broker, but the technology and support available to brokers are woefully antiquated. We provide workflows and services to help brokers achieve transformative growth.
Job Description
Summary
-------
Spark is seeking an Analytics Engineer II to join our data team. Historically, the Medicare distribution industry has lacked transparency, but Spark is on a mission to change that. You can be a key part of that change by helping Spark and our customers wrangle, structure, and draw valuable insights from a variety of data sources.
In this role, you will be responsible for improving core datasets and building the pipelines that power our business. This work will enable Spark to create data-driven products for our customers and gain actionable insights into our business. As a key member of the Spark Data team, you'll collaborate closely with product, engineering, carrier relations, and other teams to deliver high-quality datasets.
What you’ll do
--------------
- Design, optimize, and maintain scalable ELT data pipelines using GCP tools, focusing on BigQuery and Dataform/dbt.
- Automate and standardize data processes to improve efficiency, reduce manual effort, and ensure consistent data flow.
- Partner with cross-functional teams to understand business needs and translate them into technical data solutions that support data-driven insights.
- Mentor and upskill junior engineers, fostering a collaborative and growth-oriented environment.
- Ensure data quality through rigorous testing, analysis, and continuous improvement of data processes.
- Own and deliver medium-scale data projects, managing end-to-end execution and stakeholder communication.
What we’re looking for
----------------------
- 4+ years of experience in data or analytics engineering.
- Advanced proficiency in SQL and experience with dbt/Dataform for building scalable pipelines.
- Some exposure to Python and data visualization tools (e.g., Metabase, Hex, Tableau, Power BI).
- Detail-oriented, with the ability to analyze complex data and draw conclusions even when faced with messy or incomplete data.
- Proven ability to automate data workflows using modern tools.
- Comfortable giving and receiving feedback on standardized processes to ensure quality and consistency.
- Strong communication skills for working cross-functionally with various teams.
Nice to Haves
-------------
- Additional Python experience for data manipulation or automation.
- Familiarity with managing transformation tools and cloud resources in AWS/GCP.
- Background in analytics or business intelligence to drive insights from data.
- Experience with data warehousing, performance tuning, and large datasets.
Compensation
------------