Data Engineer

We are a fast-growing organisation in the automotive technology space, dedicated to connecting people with the right vehicles through smarter use of data. Data engineering plays a crucial role in enabling this mission.

You’ll be joining a small but highly collaborative data team of four (two Data Scientists, one BI Developer, and you as the Data Engineer). Together, you’ll design and deliver impactful data solutions. Your main focus will be building and optimising data pipelines that handle everything from vehicle stock volumes to customer reviews and offer data, ensuring timely, robust, and efficient delivery. Great data is the foundation of great analysis, and as Data Engineer you’ll be central to the department’s success.

Our modern data stack includes Snowflake as the data warehouse, dbt for data transformation within a medallion architecture, and Apache Airflow for orchestration. The infrastructure is hosted on Microsoft Azure, and you’ll gain hands-on experience with all these technologies. Many datasets you work with will require web scraping, so prior experience in this area is highly valuable.
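Much of the ingestion work described above involves scraping and parsing listing pages. As a minimal sketch of that kind of task (the HTML structure, the `vehicle` class name, and the `data-price` attribute below are invented for illustration, not taken from any real site), a parse might look like this using only Python’s standard-library `html.parser`:

```python
# Minimal scraping-parse sketch using only the standard library.
# SAMPLE_HTML stands in for a fetched page; in practice the HTML would
# come from a library such as Requests, Scrapy, or Selenium.
from html.parser import HTMLParser

SAMPLE_HTML = """
<ul>
  <li class="vehicle" data-price="18995">Ford Fiesta</li>
  <li class="vehicle" data-price="23450">Volkswagen Golf</li>
</ul>
"""

class VehicleParser(HTMLParser):
    """Collects (name, price) pairs from <li class="vehicle"> elements."""

    def __init__(self):
        super().__init__()
        self.vehicles = []
        self._in_vehicle = False
        self._price = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "li" and attrs.get("class") == "vehicle":
            self._in_vehicle = True
            self._price = int(attrs.get("data-price", "0"))

    def handle_data(self, data):
        # Text node inside a matched <li>: record it and reset.
        if self._in_vehicle and data.strip():
            self.vehicles.append((data.strip(), self._price))
            self._in_vehicle = False

parser = VehicleParser()
parser.feed(SAMPLE_HTML)
print(parser.vehicles)
# → [('Ford Fiesta', 18995), ('Volkswagen Golf', 23450)]
```

In a production pipeline the fetching, parsing, and loading steps would typically be separated (and orchestrated via Airflow), with the libraries named below (Scrapy, Requests, Selenium) handling retrieval at scale.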

What You’ll Be Doing

  • Designing and refining data ingestion pipelines for vehicle stock, offers and pricing, images, and other assets.
  • Developing analytics-friendly data models in dbt, delivering clean, structured datasets for analysts and data scientists.
  • Implementing and maintaining CI/CD pipelines to ensure testing, reliability, and smooth deployments.
  • Participating in code reviews and establishing best practices for pipeline development.
  • Staying on top of emerging datasets in the automotive sector and recommending new sources.
  • Supporting and improving the Microsoft Azure infrastructure to enhance pipeline performance.
  • Producing clear and comprehensive documentation to accelerate colleagues’ understanding of available data.

What You’ll Need to Succeed

  • 2–4 years’ experience building robust data pipelines in a commercial setting (or through advanced personal projects).
  • Strong Python skills, including use of web scraping libraries (Scrapy, Requests, Selenium, etc.) and writing production-ready, testable code.
  • Advanced SQL knowledge, with experience in query optimisation and data modelling.
  • Solid grasp of software engineering principles (SOLID, DRY, design patterns) applied to data workflows.
  • Experience with version control (Git) and collaborative development.
  • Understanding of CI/CD concepts and experience with automated testing strategies.
  • Knowledge of data quality practices, including validation, monitoring, and testing frameworks.
  • Experience with a modern ELT framework such as dbt or SQLMesh.
  • Exposure to at least one major cloud platform (AWS, Azure, or GCP).
  • Must be based within a 1-hour commute of Liverpool city centre (essential, due to regular office collaboration).

Nice-to-Have Skills

  • Experience with batch and near real-time pipelines.
  • Familiarity with Infrastructure as Code (e.g., Terraform).
  • Practical experience with dbt and medallion architecture patterns.
  • Knowledge of Apache Airflow or equivalent orchestration tools.

Seniority level

  • Associate

Employment type

  • Full-time

Job function

  • Product Management

Industries

  • Climate Technology
  • Product Manufacturing
  • IT Services and IT Consulting