Principal Data Engineer

Overview

Join to apply for the Principal Data Engineer role at Danaher Life Sciences.

Here’s what you’ll be doing and what we’re looking for in a successful candidate.

Responsibilities

  • Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark (PySpark) to support analytics and other data-driven initiatives.
  • Support the elaboration of requirements, formulation of the technical implementation plan, and backlog refinement. Provide a technical perspective on product enhancements and new requirements.
  • Optimize Spark-based workflows for performance, scalability, and data integrity, ensuring alignment with GxP and other regulatory standards.
  • Research and promote new technologies, design patterns, approaches, tools, and methodologies that could optimize and accelerate development.
  • Apply strong software engineering practices including version control (Git), CI/CD pipelines, unit testing, and code reviews to ensure maintainable and production-grade code.
  • Deliver data-rich software and contribute to the architectural design, technical approach, and implementation mechanisms adopted by the team. You will be involved in data-centric products from ingestion to egress via pipelines, data warehousing, cataloguing, and integrations.

What success looks like

  • Delivered reliable, scalable data pipelines that process clinical and pharmaceutical data efficiently, reducing data latency and improving time-to-insight for research and regulatory teams.
  • Enabled regulatory compliance by implementing secure, auditable, and GxP-aligned data workflows with robust access controls.
  • Improved system performance and cost-efficiency by optimizing Spark jobs and Databricks clusters, leading to measurable reductions in compute costs and processing times.
  • Fostered cross-functional collaboration by building reusable, testable, well-documented Databricks notebooks and APIs that empower data scientists, analysts, and other stakeholders to build out the product suite.
  • Contributed to a culture of engineering excellence through code reviews, CI/CD automation, and mentoring, resulting in higher code quality, faster deployments, and increased team productivity.

Preferred qualifications

  • Deployment of Databricks functionality in a SaaS environment (via infrastructure as code), with experience in Spark, Python, and a breadth of database technologies.
  • Experience with event-driven and distributed systems, using messaging systems such as Kafka and AWS SNS/SQS and languages such as Java and Python.
  • Data-centric architectures, including experience with data governance and management practices and Data Lakehouse / Data Intelligence platforms. Experience with AI software delivery and AI data preparation would be an advantage.

About IDBS

IDBS helps BioPharma organizations unlock the potential of AI/ML to improve the lives of patients. As a trusted long-term partner to 80% of the top 20 global BioPharma companies, IDBS delivers powerful cloud software and services designed for the BioPharma sector. IDBS, a Danaher company, leverages 35 years of scientific informatics expertise to help organizations design, execute and orchestrate processes, manage, contextualize and structure data and gain valuable insights throughout the product lifecycle, from R&D through manufacturing. Known for its IDBS E-WorkBook software, IDBS has extended its solutions to the IDBS Polar and PIMS cloud platforms.

This position is eligible for a flexible work arrangement in which you can work part-time at the Company location identified above and part-time remotely from your home.

Join our winning team today. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life.

For more information, visit www.danaher.com.

Location: Woking, England, United Kingdom
Salary: £125,000 - £150,000
Job Type: Full-Time
Category: IT & Technology