Senior Data Engineer

Overview

Location: Fully remote (United Kingdom). Work pattern: This is a permanent role, working 40 hours per week, Monday to Friday. Reporting to: Head of Analytics.

ProblemShared has a newly formed Data Office, and we’re looking for a Senior Data Engineer to work closely with our Senior Analytics Engineer, ensuring our data is accessible, accurate and actionable. We believe in “data for good”: using insights to help neurodivergent people better understand themselves, enable practitioners to deliver more effective care, and guide teams in improving support. We are building a modern data platform (Fivetran → Databricks + dbt → Power BI) that integrates data from clinical, operational and corporate systems into a safe, accessible and reliable model, enabling real-time insight and continuous improvement.

We blend people, data and technology to support individuals on their mental health or neurodevelopmental journey, and we believe you can do the most meaningful work of your career here.

About Us

We are a practitioner-led, CQC-regulated, digital mind health provider, working with a community of expert practitioners to broaden access to the highest-quality care for people across the UK. By leveraging technology, we aim to provide scalable, personalised solutions that enhance mental wellbeing and bridge gaps in mental healthcare delivery.

We work in partnership with institutions such as the NHS, insurance companies and universities to deliver therapy, psychiatry, neurodevelopmental assessments and post-diagnostic care for adults, children and young people, primarily relating to autism and ADHD. Key to our mission is the innovative and effective use of data analytics and the resulting insights.

The Role

You own the “on-ramp” for all our operational data—building and maintaining ingestion pipelines from our AWS-hosted Postgres system, Zendesk, billing feeds, and more into Databricks. While your primary focus is ingestion, you’ll work hand-in-glove with the Analytics Engineer to understand downstream needs, help troubleshoot transformations, and step in to assist with Silver-layer tasks when peers are out. You’ll also implement authentication, anonymisation and any non-SQL transformations needed upstream of our analytic models.

What you’ll do

  • Design, deploy and monitor Fivetran (or equivalent) connectors for PostgreSQL, Zendesk, Semble, Bob, and other API and flat-file exports.
  • Develop Databricks jobs and notebooks to preprocess raw data: validation, pseudonymisation, format conversions (JSON/Parquet).
  • Implement and maintain AWS-integrated authentication and role-based access controls in Databricks. (NB. User authentication in ProblemShared uses Microsoft, but our core internal tech stack is on AWS)
  • Write scripts (Python or R) to handle edge-case formats and automate testing/alerting.
  • Collaborate daily with the Analytics Engineer—providing context on schema changes, edge cases and PII nuances.
  • Cross-train so you can assist with Silver-layer cleanup or Gold-layer hand-offs during periods of leave or spikes of work.
  • Partner closely with Software Engineers in our core system teams.
  • Continuously evaluate and bring in new ingestion or analytical tools (e.g. DuckDB, LLMs) where they add speed, resilience or adaptability.

What you will bring to the role

Must have

  • Proven experience with a variety of ingestion frameworks (e.g. Fivetran) and analytical databases / lakes (e.g. Redshift, Snowflake, Databricks).
  • Production experience working in AWS, with infrastructure as code and CI/CD tooling (Terraform, Azure DevOps or GitHub Actions), in an established data team.
  • Hands-on with Databricks on AWS: jobs, notebooks, IAM integration.
  • Strong Python and/or R skills, and comfort working with non-relational data formats.
  • Strong SQL fluency against Postgres and cloud stores.
  • Action orientation: we value thoughtful, reliable solutions built to deliver immediate value.

Nice to have

  • STEM degree (Computer Science, Engineering, Maths, Stats, Physics or equivalent)
  • Knowledge of PII controls: encryption, masking, pseudonymisation and role-based access.
  • Experience with JSON/Parquet/HL7 or event-driven tools (Airflow, Kafka).
  • AWS certification (e.g. Data Analytics – Specialty).
  • Basic Linux administration or Docker.

What makes us different

  • Fully remote, flexible: Work where you do your best thinking.
  • Neurodiversity-welcoming: We encourage applications from autistic, ADHD and other neurodivergent candidates.
  • Tool-agnostic, outcome-driven: We value mental agility and diverse tool experience over narrow specialism.
  • Inclusive by design: We focus on potential over pedigree, and adapt our process to support your needs. We also know that everyone has a life outside work, so we’re happy to discuss flexible working. We embrace difference and individuality and are proud to be an equal opportunity employer.

What else do we offer you

  • 30 days’ annual leave + public holidays, plus the option to buy and sell additional leave and extended leave options such as sabbatical leave
  • Private health insurance and pension scheme
  • Enhanced family-friendly policies
  • Flexible working with the option of free co-working
  • All-company and team in-person meet-ups
  • Access to a range of wellbeing activities
  • Access to development / training opportunities to support your career ambitions
  • One volunteering day a year

Seniority level

  • Mid-Senior level

Employment type

  • Full-time

Job function

  • Information Technology and Analyst

Industries

  • Mental Health Care, Hospitals and Health Care, and Health and Human Services