Senior Data & DevOps Engineer
Overview
The Senior Data and DevOps Engineer at DREST is responsible for the performance, scalability, and reliability of DREST’s data infrastructure and pipelines. This role ensures the efficient ingestion, transformation, and delivery of high-volume data, while also maintaining robust, cost-effective cloud operations. As a hands-on engineer, the Senior Data and DevOps Engineer is actively involved in building pipelines, defining infrastructure as code, and supporting critical systems. Reporting to the DevOps Lead Engineer, they work closely with data science, backend, and the platform team to enable high-quality analytics and drive technical excellence across data and infrastructure.

What you will be accountable for
- Design, build, and maintain robust data pipelines capable of handling tens of millions of events per day in both batch and real-time processing contexts.
- Manage and optimise AWS cloud infrastructure, ensuring high availability, performance, cost-efficiency, and security.
- Develop infrastructure-as-code using Terraform, supporting scalable and maintainable infrastructure deployments.
- Build and monitor data warehouse solutions (e.g. Redshift), ensuring data is accessible, clean, and well-modelled for analytics and product teams.
- Drive system performance and operational excellence by improving observability, uptime, and deployment processes across data and platform systems.

Main responsibilities
- Design, implement, and maintain scalable and reliable data pipelines
- Build, optimise, and monitor data warehousing solutions with Redshift and other columnar data stores
- Develop infrastructure-as-code using Terraform to provision and manage cloud infrastructure
- Own and manage AWS-based systems, ensuring cost-effective, secure, and high-performance operations
- Support and enhance the data and platform stack to ensure uptime, observability, and recoverability of key systems
- Collaborate with engineering and product teams to ensure data needs are met and infrastructure bottlenecks are identified early
- Support analytics and reporting workflows by making data accessible, clean, and well-modelled
- Implement system performance improvements and improve alerting, monitoring, and release processes

What you’ll bring
- 5+ years of experience in data engineering, including multiple end-to-end pipeline builds
- Solid experience with AWS cloud services, including S3, EC2, Lambda, RDS, Redshift, Glue, and Kinesis, as well as MongoDB and PostgreSQL
- Proven expertise in Terraform and infrastructure-as-code practices
- Strong SQL and data modelling skills, and experience with both SQL and NoSQL data stores
- Strong understanding of dbt (or equivalent) and Tableau
- Hands-on experience with Python for data processing and automation tasks
- A background working in environments with high-throughput data (millions of events per hour)
- Understanding of best practices around security, scalability, and maintainability in cloud-native systems
- Comfort working independently in a fast-paced, highly collaborative environment
- Great communication skills, with the ability to explain complex systems clearly to technical and non-technical stakeholders

Bonus experience
- Familiarity with Docker, Kubernetes, or other container orchestration platforms
- Experience in the gaming industry
- Experience with event-driven applications
- Exposure to CI/CD pipelines and automated testing for data infrastructure
- Experience with additional data warehouses (BigQuery, Snowflake, etc.)

Additional details
This role is ideal for a data engineer seeking a new challenge and eager to expand their expertise into the DevOps space. If this sounds like you, please apply.
- Location: United Kingdom
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Industry: Computer Games