Junior AWS Data Engineer
Job description
Job Overview:
We are seeking an experienced AWS Data Engineer to join our team on a contract basis. As part of our data engineering team, you will work with a variety of AWS services to design, develop, and maintain scalable data pipelines. You’ll be responsible for creating robust ETL processes, implementing infrastructure as code, and ensuring that data is processed and delivered in a timely, reliable, and efficient manner.
Key Responsibilities:
Design, develop, and maintain robust ETL pipelines using Python and SQL.
Work with AWS services such as Lambda, S3, and RDS to build cloud-based data architecture.
Utilize Terraform to provision and manage cloud resources in AWS.
Leverage Databricks and PySpark APIs to process large datasets and optimize data pipelines.
Collaborate with data scientists, analysts, and other teams to ensure data availability and quality.
Optimize and troubleshoot data pipelines to ensure scalability and performance.
Ensure proper documentation and adherence to coding standards.
Participate in code reviews and provide mentorship to junior engineers as needed.
Required Skills:
Python: Extensive experience writing Python scripts and working with data transformation libraries.
ETL Pipelines: Proficiency in designing and implementing scalable ETL pipelines.
SQL: Strong experience working with relational databases and writing efficient SQL queries.
Terraform: Hands-on experience with Terraform for infrastructure provisioning and management.
Databricks: Strong working knowledge of Databricks, including using Spark (PySpark) for data processing.
PySpark API: Deep understanding of PySpark and its integration with data processing workflows.
Nice to Have:
Cloud: Experience working with AWS Lambda and other serverless services.
AI/LLM: Exposure to Artificial Intelligence or Large Language Models (LLMs) and their application in data processing.
Snowflake: Familiarity with Snowflake data warehouse for data storage and querying.
Location: London
Job Type: Full-Time