Principal Platform/Product Security Engineer
About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.
We're here because governments are critical for advanced AI going well, and AISI is uniquely positioned to mobilize them. With our resources and the UK government's unique agility and international influence, this is the best place to shape both AI development and government action.
About the Team
Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.
What you might work on
- Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
- Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
- Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
- Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
- Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
- Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
- Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
- Contribute to open standards and open source, and share lessons with the broader community where appropriate
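The supply-chain provenance work above can be illustrated with a minimal sketch: verifying that a built artefact still matches its recorded digest before promotion. This is a simplified stand-in (file paths and function names are hypothetical); a real pipeline would use signed attestations such as Sigstore/SLSA rather than bare hashes.

```python
import hashlib


def artefact_digest(path: str) -> str:
    """Compute the SHA-256 digest of a build artefact, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artefact(path: str, expected_digest: str) -> bool:
    """Gate promotion: the artefact must match the digest recorded at build time."""
    return artefact_digest(path) == expected_digest
```

The same check slots naturally into a CI/CD promotion gate: compute the digest at build time, record it alongside the artefact, and refuse to promote anything that no longer matches.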
If you want to build security that accelerates frontier scale AI safety research, and see your work land in production quickly, this is a good place to do it.
Role Summary
Act as AISI's technical security lead for cloud and delivery infrastructure. You will enable secure-by-default platform patterns, provide reusable controls and guardrails, and partner with engineers to embed safe practices across the development lifecycle. You'll build influence through enablement, not enforcement. You will extend these patterns to AI/ML workloads, including secure handling of high-capability model weights, GPU estates, data/feature pipelines, evaluation/release gates, and inference services.
Responsibilities
- Define and maintain secure-by-default IaC modules, bootstrap templates, and reference architectures
- Provide consulting and coaching to platform and product teams to support secure delivery
- Build tooling for identity, secrets, environment isolation, and pipeline hardening
- Develop and maintain a baseline cloud control set (e.g. SCPs, logging, tagging)
- Track and improve cloud posture with automated feedback loops
- Lead or support post-incident reviews and design for resilience
- Align technical controls with DSIT central governance and shared responsibility boundaries
- Provide secure patterns for AI/ML training/finetuning and inference on AWS (e.g., EKS/ECS/SageMaker), including network isolation, egress controls, data locality, and private endpoints
- Implement custody controls for model weights and sensitive datasets (encryption with KMS/HSM, least-privilege access paths, just-in-time/break-glass, tamper-evident logging)
- Govern GPU/accelerator compute (quotas, tenancy/isolation, container image hardening, runtime policy, driver/AMI baselines)
- Secure the AI supply chain: signed model/dataset artefacts, provenance/attestation (e.g., Sigstore/SLSA), model registries, and promotion gates tied to evaluation evidence
- Establish paved paths for safe use of third-party model APIs (key management, egress allowlists, privacy-preserving logging, rate limiting, abuse and data exfil protection)
- Embed safety guardrails and patterns for RAG and prompting (context isolation/sanitisation, prompt injection mitigations, output/content policies, human-in-the-loop hooks)
- Deliver observability for AI surfaces (misuse/abuse telemetry, secrets/PII leak detection, anomalous output monitoring) integrated with incident response
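The egress-allowlist responsibility above can be sketched with a few lines of Python: an outbound request is permitted only if its host appears on an explicit allowlist. The hostname below is hypothetical; in practice this control would live in network policy or an egress proxy rather than application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved third-party model API hosts.
ALLOWED_HOSTS = {"api.example-model-provider.com"}


def egress_allowed(url: str) -> bool:
    """Permit an outbound request only if its host is explicitly allowlisted."""
    host = urlparse(url).hostname
    return host is not None and host in ALLOWED_HOSTS
```

Defaulting to deny and enumerating approved hosts keeps data-exfiltration paths narrow while still giving researchers a paved road to sanctioned APIs.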
Profile requirements
- Deep AWS experience, especially with security, identity, networking, and org-level services
- Strong infra-as-code skills (Terraform, CDK, etc.) and CI/CD pipeline knowledge
- Excellent technical judgment and stakeholder communication
- Experience building influence in cross-functional environments
- Practical understanding of AI/ML platform surfaces and risks (e.g., model weight security, GPU isolation, eval/release gating, prompt injection/data exfil risks)
- Desirable: exposure to ML registries (e.g., MLflow/SageMaker), vector stores, and integrating ML artefacts into CI/CD
Key Competencies
- Deep cloud security knowledge (AWS)
- Ability to design reusable IaC components
- Threat modelling, secure defaults, and paved paths
- Collaboration across platform teams
- Securing AI/ML workloads and artefacts
- AI-specific threat mitigation (model supply chain, prompt injection, misuse/abuse telemetry)
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
- Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
- Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
- Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
- Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
Working for the Civil Service
Government Digital and Data Profession Capability Framework.
There are a range of pension options available which can be found through the Civil Service website.
Additional Information
Security: Successful candidates must undergo a criminal record check and obtain baseline personnel security standard (BPSS) clearance before they can be appointed. Preference is given to candidates eligible for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and this will be stated in the job advertisement. See our vetting charter here.
Nationality requirements: We may be able to offer roles to applicants from any nationality or background. We encourage you to apply even if you do not meet the standard nationality requirements.
Diversity and Inclusion: The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. See the Civil Service People Plan and the Civil Service Diversity and Inclusion Strategy for more information.
- Location: London
- Category: IT & Technology