Principal Platform/Product Security Engineer
Overview

Principal Platform/Product Security Engineer at the AI Security Institute.

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About The Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protects our people, partners, models, and data.
We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping: proportionate controls, low ego, and high ownership.

What You Might Work On

- Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
- Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
- Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
- Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
- Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
- Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
- Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
- Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

Role Summary

Act as AISI's technical security lead for cloud and delivery infrastructure. You will enable secure-by-default platform patterns, provide reusable controls and guardrails, and partner with engineers to embed safe practices across the development lifecycle. You'll build influence through enablement, not enforcement.
You will extend these patterns to AI/ML workloads, including secure handling of high-capability model weights, GPU estates, data/feature pipelines, evaluation/release gates, and inference services.

Responsibilities

- Define and maintain secure-by-default IaC modules, bootstrap templates, and reference architectures
- Provide consulting and coaching to platform and product teams to support secure delivery
- Build tooling for identity, secrets, environment isolation, and pipeline hardening
- Develop and maintain a baseline cloud control set (e.g. SCPs, logging, tagging)
- Track and improve cloud posture with automated feedback loops
- Lead or support post-incident reviews and design for resilience
- Align technical controls with DSIT central governance and shared responsibility boundaries
- Provide secure patterns for AI/ML training/fine-tuning and inference on AWS (e.g. EKS/ECS/SageMaker), including network isolation, egress controls, data locality, and private endpoints
- Implement custody controls for model weights and sensitive datasets (encryption with KMS/HSM, least-privilege access paths, just-in-time/break-glass access, tamper-evident logging)
- Govern GPU/accelerator compute (quotas, tenancy/isolation, container image hardening, runtime policy, driver/AMI baselines)
- Secure the AI supply chain: signed model/dataset artefacts, provenance/attestation (e.g. Sigstore/SLSA), model registries, and promotion gates tied to evaluation evidence
- Establish paved paths for safe use of third-party model APIs (key management, egress allowlists, privacy-preserving logging, rate limiting, abuse and data-exfiltration protection)
- Embed safety guardrails and patterns for RAG and prompting (context isolation/sanitisation, prompt-injection mitigations, output/content policies, human-in-the-loop hooks)
- Deliver observability for AI surfaces (misuse/abuse telemetry, secrets/PII leak detection, anomalous output monitoring) integrated with incident response

Profile Requirements

- Deep AWS experience, especially with security, identity, networking, and org-level services
- Strong infrastructure-as-code skills (Terraform, CDK, etc.) and CI/CD pipeline knowledge
- Excellent technical judgement and stakeholder communication
- Experience building influence in cross-functional environments
- Practical understanding of AI/ML platform surfaces and risks (e.g. model weight security, GPU isolation, eval/release gating, prompt injection and data-exfiltration risks)
- Desirable: exposure to ML registries (e.g. MLflow/SageMaker), vector stores, and integrating ML artefacts into CI/CD

Key Competencies

- Deep cloud security knowledge (AWS)
- Ability to design reusable IaC components
- Threat modelling, secure defaults, and paved paths
- Collaboration across platform teams
- Securing AI/ML workloads and artefacts
- AI-specific threat mitigation (model supply chain, prompt injection, misuse/abuse telemetry)

What We Offer

Impact you couldn't have anywhere else:
- Incredibly talented, mission-driven and supportive colleagues
- Direct influence on how frontier AI is governed and deployed globally
- Work with the Prime Minister's AI Advisor and leading AI companies
- Opportunity to shape the first and best-resourced public-interest research team focused on AI security

Resources and access:
- Pre-release access to multiple frontier models and ample compute
- Extensive operational support so you can focus on research and ship quickly
- Work with experts across national security, policy, AI research and adjacent sciences

Growth and autonomy:
- If you're talented and driven, you'll own important problems early
- 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations
- Freedom to pursue research bets without product pressure
- Opportunities to publish and collaborate externally

Life and family:
- Modern central London office (cafes, food court, gym), or the option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol
- Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment
- At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents, plus 3 extra paid weeks and the option of additional unpaid time)
- On top of your salary, we contribute 28.97% of your base salary to your pension
- Discounts and benefits for cycling to work, donations and retail/gyms

Salary

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with a 28.97% employer pension contribution and other benefits on top. This role sits outside the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

The full range of salaries is as follows:
- Level 3: £65,000–£75,000 (Base £35,720 + Technical Allowance £29,280–£39,280)
- Level 4: £85,000–£95,000 (Base £42,495 + Technical Allowance £42,505–£52,505)
- Level 5: £105,000–£115,000 (Base £55,805 + Technical Allowance £49,195–£59,195)
- Level 6: £125,000–£135,000 (Base £68,770 + Technical Allowance £56,230–£66,230)
- Level 7: £145,000 (Base £68,770 + Technical Allowance £76,230)

Security & Compliance

Internal Fraud Database: The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for internal fraud. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. For more information, see the Internal Fraud Register.

Security: Successful candidates must undergo a criminal record check and baseline personnel security standard (BPSS) clearance before appointment.
Eligibility for counter-terrorist check (CTC) clearance is preferred. Some roles may require higher levels of clearance; this will be stated in the advert. See our vetting charter.

Nationality requirements: We may be able to offer roles to applicants of any nationality or background. We encourage you to apply even if you do not meet standard nationality requirements.

Working for the Civil Service: The Civil Service Code sets out standards of behaviour. We recruit on merit in fair and open competition. The Civil Service embraces diversity and runs a Disability Confident Scheme and Redeployment Interview Scheme where applicable.

Diversity and Inclusion: The Civil Service is committed to attracting, retaining and investing in talent. See the Civil Service People Plan and the Diversity and Inclusion Strategy for more information.
- Location: City of London, England, United Kingdom
- Job Type: Full-time