Senior AI Solutions Engineer


Job Title: Senior AI Solutions Engineer
Location: London, UK – Hybrid with on-site responsibilities
Department: Product & Strategy
Reports To: VP, Product & Strategy
Employment Type: Full-time
About the Company
A fast-growing UK-based AI infrastructure and solutions provider is delivering next-generation enterprise AI capabilities with a strong emphasis on sustainability, data sovereignty, and real-world outcomes. Backed by significant investment and strategic partnerships, the company operates an AI infrastructure platform that spans 50+ datacenter locations and is powered entirely by renewable energy.
The mission is to help enterprises operationalise AI faster, more efficiently, and with full control over their data — offering a complete platform that combines infrastructure, software, and services.
About the Role
This is a strategic, customer-facing engineering role where you’ll design and deliver complex AI solutions using retrieval-augmented generation (RAG), LLM fine-tuning, and enterprise-grade inference deployment. You’ll work closely with commercial, product, and platform engineering teams to bring cutting-edge AI solutions to life for enterprise clients, all while working within a sovereign, sustainable AI mesh.
Key Responsibilities
Customer Solution Design
Collaborate with enterprise customers to map business problems to AI-powered workflows.
Architect full-stack solutions using RAG, fine-tuning, and scalable inference endpoints.
Support pre-sales efforts, workshops, and proofs of concept alongside go-to-market teams.
AI & ML Engineering
Implement and optimise AI/ML models using frameworks like PyTorch, HuggingFace, LangChain, and NVIDIA Triton.
Fine-tune foundation models for domain-specific use cases (a minimal fine-tuning sketch follows this list).
Deploy and maintain inference services exposed via REST/gRPC APIs in containerised, distributed systems.
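To give a flavour of the fine-tuning work described above, here is a minimal sketch of LoRA-based adapter tuning with HuggingFace Transformers and PEFT. The base model, dataset file, and hyperparameters are illustrative placeholders rather than a prescribed stack.

```python
# Minimal LoRA fine-tuning sketch using HuggingFace Transformers + PEFT.
# Base model, dataset path, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model = "facebook/opt-350m"  # placeholder for a foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters instead of updating all model weights.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hypothetical domain-specific instruction data with a "text" column.
dataset = load_dataset("json", data_files="domain_instructions.jsonl")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves adapter weights only
```

Adapter-based tuning keeps the trainable parameter count small, which is why it is a common starting point for domain-specific fine-tuning on shared GPU infrastructure.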
Data & Knowledge Integration
Build secure, scalable data pipelines that integrate enterprise data into vector stores and embedding models.
Optimise semantic retrieval for performance and relevance in generative AI applications (a minimal retrieval sketch follows this list).
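As a rough illustration of the retrieval step referenced above, the sketch below embeds a handful of documents into a FAISS index and returns the closest matches for a query. The embedding model and sample documents are placeholders; a production pipeline would typically use a managed vector store and document chunking.

```python
# Minimal semantic-retrieval sketch: embed documents into a vector index and
# retrieve the most relevant entries for a query (a typical RAG step).
# Embedding model and documents are illustrative placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Customer data is stored in UK-based datacentres only.",
    "Support tickets are triaged by severity, then by age.",
]

# Embed and L2-normalise so inner product equals cosine similarity.
doc_vectors = embedder.encode(documents, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[tuple[str, float]]:
    """Return the top-k documents with their similarity scores."""
    q = embedder.encode([query], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [(documents[i], float(s)) for i, s in zip(ids[0], scores[0])]

print(retrieve("Where is customer data held?"))
```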
Cross-Team Collaboration
Work with infrastructure and platform engineering teams to ensure smooth deployments.
Feed insights back into the product team to shape platform features.
Mentor junior engineers and contribute to customer enablement content.
Sample Success Metrics
Deliver 5+ enterprise AI deployments (RAG, fine-tuned models, inference endpoints).
Contribute to solution accelerators (code repos, templates, tools) used across projects.
Help drive measurable revenue through technical pre-sales and solution support.
Maintain high client satisfaction scores post-deployment.
Produce thought leadership (blogs, talks, case studies) on real-world AI implementation.
Required Qualifications
Background in AI/ML engineering, applied AI, or technical solutions delivery.
Strong experience with:
Retrieval-Augmented Generation (e.g., LangChain, LlamaIndex, vector databases).
LLM fine-tuning techniques (LoRA, PEFT, instruction tuning).
Deploying models in production (Triton Inference Server, HuggingFace, Kubernetes); a minimal serving sketch follows this list.
Advanced Python skills; bonus for experience in Go, Java, or C++.
Direct experience working with enterprise clients.
Solid understanding of embeddings, vector stores, and semantic search.
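As a rough sketch of the production-deployment requirement above, the example below wraps a HuggingFace text-generation pipeline in a FastAPI endpoint. The model name and request schema are placeholders; a production setup would more likely sit behind Triton Inference Server on Kubernetes.

```python
# Minimal REST inference endpoint sketch using FastAPI and a HuggingFace
# pipeline. Model name and request schema are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="distilgpt2")  # placeholder model

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    """Generate a completion for the supplied prompt."""
    output = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": output[0]["generated_text"]}

# Run locally with: uvicorn app:app --port 8000
# A Dockerfile wrapping this script is what a containerised deployment adds.
```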
Preferred Qualifications
Familiarity with enterprise AI stacks (e.g., NVIDIA AI Enterprise, private cloud AI platforms).
Awareness of UK AI compliance and data sovereignty requirements and related standards (e.g., ISO 27001, SOC 2).
Experience optimising GPU workloads.
Contributions to open-source AI/ML projects or toolkits.
Compensation & Benefits
Competitive salary
Potential equity or performance-based incentives
Learning and development budget
Hybrid work flexibility with occasional client or team site visits
If you're passionate about building impactful AI systems that solve real problems — and want to work at the intersection of innovation, sustainability, and sovereignty — this is your opportunity.