AI Research Engineer (Model Serving & Inference)
Location: London, England, United Kingdom
Job Category: Internet
Job Reference: fflvgnom
Posted: 18.07.2025
Expiry Date: 01.09.2025
Job Description:
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions empower businesses—exchanges, wallets, payment processors, ATMs—to seamlessly integrate reserve-backed tokens across blockchains. We enable secure, instant, and global digital token transactions with transparency and trust.
Innovate with Tether
Tether Finance: Our trusted stablecoin, USDT, and digital asset tokenization services.
Tether Power: Eco-friendly energy solutions for Bitcoin mining.
Tether Data: AI and peer-to-peer technology solutions like KEET.
Tether Education: Digital learning for individuals in the digital economy.
Tether Evolution: Merging technology and human potential for innovative futures.
Why Join Us?
Work remotely with a global team passionate about fintech innovation. Collaborate with top talent, push boundaries, and set industry standards. Excellent English communication skills are essential.
About the job:
As part of our AI model team, you will innovate in model serving and inference architectures for advanced AI systems, optimizing deployment and inference strategies for responsiveness, efficiency, and scalability across various applications and hardware environments.
Responsibilities:
- Design and deploy high-performance, low-latency model serving architectures adaptable to resource-constrained environments.
- Set performance targets and monitor key metrics such as latency, throughput, and memory usage (see the illustrative sketch below).
- Develop and evaluate inference pipelines, analyze bottlenecks, and optimize for scalability and efficiency.
- Work with cross-functional teams to integrate solutions into production, ensuring continuous improvement and reliability.
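As a purely illustrative example of the kind of measurement this work involves, the following minimal Python sketch times repeated requests against a hypothetical HTTP serving endpoint and reports latency percentiles and throughput. The endpoint URL, payload, and request count are assumed placeholders and are not part of Tether's actual stack.

import json
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:8000/v1/infer"  # hypothetical serving endpoint
PAYLOAD = json.dumps({"inputs": "example prompt"}).encode("utf-8")
N_REQUESTS = 100

latencies = []
start = time.perf_counter()
for _ in range(N_REQUESTS):
    t0 = time.perf_counter()
    req = urllib.request.Request(
        ENDPOINT, data=PAYLOAD, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # consume the full response body
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

# Report median and tail latency plus end-to-end throughput for the serial loop.
print(f"p50 latency: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95 latency: {statistics.quantiles(latencies, n=20)[18] * 1000:.1f} ms")
print(f"throughput:  {N_REQUESTS / elapsed:.1f} req/s")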
Qualifications:
- A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or a similar discipline is preferred.
- Proven experience in low-level kernel and inference optimization on mobile devices.
- Expertise in model serving frameworks and GPU/CPU kernel development.
Salary: £150,000 - £200,000
Category: Engineering