AI Research Engineer (Model Serving & Inference)
Join Tether and Shape the Future of Digital Finance
At Tether, we're pioneering a global financial revolution with innovative blockchain solutions that enable seamless digital token transactions worldwide. Our products include the trusted stablecoin USDT, energy-efficient Bitcoin mining solutions, advanced data sharing apps like KEET, and educational initiatives to democratize digital knowledge.
Why join us? Our remote, global team is passionate about fintech innovation. We seek individuals with excellent English communication skills eager to contribute to cutting-edge projects in a fast-growing industry.
About the job:
As part of our AI model team, you will innovate in model serving and inference architectures for advanced AI systems. Your focus will be on optimizing deployment strategies to ensure high responsiveness, efficiency, and scalability across various applications and hardware environments.
Responsibilities:
- Design and deploy high-performance, resource-efficient model serving architectures adaptable to diverse environments.
- Establish and track performance metrics like latency, throughput, and memory usage.
- Develop and monitor inference tests, analyze results, and validate performance improvements.
- Prepare realistic datasets and scenarios to evaluate model performance in low-resource settings.
- Identify bottlenecks and optimize serving pipelines for scalability and reliability.
- Collaborate with teams to integrate optimized frameworks into production, ensuring continuous improvement.
Qualifications:
- Degree in Computer Science or a related field; a PhD in NLP or Machine Learning with a strong publication record is preferred.
- Proven experience in low-level kernel and inference optimizations on mobile devices, with measurable improvements.
- Deep understanding of model serving architectures, optimization techniques, and memory management in resource-constrained environments.
- Expertise in CPU/GPU kernel development for mobile platforms and deploying inference pipelines on such devices.
- Ability to apply empirical research to overcome latency, bottleneck, and memory challenges, with experience in evaluation frameworks and iterative optimization.
Details:
- Location: London, England, United Kingdom
- Salary: £150,000 - £200,000
- Category: Engineering