AI Evaluation, Research Methods, Python, LLM Observability
Salary range
60,000-80,000 p.a. + equity, depending on experience (up to 100,000 for candidates with exceptional relevant experience)
Apply
Email us at work@writewithmarker.com and tell us a little bit about yourself and your interest in the future of writing, along with your CV or a link to your CV site.
What is Marker?
Marker is an AI-native word processor: a reimagining of Google Docs and Microsoft Word.
Join us in building the next generation of agentic AI assistants supporting serious writers in their work.
We are a small, ambitious company using cutting-edge technology to give everybody writing superpowers.
What you'll do at Marker
We are looking for someone with a couple of years' experience in academia or industry who can help us bring rigour and insight to our AI systems through evaluation, research, and observability. You'll work directly with Ryan Bowman (CPO) to help us understand and improve how our AI assists writers. Here are some examples of areas you will be working in:
Design and implement evaluation frameworks for complex, subjective AI outputs (like writing feedback that's meant to inspire rather than just correct)
Build flexible evaluation pipelines that can assess quality across multiple dimensions - from human preference to actual writing improvement (a minimal sketch of what such a pipeline might look like follows this list)
Research and prototype new evaluation methodologies for creative and subjective AI tasks
Collaborate with our engineering team to integrate evaluation insights into our development process
Help define what quality means for different AI outputs and create metrics that actually matter for our users
Work on challenging problems like: How do we automatically evaluate whether an AI comment successfully encourages thoughtful revision?
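To make this concrete, here is a minimal, purely illustrative sketch of the kind of multi-dimensional evaluation pipeline described above. It is not Marker's actual code: the rubric dimensions, the `judge_comment` helper, and the `call_model` stub are all hypothetical placeholders.

```python
# Illustrative sketch only: a minimal multi-dimensional "LLM-as-judge"
# pipeline for scoring AI writing comments. All names are hypothetical.
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric: dimensions an AI comment might be scored on.
DIMENSIONS = ["specificity", "encouragement", "actionability"]

@dataclass
class Judgement:
    comment: str
    scores: dict[str, float]  # dimension -> score in [0, 1]

def call_model(prompt: str) -> str:
    """Stub for a judge-model call (e.g. via an Anthropic or OpenAI client)."""
    raise NotImplementedError

def judge_comment(draft: str, comment: str) -> Judgement:
    """Ask the judge model to rate one AI comment along each rubric dimension."""
    scores = {}
    for dim in DIMENSIONS:
        prompt = (
            f"Draft:\n{draft}\n\nAI comment on the draft:\n{comment}\n\n"
            f"Rate the comment's {dim} from 0 to 10. Reply with a number only."
        )
        scores[dim] = float(call_model(prompt)) / 10.0
    return Judgement(comment=comment, scores=scores)

def summarise(judgements: list[Judgement]) -> dict[str, float]:
    """Aggregate per-dimension mean scores across a batch of judged comments."""
    return {dim: mean(j.scores[dim] for j in judgements) for dim in DIMENSIONS}
```

The plumbing here is the easy part; the interesting research questions are whether judges like this actually track human preference and whether their scores predict genuinely better revisions.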
What we can offer
A calm, human-friendly work environment among kind and experienced professionals
Fun, creative, novel, and interesting technical work at the intersection of AI research and product development
An opportunity to work with and learn about the latest advancements in AI evaluation and language models
Direct collaboration with leadership to shape how we understand and improve our AI systems
As much responsibility and as many growth opportunities as you want to take on
Are you a good fit for this role?
To be successful in this role, you will recognise yourself in the following:
You have experience with AI/ML evaluation methodologies and can speak the language of AI research
You've worked hands-on with language models and understand the challenges of evaluating subjective, creative outputs
You are a self-starter willing to work independently and at speed - we imagine a 2-week experiment cadence at most.
You are familiar with and have worked on related technical systems (evaluation pipelines, data collection tools) but don't need to be a full-stack engineer. You won't be expected to build these alone!
You think critically about what metrics actually matter and aren't satisfied with vanity metrics
You're comfortable working with ambiguous problems where the right answer isn't obvious
You have some programming experience (Python preferred) and can work independently on technical projects
You're interested in the intersection of AI capabilities and human creativity
An exceptional candidate for this role would be able to demonstrate some of the following:
Experience building evaluation systems for generative AI in production environments
Knowledge of TypeScript and ability to integrate with our existing systems
Background in human-computer interaction, computational creativity, or writing research
Experience with A/B testing, statistical analysis, and experimental design (an illustrative example follows this list)
Familiarity with modern AI observability and monitoring tools
Published research or deep interest in AI evaluation methodologies
Interest in writing (fiction, non-fiction, essays)
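For the A/B testing and experimental-design point above, a representative (and purely illustrative) exercise: given pairwise human preferences between two prompt variants, check whether the observed win rate is distinguishable from chance. The counts below are invented.

```python
# Illustrative sketch: is prompt variant B preferred over variant A more
# often than chance would explain? The counts here are made up.
from scipy.stats import binomtest

wins_b, total = 71, 120  # hypothetical: B preferred in 71 of 120 comparisons
result = binomtest(wins_b, total, p=0.5, alternative="two-sided")

print(f"win rate: {wins_b / total:.2f}, p-value: {result.pvalue:.3f}")
# A small p-value suggests the preference for B is unlikely to be noise;
# whether the effect is big enough to matter to writers is a separate question.
```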
However, you are NOT expected to:
Be a senior software engineer - we're looking for someone who can build evaluation systems, not architect our entire backend
Have solved every evaluation problem before - this is cutting-edge work and we're figuring it out together
Be experienced with every library in our stack from day one - you'll work closely with Ryan and our engineering team
Have a specific degree - we value practical experience and research ability over credentials
Our stack
You'll be working with the following technologies:
Our AI engine uses a range of models, including self-hosted and fine-tuned open-source models, as well as the latest reasoning models from Anthropic and OpenAI
Evaluation and research tools built primarily in Python, with integration into our TypeScript infrastructure (a brief illustration follows this list)
Our agentic AI execution platform is written in TypeScript, hosted on Cloudflare Workers
Standard ML tooling: various evaluation frameworks, data analysis tools, and monitoring systems
Our text editor frontend is a web application built with React, TypeScript and ProseMirror
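As a small, hedged illustration of the Python/TypeScript boundary mentioned above: one plausible pattern (not a description of our actual interface) is for Python evaluation runs to export plain JSON artefacts that TypeScript services can read. The schema, values, and file path below are invented for illustration.

```python
# Illustrative sketch: exporting evaluation results as JSON for consumption
# by TypeScript infrastructure. Schema, values, and path are hypothetical.
import json
from datetime import datetime, timezone

results = {
    "run_id": "eval-example-001",  # hypothetical identifier
    "created_at": datetime.now(timezone.utc).isoformat(),
    "metrics": {"specificity": 0.78, "encouragement": 0.64},  # made-up scores
}

with open("eval_results.json", "w") as f:
    json.dump(results, f, indent=2)
```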
Apply now!
Interested? Email us at work@writewithmarker.com with your CV (or a link to your CV site). Tell us a little bit about yourself and why you'd like to work at Marker!
Please note that this role is currently only available at our London hub, and at this time we are not able to sponsor work visas in the UK.