Remote Machine Learning Compiler Engineer - Gensyn (London)


The world will be unrecognisable in 5 years.

Machine learning models are driving our cars, testing our eyesight, detecting our cancer, giving sight to the blind, giving speech to the mute, and dictating what we consume, enjoy, and think. These AI systems are already an integral part of our lives and will shape our future as a species.

Soon, we'll conjure unlimited content: from never-ending TV series (where you're the main character) to personalised tutors that are infinitely patient and leave no student behind. We'll augment our memories with foundation models individually tailored to us through RLHF and connected directly to our thoughts via Brain-Machine Interfaces, blurring the lines between organic and machine intelligence and ushering in the next generation of human development.

This future demands immense, globally accessible, uncensorable computational power. Gensyn is the machine learning compute protocol that translates machine learning compute into an always-on commodity resource, outside of centralised control and as ubiquitous as electricity, accelerating AI progress and ensuring that this revolutionary technology is accessible to all of humanity through a free market.

Our Principles:

AUTONOMY

FOCUS

REJECT MEDIOCRITY

Responsibilities:

Lower deep learning graphs from common frameworks (PyTorch, TensorFlow, Keras, etc.) down to an intermediate representation (IR) for training, with a particular focus on ensuring reproducibility.

Write novel algorithms for transforming intermediate representations of compute graphs between different operator representations (see the sketch after this list).

Ownership of two of the following compiler areas:
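To give a flavour of the graph-transformation responsibility above, here is a minimal sketch in Rust (the posting's default language) of rewriting a compute graph from one operator representation to another. The IR, the Op/Node types, and the lower pass are hypothetical and invented purely for illustration; they are not Gensyn's actual compiler.

```rust
// Illustrative only: a toy compute-graph IR and a lowering pass that
// rewrites a high-level operator (Softmax) into primitive operators
// (Exp -> Sum -> Div), remapping node indices so data dependencies
// are preserved and the result stays reproducible.

#[derive(Debug, Clone)]
enum Op {
    Input,   // graph input placeholder
    Softmax, // "high-level" framework operator
    Exp,     // primitive operators in the target representation
    Sum,
    Div,
}

#[derive(Debug, Clone)]
struct Node {
    op: Op,
    inputs: Vec<usize>, // indices of producer nodes (topologically ordered)
}

/// Replace every Softmax node with Exp -> Sum -> Div, remapping the
/// indices of all later nodes so consumers keep the same producers.
fn lower(graph: Vec<Node>) -> Vec<Node> {
    let mut out: Vec<Node> = Vec::new();
    // remap[old_index] = index of the equivalent node in the lowered graph
    let mut remap: Vec<usize> = Vec::with_capacity(graph.len());

    for node in graph {
        let Node { op, inputs: old_inputs } = node;
        let inputs: Vec<usize> = old_inputs.iter().map(|&i| remap[i]).collect();
        match op {
            Op::Softmax => {
                let x = inputs[0];
                let exp = out.len();
                out.push(Node { op: Op::Exp, inputs: vec![x] });
                let sum = out.len();
                out.push(Node { op: Op::Sum, inputs: vec![exp] });
                let div = out.len();
                out.push(Node { op: Op::Div, inputs: vec![exp, sum] });
                remap.push(div); // consumers of Softmax now read from Div
            }
            other => {
                remap.push(out.len());
                out.push(Node { op: other, inputs });
            }
        }
    }
    out
}

fn main() {
    let graph = vec![
        Node { op: Op::Input, inputs: vec![] },
        Node { op: Op::Softmax, inputs: vec![0] },
    ];
    println!("{:#?}", lower(graph));
}
```

Real lowering pipelines handle many operators, shapes, and numerical concerns, but the core idea is the same: a deterministic rewrite from one operator vocabulary to another that preserves the dataflow of the original graph.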

Minimum Requirements:

Compiler knowledge: base-level understanding of a traditional compiler (LLVM, GCC) and the graph traversals required for writing code for such a compiler.

Solid software engineering skills: a practicing software engineer who has significantly contributed to and shipped production code.

Understanding of parallel programming, specifically as it pertains to GPUs.

Strong willingness to learn Rust: as a Rust-by-default company, we require everyone to learn Rust so that they can work across the entire codebase.

Ability to operate on:

Highly self-motivated with excellent verbal and written communication skills.

Comfortable working in an applied research environment with extremely high autonomy.

Nice to haves:

Architecture understanding: full understanding of a computer architecture specialized for training NN graphs (Intel Xeon CPUs, GPUs, TPUs, custom accelerators).

Rust experience: systems-level programming experience in Rust.

Open-source contributions to compiler stacks.

Compilation understanding: strong understanding of compilation with regard to one or more high-performance computing architectures (CPU, GPU, custom accelerator, or a heterogeneous system of all such components).

Proven technical foundation: in CPU and GPU architectures, numeric libraries, and modular software design.

Deep learning understanding: both recent architecture trends and the fundamentals of how training works, plus experience with machine learning frameworks and their internals (e.g., PyTorch, TensorFlow, scikit-learn).

Exposure to deep learning compiler frameworks: e.g., TVM, MLIR, Tensor Comprehensions, Triton, JAX.

Kernel experience: writing and optimizing highly performant GPU kernels.

Note: if you fall outside these criteria, we still encourage you to apply, as there may be openings at higher or lower levels than those listed above.

Compensation / Benefits:

Competitive salary + share of equity and token pool.

Fully remote work: we hire between the West Coast (PT) and Central European (CET) time zones.

4x all-expenses-paid company retreats around the world, per year.

Whatever equipment you need.

Paid sick leave.

Private health, vision, and dental insurance, including spouse/dependents.

Location: Surbiton, Greater London
Job Type: Full-Time
