March 11
🏢 In-office - San Francisco
• Designing and building large-scale data pipelines that feed billions of tokens into LLMs
• Scaling and optimizing LLM training infrastructure for maximum efficiency
• Ensuring evaluation pipelines are accurate and reliable
• Enhancing inference infrastructure to reduce costs and latency while maintaining 99.9% uptime
• Continually improving LLM performance and accuracy
• Proven experience with large-scale LLMs and deep learning systems
• Strong programming skills; versatility is a plus
• Self-starter willing to take ownership of tasks
• Passion for tackling challenging problems
• Nice to have: a PhD in Machine Learning or a related area
• Experience building distributed systems
• Comprehensive health, dental, and vision insurance for you and your dependents
• 401(k) plan