March 12
🔄 Hybrid – San Francisco
• Responsible for model serving and ModelOps: manage model-related metadata (using the model registry), implement hardware-accelerated optimization for each model engine, and containerize models for efficient serving.
• Construct an ML pipeline that efficiently serves the trained foundation models in our ML infrastructure.
• Implement model validation best practices by conducting automatic evaluation benchmarking and performing output comparisons.
• Develop an automatic training/fine-tuning pipeline that includes rigorous data and model validation against the baseline model.
• 5+ years of software development experience, including experience deploying machine learning models
• 3+ years of experience building and deploying end-to-end machine learning pipelines, or equivalent
• Experience establishing and maintaining secure software and system development environments
• Experience designing control and sandboxing systems for AI research
• Willingness to learn emerging AI technologies and a practical mindset toward productization
• At least a black-box-level understanding of Transformer-based neural networks
• Experience developing model serving and inference systems
• An open and inclusive culture and work environment
• Work closely with a collaborative, mission-driven team on cutting-edge AI technology
• Full health, dental, and vision benefits
• Extremely flexible PTO and parental leave policy; office closed the week of Christmas and New Year's
• Remote-flexible: offices in San Francisco and Seoul, plus a coworking stipend
• Visa support (such as H-1B and OPT transfer for US employees)