🏢 In-office - Bay Area
• In this role, you will join the team that productizes the software stack for our AI compute engine. As part of the Software team, you will be responsible for developing, enhancing, and maintaining our next-generation AI deployment software. You have experience working across all aspects of the full-stack toolchain and understand the nuances of optimizing and trading off the many dimensions of hardware-software co-design. You can build and scale software deliverables within a tight development window. You will work with a team of compiler experts to build out the compiler infrastructure, collaborating closely with the company's other software (ML, systems) and hardware (mixed-signal, DSP, CPU) experts.
• MS or PhD in Computer Science, Electrical Engineering, or a related field.
• Prior startup, small-team, or incubation experience.
• Work experience at a cloud provider or an AI compute/subsystem company.
• Experience implementing SIMD algorithms on vector processors.
• Experience with open-source ML compiler frameworks such as MLIR.
• Experience with deep learning frameworks (such as PyTorch, TensorFlow).
• Experience with deep learning runtimes (such as ONNX Runtime, TensorRT, …).
• Experience with inference servers/model-serving frameworks (such as Triton, TF Serving, Kubeflow, …).
• Experience with distributed-systems collectives such as NCCL and OpenMPI.
• Experience deploying ML workloads on distributed systems in a multitenant environment.
• Experience with MLOps from definition to deployment, including training, quantization, sparsity, model preprocessing, and deployment.
• Experience training, tuning, and deploying ML models for CV (ResNet, …), NLP (BERT, GPT), and/or recommendation systems (DLRM).
• Offers Equity
• Offers Bonus