AI-powered LLMOps platform to improve and monitor LLM-powered applications via logging, debugging, evaluation, and fine-tuning.
July 21
🏢 In-office - San Francisco
• Design and develop cloud-based workflows, data pipelines, and compute infrastructure for production AI workloads such as LLM inference, evaluation, and fine-tuning
• Evaluate cloud computing and AI SaaS providers to identify cost, accuracy, throughput, and latency trade-offs
• Build systems that optimize for these parameters
• Compare closed-source and open-source LLM providers
• Develop and evaluate systems with multiple interconnected LLMs and data sources
• Collaborate with cross-functional teams to integrate AI solutions into product offerings
• Stay abreast of emerging trends in AI, ML, and cloud technologies
• Ensure compliance with data privacy and security protocols throughout the AI lifecycle
• 5+ years of experience designing and operating production systems
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Experience with NLP/ML data pipelines and workflows
• Experience with PostgreSQL, Redis, and AWS services
• Comfortable with Python, Node.js, and Go
• Strong grasp of distributed computing fundamentals
• Self-directed and enthusiastic about solving complex problems
• Competitive salary
• Equity
• Health/dental/vision insurance
• Opportunity to develop foundational systems redefining AI