Working to improve the lives of the 350+ million people suffering from rare and complex conditions
September 12
🔄 Hybrid – San Francisco
Responsibilities:
• Architect and implement high-performance, scalable backend solutions using modern technologies, healthcare interoperability standards such as FHIR, and advanced AI/ML technologies
• Lead the design and development of RESTful APIs and microservices that support our patient, partner, and internal-facing applications
• Integrate and optimize AI models, particularly Large Language Models (LLMs), using frameworks such as LangChain and LlamaIndex
• Implement Retrieval-Augmented Generation (RAG) systems to enhance the accuracy and relevance of AI-generated insights in the healthcare context
• Design and implement efficient data pipelines to support AI model training and inference
• Optimize data processing and storage solutions to handle large volumes of complex health data efficiently
• Collaborate with data scientists and researchers to implement advanced analytics and AI models that derive insights from patient data
• Ensure the highest standards of data security and privacy compliance
• Lead efforts to enhance system reliability and performance, including implementing monitoring, logging, alerting, and auto-scaling for both traditional backend services and AI components
• Mentor junior backend developers and contribute to the team's technical growth
• Participate in code reviews to ensure high code quality, maintainability, and adherence to best practices across the team
Qualifications:
• 7+ years of experience in backend development, with a focus on building scalable, distributed systems
• Strong proficiency in one or more modern programming languages (e.g., Python, Java, Go, Scala)
• Experience with healthcare interoperability standards, particularly FHIR
• Strong understanding of RESTful API design and microservices architectures
• Hands-on experience integrating and optimizing Large Language Models (LLMs) in production environments
• Proficiency with LLM tools and techniques such as LangChain, LlamaIndex, and prompt engineering
• Experience implementing Retrieval-Augmented Generation (RAG) systems
• Familiarity with vector databases and efficient similarity-search techniques for AI applications
• Experience with cloud platforms (preferably AWS) and containerization technologies (e.g., Docker, Kubernetes)
• Proficiency in working with both relational and NoSQL databases
• Experience with message queuing systems and data streaming platforms
• Familiarity with data processing and analytics technologies