🏢 In-office - San Francisco
• Track and Analyze Threats: Monitor and analyze emerging threats to AI/ML models, applications, and environments.
• Deep Understanding: Possess expertise in foundation models such as large language models and diffusion models.
• Develop AI Firewall: Develop and implement strategies to detect and mitigate identified threats, including prototyping innovative approaches.
• Lead Red-Teaming Exercises: Lead red-teaming exercises and vulnerability assessments for generative AI technologies, addressing safety and security vulnerabilities.
• Publish Insights: Author blog posts, white papers, or research papers on emerging threats in AI safety and security.
• Collaborate and Innovate: Collaborate with cross-functional teams to translate research into product features and shape our machine learning culture.
• Undergraduate degree in EECS, Math, or Physics.
• Strong programming skills in Python and deep knowledge of machine learning tools such as PyTorch.
• Strong algorithmic and problem-solving skills.
• Fluency in reading academic papers on AI/ML.