April 24
🏢 In-office - San Francisco
• Track and identify AI security risks, and experiment with the latest ML techniques to build SoTA protections.
• Be hands-on: build end-to-end ML workflows, experiment pipelines, and evaluation strategies, and deploy and productize protection mechanisms.
• Work with senior engineers to understand the latest GenAI research and applications that give rise to novel security risks.
• Participate in red-teaming assessments to uncover threats, gather context, and collect data that strengthens our products.
• Contribute to the team's overall machine learning culture.
• A BS or MS in Computer Science/Engineering.
• A strong background in AI, machine learning, and deep learning.
• A hands-on, experimental approach to decomposing and understanding problem areas.
• Strong programming skills in a general-purpose language such as Python or Golang.
• Excellent written and verbal communication skills, plus strong analytical and problem-solving skills.