The future of AI is open-source. Let's build together.
Artificial Intelligence • Cloud Computing • LLM • Open Source • Decentralized Computing
March 16
🏢 In-office - San Francisco
Responsibilities:
• Develop software and processes for orchestrating AI workloads across large fleets of distributed GPU hardware
• Automate cloud infrastructure for GPU-resident applications
• Build and maintain CI/CD infrastructure
• Participate in the on-call rotation and ensure service uptime
• Analyze and decompose complex software systems
• Collaborate with internal teams to ensure best practices are applied
Requirements:
• Minimum of 5 years of relevant experience in DevOps, cloud computing, data center operations, SRE, or Linux systems administration
• Programming experience in at least one of Java, Python, Go, C++, or a comparable language
• Experience with cloud infrastructure tools such as Terraform, Vault, and Packer
• Experience with configuration management tools such as Ansible, Pulumi, Chef, and Puppet
• Strong sense of ownership and a desire to build great tools
• Self-driven and motivated, with a passion for problem-solving
Benefits:
• Competitive compensation
• Startup equity
• Health insurance and other competitive benefits