July 13
🏢 In-office - Bay Area
• Create, optimize, and maintain scalable data pipelines to collect, process, and store large volumes of data from various sources.
• Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity.
• Design and implement data architectures that support efficient data storage, retrieval, and analysis.
• Integrate data from multiple sources, including APIs, databases, and third-party services.
• Monitor the performance and health of data systems and pipelines.
• Troubleshoot and resolve issues to ensure data availability and reliability.
• Maintain comprehensive documentation of data systems, pipelines, and processes.
• Ensure documentation is up to date and accessible to relevant team members.
• Stay updated on the latest data engineering tools and technologies.
• Evaluate and recommend new tools and technologies to improve data engineering processes.
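A minimal sketch of the kind of ETL work described above, in Python. The source data, field names, and table schema here are hypothetical stand-ins (a real pipeline would pull from APIs or databases and load into a production store), chosen only to show the extract → transform (with a data-quality check) → load shape:

```python
import sqlite3

# Hypothetical raw records, standing in for an API or file source.
RAW_EVENTS = [
    {"user_id": "1", "amount": "19.99", "country": "us"},
    {"user_id": "2", "amount": "bad", "country": "US"},   # malformed amount
    {"user_id": "3", "amount": "5.00", "country": "CA"},
]

def extract():
    """Pull raw records from the (stand-in) source."""
    return RAW_EVENTS

def transform(rows):
    """Validate and normalize rows; drop ones that fail basic quality checks."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # data-quality check: skip unparseable amounts
        clean.append((int(row["user_id"]), amount, row["country"].upper()))
    return clean

def load(rows, conn):
    """Write normalized rows to a SQL store (in-memory SQLite for the sketch)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id INT, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    print(count)  # the malformed row has been dropped
```

In practice each stage would be separately monitored and retried, but the same three-stage structure carries over to frameworks like Apache Spark or Apache Beam.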
• 5-7 years of industry experience in data engineering or related roles.
• Strong proficiency in Python, with experience developing data pipelines and ETL processes.
• Experience with data storage solutions (e.g., SQL, NoSQL, cloud storage).
• Proficiency with data processing frameworks (e.g., Apache Spark, Apache Beam).
• Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
• Strong problem-solving skills and the ability to troubleshoot complex data issues.
• Ability to optimize data workflows for performance and scalability.
• Excellent communication and collaboration skills.
• Ability to work effectively in a fast-paced, dynamic environment.
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 11 paid holidays
• Generous accrued time off, increasing with years of service
• Generous paid sick time
• Annual day of service