This job is closed, but you can apply to other open Remote Developer / Engineer jobs.

Sr. Data Engineer

Help us Shape the Future of Data

 Anaconda is the world’s most popular data science platform. With more than 30 million users, the open source Anaconda Distribution is the easiest way to do data science and machine learning. We pioneered the use of Python for data science, champion its vibrant community, and continue to steward open-source projects that make tomorrow’s innovations possible. Our enterprise-grade solutions enable corporate, research, and academic institutions around the world to harness the power of open source for competitive advantage and groundbreaking research.

Anaconda is seeking people who want to play a role in shaping the future of enterprise machine learning and data science. Candidates should be knowledgeable and capable, but always eager to learn more and to teach others. Overall, we strive to create a culture of ability and humility and an environment that is both relaxed and focused. We stress empathy and collaboration with our customers, open-source users, and each other.

 

Here is what people love most about working here: we’re not just a company, we’re part of a movement. Our dedicated employees and user community are democratizing data science and creating and promoting open-source technologies for a better world, and our commercial offerings make it possible for enterprise users to leverage the most innovative output from open source in a secure, governed way.

 

Summary

Anaconda is seeking a talented Sr. Data Engineer to join our rapidly growing company. This is an excellent opportunity for you to leverage your experience and skills and apply them to the world of data science and machine learning. We are looking for someone with experience building products and data pipelines and working with data scientists to put models into production and make sure those models operate correctly.

 

What You’ll Do:

  • Support the creation of Anaconda’s data infrastructure pipelines

  • Identify and implement process improvements: designing infrastructure that scales, automating manual processes, etc.

  • Drive database design and the underlying information architecture, transformation logic, and efficient query development to support our growing data needs

  • Implement testing and observability across the data infrastructure to ensure data quality from raw sources to downstream models.

  • Write documentation that supports code maintainability

  • Take ownership of the various tasks that allow us to maintain high-quality data: ingestion, validation, transformation, enrichment, mapping, storage, etc.

  • Work closely with Product teams to anticipate and support changes to the data

  • Work with the Business Insights and Infrastructure teams to build reliable, scalable tooling for analysis and experimentation

  • Value collaboration and be comfortable with pair and mob programming

 

What You Need: 

  • 6+ years of relevant experience as a data engineer or in a closely related role

  • A strong foundation in, and proficiency with, Python

  • Experience building, optimizing, maintaining, and streamlining data architecture

  • Database experience with SQL, NoSQL, and BigQuery

  • Experience building ETL pipelines

  • Experience with ETL workflow management tools such as Airflow, Prefect, or cloud-specific equivalents

  • Cloud experience: AWS, Azure, GCE

  • Experience with infrastructure as code: Terraform, CloudFormation, or Ansible

  • Database experience with relational and non-relational data stores

  • Deep experience in ETL/ELT design and implementation using tools like Apache Airflow, Prefect, Lambda, Glue, Athena, etc.

  • Experience working with large data sets, and an understanding of how to write code that leverages the parallel capabilities of Python and database platforms

  • Strong knowledge of database performance concepts like indices, segmentation, projections, and partitions

  • Experience leading projects with Engineering and Product teams from start to finish

  • A team attitude: “I am not done until WE are done”

  • Embody our core values:  

    • Ability & Humility

    • Innovation & Action

    • Empathy & Connection

  • A deep commitment to fostering an environment where people of all backgrounds and experiences can flourish

 

What Will Make You Stand Out:

  • Experience with Kafka or other eventing pipeline technologies

  • Experience with Spark

  • Experience with Snowflake

  • Experience working in a fast-paced startup environment

  • Experience working in an open-source or data-science-oriented company

 

Why You’ll Like Working Here:

  • Unique opportunity to translate strong open source adoption and user enthusiasm into commercial product growth

  • Dynamic company that rewards high performers

  • On the cutting edge of enterprise applications of data science, machine learning, and AI

  • Collaborative team environment that values multiple perspectives and clear thinking

  • Employees-first culture

  • Flexible working hours

  • Medical, Dental, Vision, HSA, Life, and 401K

  • Health and Remote working reimbursement 

  • Paid parental leave for both mothers and fathers

  • Pre-IPO stock options

  • Open vacation policy and monthly company days off known as Snake Days

  • 100% remote and flexible working policy, embraced fully in how we operate as a company

 

Anaconda is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status, and will not be discriminated against on the basis of disability.
