ML Ops Data Engineer - Contract

Sunnyvale, CA

Position Description

We’re Blue River, a team of innovators driven to radically change agriculture by creating intelligent machinery. We empower our customers – farmers – to implement more sustainable solutions: optimizing chemical usage, reimagining routine processes, and improving farming yields year after year. We believe that focusing on the small stuff – pixel-by-pixel and plant-by-plant – leads to big gains. By partnering with John Deere, we are innovating in computer vision, machine learning, robotics, and product management to solve monumental challenges for our customers.

Our people are at the heart of what we do. Through cross-discipline collaboration, this mission-driven and daring team is eager to define the new frontier of agricultural robotics. We are always asking hard questions, rapidly iterating, and getting our boots in the field to figure it out. We won’t give up until we’ve made a tangible and positive impact on agriculture.

Position Summary:

We’re seeking an MLOps Data Engineer specializing in data and cloud infrastructure to join our team. Our machine learning platform helps manage the various components of the ML application development life cycle, from data ingestion, annotation, and exploration to model training, deployment, and monitoring. All of these components are interdisciplinary, so you will be working closely with roboticists, ML researchers, and the Safety & Perception teams.

A well-qualified candidate for this position will be a problem-solver who can deftly juggle multiple competing priorities and deliver solutions in a timely manner. You like to automate anything you do, and you document it for the benefit of others. The candidate should also have strong expertise in troubleshooting complex production issues.


Position Responsibilities:


  • Monitor, investigate, and fix data ingestion and training pipeline issues as L1 support.
  • Identify patterns in data ingestion and pipeline issues, propose short- and long-term solutions, and actively participate in developing those solutions.
  • Provide data and infrastructure support to internal Jupiter teams, and work collaboratively with other developers on the Jackson, IT, and Platform teams to reach a resolution.
  • Provide guidance to improve the stability, security, efficiency, and scalability of systems.
  • Help improve our code quality by writing unit tests, automating processes, and performing code reviews.

Required Experience and Qualifications:


  • 3+ years of professional backend software development experience.
  • Experience supporting highly scalable data systems and services written in Python.
  • Strong communication skills and ability to work effectively across multiple technical teams.
  • Experience building ETL workflows and Data Warehouse solutions.
  • Solid understanding of relational and non-relational database systems.
  • Experience with Docker; CI/CD build systems like Jenkins or TeamCity; and AWS services like S3, DynamoDB, EC2, ECR, Lambda, SQS, and SNS.
  • A passion for automation, demonstrated by creating tools in Python.
  • Bachelor's degree or higher in Computer Science, Math, or another quantitative field.


Preferred Skills:


  • Expertise in migrating and supporting applications on Kubernetes and services on third-party clouds.
  • Experience with cloud workflow platforms such as Kubeflow.
  • Familiarity with ML Frameworks such as TensorFlow, PyTorch.
  • Experience with infrastructure templating tools like Terraform or CloudFormation.

