There are over 7 billion people on this planet. And by 2050, there will be 2 billion more... many moving into urban centers at an unprecedented rate. Making sure there is enough food, fiber and infrastructure for our rapidly growing world is what we’re all about. And it’s why we’re investing in our people and our technology like never before! You will work with some of the world’s brightest minds taking on the biggest technical challenges. If you want to tackle hard technical challenges alongside excellent people and make the world a better place, we want to work with you!
Our people are at the heart of what we do. Through cross-discipline collaboration, this mission-driven and daring team is eager to define the new frontier of agricultural robotics. We are always asking hard questions, rapidly iterating, and getting our boots in the field to figure it out. We won’t give up until we’ve made a tangible and positive impact on agriculture.
As a Data Engineer in John Deere's Intelligent Solutions Group, located in San Francisco, CA, you will build and support production-oriented, high-performance data pipelines that are efficient and reliable.
- Write clean, well-tested code with robust monitoring to enable ingestion, storage, retrieval, and transformation of large-scale geospatial data for analysis, research, and model development.
- Contribute to the development of next-generation data structures and APIs that enable secure, performant, cost-effective access to data for research and model development.
- Collaborate and communicate closely with data scientists and engineers to identify and build needed infrastructure, tools, and libraries to support machine learning algorithms.
- 5 or more years of software engineering experience
- 3 or more years of experience building and supporting mission-critical, global, production-scale systems on cloud platforms (AWS, GCP, or Azure)
- Experience working with Big Data Systems (Hadoop, Spark, MapReduce)
- Demonstrated ability to architect and build large-scale processing pipelines
- Experience implementing geospatial data structures, analysis systems, and visualizations
- Strong coding skills in Python and additional experience with C++, Java, and/or Scala
- Experience with infrastructure-as-code, automation, monitoring, and a DevOps mindset
- Expertise with geospatial, IoT, or high frequency unstructured data sets
- Sense of ownership, curiosity, and ability to function in a fast-paced, collaborative team environment that is distributed across various time zones and locations
- Experience with Spark, Databricks, or AWS Sagemaker
- Experience working with GDAL or similar spatial libraries
- Experience with Computer Vision algorithms
- Experience with machine learning techniques or tools
- Advanced degree in engineering or computer science preferred
Working at Deere
At John Deere, you are empowered to build the career of your dreams. Here, you'll enjoy the freedom to explore new projects, the support to solve problems creatively and the sophisticated tools and technology that cultivate innovation and achievement. We offer comprehensive relocation and reward packages to help you get started on your new career path. Click here to find out more about our Total Rewards Package.
The information contained herein is not intended to be an exhaustive list of all responsibilities and qualifications required of individuals performing the job. The qualifications detailed in this job description are not considered the minimum requirements necessary to perform the job, but rather as guidelines.
John Deere is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to, among other things, race, religion, color, national origin, sex, age, sexual orientation, gender identity or expression, status as a protected veteran, or status as a qualified individual with disability.