Software Engineer, Perception LiDAR (Autonomy)


This job is no longer active

At Lyft, our mission is to improve people’s lives with the world’s best transportation. To do this, we start with our own community by creating an open, inclusive, and diverse organization.

And self-driving cars are critical to that mission: they can make our streets safer, cities greener, and traffic a thing of the past. That’s why we started Level 5, our self-driving division, where we’re building a self-driving system to operate on the Lyft network.

Level 5 is looking for doers and creative problem solvers to join us in developing the leading self-driving system for ride sharing. Our team members come from diverse backgrounds and areas of expertise, and each has the opportunity to have an outsized influence on the future of our technology. Our world-class software and hardware experts work in brand-new garages and labs in Palo Alto, California, and offices in London, England and Munich, Germany. And we’re moving at an incredible pace: we’re currently serving employee rides in our test vehicles on the Lyft app. Learn more at lyft.com/level5.

As part of the Perception and Autonomy Team, you will interact daily with other software engineers to tackle highly advanced perception challenges. Eventually we expect all Autonomy Team members to work on a variety of problems across the autonomy space; initially, however, your focus will be perception: turning our constant flow of sensor data into a model of the world. For this position, we are looking for a software engineer who can master object detection and analysis using advanced algorithms operating on LiDAR data and ancillary signals from other modalities. The ideal candidate will have strong autonomous vehicle domain knowledge and expertise in both traditional computer vision and modern deep-learning approaches to object detection and segmentation.

Responsibilities:

  • Work on core perception algorithms such as object detection, semantic segmentation, visibility and motion state analysis, phantom obstacle detection, and operating in adverse environments (air particulates, etc.)
  • Build segmentation and classification algorithms on LiDAR point cloud data
  • Propose and develop mid-level sensor fusion algorithms for data from multiple modalities (LiDAR, radar, and vision)
  • Implement real-time algorithms (< 10 milliseconds) on CPU/GPU in C++ and/or CUDA
  • Develop ML models using TensorFlow and/or PyTorch
  • Build/enhance tools and infrastructure to evaluate the performance of the perception stack
  • Interact with other teams (Hardware/Platform, Simulation, Release, Infra) to drive improvement of the sensor suite, simulation, testing and cloud infrastructure
  • Help define the long-term technical roadmap for the LiDAR and Perception team
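Several of the responsibilities above center on segmenting and clustering LiDAR point clouds. As a purely illustrative sketch — not Lyft's actual stack; the function name `cluster_points` and the grid flood-fill approach are assumptions for this example — the following groups obstacle points into clusters by flood-filling an x-y occupancy grid, a toy stand-in for the Euclidean clustering commonly used in perception pipelines:

```python
import numpy as np
from collections import deque

def cluster_points(points, cell=0.5):
    """Cluster 3-D points by their x-y proximity.

    Points are binned into an x-y grid with `cell`-meter cells; points
    whose cells are 8-connected end up with the same integer label.
    Returns one label per input point.
    """
    cells = np.floor(points[:, :2] / cell).astype(int)
    occupied = {}                       # cell -> list of point indices
    for i, c in enumerate(map(tuple, cells)):
        occupied.setdefault(c, []).append(i)

    labels = np.full(len(points), -1, dtype=int)
    next_label = 0
    for start in occupied:
        if labels[occupied[start][0]] != -1:
            continue                    # cell already absorbed by a cluster
        queue = deque([start])
        while queue:                    # flood fill over occupied neighbors
            c = queue.popleft()
            idxs = occupied.get(c)
            if idxs is None or labels[idxs[0]] != -1:
                continue
            for i in idxs:
                labels[i] = next_label
            x, y = c
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    queue.append((x + dx, y + dy))
        next_label += 1
    return labels
```

A real stack would cluster in 3-D with a k-d tree and tuned distance thresholds, but the grid version conveys the idea in a few lines.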

Experience:

  • Ability to produce production-quality C++ software in a Linux environment
  • Strong experience in multi-threaded programming
  • Strong background in mathematics, linear algebra, numerical optimization, geometry, and statistics
  • Ph.D. in Computer Science, Electrical Engineering, or a related field with 3 years of post-graduate experience; or an M.S. with 5+ years; or a B.S. with 8+ years
  • Ability to build machine learning applications using a broad range of tools such as decision trees, Hidden Markov Models, deep neural networks, etc.
  • Ability to work in a fast-paced environment and collaborate across teams and disciplines
  • Openness to new and different ideas: can quickly evaluate published state-of-the-art approaches, then select and improve upon them based on first principles
  • (Nice to Have) 3+ years of experience working in a related role in the autonomous vehicle domain
  • (Nice to Have) Strong understanding of computer architectures and ability to write SIMD/vectorized code
  • (Nice to Have) Hands-on experience with applying computer vision, machine learning or robotics theory to real products
  • (Nice to Have) Experience with computer vision techniques like structure from motion, RANSAC, camera calibration, pose estimation, point cloud registration, etc.
  • (Nice to Have) Experience with deep learning techniques on images, LiDAR/radar point clouds, etc.
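As a concrete example of the RANSAC technique named in the nice-to-haves — commonly used in LiDAR pipelines to estimate the ground plane before clustering obstacles — here is a minimal sketch; the function name `ransac_plane` and its parameters are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.05, rng=None):
    """Fit a plane n·x + d = 0 to 3-D points with vanilla RANSAC.

    Each iteration fits an exact plane through 3 random points and
    counts inliers (points within `threshold` meters of the plane);
    the model with the most inliers wins.
    Returns ((normal, d), inlier_mask).
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

In practice the inlier threshold, iteration count, and a refinement step (least-squares refit on the inliers) would all be tuned for the sensor's noise characteristics.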

Benefits:

  • Great medical, dental, and vision insurance options
  • In addition to 11 observed holidays, salaried team members have unlimited paid time off; hourly team members have 15 days of paid time off
  • 401(k) plan to help save for your future
  • 18 weeks of paid parental leave. Biological, adoptive, and foster parents are all eligible
  • Monthly commuter subsidy to cover your transit to work
  • 20% off all Lyft rides

Lyft is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Lyft does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender-identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Pursuant to the San Francisco Fair Chance Ordinance and other similar state laws and local ordinances, and its internal policy, Lyft will also consider for employment qualified applicants with arrest and conviction records.