At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot is our vision for the future. It’s designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses. The Serve fleet has been delighting merchants, customers, and pedestrians while making commercial deliveries in Los Angeles. We’re looking for talented individuals who will grow robotic deliveries from surprising novelty to efficient ubiquity.
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We are solving real-world problems leveraging robotics, machine learning and computer vision, among other disciplines, with a mindful eye towards the end-to-end user experience. Our team is agile, diverse, and driven. We believe that the best way to solve complicated dynamic problems is collaboratively and respectfully.
Serve Robotics aims to develop dependable, high-performing perception and prediction software for sidewalk autonomy. Our Perception & Prediction team is looking for an enthusiastic engineer to tackle challenging technical problems in sidewalk autonomy, including perception, prediction, and other innovative machine learning tasks.
Build foundation models for vision, language, and action that exhibit strong reasoning and maneuvering capabilities; this requires a deep understanding of transformer-based ML architectures.
Design, train, and deploy learning-based perception models for on-robot perception systems. These models should support multi-modal learning across semantic tasks such as segmentation, object detection, scene understanding, and tracking.
Work with ML infrastructure engineers to assess and monitor model performance, analyze and resolve performance bottlenecks.
Collaborate with various teams to understand real-world problems and define tasks, incorporating insights into ML products.
Produce high-quality code for software development, participate in code reviews to ensure the quality of code, and share knowledge with the team.
Comfortable working with SQL queries and ETL logic for data ingestion.
MS/PhD in Computer Science or a similar technical field, with a minimum of 5 years of industry experience focused on ML/DL or robotics, or equivalent practical experience.
Minimum 2 years of industry experience training and shipping ML models into production and managing their lifecycle.
Deep understanding of the fundamentals of computer vision, machine learning, and deep learning.
Strong C++ and Python programming skills for efficient and robust code.
Experience with multiple sensors such as LiDAR, mono/stereo cameras, IMU, etc.
Strong communication skills.
Publications at top conferences or in journals such as CVPR, NeurIPS, ICCV, TPAMI, TRO, etc.
Demonstrated proficiency in tackling robotics and computer vision challenges in at least two of the following domains: multi-sensor feature extraction and fusion, object detection and tracking, 3D estimation, and embodied AI with Transformer-based models.
Familiarity with deploying perception stacks on edge devices, and experience with NVIDIA software libraries such as CUDA or TensorRT.
Contributions to open-source projects.
Experience with GCP or AWS, Kubernetes, and Docker.