[Mission of the Role]
As a Senior Deep Learning Researcher/Engineer, your mission is to develop and optimize multi-camera-based vision perception and deep learning-driven path planning systems that enable safe, robust, and high-performance autonomous driving at ADAS Level 2++ and beyond.
You will contribute by:
ㆍ Enhancing multi-camera-based perception systems to achieve human-like environmental understanding, even in complex driving conditions.
ㆍ Developing high-precision path planning algorithms that ensure smooth, safe, and efficient navigation in dynamic road scenarios.
ㆍ Leveraging state-of-the-art deep learning and AI techniques, such as Vision Transformers, Graph Neural Networks (GNNs), and Reinforcement Learning (RL), to advance autonomous vehicle intelligence.
ㆍ Bridging the gap between research and real-world deployment by optimizing models for real-time, low-latency inference on edge computing platforms.
ㆍ Ensuring the generalization and robustness of perception and planning algorithms across diverse driving environments, including adverse weather and dense urban traffic.
This role is a unique opportunity to work on high-impact, cutting-edge research that directly contributes to the development of next-generation autonomous driving systems.
[Key Responsibilities]
The selected candidate will be responsible for designing, developing, and optimizing deep learning models for multi-camera vision perception and path planning in autonomous vehicles at ADAS Level 2++ and above.
ㆍ Vision Perception Responsibilities (see the sketch after this list):
ㆍ Develop multi-camera fusion algorithms for high-accuracy 3D object detection, lane detection, and environmental understanding.
ㆍ Design and optimize Bird's Eye View (BEV) detection, semantic segmentation, and depth estimation models.
ㆍ Implement state-of-the-art Vision Transformers (ViTs), hybrid Transformer-CNN architectures, and Vision-Language Models (VLMs) for perception tasks.
ㆍ Enhance distance estimation and 3D scene reconstruction through multi-camera self-supervised learning.
ㆍ Improve perception system robustness against edge cases (adverse weather, occlusions, nighttime scenarios).
ㆍ Optimize algorithms and collaborate with cross-functional teams to ensure real-time performance on embedded devices under varying conditions.
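As a hedged illustration of the multi-camera-to-BEV work described above, the sketch below projects a ground-plane BEV grid into one camera using pinhole geometry. All names and values (K, R, t, grid extents) are toy assumptions, not taken from any real pipeline.

```python
# Minimal sketch of camera-to-BEV feature lifting, assuming known per-camera
# intrinsics K and extrinsics (R, t) mapping the ego frame to the camera frame.
# All names and values here are illustrative, not from a real codebase.
import numpy as np

def project_bev_to_image(bev_points_ego, K, R, t):
    """Project 3D BEV grid points (N, 3) in the ego frame to pixel coords (N, 2)."""
    cam = bev_points_ego @ R.T + t          # ego frame -> camera frame
    valid = cam[:, 2] > 0.1                 # keep points in front of the camera
    uvw = cam @ K.T                         # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide
    return uv, valid

# Example: a flat 50 m x 50 m BEV grid at 0.5 m resolution, z = 0 (ground plane)
xs, ys = np.meshgrid(np.arange(-25, 25, 0.5), np.arange(-25, 25, 0.5))
bev_points = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])  # toy intrinsics
R, t = np.eye(3), np.array([0., 0., 1.5])                           # toy extrinsics

uv, valid = project_bev_to_image(bev_points, K, R, t)
# In a full pipeline, features would be bilinearly sampled at `uv` from each
# camera's feature map and accumulated into the BEV grid across all cameras.
```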
ㆍ Path Planning Responsibilities (see the sketch after this list):
ㆍ Develop deep learning-based path planning algorithms, including imitation learning, reinforcement learning (RL), and graph-based trajectory optimization.
ㆍ Implement prediction models for behavioral forecasting (e.g., pedestrian and vehicle trajectory prediction).
ㆍ Optimize end-to-end deep learning models for motion planning, integrating perception and control.
ㆍ Enhance spatiotemporal modeling for traffic scene understanding using transformers and graph neural networks (GNNs).
ㆍ Implement multi-agent reinforcement learning (MARL) for interactive decision-making in dynamic environments.
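To give a hedged sense of the interaction-aware trajectory prediction described above, the sketch below encodes each agent's history with a GRU and applies attention over a fully connected agent graph, a simple form of GNN-style message passing, before decoding future waypoints. The architecture, dimensions, and the `InteractionPredictor` name are illustrative assumptions only.

```python
# Minimal sketch of an interaction model for trajectory prediction, assuming
# agent history tensors of shape (num_agents, T_hist, 2). Not a production design.
import torch
import torch.nn as nn

class InteractionPredictor(nn.Module):
    def __init__(self, hidden=64, t_future=12):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.Linear(hidden, t_future * 2)
        self.t_future = t_future

    def forward(self, history):                 # (N, T_hist, 2), one scene
        _, h = self.encoder(history)            # h: (1, N, hidden)
        h = h.squeeze(0).unsqueeze(0)           # agents as one "sequence": (1, N, hidden)
        h, _ = self.attn(h, h, h)               # agent-to-agent message passing
        out = self.decoder(h.squeeze(0))        # (N, t_future * 2)
        return out.view(-1, self.t_future, 2)   # future (x, y) waypoints per agent

model = InteractionPredictor()
pred = model(torch.randn(5, 20, 2))             # 5 agents, 20 past steps
print(pred.shape)                               # torch.Size([5, 12, 2])
```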
ㆍ Collaboration:
ㆍ Collaborate with cross-functional teams, including machine learning engineers, software integration engineers, hardware platform engineers, and quality assurance, to integrate perception and path planning algorithms into ADAS systems.
ㆍ Participate in code reviews and knowledge-sharing sessions to foster a collaborative work environment.
ㆍ Mentoring and Technical Guidance:
ㆍ Mentor and provide technical guidance to junior and entry-level engineers.
[Basic Qualifications]
ㆍ Ph.D. in Computer Vision, Deep Learning, Robotics, Autonomous Systems, Electrical Engineering, or a related field, or
ㆍ Master's degree with 5–7 years of industry experience in deep learning-based perception or path planning for ADAS/Autonomous Driving.
ㆍ Deep Learning & Computer Vision
ㆍ Strong knowledge of Vision Transformers (ViTs), Swin Transformers, and hybrid CNN-Transformer models.
ㆍ Expertise in multi-camera perception, depth estimation, semantic segmentation, and 3D object detection.
ㆍ Experience with multi-camera-to-BEV transformation, Neural Radiance Fields (NeRF), and self-supervised learning.
ㆍ Proficiency in Vision-Language Models (VLMs) (e.g., CLIP, BLIP) for scene understanding.
ㆍ Strong background in sensor fusion techniques (camera, LiDAR, radar fusion).
ㆍ Deep Learning-based Path Planning & Prediction (see the sketch after this list)
ㆍ Experience with Graph Neural Networks (GNNs) and Transformers for trajectory prediction.
ㆍ Expertise in Reinforcement Learning (RL), Imitation Learning, and Model Predictive Control (MPC) for path planning.
ㆍ Understanding of Inverse Reinforcement Learning (IRL), Safe RL, and Deep Q-Learning for decision-making.
ㆍ Proficiency in Bayesian decision-making models and probabilistic motion planning techniques.
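As a minimal, hedged illustration of the Q-learning foundation behind the Deep Q-Learning item above, the tabular update below implements Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)). State/action sizes and hyperparameters are arbitrary toy choices.

```python
# Minimal sketch of the Q-learning update rule underlying Deep Q-Learning,
# shown in tabular form on a toy state space; all values are illustrative.
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99             # learning rate, discount factor

def q_update(s, a, r, s_next):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])   # 0.1 after one update from a reward of 1.0
```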
ㆍ Programming & Deployment (see the sketch after this list)
ㆍ Strong coding skills in Python and C++.
ㆍ Experience with deep learning frameworks (TensorFlow, PyTorch, JAX).
ㆍ Optimization skills using CUDA, TensorRT, ONNX, and multi-threaded computing.
ㆍ Experience with real-time inference deployment on embedded platforms (NVIDIA Jetson Orin/Xavier, Qualcomm Snapdragon, etc.).
ㆍ Knowledge of robotics middleware (ROS, ROS2) and simulation environments (CARLA, AirSim, LGSVL, etc.).
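As one hedged example of the deployment path listed above, the sketch below exports a stand-in PyTorch backbone to ONNX, which TensorRT-style tooling can then build into an embedded inference engine. The model choice and file names are placeholders, not a prescribed workflow.

```python
# Minimal sketch of the export path: PyTorch -> ONNX, which tools such as
# TensorRT can consume for embedded inference. Names are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in backbone
dummy = torch.randn(1, 3, 224, 224)                       # example input shape

torch.onnx.export(
    model, dummy, "perception_backbone.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},                  # allow variable batch size
    opset_version=17,
)
# Downstream, e.g. `trtexec --onnx=perception_backbone.onnx` builds a TensorRT engine.
```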
ㆍ Mathematical & Algorithm Foundations (see the sketch after this list)
ㆍ Strong background in optimization algorithms (gradient-based and gradient-free methods).
ㆍ Solid understanding of probabilistic models (Bayesian Networks, Kalman Filters, Hidden Markov Models).
ㆍ Proficiency in differentiable rendering and neural scene representation (NeRF, Gaussian Splatting).
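To illustrate the probabilistic-model foundations above, here is a minimal linear Kalman filter for 1D constant-velocity tracking, with explicit predict and update steps. All matrices and noise values are toy assumptions.

```python
# Minimal sketch of a linear Kalman filter for 1D constant-velocity tracking;
# transition/observation matrices and noise covariances are illustrative.
import numpy as np

dt = 0.1
F = np.array([[1., dt], [0., 1.]])   # state transition: [position, velocity]
H = np.array([[1., 0.]])             # we observe position only
Q = 1e-3 * np.eye(2)                 # process noise covariance
R = np.array([[1e-1]])               # measurement noise covariance

x = np.zeros((2, 1))                 # initial state estimate
P = np.eye(2)                        # initial state covariance

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [np.array([[0.11]]), np.array([[0.22]]), np.array([[0.35]])]:
    x, P = kf_step(x, P, z)
print(x.ravel())   # estimated [position, velocity]
```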
[Preferred Qualifications]
ㆍ Experience with multi-modal sensor fusion (camera + LiDAR + radar) in autonomous driving.
ㆍ Contributions to top-tier AI/computer vision conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML).
ㆍ Knowledge of Vision Transformers, Vision-Language Models, self-supervised learning, continual learning, and meta-learning techniques.
ㆍ Publications in reinforcement learning, motion planning, or self-driving perception.
[Application]
ㆍ Required: Resume; thesis (for applicants with a Master's degree or above).
ㆍ Optional: Cover letter, project details, other theses.