Vehicular Sensing

Vehicles today have several hundred sensors that monitor vehicular subsystems. Such monitoring can track the health of these subsystems and can also be used to control vehicular behavior. In this project, funded by the National Science Foundation and General Motors, we explored ways in which these sensors could be used to improve vehicular safety, comfort, and performance. Because many of these sensors can be accessed only through a diagnostic interface that uses proprietary protocols, we partnered with a major car company to obtain access to them.

One outcome of our research was a suite of algorithms that use onboard sensors to improve vehicle positioning. Because GPS can be inaccurate in downtown areas, we developed techniques that use onboard sensors to obtain positioning accurate enough to locate the vehicle within a given lane. These techniques combine GPS readings with map information, but also use vehicular sensors to determine whether a vehicle traverses a speed bump, turns right or left, or comes to a stop at a signal, and then use these determinations to learn positional corrections. These corrections leverage the wisdom of the crowd: if multiple cars stop at the same light, the average of their positional readings is likely to be more accurate than any single reading.
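
The sketch below illustrates the crowd-averaging idea with a minimal, hypothetical example; it is not the project's implementation, and the coordinates and the use of a simple mean are illustrative assumptions.

    # Illustrative sketch: crowd-sourced correction of a landmark position.
    # Each vehicle that stops at the same traffic signal reports the GPS fix it
    # recorded at the stop line; averaging those noisy fixes yields a landmark
    # estimate whose error shrinks as more vehicles contribute, and the offset
    # between a new fix and that estimate becomes a positional correction.
    from statistics import mean

    # Hypothetical GPS fixes (latitude, longitude) reported by vehicles that
    # stopped at the same signal, matched to the landmark via map information
    # and detected stop events.
    reported_fixes = [
        (34.02041, -118.28552),
        (34.02038, -118.28549),
        (34.02043, -118.28554),
        (34.02040, -118.28550),
    ]

    # Crowd-averaged landmark estimate: noise in individual fixes tends to cancel.
    landmark_lat = mean(lat for lat, _ in reported_fixes)
    landmark_lon = mean(lon for _, lon in reported_fixes)

    # A new vehicle stopping at the same signal computes a correction as the
    # offset between the crowd estimate and its own fix, and can apply that
    # correction to subsequent fixes.
    own_fix = (34.02047, -118.28559)
    correction = (landmark_lat - own_fix[0], landmark_lon - own_fix[1])
    print(f"landmark estimate: ({landmark_lat:.5f}, {landmark_lon:.5f})")
    print(f"correction to apply: {correction}")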

A second outcome built on this idea: a collection of crowd-sourcing algorithms that use onboard sensors to determine driving behavior and various aspects of the environment. Because vehicles have hundreds of sensors, we hypothesized that if drivers shared their sensor readings (in much the same way that drivers today share information on apps like Waze), we could determine environmental features such as stop signs, road curvature, road grade, and pothole positions. We developed sensor processing and aggregation algorithms and demonstrated that they could reliably detect these features.
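
A minimal sketch of the aggregation idea follows, using hypothetical pothole reports; the grid size, report threshold, and coordinates are illustrative assumptions rather than values from the project.

    # Illustrative sketch: aggregating crowd-sourced pothole detections.
    # Individual detections (e.g., from suspension or accelerometer sensors) are
    # noisy and occasionally spurious, so reports from many vehicles are grouped
    # by location and a pothole is declared only where enough independent
    # vehicles agree.
    from collections import defaultdict

    CELL_DEG = 0.0001      # grid cell of roughly 10 m (illustrative)
    MIN_REPORTS = 3        # independent vehicles required to confirm a pothole

    # Hypothetical (vehicle_id, latitude, longitude) pothole reports.
    reports = [
        ("car1", 34.02111, -118.28930),
        ("car2", 34.02112, -118.28931),
        ("car3", 34.02110, -118.28932),
        ("car4", 34.03550, -118.27001),   # isolated report, likely spurious
    ]

    # Group reports into grid cells, counting each vehicle at most once per cell.
    cells = defaultdict(set)
    for vehicle, lat, lon in reports:
        cell = (round(lat / CELL_DEG), round(lon / CELL_DEG))
        cells[cell].add(vehicle)

    # Keep only cells with enough independent agreement.
    potholes = [cell for cell, vehicles in cells.items() if len(vehicles) >= MIN_REPORTS]
    print("confirmed pothole cells:", potholes)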

Our most recent outcome explored a novel capability that we call Augmented Vehicular Reality (AVR). Today, and for the foreseeable future, vehicles will have “3D sensors” such as LiDAR and stereo cameras, on which autonomous vehicles and driver-assistance systems rely. These sensors can perceive depth, but their view is limited by obstacles. AVR enables a vehicle to obtain, over wireless communication, sensor readings from nearby vehicles, so that it can effectively “see through” obstacles. This capability can greatly increase the safety of autonomous driving by enabling a vehicle to plan its path more effectively. The code for AVR is publicly available (https://github.com/USC-NSL/AugmentedVehicularReality).
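
The core step of combining a nearby vehicle's view with one's own is a coordinate-frame alignment. The sketch below shows that step in isolation, assuming the relative pose (rotation and translation) between the two vehicles is already known; estimating that pose and isolating dynamic objects is where AVR's techniques come in. The function name and data are hypothetical; the actual implementation is in the repository above.

    # Illustrative sketch: merging a point cloud received from a nearby vehicle
    # into the receiver's coordinate frame using a known rigid transform.
    import numpy as np

    def merge_remote_cloud(local_pts, remote_pts, R, t):
        """Transform remote points (Nx3, sender's frame) into the local frame
        using rotation R (3x3) and translation t (3,), then concatenate them
        with the receiver's own points."""
        remote_in_local = remote_pts @ R.T + t
        return np.vstack([local_pts, remote_in_local])

    # Hypothetical data: points the receiver sees, and a point seen by a car
    # ahead that is occluded from the receiver's line of sight.
    local_pts = np.array([[5.0, 0.0, 0.0], [6.0, 1.0, 0.0]])
    remote_pts = np.array([[2.0, 0.5, 0.0]])              # in the sender's frame
    yaw = np.deg2rad(5.0)                                 # small relative heading
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    t = np.array([12.0, 0.3, 0.0])                        # sender is ~12 m ahead

    print(merge_remote_cloud(local_pts, remote_pts, R, t))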

Concretely, these outcomes have resulted in two patents and several publications, and we have transitioned code to industry.

Publications

  1. TVT
    Towards Robust Vehicular Context Sensing
    Qiu, Hang, Chen, Jinzhu, Jain, Shubham, Jiang, Yurong, McCartney, Matthew, Kar, Gorkem, Bai, Fan, Grimm, Donald K., Gruteser, Marco, and Govindan, Ramesh
    IEEE Transactions on Vehicular Technology 2018
  2. MobiSys
    AVR: Augmented Vehicular Reality
    Qiu, Hang, Ahmad, Fawad, Bai, Fan, Gruteser, Marco, and Govindan, Ramesh
    In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys) 2018

    Autonomous vehicle prototypes today come with line-of-sight depth perception sensors like 3D cameras. These 3D sensors are used for improving vehicular safety in autonomous driving, but have fundamentally limited visibility due to occlusions, sensing range, and extreme weather and lighting conditions. To improve visibility and performance, we explore a capability called Augmented Vehicular Reality (AVR). AVR broadens the vehicle’s visual horizon by enabling it to wirelessly share visual information with other nearby vehicles. We show that AVR is feasible using off-the-shelf wireless technologies, and it can qualitatively change the decisions made by autonomous vehicle path planning algorithms. Our AVR prototype achieves positioning accuracies that are within a few percent of car lengths and lane widths, and it is optimized to process frames at 30fps.

  3. HotMobile
    Augmented Vehicular Reality: Enabling Extended Vision for Future Vehicles
    Qiu, Hang, Ahmad, Fawad, Govindan, Ramesh, Gruteser, Marco, Bai, Fan, and Kar, Gorkem
    In the 18th Workshop on Mobile Computing Systems and Applications (HotMobile 2017) 2017

    Like today’s autonomous vehicle prototypes, vehicles in the future will have rich sensors to map and identify objects in the environment. For example, many autonomous vehicle prototypes today come with line-of-sight depth perception sensors like 3D cameras. These cameras are used for improving vehicular safety in autonomous driving, but have fundamentally limited visibility due to occlusions, sensing range, and extreme weather and lighting conditions. To improve visibility and performance, not just for autonomous vehicles but for other Advanced Driving Assistance Systems (ADAS), we explore a capability called Augmented Vehicular Reality (AVR). AVR broadens the vehicle’s visual horizon by enabling it to share visual information with other nearby vehicles, but requires careful techniques to align coordinate frames of reference, and to detect dynamic objects. Preliminary evaluations hint at the feasibility of AVR and also highlight research challenges in achieving AVR’s potential to improve autonomous vehicles and ADAS.

  4. SEC
    Real-time Traffic Estimation at Vehicular Edge Nodes
    Kar, Gorkem, Jain, Shubham, Gruteser, Marco, Bai, Fan, and Govindan, Ramesh
    In Proceedings of the Second ACM/IEEE Symposium on Edge Computing 2017
  5. SEC
    PredriveID: Pre-trip Driver Identification from In-vehicle Data
    Kar, Gorkem, Jain, Shubham, Gruteser, Marco, Chen, Jinzhu, Bai, Fan, and Govindan, Ramesh
    In Proceedings of the Second ACM/IEEE Symposium on Edge Computing 2017
  6. SenSys
    CARLOC: Precisely Tracking Automobile Position
    Jiang, Yurong, Qiu, Hang, McCartney, Matthew, Sukhatme, Gaurav, Gruteser, Marco, Bai, Fan, Grimm, Donald, and Govindan, Ramesh
    In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (SenSys) 2015
  7. SenSys
    CarLog: A Platform for Flexible and Efficient Automotive Sensing
    Jiang, Yurong, Qiu, Hang, McCartney, Matthew, Halfond, William G J, Bai, Fan, Grimm, Donald, and Govindan, Ramesh
    In Proceedings of the 12th ACM Conference on Embedded Networked Sensor Systems (SenSys) 2014

Patents

  1. US Patent App. 15/903,616: Crowd-Sensed Point Cloud Map, Fawad Ahmad, Hang Qiu, Fan Bai, and Ramesh Govindan.
  2. US Patent App. 17/041,026: Method and Apparatus of Networked Scene Rendering and Augmentation in Vehicular Environments in Autonomous Driving Systems, Hang Qiu, Ramesh Govindan, Marco Gruteser, and Fan Bai.