Driver Behavior Recognition and Prediction Models for Improved Highway Safety of Elderly Drivers | |
---|---|
University | Florida A&M University (FAMU); Florida State University (FSU) |
Principal Investigators | Shonda L. (Walker) Bernadin, Ph.D. |
PI Contact Information | Department of Electrical and Computer Engineering; Phone: 850.410.6393; Email: bernadin@eng.famu.fsu.edu |
Funding Source(s) and Amounts Provided (by each agency or organization) | USDOT: $141,000; Florida A&M University: $45,800; Florida State University: $36,200 |
Total Project Cost | $223,000 |
Agency ID or Contract Number | DTRT13-G-UTC42-004139-005719; DTRT13-G-UTC42-033177-040795 |
Start and End Dates | 08/17/2017 – 07/31/2018 |
Brief Description of Research Project | In this project, driver behavior recognition and prediction models are developed for varying demographics, including older drivers (65+), young drivers (15-24), and average drivers (25-64), using optimal feature extraction techniques and data fusion methods based on a driver's cognitive and physiological inputs. The driver models are used to design and implement advanced driver assistance technologies tailored to the needs of each driver for increased roadway safety and an enhanced driving experience. Experimental studies were conducted with various types of drivers on the driving simulator to build a research platform for further studies on predicting a driver's intent to perform a driving maneuver. The main research objectives during this phase were database generation, optimal feature extraction, and driving behavior classification. For database generation, we developed an experimental driving scenario in which subjects performed a lane change and/or lane merge while engaging in a secondary task of conversing with co-passengers; the highway scenario was designed to model a 10-mile stretch of Interstate 10 within the local region. Continuing efforts in this area focus on incorporating real-time traffic data into the simulation to gain deeper insight into driver behavior in the local region. To obtain optimal feature vectors that accurately capture driving behavior, we first developed an analytical framework for extracting physiological features, including eye glance and head-turn motion, from the data. A glance behavior algorithm was developed that used within-frame analysis to characterize how glance frequency was affected by secondary driving tasks, and, using glance frequency as a parameter, a lane-change intent algorithm was developed to determine a driver's intent to change lanes. In addition, tangential research investigated the use of low-level acoustic features, such as pitch, intensity, and formant frequencies, to estimate the emotional state of the driver. A driver behavior recognition system for differentiated drivers (i.e., young, average, older) is currently being implemented that uses glance frequency, emotional cues, and other features, as well as vehicle dynamics, to classify and respond to the behavior and intent of specific drivers. Preliminary results on vehicle modeling suggest that the feature data can also inform the design of advanced driver assistance systems, such as lane departure warning systems under adverse weather conditions. Additionally, we are exploring deep reinforcement learning algorithms for executing left turns at unsignalized intersections; for fully autonomous vehicles, observable vehicle parameters for left-turn assists are being modeled for integration into the differentiated driver system. The expected outcomes of this work include (1) a driver database of video-captured driver profiling data; (2) optimal feature vectors for evaluating driving behaviors; and (3) a driver recognition and prediction model that integrates these features to accurately characterize elderly human driving behavior using vehicle dynamics and behavioral and physiological cues. In addition, this research project supports practical research experiences for undergraduate and graduate students, particularly those from underrepresented backgrounds. (Illustrative code sketches of the glance-frequency, lane-change-intent, acoustic-feature, feature-fusion, and left-turn components appear after this table.) |
Describe Implementation of Research Outcomes (or why not implemented); Place Any Photos Here | See Final Report |
Impacts/Benefits of Implementation (actual, not anticipated) | See Final Report |
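
The glance-behavior step lends itself to a compact illustration. The Python sketch below computes an off-road glance frequency from per-frame gaze-region labels; the label names, the 30 fps frame rate, and the `glance_frequency` helper are assumptions chosen for illustration and do not reproduce the project's within-frame analysis.

```python
"""Minimal sketch: glance frequency from per-frame gaze-region labels.

Assumptions (not from the report): every video frame has already been
annotated with a gaze-region string such as 'road', 'left_mirror', or
'rearview', and the frame rate is known. A glance is counted each time the
gaze leaves the 'road' region and settles somewhere else.
"""

from itertools import groupby


def glance_frequency(frame_labels, fps=30.0):
    """Return off-road glances per minute of video."""
    if not frame_labels:
        return 0.0
    # Collapse consecutive identical labels into runs, e.g.
    # road, road, mirror, mirror, road -> road, mirror, road
    runs = [label for label, _ in groupby(frame_labels)]
    glances = sum(1 for label in runs if label != "road")
    duration_min = len(frame_labels) / fps / 60.0
    return glances / duration_min


if __name__ == "__main__":
    # Six seconds of toy data at 30 fps containing two mirror glances.
    labels = (["road"] * 60 + ["left_mirror"] * 15 + ["road"] * 60
              + ["rearview"] * 15 + ["road"] * 30)
    print(f"{glance_frequency(labels):.1f} glances/min")  # -> 20.0 glances/min
```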
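
Building on glance frequency, one minimal way to flag lane-change intent is a sliding-window threshold on recent mirror glances. The detector below is a hypothetical sketch under that assumption; the 5 s window and the three-glance threshold are placeholders, not values from the study.

```python
"""Hypothetical sliding-window lane-change-intent flag driven by mirror glances.

Intent is flagged when at least `min_glances` mirror glances fall inside the
trailing `window_s` seconds; both values are illustrative placeholders.
"""

from collections import deque


class LaneChangeIntentDetector:
    def __init__(self, window_s=5.0, min_glances=3):
        self.window_s = window_s        # look-back window in seconds
        self.min_glances = min_glances  # glances needed to flag intent
        self._glances = deque()         # timestamps of recent mirror glances

    def update(self, t, mirror_glance):
        """Feed one time step; return True when lane-change intent is flagged."""
        if mirror_glance:
            self._glances.append(t)
        # Discard glances that have aged out of the look-back window.
        while self._glances and t - self._glances[0] > self.window_s:
            self._glances.popleft()
        return len(self._glances) >= self.min_glances


if __name__ == "__main__":
    detector = LaneChangeIntentDetector()
    # Mirror glances at 1.0 s, 2.5 s, and 4.0 s -> intent flagged at 4.0 s.
    for t in (0.5, 1.0, 2.0, 2.5, 3.0, 4.0):
        flagged = detector.update(t, mirror_glance=t in (1.0, 2.5, 4.0))
        print(f"t={t:.1f}s  intent={flagged}")
```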
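
For the acoustic cues (pitch, intensity, and formants), a rough extraction pipeline can be assembled from librosa and NumPy. The sketch below uses the pYIN pitch tracker, RMS energy as an intensity proxy, and LPC-root formant estimation; these choices, the file path, and the parameter values are assumptions standing in for whatever front end the project actually used.

```python
"""Rough extraction of pitch, intensity, and formant estimates from speech.

Illustrative only: pYIN for pitch, RMS energy (dB) as an intensity proxy,
and LPC-root angles as a crude whole-utterance formant estimate.
"""

import numpy as np
import librosa


def acoustic_features(wav_path, sr=16000, lpc_order=12):
    y, sr = librosa.load(wav_path, sr=sr, mono=True)

    # Pitch (fundamental frequency) track via pYIN; NaN on unvoiced frames.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    mean_pitch_hz = float(np.nanmean(f0))

    # Intensity proxy: mean RMS energy in decibels.
    rms = librosa.feature.rms(y=y)[0]
    mean_intensity_db = float(np.mean(librosa.amplitude_to_db(rms)))

    # Crude formant estimate: angles of LPC roots in the upper half-plane.
    a = librosa.lpc(y, order=lpc_order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs_hz = np.angle(roots) * sr / (2 * np.pi)
    formants_hz = sorted(f for f in freqs_hz if f > 90.0)  # drop near-DC roots

    return {"mean_pitch_hz": mean_pitch_hz,
            "mean_intensity_db": mean_intensity_db,
            "f1_f2_hz": formants_hz[:2]}


# Hypothetical usage: features = acoustic_features("cabin_audio.wav")
```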
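
The differentiated-driver classifier can be pictured as a standard supervised-learning problem over a fused feature vector of glance, acoustic, and vehicle-dynamics measurements. The scikit-learn sketch below runs on randomly generated placeholder data; the feature ordering, the three driver-group labels, and the random-forest model are illustrative assumptions, and no project data is reproduced.

```python
"""Feature-fusion sketch: one fused vector per drive, one classifier over
driver groups. Runs on randomly generated placeholder data only.
"""

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder fused features per drive (column order is an assumption):
# [glance_freq, mean_pitch_hz, mean_intensity_db,
#  steering_reversal_rate, mean_speed, lane_offset_std]
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)   # 0 = young, 1 = average, 2 = older

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy on placeholder data: {scores.mean():.2f}")
```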
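
Finally, the unsignalized left-turn maneuver is naturally framed as a sequential decision problem that a deep reinforcement learning agent could be trained on. The skeleton environment below sketches one possible gap-acceptance formulation; the observed variables (time gap to the oncoming vehicle and its speed), the wait/turn action set, and the reward shaping are assumptions, not the project's formulation.

```python
"""Toy gap-acceptance environment for an unprotected left turn.

The agent either WAITs for the next gap in oncoming traffic or commits to
TURN. All dynamics, thresholds, and rewards are illustrative assumptions.
"""

import numpy as np


class LeftTurnEnv:
    WAIT, TURN = 0, 1
    TURN_CLEAR_TIME_S = 3.0   # assumed time needed to clear the intersection
    MAX_WAIT_S = 30.0         # assumed waiting budget per episode

    def reset(self, seed=None):
        self.rng = np.random.default_rng(seed)
        self.elapsed_s = 0.0
        return self._next_gap()

    def _next_gap(self):
        # Observation: [time gap to next oncoming vehicle (s), its speed (m/s)]
        self.gap_s = self.rng.uniform(1.0, 8.0)
        self.oncoming_speed = self.rng.uniform(10.0, 20.0)
        return np.array([self.gap_s, self.oncoming_speed], dtype=np.float32)

    def step(self, action):
        if action == self.TURN:
            safe = self.gap_s > self.TURN_CLEAR_TIME_S
            reward = 1.0 if safe else -10.0   # heavy penalty for a conflict
            obs = np.array([self.gap_s, self.oncoming_speed], dtype=np.float32)
            return obs, reward, True, {}
        # WAIT: small delay penalty; the current gap passes and a new one appears.
        self.elapsed_s += 1.0
        done = self.elapsed_s >= self.MAX_WAIT_S
        return self._next_gap(), -0.1, done, {}


if __name__ == "__main__":
    env = LeftTurnEnv()
    obs, done, total = env.reset(seed=0), False, 0.0
    while not done:
        # Baseline (non-learned) policy: turn as soon as the gap exceeds 4 s.
        action = env.TURN if obs[0] > 4.0 else env.WAIT
        obs, reward, done, _ = env.step(action)
        total += reward
    print(f"episode return: {total:.1f}")
```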