Tufts Human Factors TRoVE Lab Student Research Project
AUGMENTED REALITY IN AUTONOMOUS VEHICLES

RESEARCH PURPOSE
Key Targets for Success
​
- Develop a high-fidelity, immersive driving environment that closely mimics the experience of autonomous driving and can serve as a toolbox for other Tufts students building virtual reality driving studies
- Use this environment to examine whether the presence and frequency of world-fixed augmented reality cues can help heighten driver vigilance

PROJECT ORIGIN
In Fall 2019, one of my peers in the Tufts School of Engineering, Priya Misner, had a vision to conduct research on autonomous vehicles using virtual reality. Priya had just finished a summer internship at Volpe, a research center within the U.S. Department of Transportation, and felt compelled to push research on autonomous driving in a new and largely unexplored direction.
​
Priya, Julie Yeung, and I were the original three members to start this research group at Tufts. Our project is within the Human Factors VR Research Lab and supervised by Professors James Intriligator and Hal Miller-Jacobs.
After a semester’s worth of literature review and brainstorming, we decided to pursue a study focused on safety during manual takeover from autonomous driving. In conditions where an autonomous vehicle’s sensors are documented to underperform (e.g., bad weather, low contrast, reflective objects), the vehicle’s confidence in the accuracy of its sensor readings drops. It is often these situations that require the operator to take over manual control.
With this project, we aim to explore how effective augmented reality (AR) cues are at increasing vigilance in operators of autonomous vehicles.

FORMAL STUDY DESCRIPTION
A safety hurdle in designing autonomous vehicles is the transition problem: the situation in which the operator of an autonomous vehicle is unable to manage the sudden shift from cognitive underload to cognitive overload (Marh et al., 2011). The transition problem arises from the relationships among overreliance, attention, and risk adaptation. As overreliance on an autonomous system increases, the operator either experiences more risk adaptation (taking risks they normally would not) or a decrease in attention as a direct result of the cognitive underload.
​
To combat cognitive underload, engaging in a stimulating secondary task has been shown to help operators maintain a sufficient level of attention and vigilance on the task at hand (Gershon et al., 2011). Prior research has also demonstrated that augmented reality (AR) can enhance active safety features in motor vehicles as control shifts from driver to vehicle authority. One study of elderly drivers’ hazard perception showed that AR cues improved detection of low-visibility hazards (Schall et al., 2013). These studies and others support the idea that the key issues underlying the transition problem are the allocation of cognitive resources and diminished situation awareness (Young et al.).
​
As such, we hypothesize that a system of AR cues fixed on the specific objects the autonomous vehicle’s sensors are uncertain about will help keep operators “in the loop,” so that they remain adaptive and agile in situations where takeover of the autonomous vehicle is imminent.
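In implementation terms, the hypothesis implies a simple confidence-threshold trigger: whenever the simulated sensor confidence for a detected object falls below a cutoff, that object becomes a candidate for a world-fixed AR cue. The following is a minimal Python sketch of that selection logic; the threshold value, data layout, and function name are all hypothetical (the actual logic in our simulator lives in Unity scripts):

```python
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; would be tuned per study condition

def select_cue_targets(detections, threshold=CONFIDENCE_THRESHOLD):
    """Return the detected objects the vehicle is unsure about,
    i.e. candidates for world-fixed AR cues."""
    return [d for d in detections if d["confidence"] < threshold]

# Illustrative detections mimicking known weak spots for AV sensors
detections = [
    {"id": "pedestrian-1", "confidence": 0.95},
    {"id": "reflective-sign", "confidence": 0.42},  # glare / low contrast
    {"id": "pedestrian-2", "confidence": 0.55},
]
targets = select_cue_targets(detections)
```

Only the low-confidence detections survive the filter, so cue density naturally tracks how unsure the vehicle is about its surroundings.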
FALL 2019
Research group members: Priya Misner, Julie Yeung, Korri Lampedusa
​
To get the project off the ground this semester, we focused on:
- Developing a solid question to explore with guidance from our advisors
- Conducting a thorough literature review to support a detailed experimental design
- Meeting weekly for dialogue that helped develop our hypothesis
- Learning how to use Unity 3D for XR projects through online documentation
- Getting familiar with developing for Oculus using the Oculus SDK
- Researching prefabs and assets that were helpful building blocks to leverage
- Recruiting additional research assistants to build the team
​
SPRING 2020
Research group members: Priya Misner, Julie Yeung, Korri Lampedusa, Stephanie Bentley, Caleb Jeanniton
​
We began working on all of the components needed to eventually run our study. This involved:
​
- Developing mockups of the driving simulator with these core components integrated:
  - Car interior modifications (adding an infotainment screen and interactable elements)
  - Weather
  - Walking pedestrians
  - World-fixed AR cues
  - Autonomous driving following a waypoint path
  - Oculus hand controller integration
- Documenting how to build these components in this GitHub repository
- Writing drafts of the study protocol to be approved by the Tufts Social, Behavioral, and Educational Research Board (SBER IRB)
- Communicating edits back and forth between the team and the IRB to get our study formally approved
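One component above, autonomous driving along a waypoint path, can be sketched independently of Unity. The following is a minimal Python illustration of the core follow-the-next-waypoint logic in a simplified 2-D world; all names are hypothetical, and the real implementation is a Unity C# script attached to the car:

```python
import math

def follow_waypoints(position, waypoints, speed, dt, tolerance=0.5):
    """Advance the car one timestep toward the current waypoint.

    Returns the new position and the (possibly shortened) waypoint list.
    """
    if not waypoints:
        return position, waypoints
    tx, ty = waypoints[0]
    x, y = position
    dx, dy = tx - x, ty - y
    dist = math.hypot(dx, dy)
    if dist <= tolerance:
        # Close enough: consume this waypoint and target the next one.
        return position, waypoints[1:]
    step = min(speed * dt, dist)  # never overshoot the target
    return (x + step * dx / dist, y + step * dy / dist), waypoints

# Drive until every waypoint has been consumed.
pos, wps = (0.0, 0.0), [(10.0, 0.0), (10.0, 10.0)]
for _ in range(1000):
    pos, wps = follow_waypoints(pos, wps, speed=5.0, dt=0.1)
    if not wps:
        break
```

In Unity the same idea maps onto moving the car's transform toward the next waypoint each frame and advancing an index once the car is within a tolerance radius.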
​
FALL 2020
Research group members: Julie Yeung, Korri Lampedusa, Stephanie Bentley, Caleb Jeanniton, Barrett Roman, Ioana Lupascu
​
From Fall 2020 onward, I became the lead student study coordinator for this group.
​
Though our simulator's building blocks were mostly complete and the study protocol was approved by the IRB in May 2020, we could not run the experiment as planned this semester because the school put a hold on research involving human participants due to the COVID-19 pandemic. Instead, we focused on:
- Documenting how to remotely build the simulator from our home machines, with emphasis on differences in the build process that result from updates to Unity and Visual Studio since last fall
- Creating a script for running user trials with consistency and professionalism
- Drafting user instructions for interacting with components of the simulated car interior
- Rethinking elements of our protocol that needed adjustment due to COVID-19 concerns
- Researching industry trends in heads-up displays and other AR-focused safety features so that the AR cues in our simulation stay aligned with current practice
- Formalizing the experimental design of the secondary task that users will engage with in the simulation to mimic real-world distractions
SPRING 2021
Research group members: Korri Lampedusa, Stephanie Bentley, Caleb Jeanniton, Barrett Roman
​
I spent the majority of early spring making final changes to the simulator through Unity development and scripting, fine-tuning car driving behavior and adding varied pedestrians. I finished assembling the scene with all of our building blocks from the GitHub repo.
Toward the end of the spring semester, I was able to access the lab and run pilot studies with three peers to gather feedback on the comfort of the simulator and sample data for the AR cue acknowledgement experiment. Both were tremendously helpful in moving the project forward and fine-tuning the final elements.
​
​
FUTURE DIRECTIONS
Though I will graduate before the larger trials can be conducted as the pandemic situation improves, the team hopes to have clearance in Fall 2021 to recruit participants and collect formal data in the lab.
DISCLAIMERS AND ACKNOWLEDGEMENTS
This project is not associated with any particular autonomous vehicle producer. Though we cite information derived from Tesla’s published research on fleet learning, sensor performance, and other aspects of their autonomous vehicles, our project has no connection to, or endorsement from, the brand.
​
Our simulator is made possible by leveraging Microsoft AirSim. You can read more about AirSim here.
​
Though we make modifications (both cosmetic and logic/scripting) to the car interior’s model and components, we purchased this Unity store asset for the body of the vehicle in our simulator.
​
As stated above, Priya Misner was the original student lead for this project. I became the student study coordinator in late Spring 2020, and have been leading the work throughout the 2020-2021 academic year.
​