Texas Tech Center for Multidisciplinary Research in Transportation (TechMRT)

Advisor: Jia Li, PhD

Both of my projects in this lab were conducted under the guidance and support of Dr. Jia Li.


Project 1: Behavior Design for Heterogeneous Traffic Flow of Autonomous and Human-driven Vehicles

For this project, I developed open-source simulation and analysis software inspired by the Nagel-Schreckenberg cellular automata rules for traffic flow. The software can simulate traffic for many different vehicle types and traffic situations, records all the standard traffic-flow parameters, and allows the user to track customized parameters as well. It also comes with a data analysis and visualization package that helps the user make sense of the data. Using this software, we wrote a paper that analyzes and explains the emergent, self-organized clustering and lane formation of AVs. The results of our research are directly relevant to the self-driving car and transportation regulation communities, as we have shown that AVs can move in clusters without any centralized control.
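For readers unfamiliar with the model, a minimal sketch of the standard Nagel-Schreckenberg update rules on a single-lane circular road is shown below. This is the textbook version of the rules, not the project's actual code, and v_max and p_slow are illustrative defaults.

```python
import random

def nasch_step(positions, velocities, road_length, v_max=5, p_slow=0.3):
    """One parallel update of the standard Nagel-Schreckenberg rules.

    positions and velocities are parallel lists, one entry per car,
    sorted by position on a circular road of road_length cells.
    """
    n = len(positions)
    new_velocities = []
    for i in range(n):
        v = min(velocities[i] + 1, v_max)   # Rule 1: accelerate toward v_max
        # Rule 2: brake to the gap behind the next car (periodic boundary).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(v, gap)
        if v > 0 and random.random() < p_slow:
            v -= 1                          # Rule 3: random slowdown
        new_velocities.append(v)
    # Rule 4: all cars move simultaneously using the new velocities.
    new_positions = [(x + v) % road_length
                     for x, v in zip(positions, new_velocities)]
    return new_positions, new_velocities
```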


Software Link:
GitHub

Publications:

  • Will be added as soon as the issue is published (contact me to see the preprint)

Poster and Oral Presentations:

  • Oral Presentation at the Texas Tech University Annual Virtual Research Conference 2020


  • Extended abstract accepted for presentation at ISTDM 2020 (postponed to 2021 because of COVID-19)


Project 2: A Nagel-Schreckenberg Cellular Automata Model for Training an Agent to Make Efficient Lane-Change Decisions Using the Q-Learning Algorithm

This project was not completed because my supervisor changed schools and because of COVID-19.


For this project, I added reinforcement learning functionality to the previous software release. The program has three main objects: car, road, and representation. The representation object handles interactive mode, while the road and car classes make up the environment for the simulation. The road has three lanes of 100 cells each and is modeled as a circular road (periodic boundary conditions). The simulation starts with 99 HVs with well-defined properties distributed randomly on the road, along with 1 agent placed randomly on the same road. Each update of the system involves every car object making a lane-change decision, followed by a longitudinal update. The agent uses the Q-learning algorithm to learn the optimal lane-change policy that minimizes the time taken to complete 10 cycles of the road.
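A minimal, self-contained sketch of this initialization and the two-phase update is shown below; the variable names are mine, not the software's, and the actual classes live in simulation/road.py and simulation/car.py.

```python
import random

N_LANES, N_CELLS = 3, 100   # circular road: three lanes of 100 cells each
N_HV = 99                   # human-driven vehicles, plus 1 learning agent

# Sample 100 distinct (lane, cell) slots: 99 for the HVs, 1 for the agent.
all_slots = [(lane, cell) for lane in range(N_LANES) for cell in range(N_CELLS)]
slots = random.sample(all_slots, N_HV + 1)
hv_slots, agent_slot = slots[:N_HV], slots[-1]

# Each system update then runs in two phases over every vehicle:
#   1. a lane-change decision (HV heuristics, or the agent's learned policy),
#   2. a longitudinal update, with positions wrapped modulo N_CELLS.
```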

Software Link:
GitHub
Current Version: This version runs with the following simulation conditions.
The default parameters used for Q-learning are as follows:
QState: a matrix of 300 rows and 3 columns. The rows correspond to the state space (the 300 grid cells in the road data structure), and the columns correspond to the action space (change lane up, change lane down, do not change lane).
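A minimal sketch of a Q-table of this shape with a standard epsilon-greedy, one-step tabular Q-learning update is shown below; the learning rate, discount factor, and exploration rate are illustrative assumptions, not the project's defaults.

```python
import numpy as np

N_STATES, N_ACTIONS = 300, 3            # 300 road cells x 3 lane-change actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # illustrative values only

rng = np.random.default_rng()
q_state = np.zeros((N_STATES, N_ACTIONS))   # the QState matrix described above

def choose_action(state):
    """Epsilon-greedy choice among (change lane up, change lane down, stay)."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))   # explore
    return int(np.argmax(q_state[state]))     # exploit the current estimate

def q_update(state, action, reward, next_state):
    """Standard one-step tabular Q-learning update."""
    target = reward + GAMMA * np.max(q_state[next_state])
    q_state[state, action] += ALPHA * (target - q_state[state, action])
```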

Reward Function Logic: In the current version of the program, the reward function works in two stages: an aggregate reward is computed first, and the final reward for the episode is then derived from this aggregate. A newer version of the reward function logic is also included; see the repository for the exact code of both versions.
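Purely as a hypothetical illustration consistent with the stated objective of finishing 10 cycles quickly, a reward of this shape might look like the following; every name and value below is an assumption, not the project's actual logic (which lives in "simulation/car.py").

```python
# Purely hypothetical reward sketch; the project's actual reward logic lives
# in "simulation/car.py" and may differ entirely.
STEP_PENALTY = -1.0   # aggregated each time step, so slower runs score lower
LAP_BONUS = 100.0     # credited each time the agent completes a cycle

def episode_reward(steps_taken, laps_completed):
    """Final episode reward: aggregate step penalties plus lap bonuses,
    favoring policies that finish the 10 cycles quickly."""
    return STEP_PENALTY * steps_taken + LAP_BONUS * laps_completed
```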
Customization: To change the simulation conditions, edit the file "config\case.py".
To change the Q-learning parameters, edit the corresponding variables in "driver.py". To change the environment conditions, edit "simulation/road.py"; to change agent and other-car behaviors or the reward functions, edit "simulation/car.py". The important methods live in road.py and car.py.
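As an illustration of the kind of edit this involves, here is a hypothetical sketch of "config\case.py"; none of the variable names below are confirmed against the repository, and the values simply mirror the setup described above.

```python
# Hypothetical excerpt of "config\case.py"; actual identifiers in the
# repository may differ. Values mirror the simulation setup described above.
N_LANES = 3      # lanes on the circular road
N_CELLS = 100    # cells per lane (periodic boundary conditions)
N_HV = 99        # human-driven vehicles
N_AGENT = 1      # Q-learning agent

# Q-learning hyperparameters would be edited in "driver.py" the same way,
# e.g. (hypothetical names) ALPHA, GAMMA, EPSILON, N_EPISODES.
```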