Volume 11, Issue 2 (Journal of Control, V.11, N.2 Summer 2017)                   JoC 2017, 11(2): 9-21



1- Faculty of Geodesy and Geomatics Eng. K.N.Toosi University of Technology
Abstract:
The daily increase in the number of vehicles in big cities poses a serious challenge to efficient traffic control. A suitable approach to optimal traffic control should be adaptive in order to contend successfully with the dynamic and complex nature of urban traffic. Within this context, the major focus of this research is developing a method for adaptive and distributed traffic signal control based on reinforcement learning (RL). RL is a promising approach for generating, evaluating, and improving traffic signal decision-making solutions. An RL-embedded traffic signal controller can learn through experience by dynamically interacting with the traffic environment in order to reach its goals. Traffic signal control often requires dealing with a continuous state defined by continuous variables. Conventional RL methods do not scale well to problems with continuous or very large state spaces because they store a distinct value estimate for each state in a lookup table. The contribution of the present research is developing adaptive traffic signal controllers based on continuous-state RL to handle the large state space challenge that arises in traffic control. The performance of the proposed method is compared with Q-learning and actor-critic, and the results reveal that the proposed method outperforms both.
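The scaling problem the abstract describes can be illustrated with a minimal sketch: a tabular Q-learning update, which keeps one entry per discrete state-action pair, next to a linear function-approximation update, which generalizes across a continuous state by sharing one weight vector per action. This is an illustrative sketch only, not the paper's actual controller; the function names, feature representation, and step sizes here are assumptions for demonstration.

```python
from collections import defaultdict

def tabular_q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Tabular Q-learning: one lookup-table entry per (state, action).

    This is the 'lookup table' approach the abstract says does not
    scale to continuous or very large state spaces.
    """
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def q_value(w, s, a):
    """Linear approximation: Q(s, a) = w[a] . s over raw state features."""
    return sum(wi * si for wi, si in zip(w[a], s))

def linear_q_update(w, s, a, r, s_next, actions, alpha=0.01, gamma=0.9):
    """Continuous-state variant: nearby states share estimates through
    the weights, so no per-state table entry is needed.
    (Hypothetical sketch; the paper's approximator may differ.)
    """
    best_next = max(q_value(w, s_next, a2) for a2 in actions)
    td_error = r + gamma * best_next - q_value(w, s, a)
    w[a] = [wi + alpha * td_error * si for wi, si in zip(w[a], s)]
```

The tabular version needs a distinct `Q[(s, a)]` cell for every state it ever visits, which is impossible when the state (e.g. queue lengths, elapsed green time) is continuous; the linear version stores only one weight vector per action regardless of how many states occur.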
Full-Text [PDF 1298 kb]
Type of Article: Research paper | Subject: Special
Received: 2016/05/16 | Accepted: 2017/05/14 | Published: 2017/07/06

Rights and permissions
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.