Abstract
In recent years, there has been a significant increase in research interest in applying Reinforcement Learning (RL) to Adaptive Traffic Signal Control (ATSC). Urban traffic networks present a suitable environment for Multi-Agent (MA) ATSC systems, as each intersection can be managed by a dedicated RL agent. However, the non-stationarity of the ATSC environment in Multi-Agent Reinforcement Learning (MARL) poses a challenge, since the actions of one agent can directly affect the performance of its neighboring agents. To address this issue, this paper presents and compares several MARL ATSC approaches that utilize Growing Neural Gas (GNG) for state identification, implemented in a microscopic traffic simulator with a synthetic traffic model of nine intersections. The paper explores the effectiveness of various MARL ATSC approaches, including fully independent agents and agents augmented with reward- and state-sharing mechanisms. The results demonstrate that fully independent agents can enhance global traffic performance by optimizing local decisions. Furthermore, when agents share rewards and states, they achieve additional improvements in both local and global traffic conditions by fostering cooperative behavior and mitigating the impact of non-stationarity. In addition, this paper identifies centralized state identification with GNG, coupled with decentralized agent execution, as the most effective ATSC strategy. This configuration leverages the strengths of centralized data processing for accurate state representation while maintaining the flexibility and scalability of decentralized agent operation. Overall, the findings highlight the potential of GNG-based state identification in enhancing the performance of MARL ATSC systems.
Keywords
Growing Neural Gas; Reinforcement Learning; Adaptive Traffic Signal Control; Multi-Agent Systems