Abstract: The CSMA/CA protocol adopts the binary exponential backoff algorithm, which varies the contention window (CW) to resolve concurrent access and contention when multiple stations coexist in a wireless network. However, a dilemma arises: a small CW increases the probability of collision, while a large CW increases the delay of accessing the wireless channel. Designing an adaptive CW is therefore essential for improving the transmission performance of a wireless network. This study argues that deep reinforcement learning (DRL) can better determine an appropriate CW value. Because DRL agents continuously monitor environmental states, they can anticipate upcoming variations and adjust the CW accordingly. In this paper, we propose a linear-increase-and-linear-decrease backoff scheme with deep Q-networks (LILD-DQN) to address the CW optimization problem. We examine the efficacy of the LILD-DQN scheme under a high-density scenario of 100 stationary stations. Simulation experiments compare the performance of the CSMA/CA, LILD, CCOD-DQN, and LILD-DQN schemes. The results show that the proposed LILD-DQN scheme outperforms CSMA/CA, increasing throughput by 42% and decreasing the collision rate by 60%. Compared with LILD and CCOD-DQN, the LILD-DQN scheme improves throughput by 12% and 10% and reduces the collision rate by 37% and 29%, respectively. Hence, the LILD-DQN scheme with deep reinforcement learning is superior to the prior channel contention schemes CSMA/CA, LILD, and CCOD-DQN in terms of throughput and collision rate.
Index Terms: Binary Exponential Backoff Algorithm, Contention Window, Collision, CSMA/CA, Deep Reinforcement Learning, Machine Learning
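To make the contrast with binary exponential backoff concrete, the following minimal sketch shows a linear-increase-and-linear-decrease (LILD) CW update of the kind the abstract describes. The step size and CW bounds are illustrative assumptions, not values taken from the paper; in the proposed LILD-DQN scheme, a DQN agent would adapt the CW based on observed channel states rather than fixed constants.

```python
# Illustrative LILD contention-window update (sketch).
# CW_MIN, CW_MAX, and STEP are assumed values for illustration only;
# the paper's scheme lets a deep Q-network adapt the CW dynamically.

CW_MIN, CW_MAX = 16, 1024   # assumed 802.11-style CW bounds
STEP = 16                   # assumed linear step size

def lild_update(cw: int, collided: bool) -> int:
    """Increase CW linearly after a collision, decrease it linearly after success."""
    if collided:
        return min(cw + STEP, CW_MAX)   # linear increase, capped at CW_MAX
    return max(cw - STEP, CW_MIN)       # linear decrease, floored at CW_MIN
```

Unlike binary exponential backoff, which doubles the CW on every collision, the linear update keeps access delay low while still backing off under contention.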