TY - JOUR
T1 - Ensemble consensus representation deep reinforcement learning for hybrid FSO/RF communication systems
AU - Henna, Shagufta
AU - Minhas, Abid Ali
AU - Khan, Muhammad Saeed
AU - Iqbal, Muhammad Shahid
N1 - Publisher Copyright:
© 2022 The Author(s)
PY - 2023/3/1
Y1 - 2023/3/1
N2 - A hybrid FSO/RF system requires efficient switching between the FSO and RF links to improve system capacity by realizing the complementary benefits of both links. Dynamic network conditions, such as fog, dust, and sandstorms, compound the link-switching problem and its control complexity. To address this problem, we initiate the study of deep reinforcement learning (DRL) for link switching in hybrid FSO/RF systems. Specifically, we focus on an actor–critic approach, Actor/Critic-FSO/RF, and a deep Q-network (DQN) approach, DQN-FSO/RF, for FSO/RF link switching under atmospheric turbulence. To formulate the problem, we define the state, action, and reward function of a hybrid FSO/RF system. DQN-FSO/RF frequently updates the deployed policy that interacts with the hybrid FSO/RF environment, resulting in a high switching cost. To overcome this, we lift the problem to an ensemble consensus representation learning-based DRL approach called DQNEnsemble-FSO/RF, which uses consensus features learned by an ensemble of asynchronous threads to update the deployed policy. Experimental results corroborate that DQNEnsemble-FSO/RF's consensus-learned features outperform Actor/Critic-FSO/RF, DQN-FSO/RF, and MyOpic while keeping the switching cost low. The results also provide insights into the prediction of the received signal strength indicator (RSSI) for FSO/RF link switching.
AB - A hybrid FSO/RF system requires efficient switching between the FSO and RF links to improve system capacity by realizing the complementary benefits of both links. Dynamic network conditions, such as fog, dust, and sandstorms, compound the link-switching problem and its control complexity. To address this problem, we initiate the study of deep reinforcement learning (DRL) for link switching in hybrid FSO/RF systems. Specifically, we focus on an actor–critic approach, Actor/Critic-FSO/RF, and a deep Q-network (DQN) approach, DQN-FSO/RF, for FSO/RF link switching under atmospheric turbulence. To formulate the problem, we define the state, action, and reward function of a hybrid FSO/RF system. DQN-FSO/RF frequently updates the deployed policy that interacts with the hybrid FSO/RF environment, resulting in a high switching cost. To overcome this, we lift the problem to an ensemble consensus representation learning-based DRL approach called DQNEnsemble-FSO/RF, which uses consensus features learned by an ensemble of asynchronous threads to update the deployed policy. Experimental results corroborate that DQNEnsemble-FSO/RF's consensus-learned features outperform Actor/Critic-FSO/RF, DQN-FSO/RF, and MyOpic while keeping the switching cost low. The results also provide insights into the prediction of the received signal strength indicator (RSSI) for FSO/RF link switching.
KW - Actor–critic hybrid FSO/RF
KW - DQN
KW - FSO/RF link switching
KW - Hybrid FSO/RF
KW - Reinforcement learning
KW - Representation learning for optical communication
UR - https://www.scopus.com/pages/publications/85145965766
U2 - 10.1016/j.optcom.2022.129186
DO - 10.1016/j.optcom.2022.129186
M3 - Article
AN - SCOPUS:85145965766
SN - 0030-4018
VL - 530
JO - Optics Communications
JF - Optics Communications
M1 - 129186
ER -