Deep reinforcement learning for wireless networks /
General Material Designation
[Book]
First Statement of Responsibility
F. Richard Yu, Ying He.
PUBLICATION, DISTRIBUTION, ETC.
Place of Publication, Distribution, etc.
Cham, Switzerland :
Name of Publisher, Distributor, etc.
Springer,
Date of Publication, Distribution, etc.
[2019]
PHYSICAL DESCRIPTION
Specific Material Designation and Extent of Item
1 online resource (78 pages)
SERIES
Series Title
SpringerBriefs in Electrical and Computer Engineering
CONTENTS NOTE
Text of Note
Intro; Preface; A Brief Journey Through "Deep Reinforcement Learning for Wireless Networks"; Contents; 1 Introduction to Machine Learning; 1.1 Supervised Learning; 1.1.1 k-Nearest Neighbor (k-NN); 1.1.2 Decision Tree (DT); 1.1.3 Random Forest; 1.1.4 Neural Network (NN); Random NN; Deep NN; Convolutional NN; Recurrent NN; 1.1.5 Support Vector Machine (SVM); 1.1.6 Bayes' Theory; 1.1.7 Hidden Markov Models (HMM); 1.2 Unsupervised Learning; 1.2.1 k-Means; 1.2.2 Self-Organizing Map (SOM); 1.3 Semi-supervised Learning; References; 2 Reinforcement Learning and Deep Reinforcement Learning.
Text of Note
2.1 Reinforcement Learning2.2 Deep Q-Learning; 2.3 Beyond Deep Q-Learning; 2.3.1 Double DQN; 2.3.2 Dueling DQN; References; 3 Deep Reinforcement Learning for Interference Alignment Wireless Networks; 3.1 Introduction; 3.2 System Model; 3.2.1 Interference Alignment; 3.2.2 Cache-Equipped Transmitters; 3.3 Problem Formulation; 3.3.1 Time-Varying IA-Based Channels; 3.3.2 Formulation of the Network's Optimization Problem; System State; System Action; Reward Function; 3.4 Simulation Results and Discussions; 3.4.1 TensorFlow; 3.4.2 Simulation Settings; 3.4.3 Simulation Results and Discussions.
Text of Note
3.5 Conclusions and Future WorkReferences; 4 Deep Reinforcement Learning for Mobile Social Networks; 4.1 Introduction; 4.1.1 Related Works; 4.1.2 Contributions; 4.2 System Model; 4.2.1 System Description; 4.2.2 Network Model; 4.2.3 Communication Model; 4.2.4 Cache Model; 4.2.5 Computing Model; 4.3 Social Trust Scheme with Uncertain Reasoning; 4.3.1 Trust Evaluation from Direct Observations; 4.3.2 Trust Evaluation from Indirect Observations; Belief Function; Dempster's Rule of Combining Belief Functions; 4.4 Problem Formulation; 4.4.1 System State; 4.4.2 System Action; 4.4.3 Reward Function.
Text of Note
4.5 Simulation Results and Discussions4.5.1 Simulation Settings; 4.5.2 Simulation Results; 4.6 Conclusions and Future Work; References.
SUMMARY OR ABSTRACT
Text of Note
This SpringerBrief presents a deep reinforcement learning approach to wireless systems to improve system performance. In particular, deep reinforcement learning is applied to cache-enabled opportunistic interference alignment wireless networks and to mobile social networks. Simulation results with different network parameters are presented to show the effectiveness of the proposed scheme. There has been a phenomenal burst of research activity in artificial intelligence, deep reinforcement learning, and wireless systems. Deep reinforcement learning has been successfully used to solve many practical problems; for example, Google DeepMind has applied this method in several artificial intelligence projects with big data (e.g., AlphaGo), with impressive results. Graduate students in electrical and computer engineering, as well as in computer science, will find this brief useful as a study guide. Researchers, engineers, computer scientists, programmers, and policy makers will also find this brief a useful tool.
ACQUISITION INFORMATION NOTE
Source for Acquisition/Subscription Address
Springer Nature
Stock Number
com.springer.onix.9783030105464
OTHER EDITION IN ANOTHER MEDIUM
Title
Deep Reinforcement Learning for Wireless Networks.