- Open Access
- Authors : Anju Satheesan, Joby James, Priya M
- Paper ID : IJERTCONV9IS13039
- Volume & Issue : NCREIS – 2021 (Volume 09 – Issue 13)
- Published (First Online): 02-08-2021
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Channel Quality Prediction: An Overview
Anju Satheesan
PG Scholar
Electronics and Communication Engineering, College of Engineering Kidangoor,
Kerala, India
Joby James
Assistant Professor
Electronics and Communication Engineering, College of Engineering Kidangoor,
Kerala, India
Priya M
Assistant Professor
Electronics and Communication Engineering, College of Engineering Kidangoor,
Kerala, India
Abstract: Several channel models exist that are based on received signal strength. In this paper, we design a deep channel model for accurate prediction and modeling of wireless channel variations, which is essential for several network applications. Such modeling and prediction help schedule and deliver better video forecasting in 4G LTE networks, and prediction improves bitrate adaptation and hence performance in Wi-Fi networks. Based on past signal strength, future signal strength can be predicted using the deep channel model. We consider two variants of the model, based on LSTM and GRU cells; the model is highly adaptable to different channel parameters and can predict future channel conditions across different mobility patterns, sampling rates, and networks. The performance of the deep channel model is compared against two baselines: i) ARIMA and ii) linear regression, over multiple networks. The paper also considers 4G LTE, Wi-Fi, WiMAX, and Zigbee, and finally compares all models in other network scenarios.
Keywords: Deep learning; machine learning; GRU; LSTM.
INTRODUCTION
Wireless network performance has been constantly increasing as systems evolve from one generation to the next. A growing trend in wireless networks is not just to react to changes but also to anticipate them, which requires accurate knowledge of signal strength. Received signal strength plays a major role in networking research and wireless communication. Research in this area began with the Gilbert-Elliott two-state Markov channel model [1]. Applications of such prediction include better video forecasting over 4G networks [2], [3], improved performance in Wi-Fi networks through bitrate adaptation [4], [5], and energy-efficient bulk data transfer in sensor networks.
Researchers have mainly focused on the design of Markov models that capture the impact of channel characteristics such as multipath fading, shadowing, and path loss on received signal strength. Most Markovian models are tied to particular network settings, since they depend on parameters such as location, mobility, and sampling rate. Therefore, they cannot be used to predict signal strength across different wireless networks. The channel prediction problem is thus of great current importance.
Machine learning makes it possible to process large amounts of data and to build models that predict wireless channel variations. Deep learning models are well suited to prediction problems and time-series forecasting with various input signals. To investigate the wireless channel prediction problem, we design a deep channel model, an encoder-decoder-based sequence-to-sequence deep learning model that predicts signal strength. The two components of the model, the encoder and the decoder, are multilayer neural networks and are the focus of this paper. The encoder takes the past signal strength and computes a representation of the channel state, and the decoder predicts the future signal strength.
Using received signal strength data collected from various networks, such as 4G LTE, Wi-Fi, and Zigbee, we compare the performance of the deep channel model with linear regression and the autoregressive integrated moving average (ARIMA) model. The deep channel model shows larger performance gains on traces with higher signal strength variation. Parameters such as the sequence length, the number of hidden layers, and the type of cell used also affect performance, and the best configuration for the deep learning model depends on the dataset.
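For concreteness, a minimal sketch of the two baselines on a one-dimensional signal strength trace is given below; the trace file name, the window length, and the ARIMA order are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical 1-D received signal strength trace (e.g. RSSI in dBm).
rssi = np.loadtxt("rssi_trace.txt")
train, test = rssi[:-50], rssi[-50:]

# ARIMA baseline: fit on the training portion, forecast the held-out samples.
arima = ARIMA(train, order=(2, 1, 2)).fit()       # order chosen only for illustration
arima_pred = arima.forecast(steps=len(test))

# Linear regression baseline: regress each sample on the `lag` samples before it.
lag = 20                                          # illustrative window length
X_train = np.stack([train[i:i + lag] for i in range(len(train) - lag)])
y_train = train[lag:]
lr = LinearRegression().fit(X_train, y_train)

# Predict each test sample from the `lag` samples that precede it.
X_test = np.stack([rssi[len(train) - lag + i:len(train) + i] for i in range(len(test))])
lr_pred = lr.predict(X_test)
```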
Performance depends on the sequence length: a sequence length of 20 captures most of the useful information in the data, and adding measurements beyond that does not improve the results. A simple unguided learning strategy also performs better than a more complex guided training method, because the unguided scheme explores the solution space more widely and attains better prediction performance.
Finally, we provide results on the applicability of the model to other network structures and outline directions for future research, considering how the trained model performs on previously unseen data.
RELATED WORK
Studies on wireless channel prediction began with the two-state Markovian model. We classify work in this field into two categories: i) Markovian models of the received signal strength, and ii) machine learning models.
A. Markovian Models
Gilbert and Elliott's model [1] is the earliest: a simple two-state model for evaluating channel capacity and error-rate performance over burst-noise wireline telephone circuits. In the first state transmission is error-free, while in the second state each digit is transmitted correctly only with some probability; the model was developed in the context of time-division encoding. A detailed analysis of finite-state Markovian channel models is given in [2]. The Markov chain models surveyed in [3] relate observed and predicted values for both transient and steady-state behaviour. A Markovian model can also predict the slow channel process in Long Term Evolution (LTE) networks, where the prediction is obtained through statistical prediction of an LTI system [4]. An alternative approach [5] models the wireless channel with a Discrete-Time Markov Chain (DTMC) to capture channel variation in vehicular networks.
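As a concrete illustration of the two-state idea, the sketch below simulates a Gilbert-Elliott style burst-error channel; the transition and error probabilities are placeholder values, not those of [1].

```python
import numpy as np

def gilbert_elliott(n_bits, p_gb=0.05, p_bg=0.2, err_good=0.0, err_bad=0.5, seed=0):
    """Simulate bit errors from a two-state Gilbert-Elliott channel.

    p_gb / p_bg are the good->bad and bad->good transition probabilities;
    err_good / err_bad are the per-bit error probabilities in each state.
    """
    rng = np.random.default_rng(seed)
    state = 0                                   # 0 = good state, 1 = bad state
    errors = np.zeros(n_bits, dtype=bool)
    for i in range(n_bits):
        err_p = err_bad if state else err_good
        errors[i] = rng.random() < err_p        # bit error in the current state
        # State transition before the next bit.
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
    return errors

errs = gilbert_elliott(10_000)
print("simulated bit error rate:", errs.mean())
```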
B. Machine Learning Models
Earlier work on channel prediction with machine learning focused on extracting useful features and identifying critical links. In [6], sensor network communication is improved using supervised learning techniques. TALENT [7] shows that online machine learning techniques can improve wireless communication. Deep learning models further increase prediction accuracy compared with these older schemes [8].
The growing use of deep learning has addressed many problems in wireless communication. DeepFi [9] reduces localization error compared with older methods and attains excellent performance under different parameters and propagation environments. Deep reinforcement learning [10] improves the efficiency and functional characteristics of vehicular networks.
Massive MIMO systems are highly complex; this complexity is addressed through deep-learning-based direction-of-arrival estimation [11]. Non-orthogonal multiple access (NOMA) is likewise difficult to manage over a fluctuating channel [12]. Other applications include caching and interference alignment [13], device-free localization based on shadowing effects [14], and multiple access in heterogeneous wireless networks [15]. [16] discusses the limitations of relying on analytical models, while [17] provides deep-learning-based traffic control models. In contrast, the model in this paper is designed specifically for received signal strength prediction. Prior works largely resort to simulations; here the model is applied to predicting the signal strength of several different networks.
DEEP CHANNEL
A. Deep Learning Model
Classical Markovian models are simple in nature: they make only coarse predictions, focus on a few network characteristics, and base their decisions on little past data.
Fig. 1: Deep channel architecture.
Fig. 2: RNN model architecture.
Simple model-based approaches cannot exploit the computational power and data now available. With the dramatic growth in computation and data over the last few years, machine learning provides better prediction.
Deep sequence-to-sequence models have mainly been used for generating video captions [18] and for natural language translation [19]. More recent work [20] applies them to forecasting and prediction, where future data are predicted from previous data after training on a large amount of information. The deep learning model contains several hidden layers through which the encoded signals pass.
B. Deep Channel
The model has two main parts; Figure 1 summarizes DeepChannel. The encoder accepts the past signal strength values X and produces a context vector C; the decoder receives C as input and produces Ŷ, the predicted channel variation. An advantage of this design is that the input and output sequences need not have the same length [21]. An RNN architecture is used: the encoder and decoder consist of sequential layers arranged as a network of nodes, with the nodes of successive layers interconnected.
Fig. 3: Model of the LSTM cell architecture.
Figures 2(a) and 2(b) show the structure of a single-layer RNN; the computation is carried out on the unrolled network at each time step. In these figures, x = [x_1, x_2, ..., x_T] is the input vector, y = [y_1, y_2, ..., y_T] is the corresponding output vector, h_t is the hidden state, and W_xh, W_hh, and W_hy are the weight matrices. The hidden layer serves as a memory: h_t is computed from the previous hidden state h_{t-1} and the current input x_t. The hidden state of the RNN is

h_t = f(h_{t-1}, x_t)        (1)

where f is a non-linear activation function and h_{t-1} is the hidden state at time t-1.
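A minimal numpy sketch of the recurrence in Eq. (1), taking f to be tanh; the dimensions, weight-matrix initialisation, and toy RSSI inputs are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 1, 16                       # illustrative sizes
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)

def rnn_step(h_prev, x_t):
    """One step of the recurrence h_t = f(h_{t-1}, x_t) with f = tanh."""
    return np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)

h = np.zeros(hidden_dim)                            # initial hidden state
for x_t in np.array([[-70.0], [-72.0], [-69.0]]):   # toy past RSSI samples (dBm)
    h = rnn_step(h, x_t)                            # h now summarises the past inputs
```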
A standard recurrent neural network (RNN) uses basic activation functions such as tanh and sigmoid. Such RNNs suffer from error propagation problems (vanishing gradients), which limits their capacity [22]. To avoid this, LSTM and GRU cells are introduced [21], which contain a forget mechanism.
LSTM and GRU cells are similar but differ in their interconnections and in the number of gates. The LSTM cell has three gates, namely the input gate, the output gate, and the forget gate; the error propagation problem is handled by the forget gate. Both LSTM- and GRU-based models have been shown to be effective in various prediction tasks, and it is difficult to determine theoretically which is superior [21]. Figure 3 shows a network containing an LSTM cell; each cell has the same inputs and outputs.
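To make the encoder-decoder structure of Figure 1 concrete, the following PyTorch sketch shows one possible implementation with a switch between GRU and LSTM cells; the layer sizes, the prediction horizon, and the choice to feed the last observed value into the decoder are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class Seq2SeqChannel(nn.Module):
    """Encoder-decoder RNN: past signal strength X -> predicted future values Y_hat."""

    def __init__(self, hidden_size=64, num_layers=2, horizon=10, cell="gru"):
        super().__init__()
        rnn = nn.GRU if cell == "gru" else nn.LSTM
        self.encoder = rnn(1, hidden_size, num_layers, batch_first=True)
        self.decoder = rnn(1, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)
        self.horizon = horizon

    def forward(self, x):                 # x: (batch, seq_len, 1) past RSSI values
        _, state = self.encoder(x)        # context C = final hidden (and cell) state
        step = x[:, -1:, :]               # seed the decoder with the last observed value
        preds = []
        for _ in range(self.horizon):     # generate the future sequence one step at a time
            dec_out, state = self.decoder(step, state)
            step = self.out(dec_out)      # (batch, 1, 1) predicted next value
            preds.append(step)
        return torch.cat(preds, dim=1)    # Y_hat: (batch, horizon, 1)

model = Seq2SeqChannel(cell="lstm")       # or cell="gru" for the GRU variant
y_hat = model(torch.randn(8, 20, 1))      # e.g. 20 past samples -> 10 predicted samples
```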
RESULTS AND DISCUSSION
Here the parameters of the deep channel model are chosen based on prediction performance and training time. Figures 4 and 5 compare the predicted and actual values for the LSTM- and GRU-based models, while the ARIMA and linear regression baselines simply follow the previous data. Figures 6(a), 6(b), and 6(c) consider the effect of the sampling rate on the Wi-Fi predictions. Figures 7(a), 7(b), and 7(c) show the WiMAX predictions in indoor and outdoor settings. For Zigbee, predictions are conducted at distances of 10 m and 15 m (Figures 8(a) and 8(b)).
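For reference, the two metrics reported in Table I can be computed as in the sketch below, assuming MAE is the mean absolute error and RE is the absolute error normalised by the magnitude of the true value; the paper's exact RE definition may differ, and the toy values are placeholders.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between actual and predicted signal strength."""
    return np.mean(np.abs(y_true - y_pred))

def relative_error(y_true, y_pred, eps=1e-9):
    """Assumed definition: mean of |error| / |true value|."""
    return np.mean(np.abs(y_true - y_pred) / (np.abs(y_true) + eps))

y_true = np.array([-71.0, -69.5, -73.2])   # toy actual RSSI values (dBm)
y_pred = np.array([-70.2, -70.1, -72.5])   # toy predicted values
print("MAE:", mae(y_true, y_pred), "RE:", relative_error(y_true, y_pred))
```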
Fig. 4: Real and predicted values for 4G LTE (LSTM).
Fig. 5: Comparison of real and predicted values for 4G LTE (GRU).
Fig. 6: Predictions in Wi-Fi.
Fig. 7: Predictions in WiMAX.
TABLE I: RE and MAE results.
Fig. 8: Zigbee results.
Fig. 9: Industrial network comparison.
CONCLUSION
We studied the problem of predicting received signal strength in wireless networks. We developed DeepChannel, which can predict future signal strength from past data, and investigated its predictions across several different networks.
REFERENCES
[1] Edgar N. Gilbert. Capacity of a burst-noise channel. Bell System Technical Journal, 39(5):1253-1265, 1960.
[2] Parastoo Sadeghi, Rodney A. Kennedy, Predrag B. Rapajic, and Ramtin Shams. Finite-state Markov modeling of fading channels - a survey of principles and applications. IEEE Signal Processing Magazine, 25(5), 2008.
[3] Nicola Bui, Matteo Cesana, S. Amir Hosseini, Qi Liao, Ilaria Malanchini, and Joerg Widmer. A survey of anticipatory mobile networking: Context-based classification, prediction methodologies, and optimization techniques. IEEE Communications Surveys & Tutorials, 19(3):1790-1821, 2017.
[4] Mustapha Amara, Afef Feki, and Stefan Valentin. Channel quality prediction in LTE: How far can we look ahead under realistic assumptions? In Personal, Indoor, and Mobile Radio Communications (PIMRC), 2017 IEEE 28th Annual International Symposium on, pages 1-6. IEEE, 2017.
[5] Peppino Fazio, Mauro Tropea, Cesare Sottile, and Andrea Lupia. Vehicular networking and channel modeling: a new Markovian approach. In Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, pages 702-707. IEEE, 2015.
[6] Yong Wang, Margaret Martonosi, and Li-Shiuan Peh. Predicting link quality using supervised learning in wireless sensor networks. ACM SIGMOBILE Mobile Computing and Communications Review, 11(3):71-83, 2007.
[7] Tao Liu and Alberto E. Cerpa. Temporal adaptive link quality prediction with online learning. ACM Transactions on Sensor Networks (TOSN), 10(3):46, 2014.
[8] Jing Wang, Jian Tang, Zhiyuan Xu, Yanzhi Wang, Guoliang Xue, Xing Zhang, and Dejun Yang. Spatiotemporal modeling and prediction in cellular networks: A big data enabled deep learning approach. In INFOCOM 2017 - IEEE Conference on Computer Communications, pages 1-9. IEEE, 2017.
[9] Xuyu Wang, Lingjun Gao, Shiwen Mao, and Santosh Pandey. CSI-based fingerprinting for indoor localization: A deep learning approach. IEEE Transactions on Vehicular Technology, 66(1):763-776, 2017.
[10] Le Thanh Tan and Rose Qingyang Hu. Mobility-aware edge caching and computing in vehicle networks: A deep reinforcement learning. IEEE Transactions on Vehicular Technology, 67(11):10190-10203, 2018.
[11] Hongji Huang, Jie Yang, Hao Huang, Yiwei Song, and Guan Gui. Deep learning for super-resolution channel estimation and DOA estimation based massive MIMO system. IEEE Transactions on Vehicular Technology, 67(9):8549-8560, 2018.
[12] Guan Gui, Hongji Huang, Yiwei Song, and Hikmet Sari. Deep learning for an effective nonorthogonal multiple access scheme. IEEE Transactions on Vehicular Technology, 67(9):8440-8450, 2018.
[13] Ying He, Chengchao Liang, F. Richard Yu, Nan Zhao, and Hongxi Yin. Optimization of cache-enabled opportunistic interference alignment wireless networks: A big data deep reinforcement learning approach. In Communications (ICC), 2017 IEEE International Conference on, pages 1-6. IEEE, 2017.
[14] Jie Wang, Xiao Zhang, Qinhua Gao, Hao Yue, and Hongyu Wang. Device-free wireless localization and activity recognition: A deep learning approach. IEEE Transactions on Vehicular Technology, 66(7):6258-6267, 2017.
[15] Yiding Yu, Taotao Wang, and Soung Chang Liew. Deep-reinforcement learning multiple access for heterogeneous wireless networks. arXiv preprint arXiv:1712.00162, 2017.
[16] Zubair Md Fadlullah, Fengxiao Tang, Bomin Mao, Nei Kato, Osamu Akashi, Takeru Inoue, and Kimihiro Mizutani. State-of-the-art deep learning: Evolving machine intelligence toward tomorrow's intelligent network traffic control systems. IEEE Communications Surveys & Tutorials, 19(4):2432-2455, 2017.
[17] Nei Kato, Zubair Md Fadlullah, Bomin Mao, Fengxiao Tang, Osamu Akashi, Takeru Inoue, and Kimihiro Mizutani. The deep learning vision for heterogeneous network traffic control: Proposal, challenges, and future perspective. IEEE Wireless Communications, 24(3):146-153, 2017.
[18] Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. Sequence to sequence - video to text. In Proceedings of the IEEE International Conference on Computer Vision, pages 4534-4542, 2015.
[19] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[20] Martin Längkvist, Lars Karlsson, and Amy Loutfi. A review of unsupervised feature learning and deep learning for time-series modeling. Pattern Recognition Letters, 42:11-24, 2014.
[21] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning, volume 1. MIT Press, Cambridge, 2016.
[22] Felix A. Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. 1999.