Neural Networks in Wireless Communications


Neural Networks in Wireless Communications

Keerthi Ram, MS 2006, IIIT Hyderabad

Manuscript received October 20, 2006. This work is part of the course requirements of EC 4321 Wireless Communications, Monsoon 2006.

Abstract: Neural processing presents a different way to store and manipulate knowledge. It uses a connectionist approach, in which the connections embody the learning capability and the discovery of representations. This work presents a study of application areas for neural networks in wireless communication. Although a neural network can act as a black box and model a system purely by learning, domain knowledge is still required to apply it successfully in wireless communications. Considered in this work are neural-network-based adaptive equalization, field strength prediction in indoor networks, and microstrip antenna design using neural networks.

Index Terms: neural network applications, wireless communications, antennas, channel equalization

I. INTRODUCTION

An artificial neural network (ANN) is a network of artificial neurons. Artificial neurons model simple characteristics of neurons in the brain. Each biological neuron receives signals from other neurons through special connections called synapses; some inputs excite the neuron, others inhibit it. When the cumulative effect of the inputs exceeds a threshold, the neuron fires a signal to other neurons. Similarly, an artificial neuron receives a set of inputs X = [x1, x2, ..., xn], each arriving through an in-bound connection. Connections carry weights, and each input is multiplied by the weight of the connection along which it arrives.

At the neuron, the weighted inputs are summed. If the weights are arranged in a vector W = [w1, w2, ..., wn], the computation of the neuron is precisely the dot product p = X.W. The neuron applies an activation function f to the dot product to obtain its output y = f(p). The activation function is ideally shaped like the sgn function, but in practice the sigmoid or tanh function is used. This provides non-linearity at a granular level, within each neuron.

Fig. 1. Structure of a neuron

The neural network is formed by such neurons arranged in layers, as shown in fig. 2. Neurons of one layer fan out their output to the input of every neuron in the next layer, and all connections between neurons are weighted. The neural network can thus be viewed as having one input layer, where the input vector X is applied, one or more hidden layers, and an output layer formed by m neurons; the output vector is thus of size m.

Fig. 2. 3-layer neural network

The learning ability of the ANN resides in the weighted connections. The network is trained using several training samples, and the error at the output, taken as the L2 norm of the difference between the expected and observed outputs, is minimized by modifying the weights of each layer, proceeding backwards from the output layer to the first hidden layer. A simple gradient descent approach is used, with the squared error (MSE) as the minimization criterion. This weight correction is carried out by the neural network itself, which is therefore self-organizing. An epoch of training consists of one cycle of all training samples applied sequentially to the ANN. Several epochs are applied until the squared error falls below the desired MSE (typically 10^-4). Once trained, the ANN has learned the mapping between input and output, and can generalize to model the system whose behavior the inputs and outputs characterize.

The representational capability of such networks is captured by Kolmogorov's mapping-neural-network existence theorem, which states that any continuous function from the n-dimensional cube [0,1]^n to R^m can be implemented exactly by a three-layer neural network with n inputs, 2n+1 hidden neurons and m output neurons; closely related universal-approximation results were proved by Hornik et al. [6]. Neural networks can consequently be considered universal function approximators.
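The neuron computation and the gradient-descent weight correction described above can be sketched in a few lines of Python with NumPy. This is an illustrative example, not part of the paper; the single-neuron setting, tanh activation, learning rate, and synthetic data are arbitrary choices.

```python
import numpy as np

def neuron(x, w):
    """A single artificial neuron: dot product of inputs and weights,
    followed by a tanh activation, y = f(w.x)."""
    return np.tanh(w @ x)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))       # 200 training inputs, n = 3
w_true = np.array([0.8, -0.5, 0.3])     # hypothetical weights generating the targets
T = np.tanh(X @ w_true)                 # target outputs to be learned

w = np.zeros(3)
for epoch in range(200):                # one epoch = one pass over all samples
    for x, t in zip(X, T):
        y = neuron(x, w)
        err = t - y
        # Gradient of the squared error (t - y)^2 / 2 with respect to w,
        # using f'(p) = 1 - tanh(p)^2; step size 0.05.
        w += 0.05 * err * (1 - y**2) * x

mse = np.mean((np.tanh(X @ w) - T) ** 2)
print(mse)  # mean squared error after training; small once converged
```

Training proceeds exactly as the text describes: epochs are repeated until the squared error drops below a desired threshold.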

II. CHANNEL EQUALIZATION

A. Overview

Wireless data transmission channels distort data signals in both amplitude and phase, causing inter-symbol interference (ISI). This distortion causes the transmitted symbols to spread and overlap over successive time intervals, so the information content also spreads among these symbols. Other factors, such as co-channel interference (CCI) and multipath fading, cause further distortion of the received symbols. Signal processing techniques used at the receiver to overcome these interferences, restore the transmitted symbols, and recover their information are referred to as equalization methods.

B. Methods

Channel equalization algorithms are generally divided into two classes. The most common class works by sending training sequences into the channel. The training sequences are also known at the receiver, so by observing the changes at the receiver end it is possible to extract the impulse response of the channel; the inverse of the channel is then applied at the receiver and the source symbols are recovered. The equalizer described above is called the zero-forcing (ZF) equalizer. ZF equalizers ignore additive noise, and may significantly amplify noise for channels with spectral nulls. The other class, which does not use a training sequence, is called blind channel equalization. Equalizers can be implemented as linear filters, or using non-linear methods such as decision feedback. In training-based equalization, the Viterbi algorithm is used after estimating the channel.

Fig. 3. Block diagram showing noise and CCI, and the receiver structure for equalization

The optimum linear equalizer was shown to be a filter matched to the channel and transmitter, in cascade with a periodic filter that can be realized with a tapped delay line [1]. The Viterbi detector is a non-linear recursive processor. It requires that the ISI at the sampler output be limited to a finite number L of symbols, and can be thought of as estimating the state sequence of a finite-state Markov process observed in memoryless noise.

The adaptive LMS equalizer shown in fig. 4 is used in non-linear operation. The processor consists of a tapped delay line in which the tap coefficients are modifiable. The equalizer operates in a training mode, in which it adjusts to the changing channel using the preamble symbols, and in a decision-directed mode, in which the tap gains are corrected by LMS gradient descent. The input to the decision device is a weighted sum, the dot product of the tap coefficients and the delayed samples, and the decision error is used to update the tap coefficients. This weight correction is strikingly similar to the weight update in a single neuron of a neural network.

Fig. 4. Adaptive LMS equalization

The maximum likelihood (ML) sequence receiver performs

ML detection on the entire data sequence. Though conceptually straightforward, its complexity grows as O(M^L), where L is the number of interfering symbols and M the symbol alphabet size, so even for moderate values of L and M the implementation becomes impractical. A suboptimal modification of the ML sequence detector is a reduced-state Viterbi algorithm: instead of considering all possible sequences at each recursive step, all but the few most likely ones are eliminated.

The decision feedback equalizer (DFE) is such a pre-filter used to reduce the complexity of the Viterbi detector. The action of the DFE is to feed back a weighted sum of past decisions, cancelling the ISI they cause in the present signaling interval. The least-mean-square DFE is structured as shown in fig. 5. It consists of a T/2-spaced feedforward FIR filter and a T-spaced feedback filter; the feedforward and feedback tapped delay lines contain Nf and Nb tap coefficients respectively. This equalizer is denoted LMSDFE(Nf, Nb). If there is no feedback (Nb = 0), the structure reduces to a linear equalizer.

Fig. 5. LMS decision feedback equalizer

A neural network based equalizer can be built on the same principle. It consists of a three-layer MLP (multilayer perceptron) whose input samples are identical to those of the conventional equalizer: the Nf and Nb data samples in the feedforward and feedback tapped delay lines, respectively. The input layer thus has Nf + Nb nodes, and the MLP generates at the symbol rate a single output, which is the input to the decision device. This equalizer is denoted MLPDFE((Nf, Nb), N1, N2), where N1 and N2 are the numbers of neurons in hidden layers 1 and 2 respectively.

Fig. 6. Neural Network DFE

C. Performance

The effects of CCI and AWGN were considered separately by W. K. Lo et al. [3]. When the system is studied in AWGN, CCI is made negligible (SIR = 100 dB); when it is studied in CCI, AWGN is made negligible (SNR = 100 dB). The training length in symbol periods is 200 for the LMSDFE and 1000 for the MLPDFE. The data frame length, including the training period, is 2000 symbol periods. Simulation experiments were run over ensembles of up to 5000 independent trials. Convergence performance of the LMSDFE and MLPDFE for the case SNR = 15 dB is shown in fig. 6.

Fig. 7. Bit Error Rate (BER) vs. output Signal-to-Interference Ratio (SIR)

Fig. 7 shows that when the parameters of the neural network are suitably configured, its performance converges with that of the LMSDFE. The adjustable parameters of the MLP are the learning rate (also referred to as the step size) and the number of neurons in the hidden layers. In high-noise or high-interference situations the MLPDFE matches or surpasses the LMSDFE. W. K. Lo et al. conclude that the size of the neural network greatly affects the performance of the equalizer.

D. Linear time series prediction

Another way to look at the DFE is as a one-step linear predictor. Consider the system shown in fig. 8.

Fig. 8. Feedback filter of the DFE

In fig. 8 the following quantities are at play:

d_k = s_k - x_k
d̂_k = s_k - x̂_k = d_k + x_k - x̂_k
b_k = g_k * d̂_k
z_k = s_k - b_k
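These quantities can be simulated directly. The sketch below is illustrative, not from the paper: it assumes all past decisions are correct (x̂_k = x_k) and uses an arbitrary strictly causal feedback filter g, and it confirms numerically that the prediction error e_k = z_k - x_k reduces to d_k * (δ_k - g_k).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
x = rng.choice([-1.0, 1.0], N)          # transmitted symbols; decisions assumed correct
d = np.convolve(rng.standard_normal(N), [1.0, 0.6, 0.3])[:N]  # residual ISI-plus-noise term
s = x + d                                # received signal: s_k = x_k + d_k
g = np.array([0.0, 0.5, 0.25])           # feedback taps; g_0 = 0 makes it strictly causal

d_hat = s - x                            # equals d_k exactly, since x_hat = x here
b = np.convolve(g, d_hat)[:N]            # b_k = g_k * d_hat_k (convolution)
z = s - b
e = z - x                                # prediction error at the decision device

delta = np.zeros_like(g)
delta[0] = 1.0
e_pred = np.convolve(delta - g, d)[:N]   # d_k * (delta_k - g_k)
print(np.allclose(e, e_pred))            # True: the identity holds
```

With g_0 = 0 the feedback filter uses only past values of d̂, which is what makes it a one-step predictor.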

Define e_k = z_k - x_k. Then

e_k = s_k - b_k - x_k = (s_k - x_k) - b_k = d_k - b_k = d_k - (g_k * d̂_k)

Assume that the previous N estimates are correct, i.e. x_{k-n} = x̂_{k-n} for 1 <= n <= N, so that d_{k-n} = d̂_{k-n}. Thus

e_k = d_k - (g_k * d_k) = d_k * (δ_k - g_k)

The feedback filter is therefore precisely the one-step linear predictor of d_k based on exact knowledge of d_{k-n} for n = 1 to N. Neural networks are well suited to the problem of linear prediction, and the role of neural networks in the DFE can also be viewed from this perspective.

III. FIELD STRENGTH PREDICTION

A. Introduction

Indoor wave propagation depends on several parameters, such as the structure of the building, the material of the walls, the location of the transmitter, the carrier frequency, and several more. A number of empirical and deterministic (ray-optical) models have been developed for field value prediction in indoor environments. G. Wolfle and F. M. Landstorfer present an artificial neural network based model for the prediction of electric field strength. Their work reports higher accuracy, owing to the possibility of considering parameters which are difficult to include in analytic equations. The neural network model is independent of time-variation effects, and its generalization capability serves this problem well.

In the model constructed by G. Wolfle et al. [5], the field at each point in an area is predicted as a single point, independent of neighbouring points; this leads to smaller computation time. The inputs to the neural network are several parameters describing the point at which the field is to be predicted, and the output of the network represents the field strength at that point. The neural network is trained with field strength values measured inside buildings.

B. Parameters of the model

The parameters of the prediction model can be subdivided into four groups:

1) General parameters
   a) Free-space attenuation (distance, frequency)
   b) Visibility (LOS, OLOS, NLOS)
2) Influence of the walls between transmitter and receiver
   a) Transmission loss of the direct ray
   b) Waveguiding effect of the walls
3) Local arrangement of the walls at the transmitter site
   a) Local reflectors
   b) Local shielding effects
4) Influence of the location of the receiver site
   a) Local reflectors
   b) Local shielding effects
   c) Shape of the room of the receiver
   d) Size of the room of the receiver

Measurement and quantization of all these parameters are discussed in [5]. The training set consisted of about 5000 patterns, including measurements in different transmitter and receiver environments. The statistical distribution of each input parameter should be homogeneous, so measurements in many environments are necessary.

Fig. 9. Structure of the neural network for field strength prediction

Using a network structured as in fig. 9, G. Wolfle et al. were able to obtain a mean error of 0.2 dB and a standard deviation of 3.5 dB.

IV. ANTENNA DESIGN

Most wireless phenomena, such as the design and analysis of antennas, estimation of direction of arrival, and adaptive beamforming, have quite a nonlinear relationship with their corresponding input variables. Neural networks can be used in these areas, for example microstrip antenna analysis and design, and wideband mobile antenna design.

A. Low Profile Antennas

As antennas occupy an appreciable volume in a compact wireless device, and as transceivers are integrated into other devices, accurate characterization of the antenna becomes necessary for high device performance. Parameters such as the input resistance, bandwidth, and resonant frequency of various regularly shaped microstrip antennas have been modeled using neural networks [7]. Accuracy and simplicity were the key features of these networks, making ANNs candidates for use in CAD algorithms.
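A hedged sketch of this idea (illustrative, not from [7]): below, the textbook cavity-model formula for the dominant-mode resonant frequency of a square patch stands in for measured data, and a single-hidden-layer network with random tanh features is fitted by least squares rather than backpropagation, for simplicity and reproducibility.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def f_res(a, eps_r):
    """Cavity-model resonant frequency of the dominant mode of a square
    microstrip patch of side a on a substrate of relative permittivity
    eps_r; fringing effects ignored. A stand-in for measured data."""
    return C / (2 * a * np.sqrt(eps_r))

rng = np.random.default_rng(2)
a = rng.uniform(0.01, 0.05, 500)               # side length, m
eps = rng.uniform(2.0, 10.0, 500)              # relative permittivity
X = np.column_stack([a / 0.05, eps / 10.0])    # normalized network inputs
y = f_res(a, eps) / 1e10                       # normalized target frequency

# Hidden layer: 80 random tanh neurons; only the linear output weights
# are learned, in closed form by least squares.
H = np.tanh(X @ rng.standard_normal((2, 80)) + rng.standard_normal(80))
D = np.column_stack([H, np.ones(len(H))])      # append a bias column
w, *_ = np.linalg.lstsq(D, y, rcond=None)

pred = D @ w
rel_err = np.abs(pred - y) / y
print(rel_err.mean())                          # small mean relative error once fitted
```

Once fitted, evaluating the network is far cheaper than a full-wave simulation, which is what makes such surrogates attractive inside CAD loops.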

Fig. 10. Microstrip antenna design using neural networks. The inputs are the side length a, substrate height h, dielectric constant ε, and mode numbers m, n; the output is the resonant frequency F_mn.

B. Arrays and Smart Antennas

Arrays use multiple antenna elements to achieve enhanced performance, including high gain. They can also support electrical beam steering to improve transmission and reception. Neural networks have been successfully applied to direction-of-arrival estimation and beamforming for antenna arrays [7]. Radial basis function (RBF) neural networks are used in this case. RBF networks differ from the MLP in the activation function at a neuron: it can be considered a vector Gaussian with the in-bound weights as its mean and the input vector as the random variable. The weights are self-adjusted using the k-means clustering technique.

V. CONCLUSION

Neural networks are mathematical models of a self-organizing, learning black box which is a universal function approximator and time series predictor. Neural networks are built with the aid of domain knowledge and can be used for causal modeling of systems under uncertainty and noise. Wireless communication systems present ample scope for applying neural networks, but the performance of a neural network depends on fine-tuning its parameters and on proper quantization of its inputs.

REFERENCES

[1] C. A. Belfiore and J. H. Park, Jr., "Decision Feedback Equalization," Proc. of the IEEE, August 1979.
[2] D. Jianping and N. Sundararajan, "Communication channel equalization using complex-valued minimal radial basis function neural networks," IEEE Trans. on Neural Networks, vol. 13, no. 2, May 2002.
[3] W. K. Lo and H. M. Hafez, "Neural network channel equalization," Intl. Joint Conference on Neural Networks, 1992.
[4] S. U. H. Qureshi, "Adaptive Equalization," Proc. of the IEEE, vol. 73, no. 9, Sept. 1985.
[5] G. Wolfle and F. M. Landstorfer, "Field strength prediction in indoor environments with neural networks," IEEE 47th Vehicular Technology Conference (VTC), Phoenix, AZ, pp. 82-86, May 1997.
[6] K. Hornik, M. Stinchcombe, and H. White, "Multilayer Feedforward Networks are Universal Approximators," Neural Networks, vol. 2, 1989, pp. 359-366.
[7] A. Patnaik, D. E. Anagnostou, and R. K. Mishra, "Applications of Neural Networks in Wireless Communications," IEEE Antennas and Propagation Magazine, vol. 46, no. 3, June 2004.
[8] H. Jaeger and H. Haas, "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication," Science, vol. 304, no. 5667, pp. 78-80, April 2004.
[9] LiMin Fu, Neural Networks in Computer Intelligence, ISBN 0-07-053282-6, 2003.