Neural Networks Based Equalizer For Signal Restoration In Digital Communication Channels

One of the main obstacles to reliable communications is intersymbol interference (ISI). An equalizer is required at the receiver to mitigate the effects of non-ideal channel characteristics and restore the transmitted signal. This paper presents the equalization of digital communication channels using artificial neural network structures. The performance of a nonlinear equalizer using a multilayer perceptron (MLP) trained by the back propagation algorithm is evaluated and compared with that of a conventional linear transversal equalizer.


INTRODUCTION
In modern digital communication systems, there is an ever-increasing need for high-speed data transmission over a variety of limited-bandwidth channels which, by their nature, distort the digital signal. The effect of this distortion is to cause the transmitted symbol to persist beyond the time interval allocated for its transmission, so that subsequent symbols interfere with one another. This distortion is called intersymbol interference (ISI); it can also arise from multipath propagation. The transmitted signal is also subject to other impairments such as nonlinear distortion and noise. At the receiver, an equalizer is used to mitigate these effects and restore the transmitted symbols [1]. Due to the variety and time-varying nature of some channels, adaptive equalization techniques need to be implemented. Adaptive equalization has been an attractive area of research; many of the techniques are firmly based on linear adaptive filters, which limit the performance of the system. To improve system performance, nonlinear structures have been proposed for the channel equalization task. Nonlinear equalizers are superior to linear ones in situations where the channel distortion is too severe. In particular, a linear equalizer does not perform well on non-minimum phase channels, which are difficult to equalize [2], [3]. Neural networks provide powerful nonlinear processing architectures, which have been proved to outperform traditional linear techniques in many common signal-processing applications. In particular, neural networks have been proposed for digital equalization of communication channels [4], [5], [6], [11]. Neural networks have several properties which make them very attractive for channel equalization: adaptive processing, universal approximation, and learning and generalisation capacities.
The key issue in neural networks is to find an appropriate structure that gives the best result. Different artificial neural network structures, such as the multilayer perceptron [2], [3], [5], [7], the radial basis function network [4] and the recurrent neural network [8], [9], have been investigated for channel equalization. Among all these structures, the most widely used is the MLP, due to its computational simplicity, finite parameterization and stability [7]. In this paper, nonlinear channel equalization using a multilayer perceptron is considered. We present a study of the MLP based equalizer and exhibit its capacity to equalize minimum phase and non-minimum phase channels alike. We compare and evaluate the performance of the MLP equalizer against the conventional LTE in terms of the steady state MSE reached and the equalized signals for the two channels considered.

A. Problem statement
Many channels can be characterized by a discrete-time equivalent model described by a finite impulse response (FIR) digital filter and an additive noise source [1]. This channel model is depicted in Figure 1.
The digital data sequence x_k is passed through a linear dispersive channel of finite impulse response with a transfer function given by:

H(z) = Σ_{i=0}^{M-1} h_i z^{-i}     (1)

where M is the length of the impulse response. The observed sequence r_k is formed by adding Gaussian random noise n_k to the output of the FIR filter.
When dealing with equalization, the main goal is to recover the transmitted unobserved data symbols x_k, with the highest fidelity, using the information given by the noisy observations from a set of received symbols r_k. Thus, the task of the equalizer is to construct a causal approximation to the inverse of the channel. The output of the equalizer is then applied to a memoryless decision device that selects an element of the transmitted alphabet; for binary data, this is the sign function.
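As a concrete illustration of this channel model, the sketch below (an assumption of this note, not the paper's own code; the paper's simulations were done in MATLAB) passes a bipolar sequence through a short FIR channel, adds Gaussian noise, and applies the sign decision device. The tap values are those of the paper's minimum phase channel CH1.

```python
import numpy as np

rng = np.random.default_rng(0)

# FIR channel taps h_i (the paper's CH1 coefficients).
h = np.array([0.87, 0.44, 0.23])

# Bipolar data sequence x_k drawn from {-1, +1} with equal probability.
x = rng.choice([-1.0, 1.0], size=1000)

# Channel output: linear convolution (truncated to the input length)
# plus zero-mean Gaussian noise n_k of standard deviation sigma_n.
sigma_n = 0.1
r = np.convolve(x, h)[: len(x)] + sigma_n * rng.standard_normal(len(x))

# Memoryless decision device for binary data: the sign function.
decisions = np.sign(r)
```

Without equalization, the decisions made directly on r_k suffer from the residual ISI contributed by the taps h_1 and h_2.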

B. Digital channel models
Many channels can be characterized by a discrete-time equivalent model described by a Finite Impulse Response (FIR) digital filter [1], with a transfer function given by (1). This channel model is depicted in Figure 2.

C. Linear Transversal Equalizer (LTE)
The structure of a linear transversal equalizer (LTE) is depicted in Figure 3. The digital data sequence observed at the input of the equaliser is given by the convolution:

r_k = Σ_{i=0}^{M-1} h_i x_{k-i} + n_k

where x_k is the binary random sequence of transmitted symbols, drawn from {-1, 1} with equal probability and assumed independent of one another. The additive noise samples n_k are chosen independently from a Gaussian distribution with zero mean and variance σ_n².
Equivalently, the equation above can be rewritten in matrix-vector notation as:

r = H x + n

International Letters of Chemistry, Physics and Astronomy Vol. 55

The coefficients of the LTE can be trained to minimise the mean square error (MSE) cost function between the equalised sequence y_k and the desired sequence x_{k-d}:

J = E[(x_{k-d} - y_k)²]

The iterative LMS algorithm is used to approach the optimum solution. The coefficients of the equaliser are updated as follows:

w(n+1) = w(n) + μ e(n) r(n)

where e(n) = x_{n-d} - y_n is the output error and μ is the step size; its choice must be a compromise between convergence speed and fine equalizer settings.
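The LMS-trained LTE described above can be sketched as follows. This is an illustrative implementation, not the paper's code; the tap count and delay follow the paper's setup (five input samples), while `mu` and the channel noise level are assumptions.

```python
import numpy as np

def lms_equalizer(r, x, n_taps=5, delay=2, mu=0.01):
    """Train a linear transversal equalizer with the LMS rule.

    r     : received (distorted, noisy) sequence
    x     : transmitted training symbols
    delay : decision delay d, so the target at time k is x[k - d]
    """
    w = np.zeros(n_taps)
    sq_err = []
    for k in range(n_taps, len(r)):
        u = r[k - n_taps:k][::-1]     # tap-input vector, newest sample first
        y = w @ u                     # equalizer output y_k
        e = x[k - delay] - y          # error against the delayed target
        w += mu * e * u               # LMS update: w(n+1) = w(n) + mu*e(n)*u(n)
        sq_err.append(e ** 2)
    return w, np.array(sq_err)

# Train on data passed through the paper's minimum phase channel CH1.
rng = np.random.default_rng(1)
h = np.array([0.87, 0.44, 0.23])
x = rng.choice([-1.0, 1.0], size=5000)
r = np.convolve(x, h)[: len(x)] + 0.05 * rng.standard_normal(len(x))
w, sq_err = lms_equalizer(r, x)
```

The squared-error sequence `sq_err` is exactly the raw material of the learning curves discussed later in the paper.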

MULTILAYER PERCEPTRON BASED EQUALIZER
Neural network based equalizers have been privileged in the nonlinear equalization area [2], [6] because of their nonlinear processing capacities. In particular, the MLP based equalizer is the most popular because of its attractive properties [2], [3], [7]. Its basic element, the artificial neuron illustrated in Figure 4 (a), is composed of a linear combiner and an activation function. The neuron receives inputs from other neurons. The linear combiner output is the weighted sum of the inputs plus a bias term. The activation function, which can be linear or nonlinear, then gives the neuron output:

y = f( Σ_j W_j x_j + b )

where x_j is the j-th input of the neuron, W_j the corresponding synaptic weight, and b the bias term. f(.) is the activation function (Figure 4 (b)); the most commonly used is of the sigmoid type [10], defined by:

f(v) = (1 - e^{-δv}) / (1 + e^{-δv})

where δ defines the nonlinearity degree of the sigmoid function; it is generally set to one.
A multilayer perceptron (Figure 4 (c)) is composed of neurons connected to each other. The input information is processed from the input layer to the output layer. The network inputs are the inputs of the first layer; the outputs of the neurons in one layer form the inputs to the next layer; the network outputs are the outputs of the output layer. In general, all neurons in a layer are fully connected to the neurons in adjacent layers, but there are no connections within a layer and normally no connections bridging layers. An MLP consists of one or more hidden layers of neurons, which give it the ability to perform complex, nonlinear mappings between the input and output layers by exploiting the nonlinear sigmoid activation (Figure 4 (b)); the MLP can have several of these hidden layers [10]. The output X_ik of neuron (i, k) of the MLP is given by:

X_ik = f( Σ_{j=1}^{N_{i-1}} W_ijk X_{i-1,j} + b_ik )

where i is the layer index, X_ik is the output of neuron k of layer i, W_ijk is the weight that links the output X_{i-1,j} to neuron k of layer i, and N_i is the number of neurons in layer i. It has been demonstrated that a two-layer perceptron with a sigmoid activation function and a scalar output can approximate continuous functions arbitrarily well, provided that an arbitrarily large number of neurons is available [10]. This property is called the universal approximation property.

A. Learning Algorithm
The MLP can be trained to achieve a particular task using the back propagation (BP) algorithm. A set of input/output pairs (x(n), d(n)) trains the network to implement the desired mapping. Error back propagation learning consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass, an activity pattern is applied to the input neurons of the network and its effect propagates through the network, layer by layer, until an output is produced; the weights between neurons of successive layers are initially assigned at random. In the backward pass, the error between the network response and the desired response is computed and used to amend the weights. Each neuron from the hidden layers onward is equipped with a nonlinear activation function. Back propagation adjusts the MLP weights to minimize any differentiable cost function, e.g. the squared error between the network output and the desired output:

E(n) = [d(n) - X_L(n)]²

where X_L(n) is the network output at time n and d(n) is the desired output. The BP algorithm performs a gradient descent on the cost function in order to reach a minimum. The weights and threshold levels are updated according to the following relations:

W_ijk(n+1) = W_ijk(n) + μ δ_ik X_{i-1,j}(n) + α ΔW_ijk(n)

where μ is the learning gain, α is the momentum term and ΔW_ijk(n) is the previous weight change. δ_ik is the error term, calculated starting from the output layer. The output-layer error term is given by:

δ_Lk = f'(y_Lk) [d_k(n) - X_Lk(n)]

where f' denotes the derivative of the activation function, f'(y) = df(y)/dy. The error term is then recursively back-propagated to the lower layers. For a hidden neuron (i, k), the error term δ_ik is given by:

δ_ik = f'(y_ik) Σ_{j=1}^{N_{i+1}} δ_{i+1,j} W_{i+1,jk}

The momentum term α allows rapid learning while filtering out high-frequency variations of the weight vector; the convergence rate is thus much faster and fast weight changes are smoothed out.
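The two-pass procedure (forward pass, then error back propagation with momentum) can be sketched as below. This is an illustrative simplification, not the paper's code: a single hidden layer of 9 neurons is used instead of the paper's two hidden layers, and the values of `mu`, `alpha` and the noise level are assumptions.

```python
import numpy as np

def f(v, delta=1.0):
    # Bipolar sigmoid used in the paper.
    return (1 - np.exp(-delta * v)) / (1 + np.exp(-delta * v))

def f_prime(v, delta=1.0):
    # Derivative of the bipolar sigmoid: f'(v) = (delta/2) * (1 - f(v)^2).
    y = f(v, delta)
    return 0.5 * delta * (1 - y ** 2)

def train_step(x_in, d, W1, b1, W2, b2, state, mu=0.05, alpha=0.3):
    """One back propagation update with momentum on a one-hidden-layer MLP."""
    # Forward pass, layer by layer.
    v1 = W1 @ x_in + b1
    h1 = f(v1)
    v2 = W2 @ h1 + b2
    y = f(v2)
    e = d - y
    # Backward pass: output error term, then back-propagate to the hidden layer.
    delta2 = f_prime(v2) * e
    delta1 = f_prime(v1) * (W2.T @ delta2)
    # Weight change = gradient step + momentum times the previous change.
    grads = {"W2": np.outer(delta2, h1), "b2": delta2,
             "W1": np.outer(delta1, x_in), "b1": delta1}
    for name, g in grads.items():
        state[name] = mu * g + alpha * state[name]
    W1 += state["W1"]; b1 += state["b1"]; W2 += state["W2"]; b2 += state["b2"]
    return float(e @ e)

# Train on the equalization task: 5 received samples in, delayed symbol out.
rng = np.random.default_rng(3)
h_ch = np.array([0.87, 0.44, 0.23])                 # CH1 taps from the paper
xs = rng.choice([-1.0, 1.0], size=4000)
r = np.convolve(xs, h_ch)[: len(xs)] + 0.05 * rng.standard_normal(len(xs))

W1 = 0.5 * rng.standard_normal((9, 5)); b1 = np.zeros(9)
W2 = 0.5 * rng.standard_normal((1, 9)); b2 = np.zeros(1)
state = {k: 0.0 for k in ("W1", "b1", "W2", "b2")}
errs = [train_step(r[k - 5:k][::-1], xs[k - 2], W1, b1, W2, b2, state)
        for k in range(5, len(r))]
```

The `state` dictionary stores the previous weight change ΔW(n) so that each update is μ·gradient + α·ΔW(n), as in the update relation above.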

SIMULATION RESULTS
Simulation results were all obtained using MATLAB. The digital message applied to the channel is made of uniformly distributed bipolar random numbers {-1, 1}. The channel output is corrupted by zero-mean white Gaussian noise with variance σ_n². Two digital channel models are used for the simulation tests: a minimum phase channel and a non-minimum phase channel, denoted CH1 and CH2 respectively. Both the conventional LTE and the MLPE use five input samples. The MLP has 3 layers: 9 neurons in the first hidden layer, 3 neurons in the second hidden layer, and 1 neuron in the output layer. The parameters of the back propagation and LMS algorithms have been chosen as in [3] and [7].

A. Channel characteristics
Simulations are carried out for two digital channel models. The minimum phase channel has the following transfer function in the Z domain:

CH1(z) = 0.87 + 0.44 z^{-1} + 0.23 z^{-2}

The non-minimum phase channel transfer function in the Z domain is given by:
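The minimum phase property of CH1 can be checked numerically: a channel is minimum phase when all its zeros lie inside the unit circle. The short sketch below (an illustrative check, not part of the paper) computes the zeros of CH1 from its coefficients; CH2 is omitted because its transfer function expression is not recoverable from the text above.

```python
import numpy as np

# CH1 coefficients from the paper: CH1(z) = 0.87 + 0.44 z^-1 + 0.23 z^-2,
# whose zeros are the roots of 0.87 z^2 + 0.44 z + 0.23.
ch1 = [0.87, 0.44, 0.23]

zeros = np.roots(ch1)
magnitudes = np.abs(zeros)   # both magnitudes below 1 => minimum phase
```

The same check applied to a non-minimum phase channel would report at least one magnitude greater than 1, consistent with the zero locations shown in Figure 6 (d).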

These channel models are widely used to evaluate the performance of equalizers in communication systems [2], [3], [5], [7], [9], [12]. The channel characteristics are depicted in Figure 6 for the minimum phase channel CH1 and the non-minimum phase channel CH2 respectively. The magnitude responses of the channels (part (a)) exhibit deep notches, about -5 dB for CH1 and about -15 dB for CH2, which are difficult to equalise, especially for CH2.
The phase response (part (c)) is non-linear for CH1 and linear for CH2. Each channel impulse response has three coefficients, as depicted in part (b). Each channel has two zeros in the Z plane (part (d)): both lie inside the unit circle for CH1, whereas one zero lies outside the unit circle for CH2. The zero outside the unit circle may cause instability of the equalizers.

B. Restored signal performance
In this part of the simulation, performance is assessed by inspecting the quality of the restored signal. Figures 7 and 8 show the signals restored by the equalizers for channels CH1 and CH2 respectively. In both figures, part (a) shows the transmitted signal and part (b) the received signal, which is completely distorted by channel ISI and additive noise; detection without any equalization would therefore produce severe errors. Parts (c) and (d) show the signals restored by the LTE and the MLPE respectively. Both equalizers perform well; however, the improvement in restoration quality for the MLP equalizer is clearly visible. The impairments introduced by channel CH2 on the transmitted signal are worse: the amplitude levels of some symbols are greatly attenuated. The signal restored by the LTE presents some residual distortion, whereas the MLPE exhibits a greater improvement and restores the transmitted signal successfully, allowing detection without any errors. The MLPE shows the best performance for the two channels considered, a result of the nonlinear processing of the MLP structure.

C. Learning curves performance
In this part, performance is measured by learning curves that describe the evolution of the mean square error (MSE) at the output of the equalizers. Figures 9 and 10 depict the convergence behaviour of the two equalizers for channels CH1 and CH2 respectively. The MLPE converges more slowly than the LTE, but it shows a clear improvement in the steady state averaged square error (-28 dB), which is lower than the noise level (-20 dB). The LTE produces a steady state MSE of about -16 dB, which is above the noise level. A similar improvement of the MLPE is also obtained for the non-minimum phase channel CH2. The steady state MSE reached by the LTE worsens under the strong distortion engendered by the large ISI of channel CH2: the LTE MSE level increases to -11 dB, whereas the MSE level of the MLPE is around -25 dB, still under the noise level.
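A learning curve of the kind plotted in Figures 9 and 10 is typically obtained by smoothing the instantaneous squared errors and converting to decibels. The helper below is an illustrative sketch (the window length 200 is an arbitrary smoothing choice, not taken from the paper):

```python
import numpy as np

def learning_curve_db(sq_errors, window=200):
    """Smoothed learning curve: 10*log10 of a moving average of squared errors."""
    kernel = np.ones(window) / window
    mse = np.convolve(sq_errors, kernel, mode="valid")
    return 10.0 * np.log10(mse)

# Sanity check: a constant squared error of 0.01 — the noise variance that
# corresponds to a -20 dB noise floor — maps to a flat -20 dB curve.
curve = learning_curve_db(np.full(1000, 0.01))
```

This makes the noise-floor comparison in the text concrete: an equalizer whose curve settles below -20 dB has driven its residual error under the noise variance σ_n² = 0.01.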

The increase in the steady state MSE caused by the severe ISI introduced by the non-minimum phase channel CH2 is about 5 dB for the LTE and only 3 dB for the MLPE. It should be clear from these figures that the steady state MSE of the MLP based equalizer is below the noise level for both channels considered.

D. BER performance
To further investigate the consistency of the MLPE performance, bit error rate (BER) performances are studied for the two equalizer configurations and are shown in Figures 11 and 12.

The MLPE shows very good performance and outperforms the LTE. Figure 11, which corresponds to CH1, shows that the MLPE gains about 2 dB over the LTE at a BER of 10^-3. Figure 12, which corresponds to CH2, shows that the MLPE again behaves consistently better than the LTE, with a gain of about 2.5 dB at a BER of 10^-3. The MLPE thus achieves the best performance in all the scenarios considered.
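Empirical BER curves such as those in Figures 11 and 12 are built by counting sign-decision errors against the delay-aligned transmitted sequence. The helper below is an illustrative sketch (the delay of 2 matches the decision delay assumed in the earlier examples, not a value stated in the paper):

```python
import numpy as np

def ber(equalized, transmitted, delay=2):
    """Empirical bit error rate: the equalizer output at time k estimates
    x[k - delay], so its sign is compared with the delayed transmitted bit."""
    decisions = np.sign(equalized[delay:])
    return np.mean(decisions != transmitted[: len(decisions)])

rng = np.random.default_rng(4)
x = rng.choice([-1.0, 1.0], size=2000)

# A perfectly restored, delay-aligned signal yields a BER of zero.
perfect = np.r_[np.zeros(2), x[:-2]]
print(ber(perfect, x))  # 0.0
```

Repeating this count over a range of noise variances, for each equalizer, produces the BER-versus-SNR curves from which the 2 dB and 2.5 dB gains quoted above are read off.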