
ARCHITECTURE AND COMPONENTS OF COMPUTER SYSTEMS

 

 

UDC 681.14

 

V. Kotsovsky1, F. Geche1, A. Batyuk2, A. Mitsa1

1Uzhgorod National University, 2Lviv Polytechnic National University

 

BACKPROPAGATION ALGORITHM FOR COMPLEX NEURAL NETWORKS

 

© Kotsovsky V., Geche F., Batyuk A., Mitsa A., 2012

 

Complex artificial neural networks whose activation functions are complex analogues of the rational sigmoid are considered. A learning algorithm for such networks, based on the error backpropagation method, is presented.

Keywords: artificial neuron, artificial neural networks, complex neural networks, error backpropagation algorithm.

Neural networks with complex weights and a continuously differentiable activation function are studied in this paper. A learning algorithm based on the backpropagation method for the rational sigmoid activation function is given.

Keywords: artificial neuron, artificial neural networks, complex neural networks, learning algorithms, backpropagation.

 

Introduction

Neural networks are an effective means of solving tasks such as function approximation, forecasting the behaviour of dynamic systems, classification of multi-attribute sets, pattern recognition, associative search, and many others. At present many types of neural network architectures with real weights have been developed in information science. The variety of architectures stems from the different types of connections between neurons, the various activation functions (continuous or discontinuous, i.e. of threshold type), and the different modes of network operation. Accordingly, many learning algorithms have been proposed for neural networks. We introduce the notion of a complex neuron with a continuously differentiable activation function and consider neural networks built of such neurons. We also describe a modification of the well-known backpropagation algorithm [1] for complex networks. Complex neural networks can be used both for solving the same tasks as real networks (with a possible reduction of the number of neurons in the input and output layers) and for solving problems with inherently complex data (for example, the approximation of functions of a complex variable).

 

Complex neural networks

A complex neuron is a functional element with $n$ inputs $z_1, \dots, z_n$ and one output $y$, which is computed as

$$y = f\left(\sum_{j=1}^{n} w_j z_j + w_0\right),$$

where the complex numbers $z_1, \dots, z_n$ are the input signals, $w_0, w_1, \dots, w_n$ are complex weight coefficients (similarly to [2-3], $w_0$ may be termed the threshold of the neuron), and $f\colon \mathbb{C} \to \mathbb{C}$ is a nonlinear function, continuous together with its partial derivatives, which we call the activation function.
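To make the definition concrete, here is a minimal sketch of such a neuron in Python, whose built-in complex type handles the weighted sum directly (the function and variable names are illustrative only; the rational sigmoid used as $f$ anticipates the activation function adopted below):

```python
# Minimal sketch of a complex neuron: y = f(sum_j w_j * z_j + w_0).

def rational_sigmoid(z: complex) -> complex:
    # The activation f(z) = z / (|z| + 1) adopted later in the paper.
    return z / (abs(z) + 1.0)

def complex_neuron(weights, bias, inputs, f=rational_sigmoid):
    """weights, inputs: sequences of complex numbers; bias is w_0."""
    s = sum(w * z for w, z in zip(weights, inputs)) + bias
    return f(s)

# Two inputs with complex weights; the output always lies in the unit disk.
y = complex_neuron([1 + 1j, 0.5 - 2j], 0.1j, [0.3 - 0.4j, 1 + 0j])
print(y, abs(y) < 1)
```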

Complex neurons admit different modes of connection in neural networks. We confine ourselves to studying multilayer feed-forward neural networks, that is, networks satisfying the following condition: the neurons of each layer are connected with the neurons of the previous and next layers by the rule "each to each". The first layer is called the input layer, the internal layers are called hidden layers, and the last layer is called the output layer. The functioning of the network can be described by the formulas

$$y_{kl} = f\left(\sum_j w_{jkl} z_{jkl} + w_{0kl}\right), \qquad x_{kj,\,l+1} = y_{kl},$$

where the index $j$ denotes the number of the input, $k$ is the number of the neuron, $l$ is the layer index, $z_{jkl} = x_{jkl} + i\,y_{jkl}$ is the value of the $j$-th input signal of the $k$-th neuron in the $l$-th layer, and $w_{jkl} = u_{jkl} + i\,v_{jkl}$ is the value of the $j$-th weight coefficient of the $k$-th neuron in the $l$-th layer.
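The layer-by-layer functioning just described can be sketched as a simple forward pass (reusing rational_sigmoid from the previous sketch; the forward helper and its layer representation are assumptions of this illustration, not the authors' notation):

```python
# Sketch of the feed-forward pass: the output y_{kl} of the k-th neuron in
# layer l is fed to every neuron of layer l+1 (the "each to each" rule).

def forward(layers, z, f=rational_sigmoid):
    """layers[l][k] = (weights, bias) of the k-th neuron in the l-th layer;
    z is the list of complex input signals."""
    signals = list(z)
    for layer in layers:
        signals = [f(sum(w * s for w, s in zip(weights, signals)) + bias)
                   for weights, bias in layer]
    return signals  # outputs of the last (output) layer
```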

 

Learning algorithm

A multilayer neural network computes an output vector $F(z)$ on the basis of an input vector $z$. By learning we mean the selection of the network parameters (the weight coefficients $w_{jkl}$) such that the network puts in correspondence the output vectors from the set $\{d_1, \dots, d_m\}$ to the input vectors from the set $\{z_1, \dots, z_m\}$. The collection of pairs $\{(z_1, d_1), \dots, (z_m, d_m)\}$ is called the learning sample. Let $f_k^t$ be the value of the output signal of the $k$-th neuron in the last (output) layer in the case when the network input vector is equal to $z_t$. Let us introduce an important quantity that will be named the network error $E$:

$$E = \frac{1}{2}\sum_t \sum_k \left| f_k^t - d_k^t \right|^2. \qquad (1)$$
We shall suppose that $E = E(W) = E(U, V)$, where $U$ is the vector whose components are the real parts of all weight coefficients of the network, and $V$ is the vector whose components are their imaginary parts. During learning we change the weight vector in the direction of the antigradient of $E$ on every iteration:

$$\Delta W_r = -\eta_r\, \operatorname{grad} E(U_r, V_r), \qquad W_{r+1} = W_r + \Delta W_r, \qquad (2)$$

where $r$ is the number of the iteration.
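Written over complex weights, one step of rule (2) might look as follows (a sketch; grad_E is a hypothetical routine returning, for every weight, the derivatives with respect to its real and imaginary parts packed as the complex number $\partial E/\partial u + i\,\partial E/\partial v$):

```python
# One gradient-descent step (2): W_{r+1} = W_r - eta_r * grad E(U_r, V_r).
# Updating u by -eta*dE/du and v by -eta*dE/dv is equivalent to one
# complex-valued update per weight.

def descent_step(weights, grad_E, eta):
    grads = grad_E(weights)                 # dE/du + 1j*dE/dv per weight
    return [w - eta * g for w, g in zip(weights, grads)]
```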

Let

$$s_{kl} = \sum_j w_{jkl} z_{jkl} + w_{0kl}, \qquad a_{kl} = \operatorname{Re} s_{kl}, \qquad b_{kl} = \operatorname{Im} s_{kl}, \qquad f(z) = g(x, y) + i\,h(x, y).$$

Let us write down the components of the gradient calculated with respect to the weights of the last layer:

$$\frac{\partial E}{\partial u_{jkl}} = \frac{\partial E}{\partial g_{kl}}\left(\frac{\partial g_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial u_{jkl}} + \frac{\partial g_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial u_{jkl}}\right) + \frac{\partial E}{\partial h_{kl}}\left(\frac{\partial h_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial u_{jkl}} + \frac{\partial h_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial u_{jkl}}\right), \qquad (3)$$

$$\frac{\partial E}{\partial v_{jkl}} = \frac{\partial E}{\partial g_{kl}}\left(\frac{\partial g_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial v_{jkl}} + \frac{\partial g_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial v_{jkl}}\right) + \frac{\partial E}{\partial h_{kl}}\left(\frac{\partial h_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial v_{jkl}} + \frac{\partial h_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial v_{jkl}}\right). \qquad (4)$$

Let us adduce the calculating formulas for the partial derivatives in (3)-(4) (we omit the layer index $l$ to simplify the notation):

$$\frac{\partial a_k}{\partial u_{jk}} = x_{jk}, \qquad \frac{\partial b_k}{\partial u_{jk}} = y_{jk}, \qquad (5)$$

$$\frac{\partial a_k}{\partial v_{jk}} = -y_{jk}, \qquad \frac{\partial b_k}{\partial v_{jk}} = x_{jk}, \qquad (6)$$

$$\frac{\partial a_k}{\partial u_{0k}} = 1, \qquad \frac{\partial b_k}{\partial u_{0k}} = 0, \qquad \frac{\partial a_k}{\partial v_{0k}} = 0, \qquad \frac{\partial b_k}{\partial v_{0k}} = 1. \qquad (7)$$

Now let us turn to the selection of the activation function. The most popular activation functions for real neural networks are the logistic sigmoid $f(x) = \dfrac{1}{1 + e^{-x}}$ and the hyperbolic tangent $\tanh x$ (sometimes with additional parameters). Unfortunately, the above-mentioned functions are discontinuous as functions of a complex variable. Therefore, they cannot be applied in learning algorithms for complex networks which use the value of the gradient vector. The rational sigmoid

$$f(z) = \frac{z}{|z| + 1}$$

is free of these disadvantages. For the rational sigmoid we can write $f(z) = g(x, y) + i\,h(x, y)$, where

$$g(x, y) = \frac{x}{\sqrt{x^2 + y^2} + 1}, \qquad h(x, y) = \frac{y}{\sqrt{x^2 + y^2} + 1}.$$

It is necessary to notice that the rational sigmoid takes values lying in the unit disk centered at the origin. In addition, the rational sigmoid compresses the real and imaginary parts of its argument proportionally and has the property of reinforcing "weak" input signals and damping "strong" input signals.
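These properties are easy to observe numerically; a small illustrative check of the decomposition $f(z) = g + i\,h$ and of the unit-disk bound:

```python
import math

def g(x, y):
    return x / (math.hypot(x, y) + 1.0)

def h(x, y):
    return y / (math.hypot(x, y) + 1.0)

z = 3.0 - 4.0j                               # |z| = 5
fz = complex(g(z.real, z.imag), h(z.real, z.imag))
print(fz)                                    # (0.5-0.666...j) = z / 6
print(abs(fz))                               # 5/6 < 1: inside the unit disk
```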

 

Using the rational sigmoid we can easily obtain the following expressions for the partial derivatives:

$$\frac{\partial g_{kl}}{\partial a_{kl}} = \frac{b_{kl}^2 + |s_{kl}|}{|s_{kl}|\left(|s_{kl}| + 1\right)^2}, \qquad \frac{\partial h_{kl}}{\partial b_{kl}} = \frac{a_{kl}^2 + |s_{kl}|}{|s_{kl}|\left(|s_{kl}| + 1\right)^2}, \qquad \frac{\partial h_{kl}}{\partial a_{kl}} = \frac{\partial g_{kl}}{\partial b_{kl}} = -\frac{a_{kl} b_{kl}}{|s_{kl}|\left(|s_{kl}| + 1\right)^2}. \qquad (8)$$
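These expressions can be verified numerically against central finite differences, e.g. for $\partial g/\partial a$ (illustrative code; the names are assumptions of this sketch):

```python
import math

def g(x, y):
    return x / (math.hypot(x, y) + 1.0)

def dg_da(a, b):
    s = math.hypot(a, b)                     # |s_{kl}|
    return (b * b + s) / (s * (s + 1.0) ** 2)

a, b, eps = 0.7, -1.2, 1e-6
numeric = (g(a + eps, b) - g(a - eps, b)) / (2 * eps)
print(dg_da(a, b), numeric)                  # the two values agree to ~1e-10
```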


The values of the derivatives $\dfrac{\partial E}{\partial u_{jkl}}$ and $\dfrac{\partial E}{\partial v_{jkl}}$ calculated according to formulas (3)-(8) allow us to compute the corrections $\Delta u_{jkl}$ and $\Delta v_{jkl}$ for the neurons of the last (output) layer. Let us show how the corrections of the weight coefficients of the other layers of the network can be calculated by means of the values of the partial derivatives $\dfrac{\partial E}{\partial x_{jkl}}$ and $\dfrac{\partial E}{\partial y_{jkl}}$.

For the last layer we have:

$$\frac{\partial E}{\partial x_{jkl}} = \frac{\partial E}{\partial g_{kl}}\left(\frac{\partial g_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial x_{jkl}} + \frac{\partial g_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial x_{jkl}}\right) + \frac{\partial E}{\partial h_{kl}}\left(\frac{\partial h_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial x_{jkl}} + \frac{\partial h_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial x_{jkl}}\right),$$

$$\frac{\partial E}{\partial y_{jkl}} = \frac{\partial E}{\partial g_{kl}}\left(\frac{\partial g_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial y_{jkl}} + \frac{\partial g_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial y_{jkl}}\right) + \frac{\partial E}{\partial h_{kl}}\left(\frac{\partial h_{kl}}{\partial a_{kl}}\frac{\partial a_{kl}}{\partial y_{jkl}} + \frac{\partial h_{kl}}{\partial b_{kl}}\frac{\partial b_{kl}}{\partial y_{jkl}}\right).$$

In the last two formulas the partial derivatives $\dfrac{\partial E}{\partial g_k}$, $\dfrac{\partial E}{\partial h_k}$, $\dfrac{\partial g_k}{\partial a_k}$, $\dfrac{\partial g_k}{\partial b_k}$, $\dfrac{\partial h_k}{\partial a_k}$, $\dfrac{\partial h_k}{\partial b_k}$ have already been calculated by formulas (6)-(8). The other partial derivatives are equal to:

$$\frac{\partial a_k}{\partial x_{jk}} = u_{jk}, \qquad \frac{\partial b_k}{\partial x_{jk}} = v_{jk}, \qquad \frac{\partial a_k}{\partial y_{jk}} = -v_{jk}, \qquad \frac{\partial b_k}{\partial y_{jk}} = u_{jk}.$$
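Putting formulas (1)-(8) together, a compact end-to-end sketch of the learning rule for a single layer of complex neurons might look as follows (all names are illustrative assumptions; the bias is treated as a weight attached to the constant input 1, so formulas (5)-(7) collapse into one loop):

```python
import math

# End-to-end sketch of the described learning rule for one layer of complex
# neurons with the rational sigmoid; derivatives follow formulas (3)-(8).

def act_parts(s: complex):
    """Return g, h and the four partial derivatives of the rational sigmoid."""
    a, b, r = s.real, s.imag, abs(s)
    if r == 0.0:                        # limiting values of (8) at s = 0
        return 0.0, 0.0, 1.0, 0.0, 0.0, 1.0
    d = r * (r + 1.0) ** 2
    return (a / (r + 1.0), b / (r + 1.0),
            (b * b + r) / d, -a * b / d,     # dg/da, dg/db
            -a * b / d, (a * a + r) / d)     # dh/da, dh/db

def train(sample, n_inputs, n_outputs, eta=0.5, epochs=2000):
    """sample: list of (z, d) pairs of complex input/target tuples."""
    w = [[0.1 + 0.1j] * (n_inputs + 1) for _ in range(n_outputs)]  # w[k][0] = w_0
    for _ in range(epochs):
        for z, d in sample:
            for k in range(n_outputs):
                s = w[k][0] + sum(wj * zj for wj, zj in zip(w[k][1:], z))
                g, h, dg_da, dg_db, dh_da, dh_db = act_parts(s)
                dE_dg, dE_dh = g - d[k].real, h - d[k].imag      # from (1)
                # chain rule (3)-(7); the bias sees the constant input 1+0j
                for j, zj in enumerate([1 + 0j] + list(z)):
                    x, y = zj.real, zj.imag
                    dE_du = dE_dg * (dg_da * x + dg_db * y) \
                          + dE_dh * (dh_da * x + dh_db * y)
                    dE_dv = dE_dg * (-dg_da * y + dg_db * x) \
                          + dE_dh * (-dh_da * y + dh_db * x)
                    w[k][j] -= eta * complex(dE_du, dE_dv)       # step (2)
    return w

# Tiny usage example: one neuron learning to reproduce complex targets.
sample = [((0.5 + 0.5j,), (0.3 - 0.2j,)), ((-1 + 0j,), (-0.4 + 0j,))]
weights = train(sample, n_inputs=1, n_outputs=1)
```

For deeper networks the same loop is repeated layer by layer, propagating $\partial E/\partial x_{jkl}$ and $\partial E/\partial y_{jkl}$ backwards as in the last two formulas above.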
