(3.59)
which again exploits feedback.
3.4.2 Feedback Options in Recurrent Neural Networks
Feedback in recurrent neural networks: In Figure 3.11, the inputs to the network are drawn from the discrete time signal y(k). Conceptually, it is straightforward to consider connecting delayed versions of the output, ŷ(k), of the network back to its input,
$$\hat{y}(k) = \Phi\big(y(k-1), \ldots, y(k-p), \hat{e}(k-1), \ldots, \hat{e}(k-q)\big) \quad (3.60)$$

where again Φ(·) represents the nonlinear mapping of the neural network and ê(k) = y(k) − ŷ(k) is the prediction error.

Figure 3.11 Recurrent neural network.
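To make Eq. (3.60) concrete, here is a minimal NumPy sketch of the recurrent predictor, assuming a small tanh network as a stand-in for Φ(·); the orders p and q, the hidden width, the random weights, and the sine test signal are illustrative assumptions rather than values from the text.

```python
import numpy as np

# Sketch of Eq. (3.60): the network input collects past signal values
# y(k-1), ..., y(k-p) and past prediction errors e_hat(k-1), ..., e_hat(k-q),
# where e_hat(k) = y(k) - y_hat(k) is fed back through the delay line.

rng = np.random.default_rng(0)
p, q, hidden = 4, 2, 8                         # assumed feedback orders / width
W1 = 0.1 * rng.standard_normal((hidden, p + q))
W2 = 0.1 * rng.standard_normal(hidden)

def phi(x):
    """Placeholder for the nonlinear mapping Phi(.) of the trained network."""
    return W2 @ np.tanh(W1 @ x)

y = np.sin(0.1 * np.arange(200))               # example discrete-time signal y(k)
y_hat = np.zeros_like(y)
e_hat = np.zeros_like(y)

for k in range(max(p, q), len(y)):
    x = np.concatenate([y[k - p:k][::-1],      # y(k-1), ..., y(k-p)
                        e_hat[k - q:k][::-1]]) # e_hat(k-1), ..., e_hat(k-q)
    y_hat[k] = phi(x)
    e_hat[k] = y[k] - y_hat[k]                 # error fed back at the next step
```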
State‐space representation and canonical form: Any feedback network can be cast into a canonical form that consists of a feedforward (static) network (FFSN) (i) whose outputs are the outputs of the neurons that have the desired values, and the values of the state variables, and (ii) whose inputs are the inputs of the network and the values of the state variables, the latter being delayed by one time unit.
The general canonical form of a recurrent neural network is represented in Figure 3.12. If the state is assumed to contain N variables, then a state vector is defined as s(k) = [s_1(k), s_2(k), …, s_N(k)]^T, and a vector of p external inputs is given by y(k − 1) = [y(k − 1), y(k − 2), …, y(k − p)]^T. The state evolution and output equations of the recurrent network for prediction are given, respectively, by
$$\mathbf{s}(k) = \varphi\big(\mathbf{s}(k-1), \mathbf{y}(k-1)\big) \quad (3.61)$$

$$\hat{y}(k) = \Psi\big(\mathbf{s}(k)\big) \quad (3.62)$$

where φ and Ψ represent general classes of nonlinearities.

Figure 3.12 Canonical form of a recurrent neural network for prediction.

Figure 3.13 Recurrent neural network (RNN) architectures: (a) activation feedback and (b) output feedback.
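The canonical form maps directly onto two static functions, as in the sketch below: one implements the state equation (3.61), the other the output equation (3.62). The state dimension N, input order p, tanh nonlinearity, and random weights are assumptions for illustration only.

```python
import numpy as np

# Sketch of Eqs. (3.61)-(3.62): a static (feedforward) map phi updates the N
# state variables from s(k-1) and the p delayed external inputs, and a static
# map Psi reads the one-step prediction out of the state.

rng = np.random.default_rng(1)
N, p = 3, 4                                   # assumed state dimension / input order
Ws = 0.1 * rng.standard_normal((N, N + p))    # placeholder weights of phi
Wo = 0.1 * rng.standard_normal(N)             # placeholder weights of Psi

def state_eq(s_prev, y_delayed):
    """Eq. (3.61): s(k) = phi(s(k-1), y(k-1))."""
    return np.tanh(Ws @ np.concatenate([s_prev, y_delayed]))

def output_eq(s):
    """Eq. (3.62): y_hat(k) = Psi(s(k))."""
    return Wo @ s

y = np.sin(0.1 * np.arange(100))              # example signal
s = np.zeros(N)                               # initial state
for k in range(p, len(y)):
    s = state_eq(s, y[k - p:k][::-1])         # state fed back, delayed one step
    y_hat_k = output_eq(s)                    # prediction y_hat(k)
```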
Recurrent neural network (RNN) architectures: Activation feedback and output feedback are two ways to include recurrent connections in neural networks, as shown in Figure 3.13a and b, respectively.
The output of a neuron shown in Figure 3.13a can be expressed as
$$v(k) = \sum_{i=0}^{M} \omega_{u,i}\, u(k-i) + \sum_{i=1}^{N} \omega_{v,i}\, v(k-i), \qquad y(k) = \Phi\big(v(k)\big) \quad (3.63)$$
where ω_{u,i} and ω_{v,i} are the weights associated with u and v, respectively. In the case of Figure 3.13b, we have
$$y(k) = \Phi\!\left(\sum_{i=0}^{M} \omega_{u,i}\, u(k-i) + \sum_{i=1}^{N} \omega_{y,i}\, y(k-i)\right) \quad (3.64)$$

where the delayed outputs y(k − i) are fed back through the weights ω_{y,i}.
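Since the two architectures differ only in which signal is delayed and fed back, a single sketch can contrast them; the orders M and N, the tanh choice for Φ, and all weights below are hypothetical placeholders.

```python
import numpy as np

# Sketch of Eqs. (3.63)-(3.64) for the single neurons of Figure 3.13.
# Activation feedback delays the activation v(k); output feedback delays
# the output y(k) instead.

rng = np.random.default_rng(2)
M, N = 3, 2                                   # assumed filter orders
w_u = 0.1 * rng.standard_normal(M + 1)        # weights on u(k), ..., u(k-M)
w_v = 0.1 * rng.standard_normal(N)            # weights on v(k-1), ..., v(k-N)
w_y = 0.1 * rng.standard_normal(N)            # weights on y(k-1), ..., y(k-N)

u = np.sin(0.05 * np.arange(100))             # example input sequence
v = np.zeros(len(u))                          # activations (Figure 3.13a)
y_a = np.zeros(len(u))                        # activation-feedback outputs
y_o = np.zeros(len(u))                        # output-feedback outputs (3.13b)

for k in range(max(M, N), len(u)):
    drive = w_u @ u[k - M:k + 1][::-1]        # sum over w_{u,i} u(k-i)
    # Eq. (3.63): feed back delayed activations v(k-1), ..., v(k-N)
    v[k] = drive + w_v @ v[k - N:k][::-1]
    y_a[k] = np.tanh(v[k])
    # Eq. (3.64): feed back delayed outputs y(k-1), ..., y(k-N)
    y_o[k] = np.tanh(drive + w_y @ y_o[k - N:k][::-1])
```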