
where g is a nonlinear function. The model can be used for both scalar and vector sequences. The one-step-ahead prediction can be represented as

(3.44) $\hat{y}(k) = g\bigl(y(k-1), y(k-2), \ldots, y(k-M)\bigr) = y(k) - e(k).$
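To make Eq. (3.44) concrete, the sketch below forms each prediction from the window y(k − 1), …, y(k − M) and accumulates the error e(k). Both the nonlinearity g and its weights are illustrative assumptions, not a model specified in the text:

```python
import numpy as np

def g(window):
    # Hypothetical nonlinearity standing in for the g of Eq. (3.44):
    # a saturating map of a weighted sum with decaying (illustrative) weights
    w = np.linspace(1.0, 0.1, num=window.size)
    return np.tanh(w @ window)

def one_step_predictions(y, M):
    """Compute y_hat(k) = g(y(k-1), ..., y(k-M)) and e(k) = y(k) - y_hat(k)."""
    y_hat = np.full(y.size, np.nan)          # undefined until M samples exist
    for k in range(M, y.size):
        window = y[k - M : k][::-1]          # y(k-1), y(k-2), ..., y(k-M)
        y_hat[k] = g(window)
    return y_hat, y - y_hat                  # predictions and errors e(k)
```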

Given the focus of this chapter, we discuss how the network models studied so far can be used when the inputs to the network correspond to the time window y(k − 1) through y(k − M). Using neural networks for prediction in this setting has become increasingly popular. For our purposes, g will be realized with an FIR network. Thus, the prediction $\hat{y}(k)$ corresponding to the output of an FIR network with input y(k − 1) can be represented as

(3.45) $\hat{y}(k) = N_M\bigl(y(k-1)\bigr),$

where $N_M$ is an FIR network with total memory length M.
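For illustration, a minimal stand-in for $N_M$ is sketched below as a single-hidden-layer network acting on the tapped delay line [y(k − 1), …, y(k − M)]; a full FIR network places an FIR filter in each synapse, but for a scalar input and one layer the tapped-delay-line view is equivalent. All sizes and initializations are assumptions:

```python
import numpy as np

class FIRNetwork:
    """Sketch of N_M: a one-hidden-layer net over a delay line of total
    memory length M (hypothetical sizes and random initialization)."""

    def __init__(self, M, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.M = M
        self.W1 = rng.normal(scale=M ** -0.5, size=(hidden, M))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=hidden ** -0.5, size=hidden)
        self.b2 = 0.0

    def predict(self, window):
        # window = [y(k-1), ..., y(k-M)]; returns y_hat(k) as in Eq. (3.45)
        h = np.tanh(self.W1 @ window + self.b1)
        return float(self.w2 @ h + self.b2)
```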

      3.3.1 Adaptation and Iterated Predictions

Figure: Schematic illustration of the network prediction configuration.

In adapting the network, the weights are adjusted to minimize the squared prediction error; the optimal solution in the mean-square sense is the conditional expectation

(3.46) $N^{*} = E\bigl[\,y(k) \mid y(k-1), y(k-2), \ldots, y(k-M)\,\bigr],$

where y(k) is viewed as a stationary ergodic process, and the expectation is taken over the joint distribution of y(k) through y(k − M). $N^{*}$ represents a closed-form optimal solution that can only be approximated in practice, owing to finite training data and constraints in the network topology.
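That the conditional expectation is the mean-square-optimal predictor can be checked numerically. The toy AR(1) process below is an assumption for illustration only; its conditional mean 0.8·y(k − 1) yields a lower empirical squared error than any perturbed alternative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy AR(1) process: y(k) = 0.8 * y(k-1) + e(k), so E[y(k) | y(k-1)] = 0.8 * y(k-1)
y_prev = rng.normal(size=100_000)
y_next = 0.8 * y_prev + rng.normal(scale=0.5, size=y_prev.size)

mse_cond_mean = np.mean((y_next - 0.8 * y_prev) ** 2)  # conditional-mean predictor
mse_perturbed = np.mean((y_next - 0.9 * y_prev) ** 2)  # some other linear predictor
print(mse_cond_mean < mse_perturbed)                   # True: N* attains minimum MSE
```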

Iterated predictions: Once the network is trained, iterated prediction is achieved by taking the estimate $\hat{y}(k)$ and feeding it back as input to the network, so that each successive output $\hat{y}(k+1) = N_M\bigl(\hat{y}(k)\bigr)$ is generated from earlier estimates rather than from observed data.
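Continuing the earlier FIRNetwork sketch (still a hypothetical stand-in for the trained network), closed-loop forecasting simply shifts each estimate into the input window:

```python
import numpy as np

def iterate_predictions(net, recent, n_steps):
    """Iterated prediction: recent = [y(k-1), ..., y(k-M)], newest first.
    Each estimate y_hat is fed back in place of an observation."""
    window = list(recent)
    forecasts = []
    for _ in range(n_steps):
        y_hat = net.predict(np.asarray(window))
        forecasts.append(y_hat)
        window = [y_hat] + window[:-1]   # the estimate becomes the new y(k-1)
    return forecasts
```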

      3.4.1 Filters as Predictors

Linear filters: As already indicated in this chapter, linear filters have been widely exploited as predictor structures. In general, there are two families of filters: those without feedback, whose output depends only upon current and past input values; and those with feedback, whose output depends upon both input values and past outputs. Such filters are best described by a constant-coefficient difference equation, as

$y(k) = \sum_{i=1}^{p} a_i\, y(k-i) + \sum_{j=0}^{q} b_j\, e(k-j),$

where y(k) is the output, e(k) is the input, $a_i$, i = 1, 2, …, p, are the autoregressive (AR) feedback coefficients, and $b_j$, j = 0, 1, …, q, are the moving average (MA) feedforward coefficients. Such a filter is termed an autoregressive moving average, or ARMA(p, q), filter, where p is the order of the autoregressive, or feedback, part of the structure, and q is the order of the MA, or feedforward, element of the structure. Due to the feedback present within this filter, the impulse response – that is, the values of y(k), k ≥ 0, when e(k) is a discrete-time impulse – is infinite in duration, and therefore such a filter is referred to as an infinite impulse response (IIR) filter.
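A direct transcription of the ARMA(p, q) difference equation above, with the sign convention used here and zero initial conditions assumed:

```python
import numpy as np

def arma_filter(e, a, b):
    """y(k) = sum_{i=1}^{p} a[i-1] * y(k-i) + sum_{j=0}^{q} b[j] * e(k-j),
    with y(k) = e(k) = 0 for k < 0 (zero initial conditions)."""
    p, q = len(a), len(b) - 1
    y = np.zeros(len(e))
    for k in range(len(e)):
        ar = sum(a[i - 1] * y[k - i] for i in range(1, min(p, k) + 1))
        ma = sum(b[j] * e[k - j] for j in range(min(q, k) + 1))
        y[k] = ar + ma
    return y
```

The same recursion is available as scipy.signal.lfilter(b, np.r_[1, -np.asarray(a)], e); note that SciPy places the feedback coefficients, with opposite sign, in its denominator coefficient vector.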

When the feedback coefficients $a_i$ are all zero, the output depends only upon current and past inputs:

(3.49) $y(k) = \sum_{j=0}^{q} b_j\, e(k-j).$
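Numerically, Eq. (3.49) amounts to convolving the input with the coefficient sequence $b_j$, as the short check below illustrates (the coefficient values are assumptions):

```python
import numpy as np

b = np.array([0.5, 0.3, 0.2])                 # illustrative b_0, b_1, b_2 (q = 2)
e = np.random.default_rng(0).normal(size=200)
y = np.convolve(e, b)[: e.size]               # y(k) = sum_j b[j] * e(k-j)

impulse = np.zeros(8)
impulse[0] = 1.0
print(np.convolve(impulse, b)[:8])            # impulse response: 0.5, 0.3, 0.2, then zeros
```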

Such a filter is called an MA(q) filter and has a finite impulse response that is identical to the coefficients $b_j$, j = 0, 1, …, q. In digital signal processing, therefore, such a filter is called an FIR filter. Similarly, Eq.
