the sum runs over the sources at layer $l$ for a fixed neuron $k$ at layer $l+1$, whereas in definition (4.8) the sum runs over the sinks at layer $l+1$ for a fixed neuron $i$ at layer $l$. When Eq. (4.8) is used to define the relevance of a neuron from its messages, condition (4.9) is sufficient to ensure that Eq. (4.2) holds. Summing the left-hand side of Eq. (4.9) over all neurons $k$ at layer $l+1$ yields

$$
\sum_{k} R_k^{(l+1)}
= \sum_{k} \sum_{i:\, i \text{ is input for neuron } k} R_{i \leftarrow k}^{(l,\, l+1)}
= \sum_{i} \sum_{k:\, i \text{ is input for neuron } k} R_{i \leftarrow k}^{(l,\, l+1)}
= \sum_{i} R_i^{(l)}
$$

One can interpret condition (4.9) as saying that the messages $R_{i \leftarrow k}^{(l,\, l+1)}$ are used to distribute the relevance $R_k^{(l+1)}$ of a neuron $k$ onto its input neurons at layer $l$. In the following sections, we will use this notion and the stricter form of relevance conservation given by definition (4.8) and condition (4.9). We set Eqs. (4.8) and (4.9) as the main constraints defining LRP: any solution following this concept must define the messages $R_{i \leftarrow k}^{(l,\, l+1)}$ so that these equations are satisfied.
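To make the bookkeeping in Eqs. (4.8) and (4.9) concrete, here is a minimal numeric sketch in Python. It assumes the contribution-proportional message choice $R_{i \leftarrow k}^{(l,\, l+1)} = \frac{a_i w_{ik}}{\sum_j a_j w_{jk}} R_k^{(l+1)}$, anticipating the explicit construction developed below; the activations, weights, and relevances are made-up toy values, and in practice a small stabilizing term is usually added to the denominator to avoid division by zero.

```python
import numpy as np

# Toy values: 3 neurons at layer l, 2 neurons at layer l+1.
a = np.array([1.0, 0.5, 2.0])          # activations a_i at layer l
w = np.array([[0.3, -0.1],             # weights w_ik from layer l to l+1
              [0.8,  0.4],
              [-0.2, 0.6]])
R_next = np.array([2.0, 1.0])          # relevances R_k^(l+1)

z = a[:, None] * w                     # forward contributions z_ik = a_i * w_ik
M = z / z.sum(axis=0) * R_next         # messages R_{i<-k}, proportional split

# Condition (4.9): summing a neuron k's messages over its sources i
# recovers its relevance R_k^(l+1).
assert np.allclose(M.sum(axis=0), R_next)

# Definition (4.8): the relevance of neuron i at layer l is the sum of
# its messages over the sinks k it feeds into.
R_l = M.sum(axis=1)

# Swapping the order of summation, as in the derivation above, gives
# layer-wise conservation, Eq. (4.2).
assert np.isclose(R_l.sum(), R_next.sum())
print(R_l, R_l.sum(), R_next.sum())
```

Any other message definition satisfying condition (4.9) would pass the same conservation check; the proportional split is just one choice.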

Now we can derive an explicit formula for LRP for our example by defining the messages $R_{i \leftarrow k}^{(l,\, l+1)}$. The LRP should reflect the messages passed during classification time. We know that during classification, a neuron $i$ inputs $a_i w_{ik}$ to neuron $k$, provided that $i$ has a forward connection to $k$. Thus, we can rewrite the expressions for $R_7^{(3)}$ and $R_4^{(2)}$ so that they match the structure of the right-hand sides of the same equations, as follows:

$$
f(x) \approx f(x_0) + Df(x_0)\,[x - x_0]
= f(x_0) + \sum_{d=1}^{V} \frac{\partial f}{\partial x_d}(x_0)\,\bigl(x_d - x_{0,d}\bigr)
\tag{4.13}
$$
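As a small illustration of the decomposition in Eq. (4.13), the sketch below computes the per-dimension first-order terms $R_d = \frac{\partial f}{\partial x_d}(x_0)\,(x_d - x_{0,d})$ for a made-up linear function and root point $x_0$; for a linear $f$ the expansion is exact, so the terms sum exactly to $f(x) - f(x_0)$.

```python
import numpy as np

# Made-up linear scoring function f(x) = w.x with constant gradient w.
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w

x = np.array([1.0, -2.0, 0.5])         # input to explain
x0 = np.zeros_like(x)                  # root point with f(x0) = 0

# Per-dimension first-order terms of Eq. (4.13).
R = grad_f(x0) * (x - x0)

print(R)                               # contribution of each dimension
print(R.sum(), f(x) - f(x0))           # identical here, since f is linear
```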