Figure 2.9 Formation of axon. (a) Input and output neuron oligonucleotides and the linker sequence. (b) Input and output neurons get hybridized to each other using the linker sequence. (c) Input and output neurons are joined by T4 DNA ligase; DNA polymerase starts the extension of the linker molecule along the output oligonucleotide. (d) Partially double-stranded axon is formed after completion of polymerase extension.
To develop the model of a single-layer neural network, a set of input neuron oligonucleotides, complementary to the input neurons, is used. This set and the designed axon sequences (Figure 2.9) are mixed together, and the mixture is treated with DNA polymerase in the presence of an appropriate reaction buffer. Each input neuron hybridizes to the single-stranded part of a partially double-stranded axon. This hybridization releases the output oligonucleotide that was already attached to the output end of the axon molecule. The mechanism is illustrated in Figure 2.10.
Figure 2.10 Formation of output molecule using DNA neural network. (a) Axon molecule and input oligonucleotide. (b) Input molecule hybridizes to the corresponding axon; extension of the strands gets started using DNA polymerase. (c) The extension of the primer leads to the release of output DNA oligonucleotide.
The single-layer neural model discussed above can be generalized to multi-layer networks. To achieve this, each layer is designed individually; the output of one layer serves as the input of the following layer in the multi-layer model.
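The release mechanism can be mirrored in a small string-level simulation. The sketch below is a loose abstraction, not the wet-lab protocol: an axon is modelled as a mapping from its single-stranded recognition site to the pre-attached output oligonucleotide, hybridization is reduced to exact sequence matching, and the polymerase extension step is implicit in the release of the output. All sequences and names are invented for the example; a second layer simply reuses the released outputs as its inputs.

```python
# Hypothetical string-level abstraction of the single-layer DNA network:
# an "axon" maps its single-stranded recognition site to the output
# oligonucleotide attached to its output end; a matching input releases it.

def complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq))

def run_layer(axons: dict, inputs: list) -> list:
    """Release the output oligo of every axon whose site matches an input."""
    released = []
    for oligo in inputs:
        site = complement(oligo)          # the strand the input hybridizes to
        if site in axons:                 # hybridization + polymerase step
            released.append(axons[site])  # output oligonucleotide is released
    return released

# Two-layer network: outputs of layer 1 become the inputs of layer 2.
layer1 = {complement("ATGCC"): "GGTAC", complement("TTACG"): "CCATG"}
layer2 = {complement("GGTAC"): "AAGTC"}

hidden = run_layer(layer1, ["ATGCC", "TTACG"])
print(hidden)                      # -> ['GGTAC', 'CCATG']
print(run_layer(layer2, hidden))   # -> ['AAGTC']
```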
2.4.2 Design Strategy of DNA Perceptron
In 2009, Liu et al. [5] exploited the massive parallelism of DNA molecules to design a perceptron model, which reduces the running time of the algorithm to a great extent. The structure of the perceptron is presented in Figure 2.11. It has two layers: the input layer generates n input signals and the output layer generates m output signals. In addition to the n input signals, there is one further signal termed the bias signal. The ith input neuron is joined to the jth output neuron by a joining weight denoted wij. Each of the m output neurons receives the n signals from the input layer and combines them with the corresponding n weight coefficients. The weighted sum for each output neuron can be expressed by the following equation:
(2.6)  yj(k) = w1j x1(k) + w2j x2(k) + … + wnj xn(k) + w(n+1)j
where
k ≡ kth sample of the training set;
xi(k) ≡ value of the ith input signal in the kth sample;
wij ≡ weight value joining the ith input and the jth output neuron;
w(n+1)j ≡ weight coefficient of the additional bias signal (whose value is always 1) for the jth output neuron;
yj(k) ≡ weighted sum received by the jth output neuron.
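The weighted sum of Equation (2.6) can be checked numerically. The sketch below illustrates only the arithmetic, not the DNA implementation; the array names X, W, and bias, the random values, and the 0.5 threshold applied to produce a binary output are assumptions made for the example.

```python
import numpy as np

# Weighted sum of Equation (2.6) for all m output neurons at once.
n, m = 4, 3                       # n input neurons, m output neurons
X = np.array([1, 0, 1, 1])        # one input sample x_1 .. x_n
W = np.random.rand(n, m)          # w_ij joins input i with output j
bias = np.random.rand(m)          # weight of the bias signal (value 1)

weighted_sum = X @ W + bias       # y_j = sum_i w_ij * x_i + w_(n+1)j
output = (weighted_sum > 0.5).astype(int)   # illustrative threshold only
print(weighted_sum)
print(output)
```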
Figure 2.11 Structure of perceptron [5].
The designed algorithm for the perceptron categorizer model comprises two processes: the training process and the category process.
• Training process: In this process, the ideal input values and the ideal output values of the training samples are used to train the weight coefficients and obtain the set of weights. The sample vector is represented by Equation (2.7).
(2.7)  (X(k), Y(k)) = ((x1(k), x2(k), …, xn(k)), (y1(k), y2(k), …, ym(k)))
The set of weights, wk, is represented by the following expression:
(2.8)
After the values of all of the wk have been determined, w can be calculated by taking the intersection of the wk.
• Category process: If an unknown vector is given as the input, the perceptron model categorizes it using the weight set w computed in the training process; a brute-force sketch of both processes is given below.
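The following sketch imitates the two processes in a discrete toy setting. It is an illustration, not Liu et al.'s construction: candidate weight vectors are enumerated over a small integer range, a step activation is assumed, wk collects the candidates consistent with the kth sample, and w is their intersection. The logical-AND training samples and the weight range are invented for the example.

```python
from itertools import product

def fires(weights, x, bias):
    """Step activation on the weighted sum (assumed for this toy example)."""
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

# Ideal (input, output) pairs of the training set: logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
candidates = list(product(range(-2, 3), repeat=3))   # (w1, w2, bias)

# Training process: one set w_k of consistent candidates per sample.
w_sets = []
for x, y in samples:
    w_sets.append({c for c in candidates if fires(c[:2], x, c[2]) == y})

w = set.intersection(*w_sets)                        # w = intersection of all w_k
print(len(w), "consistent weight vectors, e.g.", next(iter(w)))

# Category process: any member of w classifies an unknown input vector.
w1, w2, b = next(iter(w))
print(fires((w1, w2), (1, 1), b))                    # -> 1
```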
2.4.2.1 Algorithm
The researchers further designed the algorithm of a linear perceptron categorizer in the context of DNA computation. The steps of the algorithm are given below.
• The input vector, X(k) = (x1, x2, …, xn), is represented by n DNA strands which are extracted from the sample molecular library. If the input value of the ith neuron is 0 (where i = 1, 2, …, n), it is represented by one designated coding mode; if the input value is 1, it is represented by a different coding mode. The value of the additional bias signal is always 1, so the bias signal has its own coding mode. In the molecular library there are therefore 2n + 1 types of DNA strands, each of which is five bases long. The output values are likewise represented by five-base DNA strands: the output value 0 of the jth neuron (where j = 1, 2, …, m) has one coding mode and the output value 1 has another.
• These strands are hybridized with the DNA library representing the weight coefficients.
• DNA strands are made to represent the sets of weights denoted by w1, w2, …, wp.
• The DNA strands representing the weight coefficient wij have four domains. The coding modes are presented by the following expression:
(2.9)
where
the first domain takes in the input value of the ith input neuron;
the second domain, wij, which is fifteen bases long, represents the weight coefficient;
the third domain, vij, represents the arithmetic of the input value and wij; the length of this domain is proportional to its true value, but not less than five bases;
the fourth domain represents the weight coefficient w(i+1)j, which joins the (i+1)th input neuron and the jth output neuron.
There are four special weights, represented by the expressions illustrated below:
(2.10)
(2.11)
(2.12)
(2.13)
• The intersection is calculated by performing gel electrophoresis on the DNA strands representing w1, w2, …, wp, and w is derived.
• Classification of an unknown input vector can then be derived from the DNA strands representing the weight set w (a length-based software sketch of these steps is given below).
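The length-based arithmetic behind these strands can be imitated in software. The sketch below is a loose abstraction under stated assumptions: each domain is represented only by its length in bases, the third domain is taken to grow with the product of the input value and the weight (never below five bases), and the domain lengths and scaling factor are invented for illustration. The point is only that the total length of a ligated chain encodes the weighted sum that gel electrophoresis later separates.

```python
# Loose software imitation of the length-encoded weight strands.
# Every domain is modelled only by its length in bases; the actual sequences
# and the electrophoresis protocol are abstracted away. Constants are
# illustrative, except the 5- and 15-base lengths named in the text.

INPUT_DOMAIN = 5          # five-base coding of the input value
WEIGHT_DOMAIN = 15        # fifteen-base weight-coefficient domain
MIN_V_DOMAIN = 5          # third domain is never shorter than five bases
NEXT_WEIGHT_DOMAIN = 15   # domain pointing to the weight w_(i+1)j

def strand_length(x_i: int, w_ij: float, scale: int = 10) -> int:
    """Total length of the strand encoding w_ij for a given input value."""
    v_ij = max(MIN_V_DOMAIN, round(scale * x_i * w_ij))   # arithmetic domain
    return INPUT_DOMAIN + WEIGHT_DOMAIN + v_ij + NEXT_WEIGHT_DOMAIN

def weighted_sum_length(x: list, w_row: list) -> int:
    """Ligated chain of strands for one output neuron: domain lengths add up,
    so the total chain length encodes the weighted sum."""
    return sum(strand_length(xi, wij) for xi, wij in zip(x, w_row))

x = [1, 0, 1]                      # input vector X(k)
w_row = [0.8, 0.3, 0.5]            # weights joining the inputs to one output
print(weighted_sum_length(x, w_row))
```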
2.4.2.2 Implementation of the Algorithm
• Sample Input: