
the physical world using sensors. To gain the advantages of both systems, the two must be interfaced: the EEG sensor is connected to the Arduino through the HC-05 Bluetooth module, with the HC-05 acting as master and the EEG sensor as slave [25]. Communication follows the TTL master/slave UART protocol. The HC-05 is designed for full-speed Bluetooth operation with full piconet support, enabling some of the industry's highest levels of sensitivity and accuracy at minimal power consumption [28].

      Here we use the SmartBridge cloud to store the data. The data collected from the sensors is sent through an API to the SmartBridge domain, under the health monitoring system sub-domain. After logging in, the patient can view his health details. In this research we use a pulse sensor to measure the patient's heartbeat, an LM35 to measure his body temperature, and EEG sensors to read his brain signals, so after login he sees the readings displayed in tabular form, as shown in the figure. The EEG readings come from a MindWave headset, which consists of one main sensor and one reference electrode. In future work, the system can be made more sophisticated by expanding the set of sensors used to read the brain waves. The core processing of the MindWave Mobile headset is performed by the ThinkGear ASIC Module chip; in this research, we use the TGAM chip in the sensor [22].
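      The TGAM chip streams data as ThinkGear packets: two 0xAA sync bytes, a payload length, the payload, and an inverted-sum checksum, with attention and meditation carried as single-byte rows (codes 0x04 and 0x05). Below is a minimal Python sketch of a parser for that stream format; the function names are ours, and multi-byte rows such as the raw waveform are simply skipped:

```python
def parse_thinkgear_payload(payload):
    """Extract single-byte attention/meditation rows from a ThinkGear payload."""
    values, i = {}, 0
    while i < len(payload):
        code = payload[i]; i += 1
        if code >= 0x80:              # multi-byte row: next byte gives value length
            vlen = payload[i]; i += 1
            i += vlen                 # skip raw-wave and other multi-byte rows
        else:                         # single-byte rows
            val = payload[i]; i += 1
            if code == 0x04:
                values["attention"] = val
            elif code == 0x05:
                values["meditation"] = val
            elif code == 0x02:
                values["poor_signal"] = val
    return values

def parse_packet(stream):
    """Parse one [0xAA, 0xAA, plen, payload..., checksum] packet."""
    assert stream[0] == 0xAA and stream[1] == 0xAA, "missing sync bytes"
    plen = stream[2]
    payload = stream[3:3 + plen]
    checksum = stream[3 + plen]
    assert (~sum(payload)) & 0xFF == checksum, "checksum mismatch"
    return parse_thinkgear_payload(payload)
```

A packet carrying poor-signal 0, attention 53, and meditation 40 would then decode into the table shown to the patient after login.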

      For the EEG sensor's attention and meditation values, a range of 1–100 was used for the calculation:

       • A value from 40 to 60 is considered "neutral".

       • A value from 60 to 80 is slightly elevated, and interpreted as higher than normal.

       • A value from 80 to 100 is considered "high", a strong indication of severe levels.
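      These bands can be expressed as a small lookup function, sketched below in Python; readings below 40 are not defined in the text, so the "low" label there is our assumption:

```python
def classify_eeg_level(value):
    """Map an attention/meditation reading on the 1-100 scale to a band."""
    if not 1 <= value <= 100:
        raise ValueError("attention/meditation values lie in the range 1-100")
    if value >= 80:
        return "high"           # strong indication of severe levels
    if value > 60:
        return "slightly high"  # higher than normal
    if value >= 40:
        return "neutral"
    return "low"                # below 40: not defined in the text (assumption)
```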

      1.4.3 Cloud Feature Extraction

      The most important step in creating an EEG signal classification system is generating mathematical representations and reductions of the input data that allow the input signal to be properly differentiated into its respective classes. These mathematical representations of the signal are, in a sense, a mapping of a multidimensional space (the input signal) into a space of fewer dimensions. This dimensional reduction is known as "feature extraction". Ultimately, the extracted feature set should preserve only the most important information from the original signal [23].

Set  Mathematical transform               Feature numbers
1    Linear predictive code taps          1–5
2    Fast Fourier transform statistics    6–12
3    Mel-frequency cepstral coefficients  13–22
4    Log(FFT) analysis                    23–28
5    Phase shift correlation              29–36
6    Hilbert transform statistics         37–44
7    Wavelet decomposition                45–55
8    1st, 2nd, 3rd derivatives            56–62
9    1st, 2nd, 3rd derivatives            63–67
10   Auto-regressive parameters           68–72
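      Each row of the table contributes one block of the 72-entry feature vector. The sketch below illustrates two of the rows (FFT statistics and the 1st–3rd derivatives) in pure Python; a practical implementation would use numpy or scipy rather than a naive DFT, and the particular statistics chosen here are our assumption:

```python
import cmath
import statistics

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (numpy.fft would be used in practice)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal)))
            for k in range(n // 2)]

def derivative(signal):
    """First-order finite difference."""
    return [b - a for a, b in zip(signal, signal[1:])]

def extract_features(signal):
    """A few entries of a feature vector: FFT and derivative statistics."""
    mags = dft_magnitudes(signal)
    feats = [statistics.mean(mags), statistics.pstdev(mags), max(mags)]
    d = signal
    for _ in range(3):              # 1st, 2nd, 3rd derivatives
        d = derivative(d)
        feats += [statistics.mean(d), statistics.pstdev(d)]
    return feats
```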

      1.4.4 Feature Optimization

      In order to find the features with the most potential, an algorithm was implemented to approximate individual feature strength with respect to every other feature [30]. The strength of a feature was determined by the accuracy with which the preictal state was classified, averaged over several classifications. Similar to Cross-Validation by Elimination, the HANNSVM algorithm repartitions the feature set, performs a set of classifications, finds the best feature sets to drop, and then adjusts the feature space to contain only features that improve the accuracy.

      1. Evaluate the accuracy of the classification using all N feature sets.

      2. Dropping one feature set at a time, repartition the feature space into N subsets of N − 1 feature sets each, and at position K of vector P save the dropped set together with the resulting accuracy.

      3. Denote the index of P with the maximum accuracy as B, and drop from the final feature space all the features listed in P from B to N.

      The resulting feature set has accuracy similar to that found at position B in P. Undertraining and overtraining must still be taken into consideration, since they can affect the accuracy of a prediction.
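      The indexing in steps 2–3 is ambiguous, but the procedure can reasonably be read as greedy backward elimination: repeatedly drop the feature set whose removal yields the highest accuracy, stopping once no drop helps. A sketch under that reading, with `accuracy` as a caller-supplied stand-in for the classifier's cross-validated accuracy:

```python
def eliminate_feature_sets(feature_sets, accuracy):
    """Greedy backward elimination in the spirit of steps 1-3 above.

    `accuracy(sets)` stands in for the classifier's cross-validated
    accuracy on the given collection of feature sets.
    """
    kept = list(feature_sets)
    best = accuracy(kept)            # step 1: baseline with all N sets
    while len(kept) > 1:
        # step 2: accuracy of each leave-one-out subset, stored like vector P
        p = [(s, accuracy([t for t in kept if t != s])) for s in kept]
        dropped, acc = max(p, key=lambda kv: kv[1])   # step 3: position B
        if acc < best:               # no drop improves accuracy: stop
            break
        best, kept = acc, [t for t in kept if t != dropped]
    return kept, best
```

With a toy accuracy function that rewards informative sets and penalizes noisy ones, the noisy sets are eliminated first and the informative ones survive.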

      1.4.5 Classification and Validation

      The two methods in this section were developed to complement the classification algorithms and enhance their classification potential for noisy dynamical systems that change state over time.

      The first method, an SVM-based approach called Cross-Validation by Elimination, classifies samples by testing the amount of correlation (determined by the accuracy of classifications) each sample has to every state, and then removing the classes that are least correlated in order to improve classification accuracy. The algorithm isolates each of the classes, compares the prediction results, and then makes a final decision based on a function of the independent predictions [23, 29].
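      The elimination decision itself can be sketched as follows; here the per-class scores stand in for the accuracies of the independent per-class predictions, and the number of surviving classes is our assumption:

```python
def classify_by_elimination(scores, survivors=2):
    """Pick a class by eliminating the least-correlated candidates.

    `scores` maps each candidate class to the sample's correlation with it
    (in the text, the accuracy of an independent per-class prediction).
    Classes are removed one at a time until `survivors` remain, and the
    final decision is taken among the survivors.
    """
    remaining = dict(scores)
    while len(remaining) > survivors:
        weakest = min(remaining, key=remaining.get)  # least-correlated class
        del remaining[weakest]
    # final decision: here simply the best of the independent predictions
    return max(remaining, key=remaining.get)
```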
