
Each technique appears in two rows, corresponding to the two sets of results reported in the source.

Technique             MAE     GIMAE   GPIMAE  F1
MF [10]               1.2077  1.3055  0.8079  0.4491
MF [10]               1.2961  1.2755  0.6204  0.4882
2016_Hybrid AE [23]   0.6531  0.6022  0.8406  0.6789
2016_Hybrid AE [23]   0.7691  0.6314  0.8244  0.6798
2011_Liwei Liu [13]   0.7720  0.5262  0.6282  0.6102
2011_Liwei Liu [13]   0.7233  0.5750  0.7232  0.6706
2017_Learning [22]    0.6204  0.5907  0.6103  0.6907
2017_Learning [22]    0.6514  0.5019  0.5824  0.7107
2017_CCC [27]         0.6737  0.5878  0.5901  0.4497
2017_CCC [27]         0.6888  0.6242  0.7577  0.5380
2017_CCA [27]         0.6914  0.6124  0.6095  0.4826
2017_CCA [27]         0.6891  0.5417  0.5972  0.5640
2017_CIC [27]         0.7129  0.6536  0.6814  0.4636
2017_CIC [27]         0.7012  0.6420  0.7439  0.5370
Extended_SAE_3        0.5674  0.5210  0.5379  0.7458
Extended_SAE_3        0.6080  0.4636  0.5673  0.7109
Extended_SAE_5        0.5593  0.5075  0.5490  0.7384
Extended_SAE_5        0.5854  0.4633  0.5592  0.6073

       3.4.2.2 Training Setting

      Here, they trained their stacked autoencoder with three hidden layers and with five hidden layers, applying both sigmoid and hyperbolic tangent activations. The sigmoid transfer function maps the data into the range between 0 and 1; the hyperbolic tangent maps it into the range between −1 and 1. They used the mini-batch gradient descent optimizer and the Adam optimizer for training, and added dropout regularization to each hidden layer with probability p = 0.5. Experiments were carried out over different parameter settings, and the best-performing parameter values are reported in the research work. In this proposed approach, 90% of the data are used for training and the remaining samples for testing [4].
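      To make this setup concrete, the following is a minimal sketch in PyTorch, assuming illustrative layer widths, input dimensionality, learning rate, batch size, and epoch count (none of these values are taken from the cited work): a stacked autoencoder with sigmoid or tanh activations, dropout with p = 0.5 on each hidden layer, the Adam optimizer, and a 90/10 train/test split.

```python
# Minimal sketch of the described training setup (PyTorch). Layer widths,
# input size, learning rate, batch size, and epochs are assumptions, not
# values reported in the cited work.
import torch
import torch.nn as nn

def build_stacked_autoencoder(input_dim, hidden_dims, activation="sigmoid", p_drop=0.5):
    """Stacked autoencoder with dropout (p = 0.5) on every hidden layer."""
    act_cls = nn.Sigmoid if activation == "sigmoid" else nn.Tanh
    dims = [input_dim] + list(hidden_dims)
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), act_cls(), nn.Dropout(p_drop)]
    # Reconstruction layer: no dropout; the final activation keeps outputs in
    # the transfer function's range ((0, 1) for sigmoid, (-1, 1) for tanh).
    layers += [nn.Linear(dims[-1], input_dim), act_cls()]
    return nn.Sequential(*layers)

# Synthetic data in [0, 1) stands in for the normalized rating matrix;
# 90% of the samples train the model and the rest test it, as in the text.
X = torch.rand(1000, 64)
n_train = int(0.9 * len(X))
X_train, X_test = X[:n_train], X[n_train:]

# Three hidden layers; use e.g. [48, 32, 16, 32, 48] for the five-layer variant.
model = build_stacked_autoencoder(64, [32, 16, 32])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the text
loss_fn = nn.MSELoss()

model.train()
for epoch in range(20):
    for i in range(0, n_train, 32):  # mini-batches of 32 (assumed size)
        batch = X_train[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch)
        loss.backward()
        optimizer.step()

# MAE on the held-out 10%, one of the metrics reported in the table above.
model.eval()
with torch.no_grad():
    test_mae = (model(X_test) - X_test).abs().mean().item()
print(f"test MAE: {test_mae:.4f}")
```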

       3.4.2.3 Result

      The comparative results are summarized in the table at the beginning of this section: the extended stacked autoencoder models (Extended_SAE_3 and Extended_SAE_5) obtain the lowest MAE, GIMAE, and GPIMAE among the compared techniques, and Extended_SAE_3 also attains the highest F1.

      3.4.3 Situation-Aware Multi-Criteria Recommender System: Using Criteria Preferences as Contexts by Zheng
