

      Considering the scarcity of radiologists in less economically developed countries, deep learning models are increasingly used to detect abnormalities in chest X-ray (CXR) images. Fourteen thoracic pathologies are commonly targeted because, when severe, they can lead to mortality, so many researchers attempt to detect all fourteen. Broadly, the deep learning models fall into two categories: ensemble and non-ensemble models. Many researchers initialize their networks with parameters learned on the ImageNet dataset and then fine-tune the network for the task at hand. Different pre-processing techniques are employed to deal with different issues in the data. ChestX-ray14 is the most widely used dataset, as it contains a large number of annotated images. Cardiomegaly is the pathology detected by the largest number of authors, owing to its spatially spread nature. We have also discussed the factors affecting model performance and their significance. Finally, we have compared existing models on the basis of different parameters, so that future research can more easily build more robust and accurate deep models for thoracic image analysis.
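
      The transfer-learning recipe described above can be made concrete with a short sketch. The following is a minimal example, assuming PyTorch and a recent torchvision are available, and using DenseNet-121 purely as an illustrative backbone: the network is initialized from ImageNet weights and its classifier is replaced with a 14-output multi-label head for CXR classification.

import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the 14 thoracic disease labels

# Backbone initialized from ImageNet-pretrained weights.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet classifier with a 14-unit multi-label head.
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

# One independent sigmoid/binary cross-entropy term per pathology.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, labels):
    # images: (B, 3, 224, 224) tensor; labels: (B, 14) multi-hot tensor.
    optimizer.zero_grad()
    logits = model(images)            # (B, 14) raw scores
    loss = criterion(logits, labels)  # sigmoid + BCE applied per label
    loss.backward()
    optimizer.step()
    return loss.item()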

      We found that it is difficult to obtain a good AUC score for all the diseases with a single CNN. Radiologists rely on a broad range of additional data, such as patient age, gender, medical history, clinical symptoms, and possibly CXRs acquired from different views; this additional information should also be incorporated into model training. For identifying diseases that appear as small and complex structures on CXRs, a finer resolution such as 512 × 512 or 1,024 × 1,024 may be advantageous, although training and inference at these resolutions require far more computational resources. Another concern is CXR image quality: a closer look at CheXpert shows that a considerable proportion of samples are of low quality (e.g., rotated, low-resolution, overlaid with text, or noisy), which degrades model performance. Spatially spread abnormalities such as cardiomegaly and edema can be localized more accurately than small ones. Finally, because strided downsampling makes CNNs sensitive to small shifts of the input, anti-aliasing (low-pass) filters applied before downsampling can improve CNN performance.
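
      To illustrate the anti-aliasing point, the sketch below (PyTorch assumed; the module name BlurPool2d and the 3 × 3 binomial kernel are illustrative choices, not taken from a specific cited work) replaces a strided max-pool with a dense max-pool followed by a fixed low-pass blur and stride-2 subsampling, so that small shifts of the input change the downsampled features less.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    # Anti-aliased downsampling: max-pool densely (stride 1), then blur and subsample.
    def __init__(self, channels):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)          # 3 x 3 binomial (approximately Gaussian) kernel
        kernel = kernel / kernel.sum()
        # One copy of the kernel per channel, applied depthwise via groups=channels.
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        x = F.max_pool2d(x, kernel_size=2, stride=1)   # dense (non-strided) pooling
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")     # preserve spatial size before blurring
        return F.conv2d(x, self.kernel, stride=2, groups=self.channels)  # blur + subsample

# Usage sketch: swap a stride-2 nn.MaxPool2d(2) inside a CNN for BlurPool2d(num_channels).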

      Deep learning models should be able to integrate and interpret data from various imaging sources to obtain a better perspective on the patient's anatomy and thus enable efficient analysis of patient scans. This could produce deeper insights into the nature and development of disease, giving radiologists a greater understanding of the patient's condition. Along with X-ray images, additional parameters such as heredity, age, and diabetes status can be included to improve accuracy. Rather than relying on ensembles of pre-trained models, pathology-specific and data-specific models can be developed in the future by combining the strengths of existing models. The same models can also be used to detect abnormalities in other regions of the body, such as brain tumors, mouth cancer, and head and neck cancer. Novel deep learning models can also be developed to detect post-COVID effects on the chest.
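
      As a rough illustration of combining CXR images with such clinical parameters, the sketch below (PyTorch/torchvision assumed; FusionNet, num_clinical, and the choice of ResNet-18 are hypothetical, not drawn from a particular study) concatenates an image embedding with a small vector of clinical variables before the final classifier.

import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, num_clinical=3, num_classes=14):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Keep everything up to and including global average pooling; drop the final fc layer.
        self.image_features = nn.Sequential(*list(backbone.children())[:-1])
        img_dim = backbone.fc.in_features  # 512 for ResNet-18
        self.head = nn.Sequential(
            nn.Linear(img_dim + num_clinical, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, image, clinical):
        # image: (B, 3, 224, 224); clinical: (B, num_clinical), e.g. [age, heredity, diabetes status]
        f = self.image_features(image).flatten(1)           # (B, 512) image embedding
        return self.head(torch.cat([f, clinical], dim=1))   # (B, num_classes) logits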
