World Congress Thoracic Imaging, June 18-21, 2017, Hynes Convention Center, Boston, Massachusetts

Sponsoring Societies:

Fleischner Society
Society of Thoracic Radiology
European Society of Thoracic Imaging
Japanese Society of Thoracic Radiology
Korean Society of Thoracic Radiology


Unsupervised Opacity Annotation of Diffuse Lung Diseases Using Deep Autoencoder and Bag-of-Features
Shingo Mabu, Masanao Obayashi, Takashi Kuremoto, Noriaki Hashimoto, Yasushi Hirano, Shoji Kido
Yamaguchi University, Ube, Japan

Purpose: Deep neural networks (DNNs) have been applied to medical image diagnosis to classify normal and abnormal opacities. However, training DNNs requires a large amount of data with correct opacity annotations to achieve high classification accuracy. The annotations must be given to thousands of ROIs (regions of interest), not merely to the whole image of each case; producing annotations for all the ROIs is therefore extremely labor-intensive for radiologists. This research aims to realize unsupervised opacity annotation, which requires no manual annotations and reduces the cost of training classifiers. Specifically, a clustering algorithm combining a deep autoencoder with bag-of-features is developed and applied to an automatic annotation system for CT images of diffuse lung diseases.
Materials and Methods: The proposed method performs ROI-based annotation. (1) We used 406 lung CT images (406 patients) acquired at Yamaguchi University Hospital, Japan, comprising normal cases and diffuse lung diseases. The 406 images were divided into 10094 ROI images (32 × 32 pixels). (2) Each ROI was encoded by a 7-layer deep autoencoder, and a large number of feature vectors were extracted. (3) Each ROI was represented as a histogram of the extracted features using the bag-of-features method. (4) The k-means algorithm was applied to the ROI histograms to generate clusters of consolidation (CON), ground-glass opacity (GGO), emphysema (EMP), honeycomb (HCM), nodular (NOD), and normal (NOR).
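The four steps above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the 7-layer deep autoencoder is replaced here by PCA as a stand-in encoder, and the sub-patch size, codebook size, and cluster count are illustrative assumptions rather than values from the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for the 32x32 ROI images extracted from the CT scans.
rois = rng.random((100, 32, 32))

def subpatch_features(roi, size=8, stride=8):
    """Cut an ROI into small sub-patches and flatten each into a vector."""
    feats = []
    for i in range(0, roi.shape[0] - size + 1, stride):
        for j in range(0, roi.shape[1] - size + 1, stride):
            feats.append(roi[i:i + size, j:j + size].ravel())
    return np.array(feats)

# Step 2: encode sub-patch vectors into low-dimensional features.
# (PCA here stands in for the paper's 7-layer deep autoencoder.)
all_feats = np.vstack([subpatch_features(r) for r in rois])
encoder = PCA(n_components=16).fit(all_feats)

# Step 3: bag-of-features -- learn a visual codebook over the encoded
# features, then represent each ROI as a normalized codeword histogram.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(
    encoder.transform(all_feats))

def roi_histogram(roi):
    words = codebook.predict(encoder.transform(subpatch_features(roi)))
    hist = np.bincount(words, minlength=32).astype(float)
    return hist / hist.sum()

histograms = np.array([roi_histogram(r) for r in rois])

# Step 4: cluster the ROI histograms with k-means (the paper obtains
# 64 clusters; 6 is used here only to match the toy data size).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(histograms)
```

In the unsupervised setting, the resulting clusters carry no opacity names by themselves; the class labels (CON, GGO, etc.) are attached only afterwards when the clusters are compared against expert annotations.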
Results: The clustering accuracy was evaluated against a gold standard provided by an expert radiologist. The overall accuracy over the six opacity classes was 72.8%. Note that this accuracy was obtained by unsupervised learning, which uses no prior information on the opacities. Sixty-four clusters were generated, comprising 23 NOR, 5 CON, 17 GGO, 18 EMP, 1 HCM, and 0 NOD clusters; the number of clusters per opacity class is influenced by the amount of data. The per-class clustering accuracy was 63.1% for NOR, 83.7% for CON, 78.5% for GGO, 84.4% for EMP, and 53.5% for HCM. The accuracy for NOD could not be calculated because no NOD clusters were generated. For comparison, bag-of-features without the deep autoencoder achieved 69.5%, and HOG (Histogram of Oriented Gradients) features achieved 36.6%.
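The abstract does not spell out how clustering accuracy is computed; a common choice for evaluating unsupervised clusters against a gold standard is purity-style accuracy, where each cluster votes for its majority reference label. A minimal sketch under that assumption:

```python
import numpy as np

def clustering_accuracy(cluster_ids, gold_labels):
    """Purity-style accuracy (an assumed metric, not confirmed by the
    abstract): each cluster is assigned its majority gold-standard label,
    and an ROI counts as correct if its gold label matches that majority."""
    cluster_ids = np.asarray(cluster_ids)
    gold_labels = np.asarray(gold_labels)
    correct = 0
    for c in np.unique(cluster_ids):
        members = gold_labels[cluster_ids == c]
        _, counts = np.unique(members, return_counts=True)
        correct += counts.max()
    return correct / len(gold_labels)

# Toy example: two pure clusters and one mixed cluster.
acc = clustering_accuracy([0, 0, 1, 1, 2, 2],
                          ["GGO", "GGO", "EMP", "EMP", "NOR", "GGO"])
# 5 of the 6 ROIs match their cluster's majority label, so acc = 5/6.
```

This also makes explicit why no accuracy could be reported for NOD: with zero NOD clusters, no cluster's majority vote can ever be NOD.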
Conclusions: The proposed method formed clusters without using any opacity annotations and achieved better clustering accuracy than the comparison methods, indicating its potential for computer-aided diagnosis while reducing the cost of annotating CT images.
