Dana H. Brooks
Jennifer G. Dy, Deniz Erdogmus, Milind Rajadhyaksha, Gregory C. Sharp
Date of Award
2011
Degree Name
Doctor of Philosophy
Department or Academic Unit
College of Engineering, Department of Electrical and Computer Engineering
computer engineering, 3D level sets, dermis-epidermis junction, dynamic texture segmentation, locally smooth classification, model-based image segmentation, shape model
Electrical and Computer Engineering
For some 3D biomedical image segmentation problems, there is neither consistent intensity contrast nor any other clear discriminative feature between the object of interest and the surrounding region, which makes segmentation difficult. This dissertation addresses such problems with machine learning tools: the segmentation algorithm incorporates a combination of global and local models that are either learned during a training phase from manually labeled data sets or specified in advance from domain knowledge, together with minimal initial user markup. We applied this approach to two quite disparate problems: localization of the dermis-epidermis junction (DEJ) in confocal reflectance microscopy (CRM) image stacks of skin, with collaborators in the Dermatology department of Memorial Sloan Kettering Cancer Center; and 3D segmentation of the esophagus in thoracic CT scans for radiation therapy planning, with collaborators in the Radiation Oncology department of Massachusetts General Hospital. For 3D DEJ detection in confocal reflectance images, difficulty comes both from low contrast and from high intra-subject and inter-subject variability, which makes non-adaptive methods ineffective. We therefore combined a locally smooth, texture-based classification of 2D slices, with automated feature selection, and a self-adjusting "dynamic" segmentation method along the optical axis that detects changes along that axis corresponding to boundaries between skin layers. The method yields two boundaries, epidermal and dermal, that trap the DEJ between them, with mean errors of 6.07±1.57 μm for the epidermis boundary and 5.39±1.95 μm for the dermis boundary over 7 stacks, relative to boundaries labeled by clinicians.
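As a hedged illustration only (not the dissertation's actual algorithm, whose details are not given in this abstract), the idea of locating a layer boundary along the optical axis from per-slice classifier output can be sketched as follows: given a stack of per-pixel class scores, smooth each axial profile and take the first depth at which the smoothed score crosses a threshold. The function name and parameters here are hypothetical.

```python
import numpy as np

def axial_boundary(scores, threshold=0.5, window=3):
    """Hypothetical sketch: for each (x, y) position, find the first depth
    along the optical (z) axis where the smoothed classifier score crosses
    `threshold`. `scores` has shape (Z, H, W) with values in [0, 1]."""
    z, h, w = scores.shape
    # Moving-average smoothing along z to suppress slice-to-slice noise.
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda p: np.convolve(p, kernel, mode="same"), 0, scores
    )
    # First depth where the smoothed score exceeds the threshold; fall back
    # to the deepest slice if the threshold is never reached.
    above = smoothed >= threshold
    depth = np.where(above.any(axis=0), above.argmax(axis=0), z - 1)
    return depth
```

On a synthetic stack whose class scores flip from 0 to 1 at a fixed depth, the detected boundary is a flat depth map near that transition; the smoothing window shifts the crossing by up to one slice.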
For the esophagus, we constructed anatomically based spatial models of the centerline and used them, together with shape, appearance, and other models learned through training, in a 3D level-set segmentation framework. We introduced a new two-part shape model: 1) a PCA-based global affine shape model and 2) locally smooth nonlinear deformations. We also proposed a simple two-step shape alignment method based on anatomical landmark locations and the estimated centerline. The algorithm achieved a mean error of 1.9±0.6 mm and a mean Dice coefficient of 0.78±0.07 over 21 CT scans in comparison with expert-labeled esophagi.
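As a rough illustration of the first component of such a shape model (a sketch under the assumption that aligned training shapes are encoded as fixed-length coordinate vectors; the dissertation's actual representation may differ), a PCA shape basis can be built from training shapes and used to reconstruct a shape from its leading modes. All names here are hypothetical.

```python
import numpy as np

def pca_shape_model(shapes, n_modes=2):
    """Hypothetical sketch: build a PCA shape model from aligned training
    shapes, each flattened to a row vector (n_shapes, dim)."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal modes of shape variation via SVD of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]  # (n_modes, dim), orthonormal rows
    return mean, modes

def project(shape, mean, modes):
    """Project a shape onto the leading modes and reconstruct it."""
    coeffs = modes @ (shape - mean)
    return mean + modes.T @ coeffs
```

Restricting a candidate shape to the span of the leading modes is what constrains a global model like this to plausible shapes; the locally smooth nonlinear deformations mentioned above would then account for residual variation the linear basis cannot capture.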
Kurugol, Sila, "Machine learning and model based 3D segmentation algorithms for challenging medical imaging problems" (2011). Electrical Engineering Dissertations. Paper 42. http://hdl.handle.net/2047/d20002080