Interactive segmentation in multimodal medical imagery using a Bayesian transductive learning approach
Abstract
Labeled training data in the medical domain is scarce and expensive to obtain, and this lack of labeled multimodal medical image data is a major obstacle to devising learning-based interactive segmentation tools. Transductive learning (TL), or semi-supervised learning (SSL), offers a workaround by leveraging both labeled and unlabeled data to infer labels for the test set from a small amount of label information. In this paper we propose a novel algorithm for interactive segmentation that uses transductive learning and inference in conditional mixture naïve Bayes models (T-CMNB) with spatial regularization constraints. T-CMNB extends the transductive naïve Bayes algorithm [1, 20] to the semi-nonparametric case. The multimodal mixture assumption on each covariate feature dimension, together with the spatial regularization constraints, allows us to model the more complex distributions required for spatial classification in multimodal imagery. To simplify estimation, we reduce the parameter space by assuming naïve conditional independence among the feature dimensions given the class label. This naïve conditional independence assumption permits efficient inference of marginal and conditional distributions for large-scale learning and inference [19]. We evaluate the proposed algorithm on multimodal MRI brain imagery using ROC statistics and report preliminary results. The algorithm shows promising segmentation performance, with a sensitivity of 90.37% and a specificity of 99.74%, and is competitive with alternative interactive segmentation schemes.
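For concreteness, the model form described above can be sketched as follows. This is a minimal illustration under the assumption of Gaussian mixture components; the symbols $\pi$, $\mu$, $\sigma^2$, and the component count $K$ are generic notation for this sketch and are not taken from the paper. Naïve conditional independence factorizes the class-conditional likelihood across the $D$ feature dimensions, and the multimodal assumption places a mixture on each dimension:

\[
p(\mathbf{x} \mid y = c) \;=\; \prod_{d=1}^{D} p(x_d \mid y = c),
\qquad
p(x_d \mid y = c) \;=\; \sum_{k=1}^{K} \pi_{cdk}\,\mathcal{N}\!\left(x_d \mid \mu_{cdk}, \sigma^2_{cdk}\right),
\]

so that, by Bayes' rule, the class posterior used to infer labels on the unlabeled (test) voxels is

\[
p(y = c \mid \mathbf{x}) \;\propto\; p(y = c)\,\prod_{d=1}^{D} \sum_{k=1}^{K} \pi_{cdk}\,\mathcal{N}\!\left(x_d \mid \mu_{cdk}, \sigma^2_{cdk}\right).
\]

The spatial regularization constraints mentioned above would enter as an additional term coupling neighboring voxels; their exact form is not specified in the abstract and is omitted from this sketch.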