Title: Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation
Authors: Tseng K.-L.; Lin Y.-L.; Hsu W. (Winston Hsu); Huang C.-Y. (Chung-Yang Huang)
Date Issued: 2017
Date Accessioned/Available: 2019-07-10
ISBN: 9781538604571
Type: conference paper
DOI: 10.1109/CVPR.2017.398
Scopus ID: 2-s2.0-85044334445
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/413030
SDGs: SDG3

Abstract: Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most existing methods adopt a single modality or simply stack multiple modalities as different input channels. To better leverage the multiple modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate the different modalities of MRI data. In addition, we exploit a convolutional LSTM to model the sequence of 2D slices, and jointly learn the multi-modality fusion and the convolutional LSTM in an end-to-end manner. To avoid the network converging to certain dominant labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 [13] show that our method outperforms state-of-the-art biomedical segmentation approaches. © 2017 IEEE.