Authors: Clara Lavita Angelina; Fu-Ren Xiao; Sunil Vyas; Pan-Chyr Yang; Hsuan-Ting Chang; Yuan Luo
Dates: 2026-02-05; 2026-02-05; 2026-01-12
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/735770

Abstract:
Background: Accurate classification and segmentation of brain tumors in MRI scans are essential for diagnosis and treatment planning. However, the heterogeneous morphology of brain tumors, including irregular shapes, sizes, and spatial variability, makes this task highly challenging. Traditional convolutional neural networks (CNNs) lack rotational and translational invariance, which limits their ability to generalize across different orientations.
Methods: This study introduces a geometric deep learning framework, Modified Special Euclidean (Mod-SE(2)), which integrates geometric priors to enhance spatial consistency and reduce reliance on data augmentation. By incorporating symmetry-preserving group convolutions and spatial priors, Mod-SE(2) improves robustness in tumor classification (Mod-Cls-SE(2)) and segmentation (Mod-Seg-SE(2)). Unlike conventional CNNs, geometric deep learning encodes roto-translation symmetry directly into the architecture, addressing the spatial variability and orientation sensitivity common in MRI-based diagnostics. Mod-SE(2) incorporates lifting layers, group convolutions, and feature recalibration. It was evaluated on three MRI datasets and two other medical image datasets for classification and segmentation tasks, and benchmarked against U-Net, nnU-Net, VGG16, VGG19, and ResNet architectures.
Results: Mod-Cls-SE(2) achieved an average classification accuracy of 0.914, outperforming ResNet101 (0.682), VGG16 (0.705), and their variants.
In binary classification of five tumor types (AVM, Meningioma, Pituitary, Metastases, and Schwannoma) from the private dataset, the model achieved an accuracy of 0.935 and a precision of 0.960 for pituitary tumors. For segmentation, Mod-Seg-SE(2) achieved a Dice coefficient of 0.9503 and an IoU of 0.9616 on the BraTS2020 dataset, exceeding U-Net and nnU-Net, which scored Dice coefficients of 0.797 and 0.815, respectively. The model also reduced inference time and demonstrated strong computational performance.
Conclusions: Mod-SE(2) uses geometric priors to improve spatial consistency, efficiency, and interpretability in brain tumor analysis. Its symmetry-aware design enables better generalization across tumor shapes and outperforms traditional methods across all key metrics. The Mod-SE(2) CNN ensures accurate boundary delineation, supporting neurosurgical planning, intraoperative navigation, and downstream applications such as Monte Carlo-based radiotherapy simulations and PET-MRI co-registration. Future work will extend the model to 3D volumes and validate its clinical readiness.

Language: en
Keywords: Brain tumor classification; Geometric deep learning; MRI; Medical imaging; Mod-SE(2); Roto-translation invariance
Title: Mod-SE(2): a geometric deep learning framework for brain tumor classification and segmentation in MRI images
Type: journal article
DOI: 10.1186/s12929-025-01213-y
PMID: 41527065
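The lifting layers and group convolutions mentioned in the Methods encode rotation symmetry by filtering the input with rotated copies of each kernel, producing one feature map per orientation. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the lifting idea restricted to the 4-fold rotation subgroup of SE(2), with hypothetical helper names `corr_valid` and `lift_c4`:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def corr_valid(x, w):
    """Plain 2D cross-correlation with 'valid' padding."""
    windows = sliding_window_view(x, w.shape)          # (H-kh+1, W-kw+1, kh, kw)
    return np.einsum("ijkl,kl->ij", windows, w)

def lift_c4(image, kernel):
    """Lifting-layer sketch: correlate the image with four rotated
    copies of one kernel (the C4 rotation subgroup of SE(2)).
    Output shape (4, H-kh+1, W-kw+1): one feature map per orientation."""
    return np.stack([corr_valid(image, np.rot90(kernel, k)) for k in range(4)])

# Roto-translation equivariance: rotating the input by 90 degrees rotates
# every feature map and cyclically shifts the orientation channels.
rng = np.random.default_rng(0)
x, w = rng.random((8, 8)), rng.random((3, 3))
a, b = lift_c4(np.rot90(x), w), lift_c4(x, w)
assert all(np.allclose(a[k], np.rot90(b[(k - 1) % 4])) for k in range(4))
```

The final assertion is the property the abstract attributes to the architecture: a rotated input yields a predictably transformed output rather than an arbitrary one, which is what reduces reliance on rotation-based data augmentation.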
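For reference, the two segmentation metrics reported above, the Dice coefficient and IoU, can be computed from binary masks as follows. This is a generic sketch of the standard definitions, not the paper's evaluation code:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

def iou(pred, target):
    """Intersection over union: |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [1, 0]], dtype=bool)
assert dice(pred, target) == 0.5             # 2*1 / (2 + 2)
assert abs(iou(pred, target) - 1 / 3) < 1e-12  # 1 / 3
```

Note that for any fixed pair of masks the two metrics are linked by Dice = 2·IoU / (1 + IoU), so the Dice coefficient is never smaller than the IoU on the same prediction.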