Title: Conditional Diffusion Models for Semantic 3D Brain MRI Synthesis
Authors: Dorjsembe, Zolnamar; Pao, Hsing-Kuo; Odonchimed, Sodtavilan; Xiao, Fu-Ren
Date issued: 2023-05-29
Date available: 2025-03-25
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/726074
DOI: 10.48550/arXiv.2305.18453
Scopus ID: 2-s2.0-85187879204
Type: other
Language: en
Keywords: anonymization; conditional diffusion models; data augmentation; generative models; semantic image synthesis

Abstract: Artificial intelligence (AI) in healthcare, especially in medical imaging, faces challenges due to data scarcity and privacy concerns. Addressing these, we introduce Med-DDPM, a diffusion model designed for 3D semantic brain MRI synthesis. The model tackles data scarcity and privacy issues through semantic conditioning: a conditioning image is concatenated channel-wise to the model input, enabling control over image generation. Med-DDPM demonstrates superior stability and performance compared to existing 3D brain imaging synthesis methods, generating diverse, anatomically coherent images with high visual fidelity. On the tumor segmentation task, Med-DDPM achieves a Dice score of 0.6207, close to the 0.6531 Dice score of real images, and outperforms baseline models. Combined with real images, it further increases segmentation accuracy to 0.6675, showing the potential of the proposed method for data augmentation. This model represents the first use of a diffusion model in 3D semantic brain MRI synthesis, producing high-quality images. Its semantic conditioning feature also shows potential for image anonymization in biomedical imaging, addressing data and privacy issues. We provide the code and model weights for Med-DDPM on our GitHub repository (https://github.com/mobaidoctor/med-ddpm/) to support reproducibility. Copyright © 2023, The Authors. All rights reserved.
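The semantic conditioning described in the abstract (channel-wise concatenation of a conditioning image to the model input) can be sketched as below. This is a minimal NumPy illustration of the concatenation step only, not the authors' implementation (the released Med-DDPM code uses PyTorch); the array shapes, the `denoiser_input` name, and the toy channel counts are assumptions made for the example.

```python
import numpy as np

def make_conditioned_input(noisy_volume, semantic_mask):
    """Concatenate a semantic conditioning mask to the noisy input
    along the channel axis, as in semantic conditioning for diffusion
    models. Shapes are (channels, depth, height, width)."""
    if noisy_volume.shape[1:] != semantic_mask.shape[1:]:
        raise ValueError("volume and mask must share spatial dimensions")
    return np.concatenate([noisy_volume, semantic_mask], axis=0)

# Toy example: a 1-channel noisy 3D MRI volume at some diffusion step,
# plus a 2-channel one-hot semantic mask (e.g. background / tumor).
x_t = np.random.randn(1, 8, 8, 8)          # noisy volume (hypothetical sizes)
mask = np.zeros((2, 8, 8, 8))              # one-hot semantic condition
mask[0] = 1.0                               # all-background toy mask

denoiser_input = make_conditioned_input(x_t, mask)
print(denoiser_input.shape)                 # (3, 8, 8, 8)
```

The denoising network then simply accepts the extra mask channels as part of its input, so the generated volume can be steered by the supplied anatomy without changing the diffusion process itself.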