Access our GUI for the inference of the models here!
Note: The availability of the GUI is not guaranteed 24/7.
This repository contains the source code for the Master's thesis project on mammographic image generation using diffusion models. The project was developed under the Joint Master's degree in Medical Imaging and Applications (MAIA) program, taught at the University of Girona (Spain), the University of Cassino and Southern Lazio (Italy), and the University of Bourgogne (France).
In this work, we propose exploring the use of diffusion models for the generation of high-quality full-field digital mammograms using state-of-the-art conditional diffusion pipelines. Additionally, we propose using stable diffusion models for the inpainting of synthetic lesions on healthy mammograms. We introduce MAM-E, a pipeline of generative models for high-quality mammography synthesis, controlled by a text prompt and capable of generating synthetic lesions on specific sections of the breast.
The paper of this project can be found here: MAM-E: Mammographic Synthetic Image Generation with Diffusion Models.
Additionally, the project report, the presentation slides, and the poster can be found in the documentation folder.
To create the `mame` environment, we suggest using mamba/conda with our yaml file:
- Create the conda environment:

```shell
mamba env create -f envs/requirements_mamba.yaml
mamba activate mame
```

This will create a conda environment named `mame`. Before creating it, make sure that the CUDA version of PyTorch is being installed.
- After activating the environment, install the bitsandbytes library individually:

```shell
mamba install bitsandbytes -c conda-forge
```

This is done because the bitsandbytes library may try to install the CPU version of PyTorch if it is included in the requirements file.
You may also install all the libraries using other environment managers, such as pip. Just open the yaml file, review the libraries, and install them with your preferred manager.
- Edit the configuration file of your choice with the desired parameters. For example, you can change the number of epochs, the batch size, the learning rate, etc. (example)
- It is important to define the training data location and the name of the experiment.
- For special configuration settings (e.g. xformer usage, wandb logging, etc.), refer to the corresponding documentation.
- Results will be saved in the `results` folder, in the repository root.
- Run the experiment:

```shell
cd experiments/sd2
python train_dreambooth.py --config_path=config_files/<your_config_file>.yaml
```

Note: the diffusion model with a batch size of 16 (with variable gradient accumulation steps), 512x512 images, xformers activated, 8-bit Adam, gradient checkpointing, and fp16 mixed-precision training requires around 20 GB of GPU memory.
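As an illustrative sketch of what such a configuration might hold, and of how gradient accumulation relates to the effective batch size quoted in the note above, consider the following. All field names and default values here are assumptions for illustration, not the actual keys used by the repository's config files:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Hypothetical keys -- check the example config file in the
    # repository for the real ones.
    experiment_name: str                  # results are grouped under this name
    train_data_dir: str                   # training data location
    num_epochs: int = 100
    batch_size: int = 4                   # per-step micro-batch
    gradient_accumulation_steps: int = 4
    learning_rate: float = 1e-5
    resolution: int = 512
    mixed_precision: str = "fp16"

    @property
    def effective_batch_size(self) -> int:
        # Gradients from several micro-batches are summed before each
        # optimizer step, so GPU memory use stays low while the
        # optimizer effectively sees a larger batch.
        return self.batch_size * self.gradient_accumulation_steps

cfg = TrainConfig(experiment_name="sd2_mammo", train_data_dir="data/pngs")
print(cfg.effective_batch_size)  # -> 16
```

This is why a quoted "batch size of 16" can be reached with variable accumulation steps: a micro-batch of 4 with 4 accumulation steps behaves, for the optimizer, like a batch of 16.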
- Edit the appropriate inpainting configuration file. Notice the additional parameters for inpainting, such as the validation image and mask paths (example).
- Run the inpainting experiment:

```shell
cd experiments/sd2
python train_inpainting.py --config_path=config_files/<your_config_file>.yaml
```

The repository is structured as follows:
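The inpainting configuration expects mask paths alongside the validation images. As a minimal sketch of what such a mask can look like, the function below builds a binary mask with a rectangular white region over the area where a lesion should be synthesized. It assumes the common stable-diffusion inpainting convention that white pixels (255) mark the region to inpaint; check the example config and data for the exact format the repository uses:

```python
def make_box_mask(height, width, top, left, box_h, box_w):
    """Build a binary mask (0 = keep, 255 = inpaint) with a white
    rectangle over the region where a lesion should be generated."""
    mask = [[0] * width for _ in range(height)]
    for y in range(top, top + box_h):
        for x in range(left, left + box_w):
            mask[y][x] = 255
    return mask

# 512x512 mask with a 64x64 lesion region near the image center
mask = make_box_mask(512, 512, 224, 224, 64, 64)
print(sum(v == 255 for row in mask for v in row))  # -> 4096
```

In practice the mask would be saved as a grayscale image (e.g. a png) at the path given in the inpainting config.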
- assessment: code for qualitative and quantitative assessment of the generated images.
- data (not included in the repository): contains the data used for training the models.
- The training data location as well as the results directory are defined in the configuration file.
- This means you can have virtually any directory structure that you want for your data.
- dataset_analysis: code for the analysis of the dataset. This includes constructing the dataset metadata, saving png files, creating masks and prompts, etc.
- datasets_local: contains useful functions for the dataset creation.
- documentation: contains the report, slides and poster of the project.
- envs: contains the conda (and pip) environment files.
- experiments: contains the code for the main experiments. It is divided into old experiments (original code) and updated code (such as sd2).
- figures: contains the figures used in the README.
- generation (future work): for the use of synthetic images in the training of CAD systems.
- results (not included in the repository): contains the weights, pipeline configuration files and some logging files for the experiments. (The same information can be found in the Hugging Face repository of the first author).
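Since generation is controlled by a text prompt built from the dataset metadata (see the dataset_analysis folder above), a conditioning prompt can be assembled from per-image attributes. The sketch below is only an illustrative assumption; the exact wording and metadata fields used by MAM-E's dataset scripts may differ:

```python
def build_prompt(laterality: str, view: str, lesion: bool) -> str:
    """Assemble a hypothetical conditioning prompt from image metadata
    such as laterality (left/right), view (CC/MLO), and lesion presence."""
    base = f"a mammogram in {view} view of the {laterality} breast"
    return base + (" with a lesion" if lesion else ", healthy")

print(build_prompt("left", "CC", False))
# -> "a mammogram in CC view of the left breast, healthy"
```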
If you find this project useful, please consider citing it:
```bibtex
@Article{s24072076,
  AUTHOR = {Montoya-del-Angel, Ricardo and Sam-Millan, Karla and Vilanova, Joan C. and Martí, Robert},
  TITLE = {MAM-E: Mammographic Synthetic Image Generation with Diffusion Models},
  JOURNAL = {Sensors},
  VOLUME = {24},
  YEAR = {2024},
  NUMBER = {7},
  ARTICLE-NUMBER = {2076},
  URL = {https://www.mdpi.com/1424-8220/24/7/2076},
  ISSN = {1424-8220},
  DOI = {10.3390/s24072076}
}
```
