Inverting Data Transformations via Diffusion Sampling
Jinwoo Kim*, Sékou-Oumar Kaba*, Jiyun Park, Seunghoon Hong†, Siamak Ravanbakhsh†
(* equal contribution, † equal advising)
arXiv 2026
This repository contains training and evaluation scripts for Transformation-Inverting Energy Diffusion (TIED). The code has been tested on NVIDIA A6000 GPUs.
We recommend using the official PyTorch Docker image with CUDA support.
docker pull pytorch/pytorch:2.3.1-cuda12.1-cudnn8-devel
docker run -it --gpus all --ipc host --name tied -v /home:/home pytorch/pytorch:2.3.1-cuda12.1-cudnn8-devel bash
Assuming the codebase is located at ~/tied inside the Docker container, install the required packages and download the required data:
cd ~/tied
apt update && apt install -y git git-lfs
git lfs install
git lfs pull
pip3 install -r requirements.txt
Synthetic sampling task on SO(10)
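The synthetic task samples transformations on the rotation group SO(10). For intuition about this domain, a Haar-uniform SO(10) matrix can be drawn with a QR decomposition; the sketch below is a standalone NumPy illustration (function names are ours, not code from this repository):

```python
import numpy as np

def haar_so_n(n, rng):
    # QR of a Gaussian matrix yields an orthogonal Q; absorbing the signs
    # of R's diagonal into Q makes the distribution Haar-uniform on O(n).
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))
    # Flip one column if needed so det(Q) = +1, landing in SO(n).
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

rng = np.random.default_rng(0)
R = haar_so_n(10, rng)  # a random 10x10 rotation matrix
```

Orthogonality (R Rᵀ = I) and unit determinant together certify membership in SO(10).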
python3 test_so_sampling.py --config configs/synthetic_so10/langevin.yaml
python3 test_so_sampling.py --config configs/synthetic_so10/diffusion.yaml
Affine/homography invariant image classification
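The `logsumexp_*` configs below use a classifier-confidence energy, i.e. the negative log-sum-exp of the classifier's logits, so that transformations producing confidently classified inputs receive low energy. Here is a standalone NumPy sketch of such an energy (the exact form used by the codebase may differ):

```python
import numpy as np

def logit_confidence_energy(logits):
    # Negative log-sum-exp of the logits: the more confident the
    # classifier, the lower the energy of the (transformed) input.
    m = logits.max(axis=-1, keepdims=True)  # stabilize the exponentials
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

confident = np.array([10.0, 0.0, 0.0])  # one dominant class
uncertain = np.array([3.0, 3.0, 3.0])   # flat, ambiguous logits
e_conf = logit_confidence_energy(confident)
e_unc = logit_confidence_energy(uncertain)
```

A well-aligned (de-transformed) input should behave like `confident` and thus have lower energy than an ambiguous one.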
# no transformation
python3 test_mnist_classification.py --config configs/padded_mnist/none.yaml
# affine transformation
python3 test_mnist_classification.py --config configs/affnist/none.yaml
# energy: VAE evidence lower bound
python3 test_mnist_classification.py --config configs/affnist/vae_ar_langevin.yaml
python3 test_mnist_classification.py --config configs/affnist/vae_ar_focal.yaml
python3 test_mnist_classification.py --config configs/affnist/vae_ar_its.yaml
python3 test_mnist_classification.py --config configs/affnist/vae_ar_lielac.yaml
python3 test_mnist_classification.py --config configs/affnist/vae_ar_diffusion.yaml
# energy: classifier logit confidence
python3 test_mnist_classification.py --config configs/affnist/logsumexp_langevin.yaml
python3 test_mnist_classification.py --config configs/affnist/logsumexp_focal.yaml
python3 test_mnist_classification.py --config configs/affnist/logsumexp_its.yaml
python3 test_mnist_classification.py --config configs/affnist/logsumexp_lielac.yaml
python3 test_mnist_classification.py --config configs/affnist/logsumexp_diffusion.yaml
# homography transformation
python3 test_mnist_classification.py --config configs/homnist/none.yaml
# energy: VAE evidence lower bound
python3 test_mnist_classification.py --config configs/homnist/vae_ar_langevin.yaml
python3 test_mnist_classification.py --config configs/homnist/vae_ar_focal.yaml
python3 test_mnist_classification.py --config configs/homnist/vae_ar_lielac.yaml
python3 test_mnist_classification.py --config configs/homnist/vae_ar_diffusion.yaml
# energy: classifier logit confidence
python3 test_mnist_classification.py --config configs/homnist/logsumexp_langevin.yaml
python3 test_mnist_classification.py --config configs/homnist/logsumexp_focal.yaml
python3 test_mnist_classification.py --config configs/homnist/logsumexp_lielac.yaml
python3 test_mnist_classification.py --config configs/homnist/logsumexp_diffusion.yaml
Point symmetry equivariant PDE solving
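The PDE experiments operate on solutions of the heat and Burgers equations. For context, one explicit finite-difference step of the 1-D heat equation u_t = ν u_xx with periodic boundaries can be sketched as follows (a standalone illustration, not the repository's solver):

```python
import numpy as np

def heat_step(u, nu, dx, dt):
    # One explicit Euler step of u_t = nu * u_xx; np.roll gives
    # periodic boundary conditions for the discrete Laplacian.
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * nu * lap

x = np.linspace(0, 1, 64, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)  # initial Gaussian bump
for _ in range(100):
    u = heat_step(u, nu=0.01, dx=x[1] - x[0], dt=1e-4)
```

Diffusion conserves total mass under periodic boundaries while smoothing the peak, which is a quick sanity check on any such stepper.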
# heat PDE
python3 test_heat_pde.py --config configs/heat_pde/none.yaml
python3 test_heat_pde.py --config configs/heat_pde/langevin.yaml
python3 test_heat_pde.py --config configs/heat_pde/focal.yaml
python3 test_heat_pde.py --config configs/heat_pde/lielac.yaml
python3 test_heat_pde.py --config configs/heat_pde/diffusion.yaml
# heat PDE + data augmentation
python3 test_heat_pde.py --config configs/heat_pde/aug_none.yaml
python3 test_heat_pde.py --config configs/heat_pde/aug_langevin.yaml
python3 test_heat_pde.py --config configs/heat_pde/aug_focal.yaml
python3 test_heat_pde.py --config configs/heat_pde/aug_lielac.yaml
python3 test_heat_pde.py --config configs/heat_pde/aug_diffusion.yaml
# Burgers PDE
python3 test_burgers_pde.py --config configs/burgers_pde/none.yaml
python3 test_burgers_pde.py --config configs/burgers_pde/langevin.yaml
python3 test_burgers_pde.py --config configs/burgers_pde/focal.yaml
python3 test_burgers_pde.py --config configs/burgers_pde/lielac.yaml
python3 test_burgers_pde.py --config configs/burgers_pde/diffusion.yaml
Our implementation uses code from the following repositories:
- ITS, FoCal, LieLAC, and equivariant-convolutions for baselines
- Metrics for Evaluating GANs for Fréchet Inception Distance (FID)
If you find our work useful, please consider citing it:
@article{kim2026inverting,
  author  = {Jinwoo Kim and Sékou-Oumar Kaba and Jiyun Park and Seunghoon Hong and Siamak Ravanbakhsh},
  title   = {Inverting Data Transformations via Diffusion Sampling},
  journal = {arXiv},
  volume  = {abs/2602.08267},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.08267}
}