Diffusion HPC: Generating Synthetic Images with Realistic Humans

Stable Diffusion (left) often struggles to preserve human structure when the generated pose is difficult. Our proposed Diffusion-HPC (right) improves the realism of human generations by injecting a human body prior into the generation process.
Abstract

Recent text-to-image generative models have exhibited remarkable abilities in generating high-fidelity and photo-realistic images. However, despite the visually impressive results, these models often struggle to preserve plausible human structure in their generations. For this reason, while generative models have shown promise in aiding downstream image recognition tasks by producing large volumes of synthetic data, they remain ill-suited for improving downstream human pose perception and understanding. In this work, we propose Diffusion model with Human Pose Correction (Diffusion-HPC), a text-conditioned method that generates photo-realistic images with plausibly posed humans by injecting prior knowledge about human body structure. We show that Diffusion-HPC effectively improves the realism of human generations. Furthermore, as the generations are accompanied by 3D meshes that serve as ground truths, Diffusion-HPC's image-mesh pairs are well-suited for the downstream human mesh recovery task, where a shortage of 3D training data has long been an issue.
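
The sketch below illustrates one plausible reading of the two-stage "generate, then correct" idea described above, using the Hugging Face diffusers library. The specific checkpoints (runwayml/stable-diffusion-v1-5, stabilityai/stable-diffusion-2-depth), the choice of a depth-conditioned second pass, and the stand-in depth estimator are all assumptions for illustration, not the authors' exact pipeline; in the actual method the correction is driven by a recovered 3D human mesh, which also serves as the ground-truth label for the generated image.

import torch
from diffusers import StableDiffusionPipeline, StableDiffusionDepth2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Stage 1: plain text-to-image generation, which may produce implausible bodies.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)
prompt = "a photo of a person doing a handstand on the beach"
initial = txt2img(prompt).images[0]

# Stage 2: re-generate the image under a structural conditioning signal.
# In Diffusion-HPC this signal would come from a recovered 3D body mesh
# (e.g., a depth rendering of a parametric body model fit to the initial
# image); here the pipeline's built-in monocular depth estimator stands in
# so the sketch runs end to end. A mesh-derived depth map could be supplied
# via the pipeline's `depth_map=` argument instead.
depth2img = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=dtype
).to(device)
corrected = depth2img(
    prompt=prompt,
    image=initial,
    strength=0.8,  # how strongly the second pass repaints the initial image
).images[0]
corrected.save("diffusion_hpc_sketch.png")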

Example Text-conditioned Generations

BibTeX

@article{weng2023diffusion,
  title={Diffusion-HPC: Generating Synthetic Images with Realistic Humans},
  author={Weng, Zhenzhen and Bravo-S{\'a}nchez, Laura and Yeung, Serena},
  journal={arXiv preprint arXiv:2303.09541},
  year={2023}
}

Webpage template from Deep Image Prior.