Researchers from NVIDIA have announced the release of Imaginaire – a new PyTorch library for generative modeling that hosts optimized implementations of several GAN-based image and video synthesis methods.
Imaginaire features a number of supervised and unsupervised image-to-image translation methods, as well as video-to-video translation methods. NVIDIA researchers included several state-of-the-art methods from each domain: pix2pix and SPADE for supervised image-to-image translation; UNIT, MUNIT, and FUNIT for unsupervised translation; and vid2vid, fs-vid2vid, and wc-vid2vid for video-to-video translation.
According to the researchers, the library is easy to install, set up, and use, and tutorials are available for each of the models it includes. Each model in the library also comes with pre-trained weights for a wide range of tasks. For example, pix2pix is available for segmentation-to-image translation, UNIT for winter-summer domain translation, and fs-vid2vid for landmarks-to-video translation.
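As a rough sketch of getting started, the library is distributed through NVIDIA's GitHub repository; the commands below assume the repo ships an install script under `scripts/` (the exact script name and dependency setup may differ between releases, so the repository README is the authoritative source):

```shell
# Clone the Imaginaire repository (non-commercial use per the NVIDIA Software License)
git clone https://github.com/NVlabs/imaginaire
cd imaginaire

# Install dependencies -- assumed install script; check the README
# for the exact setup steps and CUDA/PyTorch version requirements
bash scripts/install.sh
```

After installation, per-model tutorials in the repository describe how to download the corresponding pre-trained weights and run training or inference for each method.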
The available models can be explored in the Model Zoo. Imaginaire was released under the NVIDIA Software License and is available free of charge for non-commercial use. The documentation can be found here.