Adobe’s Firefly 2 Combines Image Generation and Style Transfer Models

11 October 2023

The Adobe Firefly 2 image generation model, currently in beta, is now available via the Firefly web application. The standout feature of Firefly 2 is Generative Match, designed for…

AudioPaLM: Google’s Multimodal Model for Voice Translation

29 June 2023

Google has introduced AudioPaLM, a large language model for speech processing and generation that combines two Google language models, PaLM-2 and AudioLM, into a multimodal architecture. The model can recognize…

“AI-Generated John Lennon” to Appear in The Beatles’ “Final Song”

22 June 2023

Paul McCartney, at the age of 80, has announced the release of The Beatles’ “final song” featuring John Lennon’s voice generated by an AI neural network. The voice of John…

The StyleCLIP Neural Network Sets Image Style Based on a Text Description

9 April 2021

StyleCLIP is a combination of the CLIP and StyleGAN models, designed to manipulate image style with text prompts. The open-source code is available, including Google Colab notebooks. Why is it needed? StyleGAN…

NVIDIA Research Proposed New Style-Based Generator Architecture for GANs

18 December 2018

NVIDIA Research has just released a new paper introducing a style-based generator architecture called StyleGAN. This method is making waves in the image synthesis community due to its superior…

A Style-Aware Content Loss for Real-time HD Style Transfer

14 August 2018

A picture may be worth a thousand words; at the very least, it contains a great deal of diverse information. This comprises not only what is portrayed, e.g., a composition of…

ReCoNet: Fast and Accurate Real-time Video Style Transfer

25 July 2018

ReCoNet, a real-time coherent video style transfer network, is proposed by a group of researchers from the University of Hong Kong as a state-of-the-art approach to video style…

Twin-GAN: Cross-Domain Translation of Human Portraits

25 June 2018

It comes as no surprise that many discoveries and inventions in any domain grow out of researchers’ personal interests. This new approach to the translation of human portraits is also…