“Deepdub Go” Empowers Content Creators with AI for Video Dubbing

9 July 2023

Israeli startup Deepdub has unveiled Deepdub Go, a service that uses AI to automatically dub videos into 65 languages. This innovative platform targets game development studios, advertising…

MAGVIT: Open Source Generative Video Transformer 10-in-1

29 June 2023

Researchers from Carnegie Mellon University, Google Research, and the Georgia Institute of Technology have introduced MAGVIT (Masked Generative Video Transformer), an open-source video generation model. MAGVIT is a unified model that…

New Datasets for Object Tracking

8 November 2018

Object tracking in the wild is far from solved. Existing object trackers perform quite well on established datasets (e.g., VOT, OTB), but these datasets are relatively…

3D Hair Reconstruction Out of In-the-Wild Videos

22 October 2018

3D hair reconstruction is a problem with numerous applications in areas such as Virtual Reality, Augmented Reality, video games, and medical software. Because the problem is non-trivial, researchers have proposed…

Vid2Vid – Conditional GANs for Video-to-Video Synthesis

3 September 2018

Researchers from NVIDIA and MIT’s Computer Science and Artificial Intelligence Laboratory have proposed a novel method for video-to-video synthesis, showing impressive results. The proposed method – Vid2Vid – can synthesize…

Everybody Dance Now: a New Approach to “Do As I Do” Motion Transfer

30 August 2018

Not very good at dancing? Not a problem anymore! Now you can easily impress your friends with a stunning video, where you dance like a superstar. Researchers from UC Berkeley…

ReCoNet: Fast and Accurate Real-time Video Style Transfer

25 July 2018

Researchers from the University of Hong Kong have proposed ReCoNet, a real-time coherent video style transfer network, as a state-of-the-art approach to video style…