Apple MGIE: Multimodal Models for Image Editing

12 February 2024

Apple, in collaboration with the University of California, Santa Barbara, has developed the open-source MGIE model, which edits images from natural-language instructions. The model tackles various editing tasks, including Photoshop-style image…

DeepMind Trains AlphaGeometry Model to Solve Olympiad Geometry Problems

21 January 2024

DeepMind has unveiled AlphaGeometry – a model capable of solving geometric problems at the level of International Mathematical Olympiad winners. AlphaGeometry solved 25 out of 30 Olympiad problems, while on…

Microsoft DragNUWA: Video Generation via Object Trajectories

15 January 2024

Microsoft has released the DragNUWA weights – a cross-domain video generation model that offers more precise control over the resulting output compared to similar models. Control is achieved by simultaneously…

VideoPoet: Google’s Language Model for Video Generation and Editing

23 December 2023

Google has unveiled VideoPoet, a language model for multimodal video content processing capable of turning text and images into clips, styling pre-existing videos, and generating soundtracks for them without any…

Google Introduces Gemini, a Cutting-Edge Language Model Set

7 December 2023

Google has announced the creation of Gemini, a set of three language models surpassing competitors in 30 out of 32 benchmarks. The top-tier model, Gemini Ultra, is available through an…

DeepMind GNoME Discovered 2 Million New Materials

3 December 2023

DeepMind has developed GNoME, a graph neural network that predicts material stability. GNoME has identified 2.2 million new materials, of which 380,000 are deemed stable enough for use in computer chips, batteries,…

Stable Video Diffusion: Stability AI’s Image-Based Video Generator

26 November 2023

Stability AI has announced the release of Stable Video Diffusion, a duo of models that generate up to 4-second videos from an input image. Both models are available publicly. Importantly,…

LCM-LoRA: Real-Time Image Generation Neural Network

19 November 2023

Researchers at Tsinghua University have developed the LCM-LoRA algorithm, enabling real-time image generation from text descriptions or sketches. Popular text-to-image…

OpenAI DevDay 2023: GPTs, GPT-4 Turbo, and Other Updates from OpenAI

12 November 2023

OpenAI introduced over ten products and features for developers at DevDay 2023. Here’s a rundown of the new models and API updates: The GPT-4 Turbo model, trained on data up…

“Compact Giant” Mistral 7B Outperforms Llama 2 13B and Llama 34B

1 October 2023

The Mistral AI team has unveiled Mistral 7B, an open-source language model with 7.3 billion parameters that surpasses the significantly larger Llama 2 13B model in…

FLM-101B: Training 101 Billion Parameter Language Model with a $100K Budget

24 September 2023

Researchers from Beijing University present FLM-101B, an open-source large language model (LLM) with 101 billion parameters trained from scratch with a budget of only $100K. Training LLMs at large scales…

Würstchen: An Open-Source Text-to-Image Model Consuming 16 Times Less GPU Compute than Stable Diffusion 1.4

14 September 2023

Würstchen is an open text-to-image model that generates images faster than diffusion models like Stable Diffusion while consuming significantly less memory, achieving comparable results. The approach is based on a…

Persimmon-8B: An Open Model with a 16k Token Context, Running on a Single GPU

11 September 2023

Researchers from Adept have introduced Persimmon-8B, an open-source language model with a 16k token context, four times larger than that of the most compact Llama 2 and text-davinci-002 used in…

Falcon 180B: The Largest Open Language Model Surpasses Llama 2 and GPT 3.5

6 September 2023

The Technology Innovation Institute (TII) in the UAE has unveiled Falcon 180B, the largest open language model, displacing Llama 2 from the top spot in the rankings of pre-trained open-access…

GigaGAN: Open Source Model Generates 512px Images in Just 0.13 Seconds

1 September 2023

GigaGAN, an open-source model with 1 billion parameters, can generate 512×512 pixel images in just 0.13 seconds, significantly faster than diffusion and autoregressive models. Additionally, researchers have developed…

Code Llama: State-of-the-Art Code Creation Model

28 August 2023

The Code Llama model is an enhanced version of Llama 2, designed for code generation, completion, and correction. It’s available for free for both commercial and research purposes. Code Llama…

ReLoRA: Method for Enhancing Performance in Training Large Language Models

16 August 2023

ReLoRA is a technique for training large transformer-based language models using low-rank matrices, aimed at boosting training efficiency. The effectiveness of this method increases with the scale of the models.…
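The merge-and-restart idea behind low-rank training can be sketched in a few lines of numpy. This is a toy illustration under assumptions, not the authors' implementation: a single weight matrix stands in for a transformer layer, and a random perturbation stands in for the optimizer step on the low-rank factors.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 512, 8                        # full dimension and low rank (r << d)
W = rng.normal(size=(d, d)) * 0.02   # frozen full-rank weight

def new_lora_pair():
    """Fresh low-rank factors: B starts at zero, so B @ A adds nothing at first."""
    A = rng.normal(size=(r, d)) * 0.02
    B = np.zeros((d, r))
    return A, B

A, B = new_lora_pair()

for step in range(1, 301):
    # A real trainer would update only A and B by gradient descent here;
    # this random perturbation is just a stand-in for that optimizer step.
    B += 1e-3 * rng.normal(size=B.shape)

    if step % 100 == 0:
        # Merge the accumulated rank-r update into the frozen weight,
        # then restart with fresh factors. Repeating this lets a sequence
        # of rank-r updates reach a higher total rank than a single
        # LoRA-style adapter could, while only ever training 2*r*d values.
        W = W + B @ A
        A, B = new_lora_pair()
```

Only the small factors `A` and `B` receive gradients between merges, which is where the claimed training-efficiency gain comes from.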

NVIDIA FlexiCubes: Crafting 3D Meshes Using Adaptive Parameters

13 August 2023

NVIDIA has introduced FlexiCubes, a method for generating 3D meshes of objects through adaptive parameters. This innovation is designed to deliver the highest-quality meshes, catering to a wide…

Audiocraft: Open Source Library for Music and Sound Generation

4 August 2023

Audiocraft is an open-source PyTorch library for generating music and sound from text, serving as a powerful tool for deep-learning-based audio generation research. Within…

PIGINet: Generating Optimal Sequence of Robot Actions

30 July 2023

MIT researchers have introduced PIGINet, a neural network designed to help robots plan action sequences across various tasks. PIGINet evaluates potential action sequences based on task descriptions, scene images, and…

Llama 2 and Llama-2-Chat: A New Generation of Open Source Language Models

19 July 2023

The new generation of Llama models comprises three large language models, namely Llama 2 with 7, 13, and 70 billion parameters, along with the fine-tuned conversational models Llama-2-Chat 7B, 13B,…