Generative
- HyperDreamBooth: 25x faster text-to-image personalization with HyperNetworks
  HyperDreamBooth is a powerful new method that can generate a person’s face in…
- SDXL: the next generation of Stable Diffusion models for text-to-image synthesis
  Stable Diffusion XL (SDXL) is the latest text-to-image generation model developed by Stability AI, based…
- TryOnDiffusion: try on virtual clothes with the power of two UNets
  TryOnDiffusion is a new method that leverages diffusion models and cross-attention mechanisms to…
- Meta’s open-source MUSICGEN: a single language model to create high-quality music from text or melody
  Meta proposes MUSICGEN, a simple and controllable tool that generates high-quality music at…
- MinD-Video model creates high-quality videos from your brain activity
  MinD-Video is a new technology that can generate high-quality videos from brain signals…
- Make-An-Animation: a U-Net-based diffusion model for 3D human motion generation
  Make-An-Animation is a new text-to-motion generation model that creates realistic and diverse 3D…
- DragGAN: edit images by simply dragging some points on them
  DragGAN (Drag Your GAN) is an interactive method for editing GAN-generated images by simply dragging some…
- How Stability AI is advancing open-source AI with StableStudio
  Stability AI, a leading company in the field of generative AI, has announced…
- Nvidia’s new high-resolution text-to-video synthesis with Latent Diffusion Models
  Nvidia’s AI Lab in Toronto has launched a text-to-video generation model that uses…
- DreamPose: fashion image-to-video generation via Stable Diffusion
  DreamPose is a new method that generates fashion videos from still images, through…