Meta, the parent company of Facebook, Instagram and WhatsApp, has introduced an open source artificial intelligence (AI) tool called AudioCraft.
The tool generates music and audio from text prompts: can Meta's AudioCraft AI challenge ChatGPT?
The main purpose of this tool is to generate music and audio content based on text prompts and audio cues. AudioCraft encompasses three distinct models: MusicGen, AudioGen, and EnCodec.
Meta Launches “AudioCraft” Artificial Intelligence Tool
MusicGen creates music from text prompts, drawing on music data that Meta owns or has licensed. AudioGen, meanwhile, produces audio content from text prompts using publicly available sound effect data.
As CEO Mark Zuckerberg posted on his Facebook page: “We’re open-sourcing AudioCraft, which generates high-quality, realistic audio and music by listening to raw audio signals and text-based prompts.”
Meta has also introduced an improved version of its EnCodec decoder, which enables the generation of higher quality music with fewer artifacts. In addition, the company has released its pre-trained AudioGen models, which let users generate ambient sounds such as dogs barking, car horns, or footsteps on wooden floors.
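EnCodec compresses audio with residual quantization: each stage encodes only the error left over by the stage before it, which is why adding stages reduces artifacts. The sketch below illustrates that idea in plain NumPy, with rounding to ever-finer grids standing in for the learned codebooks a real neural codec uses; the dimensions and step sizes are illustrative assumptions, not Meta's actual parameters.

```python
import numpy as np

# Toy sketch of residual (multi-stage) quantization, the core idea behind
# neural audio codecs such as EnCodec. Real codecs use learned vector
# codebooks; here each stage simply rounds the residual to a finer grid.
# All sizes and step values are illustrative assumptions.

def residual_encode(signal, steps):
    """Quantize `signal` in stages: each stage encodes whatever error
    (residual) the previous stages left behind."""
    residual = signal.copy()
    codes = []
    for step in steps:
        q = np.round(residual / step) * step  # snap residual to this grid
        codes.append(q)
        residual = residual - q
    return codes, residual

rng = np.random.default_rng(0)
signal = rng.normal(size=8)          # stand-in for a frame of audio features
steps = [1.0, 0.5, 0.25, 0.125]      # each stage uses a finer grid

codes, residual = residual_encode(signal, steps)
reconstruction = sum(codes)

# Reconstruction error shrinks as stages are added, mirroring how more
# quantization stages mean fewer audible artifacts.
partial = np.zeros_like(signal)
for i, q in enumerate(codes):
    partial = partial + q
    print(f"error after stage {i + 1}: {np.linalg.norm(signal - partial):.4f}")
```

The decoder side of such a codec simply sums the per-stage codes back into a signal, which is why `reconstruction` above is just `sum(codes)`.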
In addition, Meta has shared the weights and code for all of the AudioCraft models, making them easy for developers to access and build on.
AudioCraft supports work on music, sound, compression, and generation
With AudioCraft, users can work on music, sound, compression, and generation tasks all within the same platform. Because the code is easy to build on and reuse, people can create better sound generators, compression algorithms, and music generators while benefiting from the progress made by others in the field.
The AudioCraft family of AI models produces high-quality audio that remains consistent over long durations. The models are also easy to use and simplify the process of building generative audio systems compared to previous approaches in the field. Meta aims to empower users to experiment with the existing models while encouraging them to push their limits and develop custom models of their own.
According to the company, “With AudioCraft, we simplified the overall design of generative models for audio compared to previous work in the field, giving people the complete recipe for playing with the existing models that Meta has been developing for the past several years, while at the same time enabling them to overcome limits and develop their own models”.
By opening up these models, Meta gives researchers and practitioners the opportunity to train their own models using their unique data sets. This initiative is expected to advance the field of AI-generated audio and music.