
Adobe’s new generative AI for music creation is the “Photoshop” for musical creativity

Adobe’s latest experiment in generative AI aims to help people create and edit music without the need for professional skills.



Announced during the Hot Pod Summit in Brooklyn, the Project Music GenAI Control prototype lets users generate music from text prompts and then edit the audio without switching to specialized software.


Users start by entering a text description, such as “happy dance” or “sad jazz,” which generates music in the specified style. Adobe says built-in editing tools then let you customize the results by adjusting repeating patterns, tempo, volume, and structure. Fragments of compositions can be remixed, and the audio can be generated as a seamless loop to create, for example, background music.


The tool can also adjust the generated audio “based on the underlying melody” and extend audio segments if you want to make a track long enough for, say, a podcast or an animation, Adobe says. The editing interface itself has not yet been shown, however, so for now you’ll have to use your imagination.


Adobe says the public demonstration of Project Music GenAI Control used public domain content, but it’s unclear whether the tool itself will be able to accept any audio directly for processing or how long snippets can be stretched out.


Similar tools are already available from, or in development at, other companies, including Google’s MusicLM and Meta’s AudioCraft. But those only generate music from text prompts, with no ability to edit the result. That means you either have to regenerate the audio until you get what you want, or edit it yourself in a separate audio editor.


One of the most exciting things about these new tools is that they are not limited to audio generation. They take things to the Photoshop level, giving creators the same kind of deep control to modify, tweak, and edit their audio — pixel-level control, but for music.


Project Music GenAI Control is being developed in collaboration with the University of California and the School of Computer Science at Carnegie Mellon University. Adobe describes the experiment as being in its “early stages.” That means that while these capabilities could eventually be built into current Adobe tools like Audition and Premiere Pro, full implementation will take time.


While the tool is not available to the general public and no release date has been announced, the development of Project Music GenAI Control, along with other Adobe experiments, can be followed on the Adobe Labs website.

