Adobe Will Soon Have A “Photoshop For Audio” Editing Tool

Adobe has introduced Project Music GenAI Control, a new tool aimed at transforming music creation and editing. Still in early development, it enables users to generate music from simple text prompts.

Nicholas Bryan, a Senior Research Scientist at Adobe Research, shares, “With Project Music GenAI Control, generative AI becomes your co-creator.”

The tool is designed for creators of all kinds, from broadcasters to podcasters, helping them craft audio that fits the mood, tone, and length their projects require.

The tool builds on Adobe’s history of AI innovation, including its popular Firefly AI model. With Project Music GenAI Control, users input text prompts like “powerful rock” or “sad jazz” to generate music. They can then fine-tune the result, adjusting elements like tempo, structure, and intensity to meet their specific needs.

What Did Adobe Say About This Tool?

Adobe posted a video on YouTube featuring Dominique Graves and Nicholas Bryan showcasing Project Music GenAI Control. In the video, they demonstrate the tool’s ability to turn simple melodies into more complex compositions, such as a full piece of film music or a hip-hop beat.

The video also highlights the project’s accessibility, showing how users can make significant adjustments to music generated from text prompts.

How Will This Change The Content Creation Space?

Project Music GenAI Control is designed not just to produce audio but to change the way creators interact with music. As Nicholas Bryan puts it, “They’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio.”

This level of control is similar to editing pixels in an image, offering a wide range of freedom to manipulate sound.

What Makes This Project A Special One For Creators?

Adobe’s new venture into generative AI for music sets it apart from existing tools by allowing detailed customisation after the initial music generation. This addresses a frequent problem in content creation: fitting music to specific project requirements.

Many existing audio platforms require you to edit your sound manually after generation; Adobe’s tool integrates creation and editing in one seamless workflow.

Working with experts from the University of California, San Diego, and Carnegie Mellon University, Adobe is ensuring the project is grounded in cutting-edge research.

This marks a new era for digital content creation, one where AI helps craft professional-level audio from just your PC, and where sophisticated music production is accessible to all users, regardless of their audio experience.