MBW Views is a series of op/eds from eminent music business people… with something to say. The following article is a little different from the usual first-person pieces we run: it's something of a public resignation letter.
Ed Newton-Rex is one of the most prominent figures in the evolution of generative AI in music.

The California-based entrepreneur founded the pioneering AI music-making platform Jukedeck over a decade ago, before selling it to TikTok/ByteDance in 2019. He subsequently became Product Director of TikTok's in-house AI Lab, before becoming Chief Product Officer at music app Voisey (acquired by Snap in late 2020).

Since last year, Newton-Rex has worked at Stability AI, home of generative AI image-maker Stable Diffusion. Last year, Stability AI raised USD $101 million at a $1 billion valuation.
Newton-Rex has made a significant impact at Stability AI in a relatively short time.

As VP of Audio at the company, he has led the development of Stable Audio, a generative AI music-making platform trained on licensed music in partnership with rightsholders. Last month, Stable Audio was named one of Time's 'Best Inventions of 2023'.

Despite this success, Newton-Rex has just quit his role at Stability on a point of principle.

A published classical composer himself, Newton-Rex has, throughout his career, been consistent in his belief in the importance of copyright for artists, songwriters, and rightsholders.

As he explains below, Newton-Rex's personal respect for copyright has clashed significantly with that of his employer in recent weeks, after Stability AI argued in favor of the 'fair use' of copyrighted material to fuel generative AI within a submission to the US Copyright Office. (As Newton-Rex points out, several other large generative AI companies share Stability's position on this.)
Some additional recent context: Newton-Rex's decision to resign from Stability AI arrives as the debate over the 'harvesting' of copyrighted music by generative AI platforms gets even louder.

Just last week, superstar Bad Bunny expressed his fury over an AI-generated track that artificially replicates the sound of his vocals, as well as those of Justin Bieber and Daddy Yankee.

The purported maker of that track, which has over 22 million plays on TikTok, calls themselves FlowGPT.

In a message responding to Bad Bunny published on TikTok, FlowGPT offered to let the artist re-record the AI-generated track "for free with all rights… but don't forget to credit FlowGPT".

It gets worse: If Bad Bunny's team managed to get the track removed from digital platforms, FlowGPT threatened, "I'll have to upload a new version."
Over to Ed…
I've resigned from my role leading the Audio team at Stability AI, because I don't agree with the company's opinion that training generative AI models on copyrighted works is 'fair use'.

First off, I want to say that there are lots of people at Stability who are deeply thoughtful about these issues. I'm proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rightsholders. I'm grateful to my many colleagues who worked on this with me and who supported our team, and particularly to Emad for giving us the opportunity to build and ship it. I'm grateful for my time at Stability, and in many ways I think they take a more nuanced view of this topic than some of their competitors.

But, despite this, I wasn't able to change the prevailing opinion on fair use at the company.
"I don't see how using copyrighted works to train generative AI models of this nature can be considered fair use."
This was made clear when the US Copyright Office recently invited public comments on generative AI and copyright, and Stability was one of many AI companies to respond. Stability's 23-page submission included this on its opening page:

"We believe that AI development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use".

For those unfamiliar with 'fair use', this claims that training an AI model on copyrighted works doesn't infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that's fairly standard across many of the large generative AI companies, and other big tech companies building these models; it's far from a view that's unique to Stability. But it's a position I disagree with.

I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is "the effect of the use upon the potential market for or value of the copyrighted work". Today's generative AI models can clearly be used to create works that compete with the copyrighted works they're trained on. So I don't see how using copyrighted works to train generative AI models of this nature can be considered fair use.
"Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works. I don't see how this can be acceptable."
But setting aside the fair use argument for a moment, since 'fair use' wasn't designed with generative AI in mind, training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works. I don't see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

To be clear, I'm a supporter of generative AI. It will have many benefits; that's why I've worked on it for 13 years. But I can only support generative AI that doesn't exploit creators by training models, which may replace them, on their work without permission.

I'm sure I'm not the only person inside these generative AI companies who doesn't think the claim of 'fair use' is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exploiting creators can't be the long-term solution in generative AI.
Ed Newton-Rex

Music Business Worldwide