General Discussion
How AI-Generated Music Became A $4 Billion Fraud Machine (Forbes, 5/5)
https://www.forbes.com/sites/virginieberger/2026/05/05/how-ai-generated-music-became-a-4-billion-fraud-machine/

-snip-
In April 2026, Deezer reported receiving 75,000 fully AI-generated tracks per day, 44% of all daily uploads and more than two million tracks per month. Of the streams those tracks generate, 85% are fraudulent. Thibault Roucou, Deezer's head of streaming, stated it directly in Music Week: "Generating fake streams continues to be the main purpose for uploading AI-generated music."
-snipping to get to this paragraph about fraudsters-
They now use AI generators to flood platforms with millions of tracks and stream each one just a few thousand times, enough to generate royalties from each but not enough to trigger detection systems tuned for high-volume replay.
As Melissa Morgia, Chief Global Content Protection Officer at IFPI, told a panel on the sidelines of the seventeenth session of WIPO's Advisory Committee on Enforcement in February 2025, AI is the ultimate enabler of streaming fraud because it allows bad actors to stay under the radar while still operating at a scale that makes their activities lucrative.
-snip-
Generative AI seems to be better at fraud than anything else.
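The under-the-radar tactic the article describes comes down to simple arithmetic. Here's a minimal sketch (all thresholds and counts are hypothetical, not any platform's real numbers) of why a detector that only looks at per-track replay volume misses fraud spread thinly across a huge catalog:

```python
# Illustrative arithmetic (made-up numbers, not any platform's real
# thresholds): fraud spread thinly across a huge catalog stays invisible
# to a detector that only watches per-track replay counts.

FLAG_THRESHOLD = 10_000      # hypothetical per-track replay alarm
TRACKS = 1_000_000           # size of an AI-generated catalog
STREAMS_PER_TRACK = 3_000    # bot plays per track, kept below the alarm

def flagged_tracks(streams_per_track: int, threshold: int, n_tracks: int) -> int:
    """Count how many tracks a naive per-track detector would flag."""
    return n_tracks if streams_per_track > threshold else 0

print(flagged_tracks(STREAMS_PER_TRACK, FLAG_THRESHOLD, TRACKS))  # 0 tracks flagged
print(TRACKS * STREAMS_PER_TRACK)  # 3000000000 streams still collected royalties
```

Every track stays comfortably under the alarm, yet the catalog as a whole racks up billions of royalty-bearing plays.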
It's important for DUers to keep in mind that when they run into AI slop online, it's not only usually trashy, error-filled and unethical as hell (the AI having been trained on stolen intellectual property), but there's also a good chance professional fraudsters are behind it. That goes for AI music, AI videos and AI images. Lots of the AI slop images and videos on Facebook and YouTube, for instance, come from content farms, and although that Forbes article doesn't use the term, generating masses of AI music tracks to be streamed by bots is content-farm work too.
Professional criminals. Not usually individuals who could be well-meaning but are naive or desperate enough, for whatever reason, to use AI.
And the fact that professional fraudsters are so heavily involved in AI slop is another reason DUers shouldn't give it any attention, especially by copying it here or elsewhere, if they don't know exactly who's behind that slop video, image or music track.
As for the individual AI users who don't mean to be fraudsters - well, they should know better than to use generative AI to create content. Sometimes they might be so naive they don't know how AI is trained. Sometimes they might think a lofty goal outweighs using unethical tools. They should be reminded that no matter how well-intentioned they are, using those AI tools is a bad idea and hurts their message. Ideally they'll stop using genAI, because it's beneath them and entangles their message with AI-bro and pro-AI messaging.
Anyway, if you don't know who's creating and posting AI slop, it's most reasonable to assume they're just fraudsters out to make a quick buck and steal attention and income from real artists - human artists.
85% are fraudulent. Stunning number. And with statistics like that, AI slop from unknown sources does not deserve the benefit of the doubt. It should be shunned - not trusted, and not shared.
AZJonnie
(3,953 posts)

I.e., 100% would be a good number, as it would mean ZERO real people are actually listening to this garbage.
Unfortunately, with the lack of foresight by the world's regulatory agencies that allowed all the copyrighted content to be used for training, AND allowed these products to generate works that people can fraudulently pass off as their own (and hence monetize), fighting the fake music and fake streams you're talking about may end up coming down to the streaming services (and, as you say, consumers) taking action and being legally liable. The good news is that Spotify does not want to pay royalties for fake streams generated by not-real users, so at least they have a vested interest, and I suspect they will take considerable steps to stop that from happening.
In fact, "but not enough to trigger detection systems tuned for high-volume replay" indicates they already do, because it's not JUST AI-generated music this "bot streaming" happens with; it happens with real artists' music as well. The difference in that case, though, is that there are usually "known entities" responsible for the content, i.e. the license holders, so there's someone to punish if that's happening - they sort of know who they're sending the checks to. Unlike some Russian content farm.
Eventually these AI companies AND streaming services (and YT and similar) need to be held accountable for allowing anyone to monetize (largely) AI-generated works, if that AI was trained on copyrighted material.
The lawsuit from the book publishers you mentioned recently is going to be a big bellwether in the fight against the AI companies themselves, but if that case is lost? Then we'll have to move on to trying to hold legally responsible the companies that host the material AND pay people when it gets viewed.
dickthegrouch
(4,612 posts)

If they're training themselves self-referentially on slop, pretty soon only slop will inform the responses we get. NO AI system can discern truth; we cannot program it to recognize enough of the occasions when truth, or even valid content, is presented.
Half the country thinks Faux Noix is truth. Half the country thinks Truth Social is the truth. AI can't tell the difference either.
Program humor or sarcasm for me!
Even many DU members need the sarcasm tag to be able to understand some posts. AI CANNOT!
highplainsdem
(62,878 posts)

generated by people who actually care about music.
Which is something I've mentioned here about AI slop images and YouTube videos - that the subjects of that AI slop might mean little or absolutely nothing to the AI users or content farms generating it. It's just clickbait, usually with bots adding to the views. 404 Media found bot responses often dominating comments on AI slop on Facebook, and the same is likely true with YouTube.
With an issue like AI companies' theft of intellectual property, the legal fight won't go away short of a SCOTUS decision, and even then it will return as soon as there's any change in SCOTUS. We're all aware that there have been terrible Supreme Court decisions that must be overturned. A ruling in favor of the AI companies on this would be one of them.
AZJonnie
(3,953 posts)

are only streamed by AI bots, then it also means few people are interested in them, which I think is better than if 85% WERE streams from real people and 15% from bots. In that case Spotify would mind these songs being there less, and it would directly take money out of artists' pockets, because listeners can only play one song at a time. Every time it's an AI track, that's bad for artists, like, directly.
Also, every 'real' listener provides the slop companies with royalties *without* their having to pay for a membership, and without the costs associated with running bots, which may be pretty cheap but aren't free.
To be clear nothing about this is "good" by any means, but actually the closer that ratio is to 100%, the better, IMHO. And there's a simple fix (better fraud detection), and the streaming companies will WANT to fight this fraud as it's a big, obvious ripoff for them when it's just bots streaming fake AI music. If they do nothing they'll end up having to raise rates and/or cut royalties, which could hurt their business if artists leave or people cancel.
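A simple fix along those lines could aggregate plays per uploader instead of per track, so spreading streams thinly no longer hides them. This is just a hypothetical heuristic (all account names and thresholds below are made up, not any streaming service's actual detection system):

```python
from collections import defaultdict

# Hypothetical heuristic (not any platform's real system): sum streams per
# uploader account rather than per track, so fraud spread thinly across a
# big catalog still stands out. All names and thresholds are made up.

PER_UPLOADER_ALARM = 100_000   # catalog-level threshold for one uploader

def suspicious_uploaders(streams):
    """streams: list of (uploader, track, play_count) tuples.
    Returns the set of uploaders whose total plays exceed the alarm."""
    per_uploader = defaultdict(int)
    for uploader, _track, plays in streams:
        per_uploader[uploader] += plays
    return {u for u, total in per_uploader.items() if total > PER_UPLOADER_ALARM}

# A content farm: 100 tracks at 3,000 plays each. Every track looks modest,
# but 300,000 plays total trips the uploader-level alarm. A real artist with
# one 8,000-play single does not.
farm = [("content_farm", f"track_{i}", 3_000) for i in range(100)]
indie = [("indie_artist", "single", 8_000)]

print(suspicious_uploaders(farm + indie))  # {'content_farm'}
```

Real systems would presumably also look at listener diversity, play duration, and account behavior, but the catalog-level aggregation is the part that defeats the "a few thousand streams per track" trick.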