After Bandcamp’s ‘No AI’ ban, the first artists report deleted and lost catalogues

When Side-Line first covered Bandcamp’s new generative AI rules in “Bandcamp bans AI music uploads with new generative AI policy,” we focused on how the platform would block music created “wholly or in substantial part” with AI and reserve the right to remove releases on mere suspicion of heavy AI use.
In the days since, a follow-up story has started to emerge across comment sections and social feeds: artists now publicly claim that their Bandcamp accounts have been deleted, their pages shadow-banned from search, or entire catalogues wiped after being reported as AI-assisted.
In this new article I will look at those user reports, how they are linked with Bandcamp’s suspicion-based enforcement model, and why long-time users worry that false positives and malicious flagging could become the real fault line of the “No AI” era.
In addition, I asked both Tom Shear of Assemblage 23 and Ronan Harris of VNV Nation for their stance on AI use in music.
Table of contents
- 1 ‘Suspicion’ is enough under Bandcamp’s new AI policy
- 2 Several posts and articles describe removals, wiped catalogs, or visibility suppression
- 3 At least one darkwave label quietly shelved its own AI-driven albums
- 4 What do artists say about AI in music? Ronan Harris and Tom Shear share their views
- 4.1 “Unfortunately, most will use it as a shortcut to mimic or plagiarise what has already been done”
- 4.2 “When I hear the phrase ‘AI-generated music’, I think ‘low effort’ and ‘not worth my time’.”
- 4.3 “I don’t use AI to create anything for me, so I obviously don’t have trouble disclosing my methods”
- 5 Bandcamp’s ‘No AI’ era: detection, disclosure and due process
‘Suspicion’ is enough under Bandcamp’s new AI policy
Bandcamp’s January 13, 2026 post “Keeping Bandcamp Human” tells users to report releases that “appear to be made entirely or with heavy reliance on generative AI,” and it states: “We reserve the right to remove any music on suspicion of being AI-generated.”
In the related “Acceptable Use and Moderation policy,” Bandcamp also bans scraping/text-and-data-mining and prohibits training machine-learning or AI models on Bandcamp content. It further states that music/audio must not be “wholly or in substantial part” AI-generated, and it lists account/content removal as an enforcement option.
Several posts and articles describe removals, wiped catalogs, or visibility suppression
Now that Bandcamp’s AI policy has rolled out, evidence of its impact surfaced first in scattered posts. Creators on social platforms, in Bandcamp’s own comment threads, and on review sites began describing a similar pattern: accounts suddenly terminated, releases disappearing from public view, and artist pages no longer appearing in search. Taken together, these individual reports sketch an emerging picture of how suspicion-based enforcement and user flagging can translate into removals, wiped catalogs, or quiet visibility suppression under the new Bandcamp AI ban.
- Limelight (Jan 15, 2026) quotes a creator saying Bandcamp started “deleting our accounts,” and that their music was “gone forever.”
- In Bandcamp’s own comment thread, user estelle derrien states “I have been shadow banned,” says they no longer appear in search, and attributes it to malicious reporting after using AI vocal tools for testing (and labeling it).
- A Threads post by disco.alice states “bandcamp just deleted everything for us.”
- A Trustpilot review says Bandcamp “permanently terminated” an artist account for alleged “AI-generated music,” while the reviewer claims the catalog was human-made.
- Posts in a “Bandcamp – Summarily Banned Me” group reference accounts being deleted and mention AI-generated material being flagged.
- The album “Undone” by A Shrine to Failure, released via Cold Transmission, was removed from Bandcamp. Although there is no proof that Bandcamp intervened, there has been considerable online commotion around this particular band and album.
- Moonvampire was removed from Bandcamp and flagged for AI use on Deezer.
- In an Instagram post Graveland claims “Bandcamp has once again deleted Graveland.” The snippet does not document the reason.
- NortherN / Cold Northern Vengeance says in a Facebook post that “Bandcamp removed our album.”
- Bovine Productions claims that “BandCamp has removed the Bovine Productions page.”
I will dig a bit further into the cases of A Shrine To Failure and Moonvampire, since these are linked with the darkwave scene.
Case study 1: A Shrine To Failure
A Shrine To Failure is a Frankfurt-based dark wave project whose debut album “Undone” came out digitally via Bandcamp on 13 June 2025, followed by CD and LP editions through Cold Transmission Music. The band quickly attracted strong scene coverage and a lot of Bandcamp sales, which is exactly why its name started popping up in AI-discussion threads on Reddit’s r/goth. Several users pointed to the extensive use of AI-looking artwork, videos and promo visuals, and the project’s rapid, polished output as indicators that the music itself might also be AI-generated, even though nobody produced hard technical evidence.
By late 2025, the r/goth “AI music” wiki listed A Shrine To Failure as “All AI,” effectively labelling the project an AI band for that community. Around the same time, Cold Transmission published a statement addressing “the ongoing discussion surrounding A Shrine To Failure and the use of AI,” acknowledging the controversy and trying to clarify its stance. In parallel, some scene figures celebrated reports that Bandcamp had “yanked that AI goth band ‘A Shrine To Failure’ down,” presenting this as proof that the platform was enforcing its new AI policy.
The Bandcamp page has since gone offline.
Case study 2: Moonvampire
Moonvampire is a goth/darkwave act from Astana, Kazakhstan, with a catalogue of EPs and singles on Bandcamp, including the 2023 album “vampire II” and a string of releases through 2024–2025. The project attracted attention partly because of its aesthetic: AI-looking cover art, extremely high release tempo and minimal public profile. The r/goth AI wiki lists Moonvampire as “AI in artwork (debatable if the music is using AI),” explicitly separating the visual and audio questions.
In January 2026, an r/goth user filed a Terms of Use report to Bandcamp, stating they believed Moonvampire was using AI-generated assets and pointing to the rapid release pace and existing Reddit speculation. Bandcamp support replied that the reported content “has been reviewed, found in violation of our Terms / Acceptable Use Policy, and removed from the site,” and the user announced that “moonvampire is banned on Bandcamp.”
That thread quickly became a ‘celebration’ of Bandcamp’s new suspicion-based AI enforcement, but it also sparked concern about due process and proof standards. One commenter later argued that, after digging into the case, they concluded Moonvampire uses bought beats, edits and effects, plus their own vocals, and called this an indictment of Bandcamp’s “suspicion-based” removal rule.
The artist responded directly in a follow-up discussion, stating that they have been making music under various aliases since 2019, that they “never used A.I in my entire life in music,” and that their biggest tracks are built on type beats from marketplaces like BeatStars, with vocals recorded and mixed by themselves or an engineer. Critics in the same thread were not convinced by the screenshots provided as “proof,” arguing they show little more than imported audio files and could not rule out AI-generated stems.
Taken together, Moonvampire’s case shows how Bandcamp’s community-driven AI enforcement can run on parallel, conflicting tracks: a support email confirming removal; an artist publicly denying AI use and framing the takedown as “haters” reporting them; a skeptical producer community disputing the evidence; and a catalog that has been removed from Bandcamp.
At least one darkwave label quietly shelved its own AI-driven albums
In the past week I was also contacted by a darkwave label owner who says he has decided to withhold several AI-heavy albums from release. Over the last year he worked with a small group of artists who used text-to-music and AI-assisted composition tools for entire LPs, often with results he describes as “technically impressive and fully coherent as records”.
“I heard albums where every track was generated from prompts and then edited, arranged, and mastered like any other release,” he explains. “Some of it was indistinguishable from what we usually put out: solid songwriting structures, convincing vocals, consistent aesthetics. On a purely musical level, I could have pressed those records tomorrow.”
Despite that, he chose not to launch them once it became clear how polarized the scene was becoming around AI. “I told the artists: right now, if we put this on Bandcamp, the conversation won’t be about the music. It will be about whether you ‘cheated,’ whether the album should even exist, and whether the label is flooding the ecosystem with machine content,” he says. “In the current climate, that is reputational risk for everyone on the roster, and not just for the AI projects.”
He stresses that his decision was not based on a blanket rejection of the technology, but on context and timing. “The work was often extremely well done,” he says. “But if the platform leans on suspicion, and if fans and other bands are already on high alert, then releasing AI-built albums under a darkwave label imprint becomes a liability. Until there is a clear framework and honest labeling that people trust, those records stay on the shelf.”
I personally received various finished tracks and, honestly speaking, they were good, but AI-powered. Over the past few months I was also shown various other projects in which AI was used extensively, so it’s clear that AI is alive and well amongst label owners and artists.
We asked both Ronan Harris (VNV Nation) and Tom Shear (Assemblage 23) for input on what is happening right now.
“Unfortunately, most will use it as a shortcut to mimic or plagiarise what has already been done”
Ronan: “I think that almost all the people who’ll read an article assume that using AI means using it for songwriting or generating tracks. I don’t use AI for any of that, nor do I have any need or desire to. Creating music is a direct and emotional connection to the creative process and the instruments and equipment used to create it. It’s self-directed, entirely under the creator’s control, and externalises a human’s internal emotions and imagination. The process is a kind of dialectic. It unlocks things that were never intended. Mistakes lead to new inspiration.
Just as AI has impacted the world of image creation, it will enable a few ‘creatives’ who use it in entirely new ways. These few will learn how to actively control the process for new, as-yet unimagined creations, where access to tools had previously been financially or logistically impossible for them. That’s already happening. Unfortunately, most will use it as a shortcut to mimic or plagiarise what has already been done, in order to produce lazy, unimaginative, pedestrian slop for cheap and easy likes, without any talent, personal effort, or intrinsic cultural value. The creation of AI slop has also encouraged a loud, gatekeeping minority to find a new purpose in being neo-Luddite zealots.
Demonstrable talent, creative transparency, and the process of creation being the creation in and of itself may end up as a highly valued currency of their own.”
Assemblage 23 frontman Tom Shear keeps AI firmly in the background of his creative process. “I don’t really use it very much,” he explains. He relies on LANDR only “to do quick and dirty masters to test mixes and for my live backing tracks,” and uses a custom GPT that he trained with all his gear and software manuals: “So if I’m trying to figure out how to do something, I can just ask the GPT questions instead of poring over the manuals to find the relevant info.”
For Shear, this is the ideal role of AI: “I think that kind of task is what AI is most useful for – taking the tedium out of tasks that are taking up time you’d rather be spending on making music.” What he rejects is using AI for the core of the art itself: “It’s strange to me that people want to use it on the creative end. Why would you want to replace something as fun and gratifying as writing a song, painting a picture, or taking a photograph? Where is the joy in typing a prompt or pushing a button to make something for you?”
“When I hear the phrase ‘AI-generated music’, I think ‘low effort’ and ‘not worth my time’.”
“When I hear the phrase ‘AI-generated music’, I think ‘low effort’ and ‘not worth my time’.” The same goes for visuals: “If I see a release with obviously AI-generated art, my first thought is ‘This band obviously doesn’t want to put much effort into how they’re representing themselves, so why should I bother listening to it?’” He acknowledges the underlying technology as impressive, but not the output: “AI-generated music is, without a doubt, an amazing technical achievement, but the results are almost universally really uninteresting and soulless to listen to. It’s hard to imagine AI generating a song that could bring tears to your eyes, you know?”
For Shear, this crucial line is central: “Does this help me create more efficiently, or does it do the creation for me?” Tools that speed up work, like something that can sift a huge sample library, are welcome. Systems where he is “writing a prompt or pressing a button to actually generate the music” are not: “I just don’t see the point other than as a curiosity for people who have no musical abilities.”
Against that backdrop, Shear welcomes Bandcamp’s tougher stance on AI uploads. “I was very happy to see Bandcamp take a hard stance against being flooded by AI-generated material,” he says.
He also pushes back on the idea that the policy is arbitrary: “I’d argue that calling their policy purely ‘suspicion-based’ doesn’t quite do it justice, though. There are definitely algorithms being used in the process. There are certain types of detectable artifacts and anomalies that appear in most AI-generated music. AI leaves certain mathematical traces of itself, and while I suppose it’s possible that legitimate songs could get inaccurately detected as being made by AI, I think those cases would be extremely rare and could be resolved through human intervention.”
His main concern is scale: “The problem is, the creation of AI music is so fast and effortless that platforms could potentially become overrun by those tracks until they outnumber the actual artists. I read the other day that 28% of the daily uploads to Spotify now are AI-generated. That’s a problem.” In that scenario, he warns, listeners lose patience: “To find legitimate bands you really enjoy becomes much more difficult for the fans. What happens then is that frustration on the part of the listener leads them to find other ways to entertain themselves, and they stop trying to find the needle in the haystack among all the slop. It’s the ‘Dead Internet Theory’, just applied to music.”
“I don’t use AI to create anything for me, so I obviously don’t have trouble disclosing my methods”
If platforms opt for transparency labels instead of bans, Shear strongly supports clear disclosure. “I don’t use AI to create anything for me, so I obviously don’t have trouble disclosing my methods,” he notes. He sees labeling as a protection for both audiences and non-AI artists: “I don’t know what the exact messaging should be or how it should be broken down, but I think some sort of transparency is the right thing to do both for the listener and for other artists who aren’t using AI.”
He has no interest in policing listeners’ tastes: “If someone hears a piece of AI-generated music that they really love, I’d certainly never argue they should be deprived of listening to it,” but he insists on clarity when human-made and AI-made coexist: “If we’re going to allow AI stuff to coexist on the same platforms as music created by humans, it should be made clear which is which. Listeners should be allowed to make an informed decision as to what they’re going to consume.”
Despite the broader backlash around AI in music, Shear has not felt the need to change course with Assemblage 23 or his solo work. “No, I just don’t find it fun or interesting to have AI do all the work for me,” he says when asked about shelving projects or avoiding certain tools. His position remains simple and consistent: “For me, the work is the fun part and the whole point.”
Looking ahead, he outlines what a fair and sustainable AI framework in music would need. First, he calls for strong detection technology in lockstep with creation tools: “I think it’s imperative that AI-detection tools are developed concurrently with the actual AI creation tools. I’m not just talking in terms of music, either. We have already seen the US government share AI-generated videos to push its agenda, for example. Fortunately, AI still makes a lot of mistakes and usually has a ‘look’ to it, but that isn’t always going to be the case. We face a very chaotic, dystopian future if we don’t have the tools to distinguish what is real and what isn’t.”
Second, he returns to the idea of transparent labeling, using food as an analogy: “With AI-detection tools in place, some sort of labeling system to let the listener make a decision whether they want to purchase/listen to the music or not. This would be essentially the same reason that we have the ingredient labels on food products. We should let consumers know what went into what they’re consuming and let them make their decisions based on that knowledge.”
Finally, he addresses compensation and training data. In his view, the industry has already crossed a line: “We already seem to be past the point of artists having any consent with regard to their music being used to train these AI models.” Because of that, he argues for a new kind of payment framework: “So I think it would not be unreasonable for some kind of compensation to be paid to artists for the use of their materials. Figuring out how to calculate that would be a complex problem to solve, but I think it would take some of the unfairness out of it.”
His conclusion is dry, but grounded in experience: “That said, the music industry is not known for its fairness.”
Bandcamp’s ‘No AI’ era: detection, disclosure and due process
The first wave of removals and quietly shelved AI-driven albums shows how a suspicion-based “No AI” framework turns every aesthetic choice, release tempo and promo image into potential evidence for or against an artist.
The cases of A Shrine To Failure and Moonvampire, together with reports of deleted accounts and hidden pages, expose the core tension: how can a platform block industrial-scale AI slop without punishing human producers who happen to work fast, use bought beats, or experiment with AI-adjacent tools?
A sustainable Bandcamp AI policy now depends on three pillars that Ronan Harris and Tom Shear indirectly outline from the artist side: robust AI-detection that keeps pace with new tools, transparent labeling when AI meaningfully forms the work, and a visible appeals process with clear evidentiary standards. Without those, enforcement drifts toward crowd-driven gatekeeping, where rumours on wikis and comment threads weigh more than verifiable proof, and where labels pre-emptively bury technically strong but AI-assisted albums to protect the rest of their roster.
For now, artists who rely on Bandcamp face pressure to document their workflows and disclose their use of tools before someone else frames the narrative, branding their work “AI-generated music”: shorthand for “low effort” and “not worth my time”. Labels weigh catalogue risk against experimentation, while listeners struggle to find new human-made releases in an ecosystem already flooded with machine content.
Until platforms align AI bans, detection technology, disclosure labels and compensation for training data, Bandcamp’s “No AI” era will remain a live stress test of how music platforms handle the next wave of machine-made sound.
Chief editor of Side-Line – which basically means I spend my days wading through a relentless flood of press releases from labels, artists, DJs, and zealous correspondents. My job? Strip out the promo nonsense, verify what’s actually real, and decide which stories make the cut and which get tossed into the digital void. Outside the news filter bubble, I’m all in for quality sushi and helping raise funds for Ukraine’s ongoing fight against the modern-day axis of evil.

