January 21, 2026

How we are using AI to handle the bulk of badly written press releases


We have long – extremely long, actually – endured the horror of badly written press releases sent to the Side-Line mailbox(es). There is the ‘acclaimed’ issue I have written about in the past, but there is more. Way too often, press releases are so cryptic that we don’t even have a clue what they are about. Believe me, when I’m then supposed to write a news item based on that ‘info’, I just delete the mail altogether. And then the LLM tools arrived, ChatGPT first among them.

You’d think PR people, bands, and labels would have learned from their earlier, undecipherable copy. But no, many went completely mad.

The volume of news submissions exploded. And as you may have guessed, so did the flood of low-quality ones. The overall standard of press copy dropped fast. Many bands, labels, and PR teams clearly don’t know how to use ChatGPT properly, and they now send in text that often makes no sense at all.

Time to address the problem at its core and tackle both badly written copy and AI fluffiness.

Training an AI to survive the PR trenches

To address the growing flood of chaotic press releases, we kicked off a dedicated AI project in late 2024. It started with training an artificial intelligence on our own carefully curated archive, untouched by LLMs – literally tens of thousands of news articles, interviews, and reviews shaped by years of editorial practice. Over several weeks, the model was taught to internalize our writing style (and we are NOT there yet), our priorities, and the all-important whys and hows behind our coverage choices.

With that groundwork in place, we moved to the real challenge: feeding the system a batch of 7,854 press releases, spanning the full quality spectrum from respectable to catastrophic. Over the course of about one month, the system methodically tore through them, identifying structural flaws, stylistic pitfalls, and common patterns of confusion.

Our ultimate aim:

  1. reduce the hours lost to deciphering incoherent announcements;
  2. redirect that energy into producing more articles;
  3. add more relevant info;
  4. contextualize the news, since most press releases do not include bios.

All this was simply impossible before, given the time wasted on ‘deciphering’.

But instead of relying on off-the-shelf AI models, we took a DIY approach. By training exclusively on our own content – the thousands of music news stories, interviews, and announcements published over the years – we ensured the model absorbed the real-world cadences and peculiarities specific to the music scene. And we especially trained it on how things should not be done.

Technically speaking, we fine-tuned our models to:

  • Decode announcements buried under layers of marketing fluff.
  • Normalize genre-specific jargon into readable text.
  • Extract essential metadata like release dates, tracklists, guest artists, and tour locations.
  • Flag incomprehensible “artistic manifestos” for manual review because the risk of misinterpretation was just too big.
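
For a sense of what that fine-tuning data can look like in practice, here is a minimal sketch of a single training pair, assuming an OpenAI-style chat JSONL format (the field layout is illustrative; I won’t detail our exact stack here):

```python
import json

# A single fine-tuning pair, sketched in OpenAI-style chat JSONL format
# (an illustrative assumption, not a disclosure of the exact stack).
# Input: a raw press release from the mailbox; target: the article we
# actually published from it.
raw_press_release = ("FOR IMMEDIATE RELEASE: visionary act reshapes "
                     "the sonic landscape with soaring vocals...")
published_article = ("Band X releases second album Y on label Z "
                     "on 13 March, with a European tour to follow.")

pair = {
    "messages": [
        {"role": "system",
         "content": "You are a Side-Line editor. Rewrite press releases "
                    "as clear, factual music news copy in house style."},
        {"role": "user", "content": raw_press_release},
        {"role": "assistant", "content": published_article},
    ]
}

with open("training_data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```

Tens of thousands of such pairs, drawn from the archive, are what carry the style signal.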

Below are the three steps we follow in this process.

1. Automated triage

Our model first sorts incoming material into three neat piles:

  1. Green: coherent enough to move directly to light editing.
  2. Yellow: containing actual news, but structurally mangled beyond recognition.
  3. Red: philosophical treatises disguised as press releases. These are politely returned to sender with the demand for decent info. The same goes for one-sentence press releases with just a link.

This triage system – based on real patterns in our website’s historical content – has become essential to maintaining our news rhythm.
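
To give a feel for the kind of rules involved, here is a heavily simplified, hand-written sketch of the triage logic (the production classifier is learned from our archive rather than coded like this):

```python
import re

def triage(text: str) -> str:
    """Sort an incoming press release into green / yellow / red.
    A simplified sketch; the real rules are learned, not hand-written."""
    words = text.split()

    # Red: one-liners that are essentially just a link.
    if len(words) < 15 and re.search(r"https?://", text):
        return "red"

    # Signals that actual news is present: a date and a release noun.
    has_date = bool(re.search(
        r"\b(20\d{2}|January|February|March|April|May|June|July|"
        r"August|September|October|November|December)\b", text))
    has_release = bool(re.search(r"\b(album|EP|single|tour|remix|video)\b",
                                 text, re.IGNORECASE))

    if has_date and has_release:
        return "green"   # coherent enough for light editing
    if has_release:
        return "yellow"  # news is in there, but mangled
    return "red"         # manifesto territory: back to sender
```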

2. Teaching the AI taste

We don’t pretend our model has “good taste” in the aesthetic sense – but it does have a “clarity bias” informed by years of our own editorial standards. We developed a custom “readability and relevance” score, factoring in:

  • Passive voice density
  • Jargon-to-meaning ratio
  • ClichĂ© usage frequency (“soaring vocals,” “blistering guitar riffs”)
  • Quote coherence

Press releases that score below our minimum threshold are either comprehensively reworked or quietly archived in the “not today” (often “never again”) folder.
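
A stripped-down sketch of how such a score can be computed follows; the word lists, weights, and threshold below are illustrative stand-ins, not our real ones:

```python
import re

# A tiny excerpt standing in for the real cliché list.
CLICHES = ["soaring vocals", "blistering guitar riffs",
           "sonic landscape", "visionary act"]

def readability_score(text: str) -> float:
    """Rough 0-10 'readability and relevance' score (illustrative)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.lower().split()
    if not sentences or not words:
        return 0.0

    # Passive voice density: share of sentences matching a crude
    # 'to be + past participle' pattern.
    passive = sum(bool(re.search(r"\b(is|are|was|were|been|being)\s+\w+ed\b",
                                 s, re.IGNORECASE)) for s in sentences)
    passive_density = passive / len(sentences)

    # Cliché usage frequency, per 100 words.
    cliche_hits = sum(text.lower().count(c) for c in CLICHES)
    cliche_rate = 100 * cliche_hits / len(words)

    score = 10.0
    score -= 5.0 * passive_density  # heavy passive voice costs up to 5 points
    score -= 2.0 * cliche_rate      # each cliché per 100 words costs 2 points
    return max(score, 0.0)

# Anything below the minimum threshold (say, 6.0 in this sketch) is
# reworked or archived in the "not today" folder.
```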

3. Rewriting with surgical precision

When confronted with a complete textual disaster, the model operates methodically.

  • It extracts what should have been the lead: the “who, what, when, where, why.”
  • It restructures meandering paragraphs into crisp news copy.
  • It salvages quotes (or what passes for them) and attributes opinions properly.
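
As an illustration, the core of the rewrite instruction looks roughly like the sketch below; the production prompt is far longer and embeds our full style guide:

```python
# A condensed sketch of the rewrite instruction (illustrative wording,
# not the production prompt).
REWRITE_PROMPT = """You are a Side-Line news editor.
From the press release below, in this order:
1. Extract the lead: who, what, when, where, why.
2. Restructure the rest into short, factual news paragraphs.
3. Keep usable quotes verbatim and attribute every opinion to its speaker.
Never invent facts; mark anything you cannot verify with [UNCLEAR].

PRESS RELEASE:
{press_release}
"""
```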

Our technical process includes:

  • Semantic parsing to understand the story beneath the word salad.
  • Coreference resolution to untangle who “they” and “it” actually refer to.
  • Content summarization tuned to the pacing and voice of music news articles.

Our model spots and tags artist names, album titles, festival names, and more, feeding directly into our SEO optimization and archival tagging system.
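
Here is a rough sketch of such a tagging pass, with spaCy’s stock English model standing in for our custom tagger (labels like WORK_OF_ART and EVENT are only loose proxies for album and festival names):

```python
import spacy

# Stock English model as a stand-in for the custom tagger.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def tag_entities(text: str) -> dict:
    """Collect names usable for SEO and archival tags."""
    doc = nlp(text)
    wanted = {"PERSON": "artist", "ORG": "label_or_band",
              "WORK_OF_ART": "title", "EVENT": "festival", "DATE": "date"}
    tags: dict = {}
    for ent in doc.ents:
        if ent.label_ in wanted:
            tags.setdefault(wanted[ent.label_], set()).add(ent.text)
    return tags
```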

To strengthen the system further, we also integrated external fact-checking mechanisms. Using APIs connected to authoritative music databases and official label release feeds, the model cross-verifies dates, album titles, tracklists, and artist names against real-world data.

Whenever a press release claims a “debut album” that is actually the band’s third – or mislabels a tour location – the model flags the inconsistency automatically. This added layer of verification speeds up processing and noticeably boosts overall accuracy and reliability across all published pieces.
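
As an illustration of that cross-check, the sketch below tests a “debut album” claim against MusicBrainz, one plausible example of such a database (our actual mix of sources is broader, and the contact address is a placeholder):

```python
import requests

MB = "https://musicbrainz.org/ws/2/release-group/"
# MusicBrainz requires an identifying User-Agent; this one is a placeholder.
HEADERS = {"User-Agent": "SideLineFactCheck/1.0 (contact@example.com)"}

def is_really_a_debut(artist: str) -> bool:
    """Check a 'debut album' claim against MusicBrainz.
    A simplified sketch: in production the announced album itself
    would need filtering out if it is already listed."""
    params = {"query": f'artist:"{artist}" AND primarytype:album',
              "fmt": "json"}
    data = requests.get(MB, params=params, headers=HEADERS,
                        timeout=10).json()
    # If the database already lists an album for this artist,
    # the 'debut' claim gets flagged for review.
    return data.get("count", 0) == 0
```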

The final output is then manually revised and fine-tuned, and ‘issues’ are flagged and fed back to the model. The SEO is improved where needed, although I must say our model now scores remarkably well on SEO. We have chosen to go for sustainable SEO, so no short-term tricks.

We still need human intervention

While the AI model has proven to be an excellent tool to work with, I know there is still work to be done on the output before we can safely say it runs perfectly.

The challenge now is no longer the accuracy of the information. We use several external, database-driven verification systems, which have reduced the risk of hallucinations to near zero. I can safely say the model’s output is now almost error-free. The funny part is that our system, which often digs very deep for verified data, now uncovers details that some bands or musicians had completely forgotten about.

The main issue still left is the tone of voice used. After all, we work with an LLM, not real intelligence (despite what the term artificial intelligence suggests).

To get the tone right, I constantly feed our completed articles back into the system for it to learn from. Every now and then I also run tests to check whether the behaviour of the model we use has changed.

But I can safely say we have come a long way.

AI cleans up the mess, but the bands (and PR agents) still make it

Since integrating the LLM into our editorial workflow:

  ‱ We reduced manual editing time by 65%.
  ‱ We increased the number of articles by 40%.
  ‱ We spend 80% less time trying to decipher which “visionary act” is “reshaping the sonic landscape.”

In short, this approach has freed up our time to focus on what matters: real stories about real music, not just decoding PR hyperbole.

Is it faultless? No. The system needs constant training, and it is only as good as the data it gets. Our LLM is not a miracle worker. It cannot turn a bland album launch into breaking news. What it can do – and does – is clean up the debris field of badly written press releases, using the hard-earned editorial instincts we’ve embedded into it through years of music journalism experience.

One day, perhaps, artists and labels will craft announcements that don’t require algorithmic CPR. Until then, our AI remains on the front lines, armed with a mop, a red pen, and a deep, machine-learned understanding of just how weird this industry can get.

Since you’re here 


 we have a small favour to ask. More people are reading Side-Line Magazine than ever, but advertising revenues across the media are falling fast. Unlike many news organisations, we haven’t put up a paywall – we want to keep our journalism as open as we can – and we refuse to add annoying advertising. So you can see why we need to ask for your help.

Side-Line’s independent journalism takes a lot of time, money and hard work to produce. But we do it because we want to push the artists we like, who are fighting just as hard to survive.

If everyone who reads and likes our reporting helped fund it, our future would be much more secure. For as little as 5 US$, you can support Side-Line Magazine – and it only takes a minute. Thank you.

The donations are safely powered by PayPal.
