Google says it classifies AI-generated content as ‘spam’

Any publishers and editors out there who are thinking of replacing their journalists with AI might want to put the brakes on. Everyone’s boss, the Google algorithm, classifies AI-generated content as spam.

John Mueller, Google’s Search Advocate, settled the issue in a recent Google Search Central SEO Office-Hours Hangout.

According to a report by Search Engine Journal’s Matt Southern, Mueller says that content produced by GPT-3 and other generators is not considered quality content, no matter how compellingly human it reads:

These would essentially still fall under the auto-generated content category that we had in the webmaster guidelines almost from the start.

My suspicion is that the quality of the content might be a bit better than the really old tools, but for us it’s still auto-generated content and for us that means it’s still against webmaster guidelines. So we would consider that as spam.

Southern’s report indicates that this has pretty much always been the case. For better or for worse, Google tends to respect the work of human authors. And that means keeping bot-generated content to a minimum. But why?

Let’s play devil’s advocate for a moment. Who do Google and John Mueller think they are? As a publisher, shouldn’t I have the right to use whatever means I choose to generate content?

The answer is yes, with a caveat. The market can certainly decide whether it wants idiosyncratic news from human experts or… whatever an AI can hallucinate.

But that doesn’t mean Google has to put up with it. Nor should it. No company with shareholders in its right mind would let AI-generated content fill its “news” section.

There is simply no way to verify the accuracy of an AI-generated report unless you have skilled journalists checking everything the AI claims.

And that, dear readers, is a bigger waste of time and money than simply having humans and machines work together on content from the start.

Most human journalists use a variety of technological tools to get the job done. We use spelling and grammar checkers to eliminate typos before submitting our copy. And the software we use to format and publish our work usually has a dozen or so plugins that handle SEO, tags, and other metadata to help us reach the right audience.

But ultimately, due diligence comes down to human responsibility. And until an AI can actually be held accountable for its mistakes, Google is doing everyone a favor by marking everything the machines have to say as spam.

Of course, there are caveats. Google allows many publishers to use AI-generated summaries of news articles, or to use AI aggregators to promote posts.

Essentially, Big G just wants to make sure there aren’t any bad actors out there churning out fake news articles to game SEO and rake in ad revenue.

You don’t have to love mainstream media to know that fake news is bad for humanity.

You can check out the entire hangout below:
