A student used AI to create self-help blog posts that fooled humans

Using OpenAI's GPT-3 language model, Liam Porr was able to generate blog posts that read as though they'd been written by a real person.


Over the course of a couple of weeks, a college student named Liam Porr used OpenAI's language model GPT-3 to craft several blog posts about self-help and productivity, some of which gained substantial traction online. MIT Technology Review reports that, despite the moderate success of the experiment, Porr doesn't think AI will replace writing jobs — it will merely make writers more efficient. Newsrooms around the world certainly hope so.

Though GPT-3 can construct coherent and sometimes even beautiful sentences, it still stumbles on copy that requires rigorous logic. That limitation led Porr to stick to self-help topics: for each post, he wrote only the title and introduction, added a photo, and let GPT-3 generate the rest. He also promoted the articles, one of which reached the number one spot on Hacker News — ahead of an NPR post about doomscrolling.

The blog — When Porr came clean, he revealed that the Adolos Substack he used to publish the posts was named after Dolos, the Greek god of deception, with an "A" added to the front. In a more detailed personal post, he discussed his methodology and his hypotheses for future GPT-3 applications in media.

In two weeks, the Adolos Substack was visited more than 26,000 times and gained roughly 60 subscribers, and the rare comments questioning the articles' authorship were downvoted by readers who took them for rudeness rather than justified skepticism.

What does this mean for media? — Porr thinks GPT-3 could make journalists' days easier, suggesting that media companies could save money while lightening writers' workloads. Wielded by a good writer, GPT-3 could help expedite quick news hits, though Porr suspects writers are too sensitive to embrace the technology:

“This leaves room for a new kind of media company. One that’s fast and lean. The writing team will be small, but experts at bending GPT-3 to their will. It’s a 2006 Chevy Suburban vs a 2020 Tesla. You can't mod your suburban into a Tesla, they’re completely different models.”

In an industry as precarious as media, one that increasingly runs on attention, it's all but certain some outlets will embrace algorithmically generated content as long as it gets hits. It's clickbait as we already know it, but quicker, cheaper, and easier. Automated systems are already used to generate financial news stories and baseball game recaps; because both earnings reports and baseball games are statistics-heavy, templates are often sufficient for communicating the key points.

One would hope that such automation would free up time and resources for original reporting rather than fuel rampant layoffs at publications that effectively run themselves. But even in a pre-COVID world, reducing the workforce was a leading way to cut costs in the quest to reach or maintain profitability.

Luckily for writers, GPT-3 is still fairly repetitive and sometimes nonsensical, so we can hold onto hope and our jobs… at least until the almost inevitable GPT-4 or GPT-5 comes along. By then, it may not just be journalists who need to worry about automation's effect on their lives, but everyone affected by its potential abuse for propaganda, misinformation, and deepfakes. Hopefully, the tools for spotting automated content keep up with those designed to create it.