It’s likely, though, that the technology you’ve heard most about is ChatGPT. The chances are you’ve even engaged with it, during one of the rare moments in which it isn’t at full capacity. Interacting with ChatGPT is, at first, startling. The poetry, pirate impressions, and detailed answers to difficult questions are remarkable. They could keep me occupied for hours.
But aside from providing entertainment, what are the real-world implications of ChatGPT, specifically? And should copywriters like me consider it a threat, or a friend?
Tech’s seat at the table
Opinion pieces, reports, and controversies have swirled since ChatGPT’s launch late last year. Twitter threads sparked by weary writers who are concerned – and in some cases, already being impacted by the tech – have gone viral.
But before offering my two cents on the matter, I wanted to give ChatGPT an opportunity to share its ‘thoughts’. So, I asked it a direct question.
Me: Will AI replace writers?
ChatGPT: It is possible that AI could be used to generate written content, but it is unlikely that AI will completely replace human writers. While AI can be trained to generate text based on certain patterns and structures, it currently lacks the creativity and nuance that human writers bring to their work. Additionally, there are certain types of writing, such as creative writing and journalism, that require a human touch to be effective.
I’m not sure about you, but that sounds like a ‘no’ to me. But the discussion doesn’t end there, because AI can already be used – and is being used – to generate written content. And while it’s “unlikely that AI will completely replace human writers,” there’s no denying that certain use cases exist.
The tech could, for example, be used to assist with content planning, or to catch spelling errors and typos. It’s also possible that, when SEO is the priority for a piece of content, ChatGPT could be utilised alongside a human content editor.
This suggests that, once better established, ChatGPT could settle into a seat at the writer’s table. However, the involvement of a trained person will be essential to ensure the content makes sense, is truthful, adds value, and feels less generic. Without human intervention, AI detection efforts mean that ChatGPT’s content would likely never go live.
A game of cat and mouse
Google continues to develop powerful algorithms to detect AI-generated content. These work by searching for patterns, inconsistencies, the overuse of keywords, and general readability. They also identify content that has been cobbled together using multiple sources. AI plagiarism expert Jon Gillam explains in an article for Business Age, “if [content] falls within a structure that is identifiable by the algorithm, Google flags it.”
Despite this, Jon goes on to explain that ‘GPT-3’ is more difficult to detect compared to earlier natural language processing (NLP) models. While Google bolsters its algorithms, the technology becomes ever more sophisticated at evading detection. Google’s Search Advocate, John Mueller, describes this as “a cat and mouse game.”
In addition to detection, there is also the issue of fictitious content to contend with. Various examples exist in which ChatGPT produces false information, including the invention of academic abstracts, which fooled even scientists.
It also answered questions about an entirely invented psychological condition, as relayed in an article titled ‘ChatGPT Is Dumber Than You Think’ for The Atlantic. Another concerning example shows Microsoft’s new Bing, powered by ChatGPT, arguing that the current year is 2022. It becomes increasingly aggressive as the user corrects it.
Perhaps the costliest AI faux pas to date, though, came from Google’s Bard. The bot supplied several false answers to a query about the James Webb Space Telescope, claiming that the telescope took the first pictures of a planet outside of the Earth’s solar system. This feat was, in fact, achieved by the European Southern Observatory’s Very Large Telescope. The mistake ended up wiping over $100 billion off the tech giant’s market value.
It’s undeniably exciting to see what this tech pulls from its immense library of information. But for instances where content needs to be entirely reliable, the responses given need to be rigorously interrogated. That’s where people come in.
There’s no creativity without creatives
Without writers and editors to test, review, contextualise, fact-check and transform its output into content that delivers true meaning and value, ChatGPT is nothing more than a shiny new toy to play with. As well as containing inaccuracies, AI-generated content can be very generic.
For example, if you give ChatGPT a written brief, the output will only be as good as the written brief. Fine, if you want a predictable poem in the style of William Blake. But give the same brief to a creative – copywriter or otherwise – and their brain fires up. They begin to drill down into the heart of the challenge.
The solution a creative produces might meet a need that was previously unknown. It’ll be underpinned by strategy and understanding. Their nuance, humour, emotion, and ability to challenge the accepted wisdom on a subject are the differentiators that allow them to deliver purposeful work.
Ideas start as a seed and, with iterations, can blossom into impactful outcomes. As we move forward, the industry should not view ChatGPT as the writer. Instead, it should be seen as a tool that supports the writer with these iterations – whether that’s SEO, spell-checking, or freeing up time for more important or exciting work.
Ultimately, we should take advantage of what AI can do, while recognising what it cannot. It can’t perform our jobs to the standard that we can. But it can, perhaps, assist us in areas where we lack skills, time, or interest. And anything that could make life a little easier here and there feels like more of a friend than a threat to me.
If you would like to chat to us about anything you read, get in touch!