Anxiety, confusion and a whole lot of panic – these seem to be the three unofficial but unequivocal companions of artificial intelligence. Tethered by an invisible but unbreakable thread, people seem unable to move beyond the promise and potential of AI and its apparent intrinsic link to great risk. This is simultaneously warranted and completely overhyped – there’s no doubt that AI is a double-edged sword, but the fear surrounding its implementation has become almost dystopian.
As we are well aware, fear makes people do weird things. We react in a way that is less aligned with logic and reason and more of a semi-subconscious attempt to mitigate risk and avoid an expectation of impending peril. And that’s exactly what’s happening in the context of AI – specifically, the use of AI in written work.
One of the biggest splashes made by AI in its most recent introduction to mainstream technology was its use in writing and generating content. Suddenly, instead of having to spend hours brainstorming creative vocabulary and carefully crafting sentences to make them sing, consumers and businesses alike can simply provide AI chatbots with prompts and, in the blink of an eye, something that would’ve taken them hours to create pops up on their screen.
Needless to say, our minds were blown, and almost immediately, our virtual world had grown exponentially and was full of possibilities. People who disliked language and writing were suddenly able to communicate quickly and professionally, and those who already had the skills could generate oodles of content almost instantaneously.
It wasn’t long, however, before excitement turned to trepidation, and fear emerged like a dark cloud enveloping blue skies.
From Wide-Eyed Wonder To the Monster Under the Bed
The shift was dramatic, and the turn was quick.
Somehow, almost overnight, we’d gone from sheer admiration of AI chatbots to a paralysing fear of their potential – and not just their potential, but their incredible ability to mimic human writing. What started off as, “wow, it sounds just like a human!” quickly became, “oh no, it sounds just like a human”. And, at the core of this, the real concern was our ability (or rather, potential inability) to tell the one from the other. If AI can generate written content that sounds like it was written by a real person, how will we tell the difference between the two?
This question quickly dominated the conversation – of course, it didn’t, by any means, stop the progress of AI and the improvement of the technology, but it did mean that all the incredible things that were done were somewhat overshadowed by skepticism and concern (to different degrees, depending on who you asked).
But, while fear and anxiety about whether or not we’d be able to separate human-generated content from AI-generated content ran rampant, there was a significant philosophical question we seemed to have neglected, in a broader sense.
That is, does it even matter?
Very controversial, no doubt about it. Many people will eagerly and boldly assert that being able to tell the difference between something that’s been written by a human or an AI chatbot is absolutely essential in maintaining our humanity, authenticity and originality. Some maintain that the distinction allows for more effective quality control.
On the other side of the argument, AI fiends and fanatics (and also, just ordinary, level-headed people) argue that if the quality is high and the content that is produced is accurate and useful, it shouldn’t matter whether it was written by Jane next door or a complex programme powered by artificial intelligence. Put crudely, if it works, well, who cares?
Regardless of the merit of either argument, the fact is, people are worried about AI, and whether or not it’s logical or reasonable, they want to be able to tell AI-generated content from human writing. It makes us feel more comfortable and more in control of technology. But, as we’ve quickly come to learn, that’s no easy task.
The AI Detection Dilemma
As quickly as AI chatbots emerged, so too did AI detection programmes, claiming that all you had to do was input the suspicious text and the software would be able to tell you, by means of a percentage or some kind of rating, whether or not it was written by a human or generated by AI.
Of course, it wasn’t long before it became clear that these programmes were pretty ineffective, and to be honest, it’s not necessarily even their fault entirely. In my opinion, the crux of the issue is that the fundamental principles upon which they’re based and the problem they’re attempting to solve are the very reasons why these detection programmes can never be accurate. They’re doomed to fail.
AI chatbots are constantly improving, moving towards the ultimate goal: producing text that is indistinguishable from human writing. AI detection programmes are intrinsically reactionary – they exist purely to evaluate what these chatbots produce, so the parameters they use to make judgements are based entirely on AI content, and those parameters are constantly changing. They have to identify “markers” of AI slop, which are weeded out by means of pattern detection – for instance, a specific word that chatbots tend to use quite frequently (one that often comes up is “delve”).
However, AI chatbots write the way they do because they’re trying to mimic human writing style. So, if a chatbot is using the word “delve” a lot, for example, it’s because it’s found that habit within the data sets it’s been given. And, if AI detection tools use things like that as their main red flags to indicate the use of AI, how are they discerning between a chatbot copying the way a human uses a word and a real human who just tends to overuse a specific word? Well, they can’t, because if the AI chatbot is doing its job properly, there will be no discernible difference – that’s the point.
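The flaw is easy to see in a toy sketch. Suppose a detector simply flags text containing words often labelled as “AI markers” (the marker list and threshold here are invented purely for illustration – real tools are more sophisticated, but they face the same underlying problem):

```python
# A deliberately naive "AI detector": flag any text that contains
# words from a hypothetical list of AI marker words.
AI_MARKERS = {"delve", "tapestry", "furthermore", "moreover"}

def looks_like_ai(text: str, threshold: int = 1) -> bool:
    """Return True if the text contains at least `threshold` marker words."""
    words = (w.strip(".,;:!?") for w in text.lower().split())
    hits = sum(1 for w in words if w in AI_MARKERS)
    return hits >= threshold

# A chatbot mimicking human prose trips the flag...
print(looks_like_ai("Let us delve into the details."))      # True
# ...but so does a human who simply likes the word "delve".
print(looks_like_ai("I love to delve into old archives."))  # True
```

Both sentences are flagged, yet only one is (hypothetically) machine-written – because the marker itself was learned from human writing, the detector has no way to tell the copy from the original.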
And the issue compounds: the chatbots are constantly progressing and improving, so their “style” is ever-changing. The detection programmes have to keep up, and so they find ever more “indications” of AI use to flag – indications that are actually just habits mimicked from human writing. Essentially, they’re just highlighting things that humans do, and ways in which they write, that these AI programmes have managed to copy.
Ultimately, the longer this push and pull between AI chatbots and AI detection programmes goes on, the closer chatbot output will get to genuinely human-quality writing – and detection tools will still be flagging it as AI-generated. The real problem, though, is that along the way, these detection tools will be (and already are) flagging content that is written by people. Because the fundamental principle behind the model is flawed.
So, long story short? AI detection models are doomed to fail, and that’s part of the reason why people have started to adopt a new, potentially more problematic, approach: that is, do everything you can to avoid habits adopted by AI chatbots so that your writing will stand out from AI slop.
Sounds great, I know. But what does that actually mean? Well, because the things being flagged by detection tools as “AI red flags” are almost always just very normal styles of writing, specific words and even particular punctuation marks, we’re now moving into territory in which we risk completely transforming the way we write (for the worse) in order to avoid being branded an AI con-artist.
Basically, we’re so scared of the perception of having used AI to generate content rather than writing it ourselves that we’re willing to ruin the quality of our content in the process of seeming authentic.
Perception of authenticity over actual authenticity and true quality.
Write Like a Robot To Prove You’re Not One
Sound crude? Well, in my opinion, the idea of having to “write like a robot to prove you’re not one” is as crude as it is real. The notion has now gone beyond subtle changes in writing style to become a mandated list of “dos” and “don’ts”. A few things have already made the list, but the problem, it seems, is that the list is growing – and not only that, it’s starting to include things that are, and always have been, major parts of language.
The most recent “red flag” – or “AI indicator”, whatever you want to call it – is supposed to be the em dash. And you know what, straight off the bat, I’ll agree that chatbots, ChatGPT especially, do seem to enjoy a good em dash. They’re sprinkled into content a little more frequently than I, personally, would prefer.
But here’s the thing: the chatbots aren’t using it incorrectly. In fact, for many people, this use of the em dash may stand out simply because they don’t use it as a punctuation mark themselves – whether that’s because they don’t like it and it’s not really part of their writing style, or because (I suspect in many cases) they were simply never taught to use it. So suddenly, the em dash is the hidden weapon of ChatGPT and ought to be avoided at all costs!
The reality, however, is that this is just the latest example of how AI chatbots are progressing and improving in their efforts to mimic human writing. In fact, they’re writing better, and the em dash is an indication of this.
So, does this mean we should stop using the em dash altogether, because some people may take that as a clear sign that you’ve used AI to generate your content?
No, absolutely not. And to be perfectly frank, I think that to do this would be a fundamental failure in our attempt to maintain our humanity. To randomly discard a whole punctuation mark because of an AI trend is utter madness. And what’s worse? This will just be the beginning. What’s next, the comma? No more capitalisation because ChatGPT capitalises sentences too well? Fewer paragraph breaks because Grok has gotten too good at separating ideas?
No.
The answer is: keep calm. Keep writing. And don’t let the fear of misperception make you become that which you aim to avoid.
Our writing is human because it’s being written by humans, and there’s so much more to it than the fundamentals of punctuation, grammar and syntax. It’s creativity and it’s personality, and for now, those things are still ours.
Indeed, it’s more important than ever that we hold onto the em dash, and that we hold on tight. We find ourselves on the precipice of a slippery slope, and if we’re not careful, the em dash is about to become a martyr of the English language. At the end of the day, what we’re seeing is the result of panic over something that is scary because it’s misunderstood. AI doesn’t need to threaten all that we know and all that we are, but if we give in to fear, we’re going to do more damage by ourselves than AI would ever do on its own.
So, I’ll say it again: keep calm and keep writing. The em dash may have a gun against its proverbial head, but it’s our finger that’s on the trigger, and ultimately, we get to decide how we treat language in the future.