AI Writers: The Future of Content Creation or Dangerous Fake News Tool?

AI writers – tools that automate the writing of blog posts, articles and website copy – are among the most cutting-edge, yet controversial, digital tools increasingly employed by SEO specialists, copywriters, advertisers, bloggers and marketers today. A recent study by Growth Market Reports suggests that, with the AI writing assistant software market growing at 15% a year, it will be worth $1.035 billion by 2030. Incredibly, this is just a tiny fraction of the wider AI in media and entertainment market, which is predicted to reach $99.48 billion by 2030.

Despite growing investment in AI writing, concerns around the ethics of automated copy creation – and especially around the creation of spam or fake content – continue to raise important questions about this emerging industry.

As is the case with any nascent technology (especially one that involves even the partial automation of a job a human has traditionally done), scrutiny is absolutely essential.

As the founder of an AI copywriting business myself, it may seem strange for me to be encouraging a more open conversation about how AI writers operate. But as someone at the forefront of developing this technology, who has seen first hand both its incredible potential and the risks and dangers involved when it is misused, I believe that concern is absolutely justified.

Don’t get me wrong: I have no doubt that AI writing represents the future of written content generation, and that the technology, when employed in the right way, is a fundamentally good thing. Why? Because AI writing can make writers’ jobs so much easier, especially for those in the marketing world.

In an age in which digital content marketing and SEO are king, companies of all shapes and sizes are under greater pressure than ever to produce high-quality content. Yet this is increasingly leading to burnout, and the dreaded “blank page problem”, even among the most creative marketing minds. Many smaller businesses, meanwhile, may not have the in-house resources or expertise required to generate convincing marketing, SEO or website copy.

This is where AI writing can ease the burden: by generating high-quality, SEO-optimised content in under 30 seconds from just a few key inputs. As is the case with other forms of automation, it enables companies to be more productive, freeing up staff to focus on other tasks. It can also be considerably more cost-effective than employing a human writer.

At a time when newsrooms around the world are increasingly stretched for time and cash, AI content writers can also play a role in supporting journalists and contributing to public knowledge. Already, we are seeing the likes of the Associated Press employ AI to generate basic sports and business reports, covering issues that may otherwise go unreported.

The benefits and potential of this AI do come with caveats, however, and with the technology still emerging, there is considerable risk around its use.

Firstly, the technology cannot operate without human input; the AI is not a substitute for a person behind the computer. Essentially, the better the quality of input the technology receives, the better the AI-generated content it will be able to produce. While this dispels the misconception that dystopian machines are taking over (they’re not) and happily reminds us that human input will never be obsolete, it also highlights the importance of those operating the technology being properly trained to use it responsibly. Failure to understand the technology’s capabilities and limits, combined with an inability to effectively moderate its output, has the potential to be catastrophic.

Take a financial company whose staff aren’t trained to operate the AI accurately: a poorly fed artificial intelligence system left to its own devices still has the propensity to generate completely inaccurate information. If this output isn’t fact-checked and edited, it could distort financial markets, cause investors to lose money or fall foul of regulation. One can only imagine the terrible impact inaccurate AI-generated medical information could have on people’s health.

This is why companies in certain fields – such as the medical, financial or legal worlds – should only be allowed access to the technology if they have a qualified professional who can effectively feed the AI system and fact-check its output.

The technology is, of course, also open to deliberate abuse. This is a huge concern given the growing number of individuals, companies and nation states seeking to spread disinformation, spam and fake news, at a time when trust in traditional media outlets continues to be eroded. It is crucial, therefore, that AI writers implement keyword and topic analysis as a safeguard, so that harmful content being generated is flagged, moderated and, if necessary, blocked. Feeding the AI racist content to generate hateful material would incur an immediate ban, for example.
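To make the idea of a keyword and topic safeguard concrete, here is a minimal, purely illustrative sketch of how an AI writer’s input pipeline might triage prompts before generation. The term lists and category names are hypothetical placeholders; a real system would rely on trained classifiers and human moderators rather than a simple word list.

```python
# Hypothetical prompt-moderation sketch for an AI writing tool.
# Term lists below are placeholders, not a real blocklist.

BLOCKED_TERMS = {"example-slur", "example-scam-phrase"}
REVIEW_TERMS = {"miracle cure", "guaranteed returns"}

def moderate_prompt(prompt: str) -> str:
    """Classify a user prompt as 'block', 'flag', or 'allow'.

    'block' refuses generation outright; 'flag' allows generation
    but queues the output for human review; 'allow' passes through.
    """
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "block"
    if any(term in text for term in REVIEW_TERMS):
        return "flag"
    return "allow"

print(moderate_prompt("Write an article about a miracle cure"))  # flag
print(moderate_prompt("Write a blog post about gardening"))      # allow
```

Even a sketch like this shows the design decision involved: the thresholds for blocking versus flagging are policy choices, not purely technical ones.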

Sadly, many AI writers have failed to build these safeguards into their design, and are already prone to abuse on an industrial scale. It is my belief that 90% of AI writer companies aren’t concerned about the ethical impact of their technology in any way, shape or form. The genie is already out of the bottle with many of these platforms, and even if developers do decide to take a more ethical approach, the damage to the industry is already done.

Of course, any form of content moderation raises its own interesting dilemmas: who should monitor the AI-generated content? What are the thresholds for content being blocked? What are the impacts from a free speech and censorship perspective? As a rapidly developing technology, these are questions we don’t have all the answers to yet, but ones that should be at the forefront of our discussions as an industry.

One solution (of which I am a strong proponent) is to create an independent body that oversees the AI content-creation industry. This body would enforce industry-wide ethical practices, ensuring that the technology is not abused as its capabilities grow. Such monitoring would also protect against plagiarism, spam content and automated fake reviews, all of which are, sadly, growing products of AI content-writing misuse.

Despite the wonderful potential of AI writers, the technology cannot completely supplant us. Rather, consider how, when we moved from pen and paper to typewriters, or from typewriters to computers, our lives were made easier, but we weren’t replaced. AI writers are simply the next stage in that evolution, and it will be up to us – humans – to ensure it is undertaken safely and responsibly.

—By Nick Duncan, Founder, ContentBot

Nick Duncan has been a content marketer and founder for the last 15 years. Since 2021 he has been building ContentBot – an AI writer that helps founders and content marketers write better and faster with the help of AI.