Students and Workers Are Now Using AI To Do Their Work For Them, and Many People Can't Tell

According to a study by RM Technology, many secondary school teachers are apprehensive about their students using AI apps for assignments. Out of the 500 surveyed, an overwhelming 66% feel they’re frequently handed work penned by AI.

Alarmingly, around 9% admit they struggle to discern between student-produced assignments and those crafted by AI.

Mel Parker, a consultant for RM Technology and former headteacher, emphasises the necessity for oversight. She said, “There definitely needs to be government regulation, especially from a safeguarding point of view.”

Beyond regulations, she insists on comprehensive training on the burgeoning tech. “They need to know how they can talk to students about good use of AI… actually what is cheating and what is good practice?” she added.

Despite teachers’ concerns, students have a different viewpoint. Miya Crofts, a student from Greenwood Academy, sees AI as a tool for support. “I use it a lot for online homework, revision tool… it’s available whenever I need it on AI programs,” she shared.

On the other hand, Tito Thomson O’Reilly opined, “It takes away the social interaction… it’s just a straightforward answer.”

Safety First: Online Concerns

With the proliferation of AI tools, safety, especially for younger users, should be the biggest priority. Charlotte Ainsley, a digital safeguarding consultant, warns of the lurking dangers on online platforms, especially for children who might inadvertently access age-inappropriate content.

She stressed the need for proper AI regulations and said, “We don’t want to find ourselves in the same situation we did with social media.”

All of these perspectives are valid, and adults share a collective responsibility to push for sensible regulation. Even if AI were banned outright, children would still try to use it; it is better, then, to take precautionary measures first.


AI in the Workplace: A Helping Hand for Job Seekers?

Students may find AI useful in their studies, but UK workers also see its usefulness in professional settings.

A survey by cybersecurity firm Kaspersky suggests that almost half of workers would rely on ChatGPT, an AI-based system, to refine their CVs and cover letters. The belief is that this advanced tech can enhance their profiles, making them stand out in the competitive job market.

David Emm, a principal security researcher at Kaspersky, warns of the pitfalls of relying too heavily on AI, saying, “Job seekers need to be careful when using ChatGPT… their attempts to stretch the truth could lead to issues.”

He also spoke on the importance of data protection, advising companies to train their staff on appropriate AI use.

Training Staff On Safe AI Use

To ensure a productive working relationship between employees and AI systems, employers should run targeted training sessions that demystify the AI tools in use.

These sessions should break down the functionalities of AI solutions, explaining how they can assist in daily tasks rather than replace human roles.

"AI is a tool, not a replacement. Understanding its capabilities and limitations is key to effective collaboration," says Sarah Williams, an HR expert.

Another aspect to consider is ethical training, particularly when AI is used in decision-making processes.

Staff should be educated on the ethical considerations surrounding AI, such as data privacy and bias. This equips them with the knowledge to question AI recommendations when necessary and make more informed decisions.

“Ethical considerations are not just for the tech team; they’re a collective responsibility,” states Dr. Mark Thompson, a specialist in AI ethics. This dual approach to training—functional and ethical—will prepare employees to work alongside AI in a more effective and responsible manner.