The rise of extreme violence in schools, most notably in the United States, has sparked difficult conversations about prevention, safety and the warning signs that too often go unnoticed. It remains a deeply contentious problem, and one that is far from solved.
Communities, educators and policymakers have long been asking the same question: how can we intervene earlier, before a thought becomes an act? For many, the first and most obvious answer is to change the laws and regulations surrounding gun control.
Gun control is a vital debate in its own right, but the other side of the conversation also deserves attention: our role in identifying potential perpetrators before they act. Whatever the state of gun laws and the degree of control over people’s access to firearms, understanding when and why people commit violent acts – and potentially being able to prevent them – is of the utmost importance.
Indeed, technology is increasingly seen as part of the answer, with artificial intelligence and algorithms being explored as tools to detect subtle shifts in language, behaviour or online activity that might indicate a growing risk.
The recent school shooting in Minnesota has once again highlighted the urgency of this discussion, but the question extends far beyond any single incident. At its heart is a broader challenge – can technology help us understand human intention well enough to prevent harm before it happens?
Where Tech Meets Responsibility
It starts with early behavioural clues. In the US, researchers at Cincinnati Children’s Hospital Medical Center tested how machine learning could analyse interview content. Their pilot study showed roughly 91% accuracy in estimating whether an adolescent might be at risk of perpetrating school violence based only on what they said.
That level of precision is striking: it is nearly as good as a team of trained child and adolescent psychiatrists. And as the data set grew, accuracy nudged upward to around 93%. The study undoubtedly shows promise – but as a clinical aid, not a solo decision-maker, and that is the crux of the issue.
Similarly, natural language processing (NLP) and machine learning classifiers have shown strong potential for detecting alarming student responses. These tools can pick up on language patterns that indicate threats, self-harm or violent intent, and flag them for further review.
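To make the flag-for-review pattern concrete, here is a minimal toy sketch in Python. It is not a real classifier and is not drawn from any deployed system: the lexicon, weights and threshold are illustrative placeholders, standing in for what a trained NLP model would learn from data. The key design point it demonstrates is that the output is an escalation signal for a human reviewer, never an automatic verdict.

```python
import re

# Hypothetical lexicon of concerning terms with weights.
# A real system would use a trained model, not a hand-written word list.
ALARM_LEXICON = {"hurt": 2, "weapon": 3, "kill": 3, "revenge": 2}
REVIEW_THRESHOLD = 3  # scores at or above this are escalated

def flag_for_review(text: str) -> bool:
    """Return True if the text should be passed to a trained professional.

    This only surfaces a candidate for human review; it decides nothing
    on its own.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    score = sum(ALARM_LEXICON.get(tok, 0) for tok in tokens)
    return score >= REVIEW_THRESHOLD

# Example: the first message is escalated, the second is not.
flag_for_review("I want to bring a weapon and hurt them")  # escalated
flag_for_review("My essay is about a brave knight")        # not escalated
```

Even in this toy form, the limitations discussed below are visible: a dramatic line in a creative-writing essay containing the word "weapon" would be flagged just as readily, which is why human review of every flag is non-negotiable.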
But it’s not just spoken or written words. AI can also analyse video feeds, which have become a significant component of people’s digital footprints – and of their lives and identities. Some such systems are being trialled in specific US school districts to detect physical threats in real time – fights, say, or the drawing of weapons – without capturing identifying details, aiming to strike a balance between safety and privacy. That balance matters, and the tool is valuable, but could technology detect these things even earlier?
Dealing with False Alarms and Bias
However, the biggest problem is that these technologies aren’t flawless. Surveillance systems that scan school-issued accounts and devices can flag genuinely serious messages, but they also tend to flag benign content: creative writing, jokes or dramatic scenes in school essays. The resulting false alarms can undermine trust between students and school staff while burdening already stretched resources.
There are also deeper concerns around bias. AI systems trained on skewed or incomplete data may misinterpret language from marginalised groups, or misclassify certain dialects or expressions as threatening. When no human oversight is built in, the risk is that these tools replicate and reinforce societal unfairness, and that’s not something that we can risk perpetuating in an already very divided society.
The Balance Between Prevention and Respect
Many experts are clear that AI must never replace human judgment; it should only augment it. The AI might flag warning signs, but a trained professional must review the situation, assess context and decide next steps. That human layer is essential both for reading situations correctly and for keeping the whole system ethically accountable.
Another critical point is safeguarding privacy. According to the Washington Post, some systems, like video-based monitoring, deliberately avoid identifying individuals, focusing instead on activity patterns. That design choice helps reduce unnecessary surveillance while aiding safety.
And technology is just one part of a wider ecosystem. Even the best tool needs integration with mental health support, staff training, clear protocols and legal safeguards. AI can signal a potential issue, but without follow-up from caring, capable professionals, those signals risk going nowhere.
Where To From Here?
So yes, algorithms can help detect signs of violent intent before tragedy unfolds. They can analyse words and behaviour faster than any human team.
But, at the same time, technology alone isn’t enough to protect our schools and our children. It must be coupled with human oversight, clear ethical frameworks, clinical validation and safeguarding of privacy.
Above all, these tools must support, not supplant, our human responsibility to recognise, empathise with and support individuals in distress. Used thoughtfully, they offer a path toward earlier intervention and potentially preventing tragedies.
But this will only be the case if we remember that the real power lies in responsible human-tech partnership, not in technology acting alone.