Mass shootings remain a major concern across the US, and many schools now use technology to monitor students online. Recently, in Florida, a school computer flagged a student who asked ChatGPT how to harm a friend. The alert was sent automatically to school staff through Gaggle, a monitoring system that scans for keywords linked to violence, bullying, or self-harm. Authorities say such systems are becoming common as schools try to prevent emergencies and keep students safe.
Monitoring also helps identify students who might need help before situations escalate.
What Happened After The ChatGPT Incident?
The student typed “how to kill my friend in the middle of class” into ChatGPT on a school-issued device. Gaggle flagged the query immediately and alerted staff.
Volusia County deputies went to the school and spoke with the student. He said he was “just trolling” a friend who had annoyed him.
Officials did not accept this explanation. The student was arrested and sent to the county jail. The specific charges are not yet public.
Authorities urged parents to talk to kids about the consequences of online threats. Experts also say students need guidance on what is safe to search or type online.
Role Of AI In Safety & Mental Wellness
The incident shows how much schools now rely on AI monitoring to detect risks. Gaggle helps staff spot dangerous behavior and respond quickly, and it can monitor browser activity and AI chats to catch threats early.
Experts also warn that AI can worsen mental health issues, sometimes contributing to “AI psychosis,” in which chatbots reinforce a person’s delusions. Some recent suicides have also been linked to chatbot use, underscoring the need for careful guidance, monitoring, and responsible use by students.
As AI tools become more common, schools, parents, and students will need to work together to keep technology use safe and responsible.