Imagine you are chatting online and a stranger contacts you: a young kid from your area. You don’t really have time for her, but she insists, telling you about her typically adolescent problems. What a bore! But once again the girl asks for your help: she has resorted to seeking support from strangers online because nobody is willing to listen to her troubles, which smell of abuse. In fact she is so desperate she even offers sexual favors in exchange for your attention. What do you do?
If you behave normally and try to help by suggesting some sensible course of action… tough luck. You have just been flagged as a ‘level 1, possible pedophile’. The girl, who is in fact an advanced computer program, will try to extract as much of your personal data as it can while engaging you in an increasingly explicit conversation. An internal counter goes up every time your answers contain certain keywords, and should you reach level 3 your life is going to turn sour very soon.
The “conversational agent” is called Negobot. It was designed by a Spanish research team at the University of Deusto in Bilbao and is being field-tested right now. Think of it as an evolution of Apple’s Siri: an artificial intelligence smart enough to keep up with the latest teenage fads and slang. It uses SMS acronyms just like a 13-year-old, sometimes making the same typos most schoolkids make. It probably can’t pass a full Turing test, but Negobot sure knows how to look the part of underage prey online.
The real difference from the many scions of Clippy, however, lies in its judgement algorithms, which are based on game theory (remember A Beautiful Mind?). Each phrase typed by its interlocutor is scored according to subject, word choice and more – and the total score dictates the ideal way to proceed with the conversation.
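To make the mechanism concrete, here is a minimal sketch of how a keyword-weighted running score with discrete suspicion levels might work. The keywords, weights, and thresholds below are invented purely for illustration; they are not Negobot’s actual values, and the real system is far more elaborate than a word match.

```python
# Hypothetical sketch of a keyword-scoring escalation counter.
# All keywords, weights, and thresholds are invented for illustration;
# they are NOT taken from the Negobot paper.

SUSPICIOUS_KEYWORDS = {
    "meet": 2,
    "alone": 2,
    "secret": 3,
    "photo": 3,
}

# (minimum total score, suspicion level), checked from highest down.
LEVEL_THRESHOLDS = [(10, 3), (5, 2), (1, 1)]


def score_message(message: str) -> int:
    """Score one message by summing the weights of matched keywords."""
    words = message.lower().split()
    return sum(w for kw, w in SUSPICIOUS_KEYWORDS.items() if kw in words)


def suspicion_level(total_score: int) -> int:
    """Map the running total score to a discrete suspicion level (0-3)."""
    for min_score, level in LEVEL_THRESHOLDS:
        if total_score >= min_score:
            return level
    return 0


# Feed a short conversation through the counter:
total = 0
for msg in ["hi, how are you?", "keep this a secret", "send me a photo"]:
    total += score_message(msg)
print(total, suspicion_level(total))
```

The point of the sketch is the design shape the article describes: each utterance only ever adds to the counter, so a long enough conversation steered by the bot tends to ratchet the level upward.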
Negobot sounds like a really cool idea. The Web is already acclaiming it as the savior of our children, and one day it might even become that – yet I am not convinced. This probably has something to do with my past as an IT journalist, during which I witnessed so many “revolutionary” programs bork up for the most improbable reasons. Let’s just say I wouldn’t feel too safe being diagnosed or judged by a computer program, especially one that could label me a criminal.
What really bothers me, however, is that the paper describing the concept seems to sketch not an assessment tool but an entrapment one. I could live with a passive “honeypot” bait: a fake child who just hangs around, ready to notice ambiguously creepy behavior. But a robot designed to pester random strangers with a sad story escalating to indecent proposals and cybersex?
Reading the research, it sounds like the only way to save yourself from Negobot is to blacklist it; otherwise it will go on until it thinks you have said something incriminating. But how inhuman would it be to close your door in the face of a poor abused kid looking for help? I am very glad I don’t hang around chatrooms, because I wouldn’t want to choose between being labeled an asshole or a pedophile.
Now, the chatbot is of course still evolving. These major issues will probably be fixed sometime in the future, so it would be unjust to simply disparage the project. It obviously has potential. But it still misses the point – by far.
All the research agrees on a few simple facts. Statistically, the “dirty old man in a raincoat” type of child molester is just a fictional character most people will never meet. Kids do get abused, but this overwhelmingly happens at home, at the hands of relatives or parents; next on the list are priests and religious figures in general, followed by sports coaches and teachers. This is unglamorous and sad, so real and frightening that we struggle to accept the very concept of it.
This is also what harms any actual effort to fight and prevent child abuse. Nobody wants to be the one instilling scary suspicions in a youngster’s mind, so society collectively keeps telling itself the easier-to-swallow tale of the bogeyman. Pity it’s useless.
If you really want to defend children, just explain to them that they should tell everyone whenever anyone behaves creepily around them, and that they should never be alone with a grownup, no matter how trustworthy he or she looks. A bit paranoid, perhaps, but it works – especially as part of a sensible overall sexual education.