Another Role for Bots: Helping to Patrol the Chat Pathways


One of the most terrifying parts of being the parent of pre-teens in the digital era has to be that during their push to become independent and to develop their own social sphere, the first place they want to spread their wings is online. As a technology professional with an active online presence myself, I probably know a little too much about the dangers kids can face online for my own kids' taste. I am all too aware of the problems with messaging channels that promote user anonymity and modes of communication that promise impermanence (with photos and posts "vanishing" within seconds, fostering an illusion of actions without consequences). For one thing, they provide a new medium in which kids can be cruel to each other. More terrifyingly, they have also become prowling grounds for pedophiles, who can easily create any identity they wish and carefully groom their selected victims with little fear of being traced.

Law enforcement organizations have stated very strongly that certain messaging platforms should be strictly off-limits to young teens. Multiplayer online gaming has been shown to be just as fraught with threats. Unfortunately, as parents we know that we can do our best to set limits to protect our kids, but our children will find ways to foil our restrictions as easily as they opened childproof locks when they were toddlers. And even if you lock down all access in your own home, once your child sets foot in a friend's living room, with a different set of house rules, they do not always make the choices you wish they would.

Research happening right now among my fellow computational linguists may be one way to start making the online streets safer. A significant effort has been made toward processing natural language not just to derive its meaning, but also to determine the intentions, personality, or even the age of the speaker. A few years ago, one of the biggest splashes at our annual research conference was a presentation on identifying double entendres. This year, I spoke with someone working on determining the age of a writer, and with one of the authors of a paper on detecting when a speaker is practicing deception. The second researcher and I discussed how her work has fantastic implications for the safety of online channels. We already have representatives of law enforcement staking out online communities in an attempt to catch criminals before they can act, but these groups are small and cannot monitor everything. A "bot," however, whose purpose is not to converse but simply to listen, could potentially process ALL of the text communicated via a messaging platform in a way no human team possibly could. A learned model trained on the indicators of predatory behavior could flag potential predators for review by human staff, hopefully enabling rapid intervention.
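
To make the idea concrete, here is a minimal sketch of what such a listening bot's flagging step might look like. To be clear, this is not any researcher's actual model: the tiny labeled corpus, the off-the-shelf scikit-learn pipeline (TF-IDF features plus logistic regression), and the review threshold are all hypothetical stand-ins for a real model of predatory-language indicators.

```python
# A minimal sketch of a "listening bot" that scores chat messages and
# escalates high-risk ones for human review. The training data, model,
# and threshold below are hypothetical stand-ins, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: 1 = grooming/predatory language,
# 0 = benign chat. A real system would train on far larger corpora
# curated with law enforcement, not a toy list like this.
messages = [
    "hey what school do you go to? are your parents home?",
    "don't tell anyone we talk, it's our secret",
    "anyone want to trade skins for the new map?",
    "gg everyone, same time tomorrow?",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression as a simple stand-in classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

REVIEW_THRESHOLD = 0.8  # arbitrary; in practice tuned to review-staff capacity

def risk_score(message: str) -> float:
    """Estimated probability that a message shows predatory indicators."""
    return model.predict_proba([message])[0][1]  # probability of class 1

# The bot never replies; it only listens and escalates.
incoming = "this is just between us, ok? don't tell your mom"
score = risk_score(incoming)
print(f"risk={score:.2f}, escalate={score >= REVIEW_THRESHOLD}")
```

The key design point is that the bot stays out of the conversation entirely: it only assigns scores and hands the highest-risk exchanges to human staff, so people make the judgment calls while the machine handles the volume no human team could read.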

Online communication is already my preferred medium for business and a lot of my social interaction, despite the fact that I came of age with only a 300-baud modem in my house. We know that millennials and my kids' generation are taking to online life like otters to a river. Anything we can do to keep that river free of crocodiles will make our burden as parents just a little bit easier.


Lisa Michaud

Lisa Michaud is a Data Architect on the Enterprise Architecture team at Aspect. She has 20 years of research experience in the field of Natural Language Processing / Computational Linguistics and pursues diverse interests in user modeling, dialogue, parsing, generation, and the analysis of non-grammatical text. She holds a PhD in Computer Science and has been published in multiple international journals, workshops, and conferences in the fields of user-adaptive interaction and Computational Linguistics.