Since the advent of online gaming, there have been bad apples taking to their microphones and keyboards, spewing toxicity and harassment. Chances are that if you’ve played any major online game, competitive or not, you’ve been the target of some sort of harassment. Thankfully, companies like Modulate AI are hoping to mitigate the online harassment that persists today.
Harassment has sadly become synonymous with the online culture of the gaming industry. Verbal abuse in all its forms plays out online every day. If you’re a player of colour, a woman, or part of the LGBTQ2S+ community, that abuse often takes the form of slurs, sexism, and other hateful comments. Some of the most popular online games, like Counter-Strike 2, League of Legends, and Call of Duty, harbour some of the worst communities for abuse. Data sourced by Modulate AI shows that 67 percent of players say they would likely stop playing a game if another player exhibited toxic behaviour. Some studios have their own suite of enforcement tools and policies in place. However, Modulate AI is assisting many studios with holistic solutions to help protect players and foster a better experience online.
Modulate AI began building ToxMod in 2020. The company’s technology now runs in the background of many popular games played by millions around the globe. Powered by Amazon Web Services (AWS), ToxMod uses machine learning to analyze in-game voice chat and flag toxic behaviour. Using this technology, Modulate AI can monitor the nuances of a conversation, better assisting moderators in flagging harassment and doling out strikes or bans appropriately. Speaking with Terry Chen, Modulate AI’s COO, and Carter Huffman, the company’s CTO, I discovered the hurdles in maintaining a database of harassing and toxic behaviours, flagging false positives, and localizing toxicity detection.
Online harassment in games comes in many different forms. Unfortunately, for players who are just trying to enjoy some time with friends or a few quick rounds in a competitive setting, slurs and other remarks have become commonplace. Modulate AI has built a database of the phrases you’d expect to find there. However, the gaming populace is diverse, with age ranges just as wide, which raises challenges in identifying new dialects, slang, and other evolving data points.
“When you localize to a new language, you have to understand so many of the cultural nuances,” Chen explains. “A lot of the words seem innocuous but have many meanings, or nefarious meanings. They’re sometimes playful because we like to detect positivity too. We actually work with child psychologists, linguists and sociologists within the respective languages that we work in. We currently support about 18 languages, but we’re working on developing a few more right now.”
While ToxMod runs through Modulate AI’s technology first and foremost, the company does leverage third-party contractors to assist. With global dialects being taken into account, Modulate AI has to vet its contractors and ensure proper measures are in place when flagging harmful content. Understanding various dialects and inflections means that Modulate AI can provide game developers like Riot Games, Infinity Ward, Rockstar Games, and Lucky VR with more accurate flags for online harassment and toxicity. In online games, it’s common for players to exchange swear words during moments of excitement. Understanding the context and inflection with which words are used helps circumvent false positives.
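To make that concrete, here’s a minimal, purely illustrative sketch of context-weighted scoring. It is not Modulate AI’s actual model; every name, weight, and signal here (the keyword severities, a “directed at a player” flag, an excitement signal derived from inflection) is a hypothetical stand-in:

```python
# Minimal sketch of context-weighted toxicity scoring (illustrative only;
# not Modulate AI's actual model). All names and weights are hypothetical.

KEYWORD_SEVERITY = {
    "slur_example": 1.0,    # placeholder for a flagged slur
    "swear_example": 0.3,   # mild profanity, often non-toxic in context
}

def toxicity_score(transcript: str, directed_at_player: bool, excitement: float) -> float:
    """Combine keyword severity with conversational context.

    excitement is a 0..1 signal from prosody/inflection analysis;
    celebratory swearing should score lower than a directed insult.
    """
    base = max(
        (sev for word, sev in KEYWORD_SEVERITY.items() if word in transcript.lower()),
        default=0.0,
    )
    if base == 0.0:
        return 0.0
    context_multiplier = 1.0
    if directed_at_player:
        context_multiplier += 0.5   # insults aimed at a person are worse
    if excitement > 0.7 and not directed_at_player:
        context_multiplier -= 0.4   # "celebration swearing" is usually benign
    return max(0.0, min(1.0, base * context_multiplier))

# A shouted swear during a clutch play scores low; the same word aimed
# at a teammate scores noticeably higher.
print(toxicity_score("swear_example!", directed_at_player=False, excitement=0.9))      # ~0.18
print(toxicity_score("swear_example loser", directed_at_player=True, excitement=0.2))  # ~0.45
```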
“We have a team of data labelling contractors that we work with, who we reach out to and train,” Huffman explains. “We look for speakers across the globe in the languages and dialects of those languages. As part of that hiring or contracting process, part of the interview is asking, ‘What kinds of games are you familiar with? What kinds of dialects are you most familiar with?’ These kinds of things. We have a whole group of people that we work with, specifically distributed so they have expertise that nobody on our team does.”
“We couple that with having, at the very least, [a group] for each major language and major region (Mexico versus Spain, for example), having somebody on the team who’s at least fluent in some dialect in that region. [Someone] who will have a better ability to interface with the kind of contractors that we find to help us label data. That labelling of data is not only for our machine learning models but also for transcription for toxicity detection. We use that not only for training models but also for reference materials, like moderator training.”
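To picture the dual purpose Huffman describes, here’s a hypothetical shape for one labelled training example; the field names are invented for illustration and don’t reflect Modulate AI’s real data schema:

```python
# Hypothetical shape of a labelled training example, reflecting the dual use
# Huffman describes: labels feed both model training and moderator reference
# material. All field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class LabelledClip:
    audio_id: str
    language: str               # e.g. "es-MX" vs. "es-ES": dialect matters
    transcript: str             # human-verified transcription
    toxicity_labels: list[str]  # e.g. ["profanity"]; empty if clean
    labeller_id: str            # contractor fluent in the clip's dialect

example = LabelledClip(
    audio_id="clip_0042",
    language="es-MX",
    transcript="...",           # redacted example
    toxicity_labels=[],
    labeller_id="labeller_07",
)
```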
Using machine learning, ToxMod can proactively pick up on harmful voice conversations as they occur in-game. The technology flags potentially harmful conversations for toxicity analysis, and developer studios can then proactively take action and clean up their online communities. The difficulty in this process lies in maintaining the bank of terms and ensuring that ToxMod properly analyzes inflection and how certain words are used.
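Conceptually, that flow (analyze, score, flag, hand off to the studio) resembles a simple streaming pipeline. Below is a rough sketch with invented types and thresholds; ToxMod’s actual interfaces aren’t public in this article:

```python
# Rough sketch of a proactive voice-moderation pipeline, as described in the
# article: analyze voice chat, flag potentially harmful clips, and queue them
# for studio review. All types and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class VoiceClip:
    player_id: str
    transcript: str   # produced by speech-to-text
    score: float      # toxicity score from the model (0..1)

@dataclass
class Flag:
    player_id: str
    transcript: str
    severity: str     # "low" | "medium" | "high"

FLAG_THRESHOLD = 0.5

def classify(clip: VoiceClip) -> Flag | None:
    """Flag a clip for moderator review only if it crosses the threshold."""
    if clip.score < FLAG_THRESHOLD:
        return None
    severity = "high" if clip.score > 0.85 else "medium" if clip.score > 0.65 else "low"
    return Flag(clip.player_id, clip.transcript, severity)

def process_stream(clips: list[VoiceClip]) -> list[Flag]:
    """Moderators only ever see the flagged subset, not every conversation."""
    return [flag for clip in clips if (flag := classify(clip)) is not None]
```

The key design point, per the article, is that human moderators review flagged clips rather than listening to everything, which is what makes moderation feasible in games with millions of players.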
“We consider it our responsibility to detect, and escalate with the appropriate level of severity, the different categories of behaviours that may be code of conduct violations,” Huffman tells me. “So you know, there’s a profanity and adult language category. We consider it our job to correctly escalate and prevent profanity and adult language. Studios can customize that, right? If we show them our standard profanity category and the studios [disagree], there is room for opinion and customization. But we consider it our responsibility to be accurately estimating that stuff.”
“So, when a new slur appears and becomes a part of the gamer lexicon, it’s our job to correctly escalate that. Studios can catch something if we miss it and say, ‘Hey, you know, we’re seeing some of this content, and you’re not flagging it,’ and then we’ll take that feedback. It’s not so much ‘We found this new terminology, and we think you should consider it.’ It’s our job to escalate that and give them the ability to deal with it if they need to.”
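The defaults-plus-overrides model Huffman describes could look something like the following sketch, where the categories, thresholds, and merge behaviour are all hypothetical:

```python
# Hypothetical sketch of studio-customizable escalation categories, echoing
# Huffman's description: the vendor ships sensible defaults, and each studio
# can override them to match its own code of conduct.

DEFAULT_CATEGORIES = {
    "profanity_adult_language": {"escalate": True, "min_severity": 0.4},
    "slurs_hate_speech":        {"escalate": True, "min_severity": 0.2},
    "violent_threats":          {"escalate": True, "min_severity": 0.1},
}

def effective_policy(studio_overrides: dict) -> dict:
    """Merge a studio's code-of-conduct choices over the vendor defaults."""
    policy = {k: dict(v) for k, v in DEFAULT_CATEGORIES.items()}
    for category, overrides in studio_overrides.items():
        policy.setdefault(category, {}).update(overrides)
    return policy

# A studio with a mature-rated game might opt out of profanity escalation
# entirely while keeping slur detection strict.
mature_studio = effective_policy({"profanity_adult_language": {"escalate": False}})
print(mature_studio["profanity_adult_language"])  # {'escalate': False, 'min_severity': 0.4}
```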
One of the biggest success stories so far revolves around Call of Duty, which has long been notorious for toxic voice-chat communities. As Modulate AI explains, the longer ToxMod is active, the further toxicity in a game declines. The company says toxicity decreases roughly 30 to 40 percent within the first week, and it continues falling towards a 70 percent reduction. “It’s that exposure reduction that we have at really large scales that is tied directly to retention. Retention increases in new or returning players to a franchise. They come back, they’re exposed to a less toxic environment than they remember, and they stick around longer.”
“One studio that we worked with started out with pretty severe penalties for toxicity and a very high level of confidence and burden of proof. We were collaborating with them on different strategies for how else they could affect this behaviour and reduce toxicity without such severe penalties. Even light touches, like a message, or being kicked from a match or a room that you can rejoin right away, indicate ‘Hey, you’re doing something that violates the code of conduct,’ and just interrupt that flow. That can cut the amount of toxicity being emitted in a conversation by 80, 90 percent even, and that doesn’t require banning anybody.”
With ToxMod, Modulate AI can work with specific development studios and advise them on how to improve their online communities. Each developer can choose how to act on violations under its own online policies. Do they shadow-ban an account? Hard ban the player from the game after a certain number of strikes? In some cases, developers even dole out IP bans, restricting players from creating a brand new account and rejoining the online community.
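That graduated approach maps naturally onto an escalation ladder. Here’s a simplified, hypothetical sketch; the strike thresholds and action names are invented for illustration and don’t describe any particular studio’s policy:

```python
# Simplified, hypothetical sketch of a strike-based enforcement ladder, in the
# spirit of the article: light interventions first, bans reserved for repeat
# or severe offenders. Thresholds and action names are invented.

ENFORCEMENT_LADDER = [
    (1, "warning_message"),   # interrupt the behaviour, no real penalty
    (2, "kick_from_match"),   # can rejoin immediately
    (4, "temporary_mute"),
    (6, "shadow_ban"),        # player can speak; nobody hears them
    (8, "hard_ban"),
    (10, "ip_ban"),           # prevents creating a fresh account
]

def action_for(strike_count: int) -> str:
    """Pick the harshest rung of the ladder the player has reached."""
    action = "no_action"
    for threshold, rung in ENFORCEMENT_LADDER:
        if strike_count >= threshold:
            action = rung
    return action

print(action_for(2))  # kick_from_match
print(action_for(9))  # hard_ban
```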
There’s still ample work to be done within the gaming space to combat online toxicity. However, the work being done by Modulate AI is undeniably important. For marginalized groups, younger players, and quite honestly everyone, no one should have to log in to their favourite game expecting to be hit with harassment. ToxMod ships as a variety of plugins for various game engines and voice infrastructures, and it’s accessible to smaller development studios as well as large-scale projects. With services powered by AWS, the data collected and analyzed is anonymized and protected to ISO 27001 standards.
This interview was edited for language and clarity.
Image credit: Modulate AI