‘Valorant’ developer outlines how it’s tackling communications abuse

Riot Games' Valorant

Riot Games has addressed issues surrounding toxicity within Valorant and its chat systems, outlining what it’s already done and what’s still to come.

In a blog post published today, Riot Games shared its thoughts on abuse and toxicity within Valorant.

“While we can never remove the bad conduct itself from individuals, we can work to deter behavior such as insults, threats, harassment, or offensive language through our game systems,” explained the blog, whilst adding that there’s also room to “encourage ‘pro-social’ behaviour”.

Outlining what the studio already does to combat toxicity, the blog states that Riot currently uses player feedback to identify which individuals are particularly toxic, so it can punish them faster. It also touches on the game’s recently added muted words list, which aims to let players choose which words and phrases they do not want to see or hear in the game.

Valorant. Credit: Riot Games, Stacey Henley

On the topic of punishments, Riot says that in January alone it issued 400,000 voice and text chat restrictions and 40,000 bans.

Looking ahead, Riot shared plans to expand its efforts to tackle toxicity in voice chat, saying the following:

“When a player experiences toxicity, especially in voice comms, we know how incredibly frustrating it is and how helpless it makes us feel both during the game, as well as post game. Not only does it undermine all the good in Valorant, it can have lasting damage to our players and community overall. Deterring and punishing toxic behaviour in voice is a combined effort that includes Riot as a whole, and we are very much invested on making this a more enjoyable experience for everyone.”

Valorant. Credit: Riot Games

One of the improvements Riot has outlined is “generally harsher punishments for existing systems”, such as imposing tougher penalties on players who have been flagged via Valorant’s automated systems. On the same topic, Riot also hopes to implement faster punishments for certain “zero tolerance” words, with the aim of administering these immediately after they are used.

With regard to voice chat moderation, Riot admits that “voice chat abuse is significantly harder to detect compared to text (and often involves a more manual process)”, though it adds that it’s making gradual improvements to the way this is handled.

Finally, Riot is also trialling a “reporting line with player support agents – who will oversee incoming reports strictly dedicated to player behaviour – and take action based on established guidelines”. If the ongoing beta shows promise, Riot has said that it will consider expanding the scheme to more regions.

In other news, Chrono Cross is finally coming to Europe – here’s everything else that was announced during last night’s Nintendo Direct.

