Original Post — Direct link

The global chat I'm in is currently discussing the merits of slave ownership, while also spreading anti-vaxx and anti-semitic conspiracies.

Why not just block them? Well, I did that. But is that the kind of first impression of the community you want to give new players? Not having a report option was fine in Early Access, but now that the game has released to the wider gaming audience, it will attract more of these types. Just look at any Steam forum for a preview.

It's not a widespread problem yet, but it can quickly spiral out of control (Mordhau, anyone?), and a report option would also help combat the RMT bots I've already seen pop up.

edit: This was already discussed two years ago.

Why am I not surprised that the Capital-G Gamers immediately showed up in this thread... muh Freeze Peach in vidya games!

9 months ago - /u/moxjet200 - Direct link

We care about the health of in-game chat. As proof, I can point to the fact that we're using an auto-moderation system that we've been tuning for a year and that is quite costly compared to other options (it's far more than just a blocked list of words). If we're seeing extreme off-topic dialogue like this in chat, I'll make sure we invest the resources into creating more moderation tools, like right-click to report, and keep team members active in muting, banning, warning, etc.

Also, re: gold sellers. This is one of the major things I was considering when thinking about the monetization model. Admittedly, I think if we were free-to-play with some pay-for-convenience we'd be generating slightly more revenue for the studio. However, with a paid game we can ban gold sellers' accounts and force them to make a choice - stop, or pay for the game again and make a new account. I suspect that after remaking accounts and paying the game's cost enough times, this won't be lucrative enough for them to continue. We've already banned a ton of them.

9 months ago - /u/moxjet200 - Direct link

Originally posted by Duder_Mc_Duder_Bro

Why does the automod prevent someone from using the word "bilirubin"?

Bilirubin is an orange-yellow pigment that occurs normally when red blood cells break down.

I guess I can understand why "vodka" is banned, although that's funny because potions are occasionally labeled as beer in the game.

But the bilirubin one really surprised me. Why is EHG anti-bilirubin?

The moderation system is an industry-leading tool that contains probably millions of words. We commonly go through and adjust what it has moderated for future cases; with its tolerance thresholds it gets a ton right, but on some terms it's still too conservative - like the case you just stated.

It moderates quite a lot that we don't immediately understand… then we bring up Urban Dictionary, get taught how creative people are, and appreciate the tool even more. Lol
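
As a rough illustration of the tolerance-threshold-plus-manual-adjustment loop described above, here is a minimal, hypothetical sketch in Python. It is not EHG's actual vendor tool; the scores, names, and allowlist mechanism are assumptions made for the example:

```python
# Hypothetical sketch of a score-based chat filter with a tolerance threshold
# and a manually maintained allowlist for reviewed false positives.
# Term scores and names are illustrative; this is not EHG's actual tool.

from dataclasses import dataclass, field


@dataclass
class ChatFilter:
    # term -> severity score from the vendor tool (0.0 = benign, 1.0 = certain violation)
    term_scores: dict[str, float]
    # block any term scoring at or above this tolerance threshold
    threshold: float = 0.7
    # terms a human has reviewed and explicitly allowed (e.g. "bilirubin")
    allowlist: set[str] = field(default_factory=set)

    def is_blocked(self, message: str) -> bool:
        """Immediate, automated check run on every chat message."""
        for word in message.lower().split():
            if word in self.allowlist:
                continue
            if self.term_scores.get(word, 0.0) >= self.threshold:
                return True
        return False

    def mark_false_positive(self, term: str) -> None:
        """Manual adjustment: future cases of this term pass through."""
        self.allowlist.add(term.lower())


# "bilirubin" is flagged at first, then allowed after human review.
chat_filter = ChatFilter(term_scores={"bilirubin": 0.8, "vodka": 0.75})
assert chat_filter.is_blocked("my bilirubin levels are high")
chat_filter.mark_false_positive("bilirubin")
assert not chat_filter.is_blocked("my bilirubin levels are high")
```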

9 months ago - /u/moxjet200 - Direct link

Originally posted by DarkLordShu

I am also an EA adopter and I have happily used the chat to help others, but if I have to fear losing my Steam account due to moderation, I might never use the chat to help anyone again. You have to be careful not to create a situation like in Heroes of the Storm, where no one talks because bans are automated and appeals are answered by copy-and-pasters who just say no. I understand OP's point, but moderation can lead to disengagement as we try to protect ourselves.

It has to be a pretty serious offense for a ban, like selling gold and spamming it in chat. I can assure you that being helpful in chat will not lead to an account ban.

9 months ago - /u/moxjet200 - Direct link

Originally posted by Lillith_Was_Right

This thread is from September 2021 -- https://old.reddit.com/r/LastEpoch/comments/pid5t6/hate_speech_in_global_is_rife_please_provide_a/hbpcgeh/

Two and a half years ago you said the same thing about creating a way to report in chat. I'm not trying to be a jerk here, so please don't take this as an attack; I'm just wondering why it never happened in two and a half years.

Auto-moderation is not the answer, and chat filters are certainly not the answer. Allowing reports, and having someone read those reports and take action, is the answer.

If you were to see the volume of accurate hits that auto-moderation picks up, I think you'd be swayed to agree that it's beneficial. The volume of what would need to be moderated versus the size of our team is incongruent. There's also the fact that it's immediate, whereas human moderation is inherently slow due to response time. We'll be bringing in more assistance there, but even quadrupling our moderation folks wouldn't be enough.

9 months ago - /u/moxjet200 - Direct link

Originally posted by Lillith_Was_Right

Oh, believe me, I don't think auto-moderation is useless - it's very useful - only that it is not, on its own, the answer to this problem.

This problem also requires human intervention. Humans report the issues and other humans take appropriate action.

Right-click to report is the first part of that.

I think that’s makes good sense 👍 thank you for taking the time to give your feedback