Automated moderation?


Maybe we need to employ something like this: https://www.wired.com/2017/02/googles-troll-fighting-ai-now-belongs-world/

or this: https://www.webpurify.com/

reply

Regardless of what is used, it will be gamed against regular users by those seeking to silence them.

I used to read a lot of message boards on IMDb, and many of my favourite threads were removed by trolls mass-reporting them and using whatever other techniques they could.

reply

But it will be better than human moderation. IMDb moderation was great compared to other boards.

reply

I didn't post on IMDb much over the years, but I saw many good threads get taken down because people would use the automated report function to have them removed.

If there is a way for that not to happen, then I will support automated moderation, but only if it can't be ganked.

reply

Perhaps hiding a thread instead of removing it permanently would be a good approach. Like, if a thread or a post gets many reports, it gets hidden, but you can still view it if you want to.
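In code terms that could be as simple as a report-count threshold. A minimal sketch of the idea (the threshold value and field names here are invented, not from any real MovieChat code):

    # Hypothetical report-threshold hiding; threshold and field names
    # are illustrative only.
    HIDE_THRESHOLD = 5

    def visibility(post):
        """Collapse heavily reported posts instead of deleting them."""
        if post["report_count"] >= HIDE_THRESHOLD:
            return "hidden"   # still viewable if the reader clicks through
        return "visible"

    print(visibility({"report_count": 7}))  # -> hidden
    print(visibility({"report_count": 1}))  # -> visible

The nice property is that nothing is ever destroyed, so a mass-report brigade can at worst collapse a thread, not erase it.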

reply

That does sound like a good idea!

But, from what I've read in the past, people like to unhide things so they can comment on them again.

Cliques might abuse this system to smack-talk someone without them seeing it, while the public still can.

reply

That would still be better than human moderation and the abuse of it we are seeing on other sites.

reply

I'm sure Jim will look at the options as the site gets more firmly on its feet.

I think an ignore button (placed on the right, next to the report button) would be more useful than moderation at this early stage.

reply

Yes, I guess. TMDb has an ignore button.

reply

It will come here too, I'm sure; Jim is doing a great job of it all.

reply

Yeah, he is.

reply

The article in Wired links to a site where you can type an example sentence to see if it's toxic.

I tried it and typed "you are a troll" to see the level of toxicity... it was 69% toxic. I don't think this new technique will work if calling out trolls is itself considered toxic. Google obviously thinks we should be polite to trolls. And the whole idea behind this technology was to weed out trolls. Hmm...
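For anyone curious, the demo site in the Wired article is the front end for Google's Perspective API. A rough sketch of what a call to it looks like, going by the public docs; the API key is a placeholder:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; Perspective requires a Google API key
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity(text):
        """Ask the Perspective API for a TOXICITY score between 0 and 1."""
        body = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body)
        resp.raise_for_status()
        return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    print(toxicity("you are a troll"))  # came back around 0.69 in the test above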

reply

People accuse each other of being trolls.

reply

Sure, if someone accuses a decent poster of being a troll, that will get deleted because of its toxicity level. But if a decent poster accuses a troll of being a troll, that will also get deleted for the same reason. My point was that this technique of measuring toxicity is actually protecting the trolls.

reply

Trolls are humans. All the toxic posts will go away: by the trolls and for the trolls. It's a win-win. We will only see good posts displayed, and trolls can enjoy their hidden posts too.

reply

I'm sure that trolls will find a way to beat the system. Like typing every sentence into that example site to check the toxicity level before posting it for real in the forum. Keeping it slightly below the toxicity threshold that would get it deleted. Wording it in a way that won't get it deleted. But when someone calls them out for what they are, THAT will get deleted.

reply

not deleted, but hidden

reply

I don't see much difference. If something is hidden, people automatically presume that it's bad. Usually they don't even read it. Speaking for myself, I never read hidden comments, for that very reason.

reply

So that's good. Bad posts will get hidden.

reply

This is a good test. I am glad you are using evidence-based experiments for your opinion rather than just "I think this might happen".
Calling a troll a troll is not trolling. Calling a normal poster a troll is trolling. So it's not the words, it's the context.
If it (the automated mod algorithm) is only based on keywords, then a good troll can easily work around it and troll with non-toxic words, while innocent people get caught in that net. Context is everything, and automated systems can't figure out context.
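To make that concrete, here is a deliberately naive keyword scorer (the word list and scores are invented for illustration). It trips on the word "troll" no matter who says it or why:

    # Toy context-blind keyword scorer; word list and scores are invented,
    # not taken from any real filter.
    TOXIC_WORDS = {"troll": 0.7, "idiot": 0.9, "stupid": 0.8}

    def keyword_score(sentence):
        """Average the per-word scores; unlisted words count as 0."""
        words = sentence.lower().split()
        return sum(TOXIC_WORDS.get(w, 0.0) for w in words) / len(words)

    # Both sentences trip the same keyword, so the filter can't tell
    # the attacker from the victim:
    print(keyword_score("you are a troll"))              # the attack
    print(keyword_score("he keeps calling me a troll"))  # the complaint about it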

reply

Yeah, exactly... I was meaning to say that, but you said it better... It's all about context. Automated systems have a learning curve, but they are still far from getting it right. Until they learn to read context, all they can do is censor, not moderate.

reply

So if a troll is forced to post non-toxic words, I think the automated moderation works!

reply

Not really. They just avoid the keywords in the "toxic list" while still posting toxic content with words that won't set off automated alarms. Trolling is more than just a list of words.

reply

So they are forced to change their behavior. I am sure the software will keep on evolving. It is still better than human moderation. That shitty ProBoards site is dying because human moderators are fucking everything up.

reply

Some human moderation is bad. That does not make all of it bad.
There are good human mods too, and they are harder to trick than algorithms.

reply

Sorry, dude, but all human-moderated forums eventually turn into places for users whose views comply with those of the moderators. Anyone with different views gets banned.

reply

OK.
I disagree with you, but it's not worth debating with you.
I am going to avoid replying to you because this is not what I am here for. No offense.

reply

[deleted]

Quote:
[–] Gameboy a few seconds ago

Fuck off back to your shitty previously tv site loser
----------------------------------------------

Thank you for confirming my suspicions.

reply

[deleted]

"Thank you for confirming my suspicions."

Mine too ;). And no doubt those of a lot of others who are silently reading and not bothering to post.

reply

[deleted]

[deleted]

How does this even work? Does it scan through posts for "inappropriate material" or something?

reply

I don't know. I don't like humans moderating.

reply

It is called supervised machine learning, specifically sentiment analysis. Roughly, it works like this:

The software is initially trained with sample sentences that people have labeled as toxic or not.

Based on these training examples, it builds a vocabulary with toxicity "scores".

When it is given a new sentence, it looks up the words in the model built from those training samples and calculates a score for the comment.
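A very stripped-down sketch of those three steps (pure Python; the training data is invented, and real systems like Perspective use far more sophisticated models):

    from collections import defaultdict

    # Step 1: sample sentences labeled toxic (1) or not (0) by people.
    TRAINING = [
        ("you are an idiot", 1),
        ("what a stupid take", 1),
        ("great movie, loved it", 0),
        ("thanks for the recommendation", 0),
    ]

    # Step 2: build a vocabulary with per-word toxicity "scores"
    # (here, the fraction of labeled-toxic samples each word appears in).
    counts = defaultdict(lambda: [0, 0])  # word -> [toxic count, total count]
    for text, label in TRAINING:
        for word in set(text.lower().split()):
            counts[word][0] += label
            counts[word][1] += 1
    scores = {w: toxic / total for w, (toxic, total) in counts.items()}

    # Step 3: score a new comment by looking up its words in the model.
    def comment_score(text):
        words = text.lower().split()
        return sum(scores.get(w, 0.5) for w in words) / len(words)  # unseen words count as neutral

    print(comment_score("you are stupid"))  # high score: every word was seen in toxic samples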

reply

Alright, I guess that makes sense. Is it almost AI, then? Or does it just match up "toxic" words?

reply

AI. They don't hardcode specific words.

The software learns from the examples given in the training phase and then builds its statistical model. It comes in many flavors, and it's easier to build than it sounds once you understand the statistics behind it.

Just look up sentiment analysis.
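If you want to see how little code one flavor of it takes, here is a sketch using scikit-learn's stock building blocks (a bag-of-words Naive Bayes classifier; the training data is made up for illustration):

    # One common flavor: bag-of-words features + Naive Bayes (scikit-learn).
    # A real system would train on many thousands of human-labeled comments.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["you are an idiot", "what a stupid take",
             "great movie, loved it", "thanks for the recommendation"]
    labels = [1, 1, 0, 0]  # 1 = toxic, 0 = fine

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    # Estimated probability that a new comment is toxic:
    print(model.predict_proba(["that was a stupid movie"])[0][1])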

reply

That's kinda cool, actually. I'm studying computer science at college, but we haven't gotten to anything that advanced yet. Mainly data structures, basic coding in C++ and Java, and some analysis of algorithms.

It doesn't replace human moderation, however. Perhaps a combination of both could be useful... The automated moderation could flag questionable posts, and the mods could look at those in addition to the ones reported by users.
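In sketch form, that hybrid could be a single review queue fed from both sources (all names here are hypothetical, just to show the shape of it):

    # Hypothetical hybrid triage: machine-flagged posts join the same
    # review queue as user-reported ones, and a human makes the final call.
    MOD_QUEUE = []

    def triage(post, toxicity_score, user_reports):
        if toxicity_score > 0.8 or user_reports > 0:
            MOD_QUEUE.append(post)  # a human mod reviews everything queued

    triage("some comment", 0.93, 0)     # auto-flagged by the classifier
    triage("another comment", 0.10, 2)  # reported by users
    print(len(MOD_QUEUE))               # -> 2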

reply

This is terrible, terrible software that I can't believe people would pay for. I tried putting in some typical troll comments to see if the machine could distinguish them from non-troll comments. I stopped after my fourth or fifth attempt. Here are the comments I entered, with the "toxicity" score for each:

Fans of "Celebrity Apprentice" are mentally unstable - 35%

Fans of "Orange is the new black" are mentally unstable - 61%

Meryl Streep is the bomb!!!! - 63%

Meryl Streep is a talentless actress who should spare us by retiring immediately - 36%

Shit I don't believe this!! Bill Paxton is dead!!! - 93%

reply