
How do you feel about some of the in-game chat moderation being done by an AI?

Nerror Member, Alpha One, Alpha Two, Early Alpha Two
edited January 2023 in General Discussion
I don't know a whole lot about AI, but from my limited experience with it and from what I have read about it, it's pretty much become the norm for the big social media sites to use AI/machine learning to moderate their content. We've reached a point where the AI can speak pretty much any language and understand context fairly well. Facebook, Tinder, LinkedIn, Instagram and more use it.

Those are huge platforms, so it's obviously impossible to hire enough humans to do the job. And with potentially a million or more Ashes players, that's a lot of chat being generated every second, so I can see the potential for AI moderation in the game as well. I have zero clue how much it would cost Intrepid to implement it, but for the sake of argument, let's say they can do it.

But is it a good idea? And if so, how much power do you want to give the AI? Should it be able to ban people? For example, if it identifies one of those annoying gold sellers spamming, using their usual tricks of weird symbols and spaces to circumvent simple word filters, should it just straight up ban the account? Or should it only mute and flag the account for review by an active GM?
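To illustrate the kind of tricks I mean, here's a rough sketch of the normalization a filter would have to do before matching (purely hypothetical; the leet-speak map and word list are made up):

    import re
    import unicodedata

    # Hypothetical sketch: flatten the "w e i r d  s y m b o l" tricks gold
    # sellers use before running a plain word-filter match.
    LEET = str.maketrans("013457$@", "oleastsa")  # rough leet-speak map

    def normalize(message: str) -> str:
        # Fold stylized unicode down to plain ASCII where possible
        text = unicodedata.normalize("NFKD", message).encode("ascii", "ignore").decode()
        text = text.lower().translate(LEET)
        # Drop spaces/punctuation so "g o l d" and "g.o.l.d" both become "gold"
        return re.sub(r"[^a-z]", "", text)

    BANNED = ["buygold", "cheapgold"]  # made-up filter list

    def is_goldseller_spam(message: str) -> bool:
        flat = normalize(message)
        return any(term in flat for term in BANNED)

    print(is_goldseller_spam("B U Y  G 0 L D at shady-site dot com"))  # True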

What about guild or group chat, and even /whispers? Obviously nothing is private in the game, including whispers; assume it's all being logged anyway, even if only for a short period. Do you want the AI to moderate whispers, so that if someone starts calling you all kinds of slurs, or threatens you, or wishes you bodily harm, the AI catches it before it even reaches you, blocks the message, and flags the offender for the GM team to review?

There are some obvious downsides, like the AI not exactly being infallible, but I can see some upsides as well.

Thoughts?

Comments

  • Liniker Member, Alpha One, Alpha Two, Early Alpha Two
    Absolutely not. Terrible, terrible idea. We need human beings to make decisions; we have more than enough examples of how bad it is when you have automated bans in video games.
    Recruitment open - Our site: Click here
  • Azherae Member, Alpha One, Alpha Two, Early Alpha Two
    It would be mostly fine, but the adaptations of toxicity would be quick, and more importantly, 'meant to create false positives'.

    Humans will adapt their 'methods of being toxic' faster than most language programs can keep up because humans can easily substitute terms. It's not that the AI wouldn't detect the new terms after a while, it's that people find ways to change the context to make 'regular conversation' cause things to flag.

    Definitely not impossible, but I personally feel that the input-to-effectiveness ratio might be too low, for now. A full custom solution might be worth implementing, but this would almost certainly lead to 'English Only'.

    And of course, some people are 'naturally foulmouthed or lewd' in specific ways that their friends or allies are okay with, and while that would probably be 'okay' to just 'ignore chat between people on friends lists', that also has its issues.
    ♪ One Gummy Fish, two Gummy Fish, Red Gummy Fish, Blue Gummy Fish
  • MybroViajero Member, Alpha Two
    edited January 2023
    I think AoC should have an honor and veracity system (something like LoL's honor system). Players with honor 10 could make reports (from honor 5 upward, the moderators/devs grant the honor), and the moderators could check those reports more closely; that way more ground would be covered.

    In Rust, server moderators give more attention to reports from people whose previous reports were confirmed.
  • Ludullu Member, Alpha Two
    I think players should just be given word-filter and word-report tools. You see a word you dislike? You report it (which automatically filters it on your end). Enough reports of the same word would filter it for most people (excluding anyone who checked a "don't filter words" option). And GMs could get flags for people who had their words filtered (rough sketch below).

    In other words, I just don't think that AI would be able to properly control the chat.
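    A rough sketch of the thresholding I have in mind (all numbers and names made up):

        from collections import defaultdict

        # Made-up sketch of the report-driven word filter idea.
        FILTER_THRESHOLD = 100   # reports before a word is filtered for most people
        report_counts = defaultdict(int)
        personal_filters = defaultdict(set)
        globally_filtered = set()

        def report_word(reporter_id: str, word: str) -> None:
            personal_filters[reporter_id].add(word)  # filtered for the reporter right away
            report_counts[word] += 1
            if report_counts[word] >= FILTER_THRESHOLD:
                globally_filtered.add(word)          # GMs could also get a flag here

        def is_filtered(user_id: str, word: str, opted_out: bool = False) -> bool:
            if word in personal_filters[user_id]:
                return True
            return not opted_out and word in globally_filtered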
  • Dolyem Member, Alpha Two, Early Alpha Two
    No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.
  • Kilion Member, Alpha Two
    I don't want anyone living out their Orwellian wet dreams, even in a freaking game. There is no need to have anyone read and oversee every bit of information going back and forth between players. We are not criminals; we are not children. If some dispute happens, sure, ask a mod, and if necessary they can dig up the conversation log.

    No need for a piece of AI to sniff out every conversation as if everyone were under permanent suspicion of doing something bad. Adding to that: Intrepid would have to buy that piece of tech from someone else. You can bet your behind on that AI processing all the data and funneling it back to someone else.

    AI surveillance puts all players under suspicion for no reason and is a risk to privacy.

    The answer is probably >>> HERE <<<
  • daveywavey Member, Alpha Two
    Ashes will have in-game GMs. If all the AI does is flag potential breaches up to the in-game GMs, then that's fine.
    This link may help you: https://ashesofcreation.wiki/
  • Azherae wrote: »
    It would be mostly fine, but the adaptations of toxicity would be quick, and more importantly, 'meant to create false positives'.

    Humans will adapt their 'methods of being toxic' faster than most language programs can keep up because humans can easily substitute terms. It's not that the AI wouldn't detect the new terms after a while, it's that people find ways to change the context to make 'regular conversation' cause things to flag.

    In general terms, sure. But for Ashes? People are gambling with their accounts when experimenting with substitutes and workarounds in this case. Accounts aren't that easy to replace once you have a levelled up character.
    daveywavey wrote: »
    Ashes will have in-game GMs. If all the AI does is flag potential breaches up to the in-game GMs, then that's fine.

    I think this would be an ok solution.

  • NishUK Member
    edited January 2023
    Dolyem wrote: »
    No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.

    Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they will bend the vague words "unsafe" and "toxic" for all they're worth.
  • Dolyem Member, Alpha Two, Early Alpha Two
    NishUK wrote: »
    Dolyem wrote: »
    No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.

    Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they will bend the vague words "unsafe" and "toxic" for all they're worth.

    You're not wrong. Honestly, I'd make it a remote job that can be done online, and screen people with psychological tests to vet them.
  • maouw Member, Alpha One, Alpha Two, Early Alpha Two
    It would probs be most effective as an assistive tool to mods/community managers - so that it's always a human making the final decision, but they don't have to read every word ever typed by the community hahaha.

    You could also reduce the "human adaptation" delta by publicly marking posts that the AI has picked up, so the community just focuses on reporting things it DOESN'T pick up.

    ---

    I think we could learn a thing or two from the Twitch moderation community - they basically specialize in live moderation of big data (there is no way mods can read every message in a twitch chat).

    That said, I know some moderation teams use what's basically equivalent to a social credit system - keeping a hidden score for every user in chat, so mods can quickly identify repeat offenders.

    I dunno if I like that. But I can't deny it works.
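    If I had to sketch the hidden-score part (all numbers invented; not how any real team does it):

        import time

        # Invented sketch of a hidden "repeat offender" score with decay,
        # so old strikes fade after sustained good behaviour.
        HALF_LIFE = 7 * 24 * 3600   # score halves every week

        class OffenderScore:
            def __init__(self):
                self.score = 0.0
                self.last_update = time.time()

            def _decay(self):
                now = time.time()
                self.score *= 0.5 ** ((now - self.last_update) / HALF_LIFE)
                self.last_update = now

            def add_strike(self, weight: float = 1.0):
                self._decay()
                self.score += weight

            def needs_mod_attention(self) -> bool:
                self._decay()
                return self.score >= 3.0   # arbitrary threshold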
    I wish I were deep and tragic
  • Songcaller Member, Alpha One, Alpha Two, Early Alpha Two
    Dolyem wrote: »
    NishUK wrote: »
    Dolyem wrote: »
    No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.

    Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they will bend the vague words "unsafe" and "toxic" for all they're worth.

    You're not wrong. Honestly, I'd make it a remote job that can be done online, and screen people with psychological tests to vet them.

    I disagree with psychological tests to be able to participate in a form of relaxation.
  • Dolyem Member, Alpha Two, Early Alpha Two
    Neurath wrote: »
    Dolyem wrote: »
    NishUK wrote: »
    Dolyem wrote: »
    No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.

    Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they will bend the vague words "unsafe" and "toxic" for all they're worth.

    You're not wrong. Honestly, I'd make it a remote job that can be done online, and screen people with psychological tests to vet them.

    I disagree with psychological tests to be able to participate in a form of relaxation.

    I mean for a moderation job.
  • Songcaller Member, Alpha One, Alpha Two, Early Alpha Two
    Oh okay no worries.
  • maouw Member, Alpha One, Alpha Two, Early Alpha Two

    I like this thread.
    I wish I were deep and tragic
  • Diamaht Member, Braver of Worlds, Alpha One, Alpha Two, Early Alpha Two

    Kilion wrote: »
    I don't want anyone living out their Orwellian wet dreams, even in a freaking game.

    This. No thanks.

    It always breaks down the same way.

    Q: "Well OK, but who decides?"
    A" "Well, me of course."


  • Dygz Member, Braver of Worlds, Kickstarter, Alpha One, Alpha Two, Early Alpha Two
    Terrified!!!
  • maouw Member, Alpha One, Alpha Two, Early Alpha Two
    Diamaht wrote: »
    Kilion wrote: »
    I don't want anyone living out their Orwellian wet dreams, even in a freaking game.

    This. No thanks.

    It always breaks down the same way.

    Q: "Well OK, but who decides?"
    A" "Well, me of course."


    I mean, I agree this is 100% an issue right now, but I feel like it has a really simple solution:

    Public logs of moderation decisions.

    My theory is that if you just leave everything in the light and give it fresh air, then it's much harder for mould to grow. Maybe I'm too naive?
    I wish I were deep and tragic
  • Diamaht Member, Braver of Worlds, Alpha One, Alpha Two, Early Alpha Two
    edited January 2023
    maouw wrote: »
    Diamaht wrote: »
    Kilion wrote: »
    I don't want anyone living out their Orwellian wet dreams, even in a freaking game.

    This. No thanks.

    It always breaks down the same way.

    Q: "Well OK, but who decides?"
    A" "Well, me of course."


    I mean, I agree this is 100% an issue right now, but I feel like it has a really simple solution:

    Public logs of moderation decisions.

    My theory is that if you just leave everything in the light and give it fresh air, then it's much harder for mould to grow. Maybe I'm too naive?

    It's not a bad idea, really.

    The issue is that an auto-moderation function is the opposite of fresh and free. It's biased by the person creating the software. It's an instance where the few decide for the many, and it eliminates context. It's not as large an issue in a video game chat log as it is when applied to wider society, but the same concerns apply.

    You wouldn't be able to blame Intrepid for filtering out certain types of content/speech; however, an AI implies that software is making decisions and (more importantly) taking actions against players based on its own human-created and flawed algorithm.

    If I'm facing in-game discipline from Intrepid, I want to hear from Intrepid in some capacity, not from software.

    Edit: Also, as for public logs, it's a slippery slope there too. It can be a bit of a public flogging (and no end of YouTube content) if the nasty things people say to each other are aired out for all to see. It would also present the entire community in that light.
  • Azherae Member, Alpha One, Alpha Two, Early Alpha Two
    Diamaht wrote: »
    maouw wrote: »
    Diamaht wrote: »
    Kilion wrote: »
    I don't want anyone living out their Orwellian wet dreams, even in a freaking game.

    This. No thanks.

    It always breaks down the same way.

    Q: "Well OK, but who decides?"
    A" "Well, me of course."


    I mean, I agree this is 100% an issue right now, but I feel like it has a really simple solution:

    Public logs of moderation decisions.

    My theory is that if you just leave everything in the light and give it fresh air, then it's much harder for mould to grow. Maybe I'm too naive?

    It's not a bad idea, really.

    The issue is that an auto-moderation function is the opposite of fresh and free. It's biased by the person creating the software. It's an instance where the few decide for the many, and it eliminates context. It's not as large an issue in a video game chat log as it is when applied to wider society, but the same concerns apply.

    You wouldn't be able to blame Intrepid for filtering out certain types of content/speech; however, an AI implies that software is making decisions and (more importantly) taking actions against players based on its own human-created and flawed algorithm.

    If I'm facing in-game discipline from Intrepid, I want to hear from Intrepid in some capacity, not from software.

    I don't know how Amazon does it, but for the ones I've poked at or worked with/on, this isn't how it works.

    The 'AI' parses two things, 'sentiment' and 'intent'. What it lacks most of the time is 'context'.

    So it can tell you 'at X time on Y date Z player used a clearly racial slur in a chat'. And good ones can tell you 'whether the tone of the conversation in the surrounding lines was positive or negative or neutral' based on some other types of 'emphatic' language.

    It SHOULDN'T then be able to ban or probably even mute the person. But it COULD be set to the point where if a person SAID something and another person reported it, the flag that would come through would be more like 'Confirmed usage of racial slur in negative sentiment at X time', probably with the required snippet from chat logs included.

    A human can glance over that easily.

    Similarly, the AI could issue warnings under the same conditions, because a warning is a useful point to tell a player 'hey you're about to go over the line maybe, and you are now being watched more'. They could then complain that the algorithm is badly tuned and explain the thing they said. There's no punishment then, it could be outright ignored.

    An AI to tell people 'hey you might be close to breaking the rules here, tone it down' would not be a problem, and I consider this part of chat moderation.
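    To make the 'flag, don't act' shape concrete, a minimal sketch (the field names and values are invented, not anything Intrepid has described):

        from dataclasses import dataclass, field

        # Invented structure: the AI only DESCRIBES an incident; a human decides.
        @dataclass
        class ModerationFlag:
            player: str
            timestamp: str
            matched_term: str
            sentiment: str               # 'positive' / 'negative' / 'neutral'
            chat_snippet: list = field(default_factory=list)  # surrounding lines

            def summary(self) -> str:
                return (f"Confirmed usage of '{self.matched_term}' in "
                        f"{self.sentiment} sentiment at {self.timestamp}")

        flag = ModerationFlag("PlayerZ", "2023-01-15 21:04", "racial slur",
                              "negative", ["line before", "flagged line", "line after"])
        print(flag.summary())   # the one line a GM has to glance over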
    ♪ One Gummy Fish, two Gummy Fish, Red Gummy Fish, Blue Gummy Fish
  • Diamaht Member, Braver of Worlds, Alpha One, Alpha Two, Early Alpha Two
    edited January 2023
    Azherae wrote: »

    I don't know how Amazon does it, but for the ones I've poked at or worked with/on, this isn't how it works.

    The 'AI' parses two things, 'sentiment' and 'intent'. What it lacks most of the time is 'context'.

    So it can tell you 'at X time on Y date Z player used a clearly racial slur in a chat'. And good ones can tell you 'whether the tone of the conversation in the surrounding lines was positive or negative or neutral' based on some other types of 'emphatic' language.

    So far so good; it's just a live search engine. If it's simply feeding that info to a human being, no problem. It would make moderation jobs easier. There is still a big-brother feel to it that people don't usually appreciate, but welcome to Earth in 2023.
    Azherae wrote: »
    It SHOULDN'T then be able to ban or probably even mute the person. But it COULD be set to the point where if a person SAID something and another person reported it, the flag that would come through would be more like 'Confirmed usage of racial slur in negative sentiment at X time', probably with the required snippet from chat logs included.

    This (inevitable?) next step is where the issues come up. Now we are having it decide to mute people in game, and tell them what it decided they did while it assigns punishment.
    Azherae wrote: »
    A human can glance over that easily.

    However, the player is effectively punished until a person has time to review the incident and decide if the software did the right thing.
    Azherae wrote: »
    Similarly, the AI could issue warnings under the same conditions, because a warning is a useful point to tell a player 'hey you're about to go over the line maybe, and you are now being watched more'. They could then complain that the algorithm is badly tuned and explain the thing they said. There's no punishment then, it could be outright ignored.

    Again, we are starting with the assumption that software is always correct and that it should be allowed to interact with, and take action on, the player base on Intrepid's behalf. All of this without the aforementioned context. Also, if the software decides that you are out of line, you will then, under that example, need to explain yourself to a person (when they have time to get to you) so that you may be forgiven.
    Azherae wrote: »
    An AI to tell people 'hey you might be close to breaking the rules here, tone it down' would not be a problem, and I consider this part of chat moderation.

    Again, it's deciding what people are doing and taking action on them.

  • Azherae Member, Alpha One, Alpha Two, Early Alpha Two
    @Diamaht

    Grati, I understand what part of it you have a problem with now.

    I obviously don't, but as you also pointed out, the one writing the software seldom does.
    ♪ One Gummy Fish, two Gummy Fish, Red Gummy Fish, Blue Gummy Fish
  • Iskiab Member, Alpha Two
    I work with ML irl. The thing about AI is it’s never 100% accurate. That means it’s great for simple low impact decisions, but needs a human to handle important decisions.

    So AI to handle some moderation with humans to handle appeals would work best.
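    As a sketch of that split (thresholds invented for illustration):

        # Invented thresholds; the point is that confidence and impact decide
        # who acts: the AI for cheap reversible actions, humans for the rest.
        def route(confidence: float, severity: str) -> str:
            if severity == "high":
                return "human_review"            # bans etc. always go to a person
            if confidence >= 0.95:
                return "auto_mute_with_appeal"   # low-impact and reversible
            if confidence >= 0.70:
                return "flag_for_gm"
            return "ignore"

        print(route(0.99, "low"))    # auto_mute_with_appeal
        print(route(0.99, "high"))   # human_review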
  • Solvryn Member, Alpha One, Alpha Two, Early Alpha Two
    Nerror wrote: »
    I don't know a whole lot about AI, but from my limited experience with it and from what I have read about it, it's pretty much become the norm for the big social media sites to use AI/machine learning to moderate their content. We've reached a point where the AI can speak pretty much any language and understand context fairly well. Facebook, Tinder, LinkedIn, Instagram and more use it.

    Those are huge platforms, so it's obviously impossible to hire enough humans to do the job. And with potentially a million or more Ashes players, that's a lot of chat being generated every second, so I can see the potential for AI moderation in the game as well. I have zero clue how much it would cost Intrepid to implement it, but for the sake of argument, let's say they can do it.

    But is it a good idea? And if so, how much power do you want to give the AI? Should it be able to ban people? For example, if it identifies one of those annoying gold sellers spamming, using their usual tricks of weird symbols and spaces to circumvent simple word filters, should it just straight up ban the account? Or should it only mute and flag the account for review by an active GM?

    What about guild or group chat, and even /whispers? Obviously nothing is private in the game, including whispers; assume it's all being logged anyway, even if only for a short period. Do you want the AI to moderate whispers, so that if someone starts calling you all kinds of slurs, or threatens you, or wishes you bodily harm, the AI catches it before it even reaches you, blocks the message, and flags the offender for the GM team to review?

    There are some obvious downsides, like the AI not exactly being infallible, but I can see some upsides as well.

    Thoughts?

    I already tell the modbot to fuck off in Discord more than is appropriate.

    I'd rather talk to Vaknar, Roshen, and the rest of the mod team 100000000000000000000000% of the time.

  • Myosotys
    Nerror wrote: »
    I don't know a whole lot about AI, but from my limited experience with it and from what I have read about it, it's pretty much become the norm for the big social media sites to use AI/machine learning to moderate their content. We've reached a point where the AI can speak pretty much any language and understand context fairly well. Facebook, Tinder, LinkedIn, Instagram and more use it.

    Those are huge platforms, so it's obviously impossible to hire enough humans to do the job. And with potentially a million or more Ashes players, that's a lot of chat being generated every second, so I can see the potential for AI moderation in the game as well. I have zero clue how much it would cost Intrepid to implement it, but for the sake of argument, let's say they can do it.

    But is it a good idea? And if so, how much power do you want to give the AI? Should it be able to ban people? For example, if it identifies one of those annoying gold sellers spamming, using their usual tricks of weird symbols and spaces to circumvent simple word filters, should it just straight up ban the account? Or should it only mute and flag the account for review by an active GM?

    What about guild or group chat, and even /whispers? Obviously nothing is private in the game, including whispers; assume it's all being logged anyway, even if only for a short period. Do you want the AI to moderate whispers, so that if someone starts calling you all kinds of slurs, or threatens you, or wishes you bodily harm, the AI catches it before it even reaches you, blocks the message, and flags the offender for the GM team to review?

    There are some obvious downsides, like the AI not exactly being infallible, but I can see some upsides as well.

    Thoughts?

    I would much prefer it without AI. These AIs are super annoying and make authoritarian decisions which you cannot even understand. In AoE2, for example (for those who played), 10% of your words get censored for no reason or from wrong interpretation.

    The AIs of social networks are certainly super expensive, as they work in many languages; I'm not sure Intrepid can afford it.

    So, in conclusion, I would prefer in-game moderators with the ability to mute a guy for one hour, and that's it.
    And the ability for players to mute the channels they don't want to read.

    A good report window would also be more than welcome, to make it easy and fast to report someone for forbidden things like gold selling, violence, or racism. Reports would go to a moderator (not to a GM), to keep GMs from being under pressure, as they must stay focused on cheating and on sanctions other than "mute".
  • NishUK Member
    edited January 2023
    This can all really be a non-issue if you apply AI and logic in the correct way...
    Azherae wrote: »
    And good ones can tell you 'whether the tone of the conversation in the surrounding lines was positive or negative or neutral' based on some other types of 'emphatic' language.

    An AI to tell people 'hey you might be close to breaking the rules here, tone it down' would not be a problem, and I consider this part of chat moderation.

    Measures such as this automatically assume that everyone could at any point be in violation of the ToS, and they lead to nothing but misery (promoting silence, or forever changing people's personal tone/speech) for those who believe in freedom of expression and very much wish for everyone's "character" to be on display.
    I'll be as short as possible expanding on this (sadly it didn't turn out like I'd hoped...). A largely mature and open-minded audience who can handle a degree of pressure will enjoy a varied level of banter, jokes, topics and conflict, all of which they will be happy to moderate themselves with their own retorts or via proper use of the friends/block list.
    This is hugely important for freedom of content, choice and entertainment value. As an example: if I join a guild or party that involves a lot of long grinding or PvP, and there's mostly very tame and mainly "forced friendly" chat (politically correct/sensitive conduct) going on, and I then find out down the line that these are people I would not normally associate with, based on a number of personal factors, then this very intimate online experience has let me down.
    I am part of the EU servers, and there can be a wide variety of chat and preferences. I've often bumped into people around Greece and am very fond of the "Malakas!" over there; we have talked and behaved in countless ways, which have been beautifully entertaining, and mistakes are part and parcel of the experience. "Boys will be boys," and it can range like an American Pie movie, from the sophisticated Finches down to the direct Stiflers (or Tyler1s xD), and we're all unified and accepting despite our differences. Dictated parameters of speech should never be promoted based on anyone's personal experiences, and it is frankly none of anyone's business to get involved, other than for serious cases of direct personal/bullying attacks, which very rarely happen in good guilds. Those reports, as with all reports, should be very serious in nature and not the airing of personal grievances; people have the choice to break and form new relationships/guilds, and that is part and parcel of the entertainment factor that naturally tips the scales of competition for many guilds, solid friendships and rivalries.
    Myosotys wrote: »
    The AIs of social networks are certainly super expensive, as they work in many languages; I'm not sure Intrepid can afford it.

    No one should even have to, not even Riot, who have created the mess they're in from their over-expansion and changes, which have in turn resulted in a stressed environment they would likely dub "one of the greatest online gaming experiences". I'll happily expand on that if needed, but I'll finish with AI/filter/"Steam accept"/Discord integration.

    MMORPGs, more than any other genre, range from the experienced adult to the 13-year-old who has had extremely few social interactions and has perhaps been influenced a bit too much by some TikTok videos; it could not be a more varied audience. If you believe moderation to be a potential key factor in the fight to eliminate the extreme types of misbehavior that can lead to mental pain or worse, then you should also be in favour of clever prevention practices, and I'll end with this as an example.

    Upon launching the game you're met with a one-time, un-skippable video introduction from Steven, mainly giving a brief overview and something along the lines of "have fun!" at the end, but the very start could be: "First things first, Verra is full of people with different backgrounds, ages and experiences. If you do not wish to moderate the verbal content of players, I highly advise you click here now (5 seconds after); if you're happy, please digest the ToS (shown, scroll down, agree), let's carry on!..."
    (If online MMORPGs cared, they would rate their game as 'M for Mature 17+', as the competitive space is highly organised, or dictated in a varied fashion, by players usually in their late 20s or older; "also, if you are under 17" can additionally join the advisement.)
    From clicking the first option you are met with a warm welcome from Margaret: "Hey guys, I hope you're looking forward to exploring and embarking on many adventures in the world of Verra. Grouping with fellow adventurers can very much be essential to your success; emotes, simple communications and pop-ups are in place to join up with adventurers. If you enjoy the company of someone, don't be afraid to use the friends list and, under the 'current party' or 'last partied with' tab, send a friend request; it works in a similar fashion to Steam! From there you can communicate with each other as you see fit; you can also do this with guilds!" (She carries on with other helpful tips; basically, deep in the option settings of the game, AI-moderated chat and a number of filters have been turned on from the get-go.) "Discord has also been automatically integrated with the chat and linked to our official server; we have a great moderation team and I hope you enjoy yourselves!" (Additional rooms within the server have essentially replaced most of the in-game chat.)
    Additionally, the streaming filter option is easily accessible and will likely be used by Steven and the devs, where AI will change chat and make anyone using swearing/inappropriate words look like Barrett going off on one in FF7: "#$%#@#!" (see the sketch at the end of this post).

    Prevention is the better key, not a roof of moderation ready at any moment to scorn and interfere with anyone's true character!
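    (For what it's worth, the streaming filter could be as simple as this sketch; the word list is made up:)

        import re

        # Made-up word list; masks flagged words FF7-Barrett-style for the stream view.
        STREAM_FILTERED = ["swearword", "anotherone"]
        PATTERN = re.compile("|".join(re.escape(w) for w in STREAM_FILTERED),
                             re.IGNORECASE)

        def stream_safe(line: str) -> str:
            return PATTERN.sub("#$%#@#!", line)

        print(stream_safe("you absolute SwearWord"))  # -> you absolute #$%#@#!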
  • superhero6785 Member, Alpha Two
    edited January 2023
    I don't mind using AI to assist in filtering POTENTIAL content that violates the ToS. But I think punishments should all be reviewed by human moderators.

    I have a great personal example from Facebook. I'm part of a D&D group and one of the members asked for advice for a scenario he was DMing. I gave a suggestion, which was somewhat violent for the players, as that was what his story involved. I then got a notification that I received a chat restriction LOL. I tried appealing it and got another notification saying that basically they were too busy to review it and I was out of luck. So now I have a permanent warning on my profile that I can't do anything about LMFAO.
  • No, I don't want an AI (or a living person) to filter and censor me or others when typing in the chat. I'd prefer a good and reliable report system instead, managed by real people who can understand the real context.
  • Recent news hints at big progress in AI soon.
    It can happen that, by the time this game is released, AI will be a standard solution in other games too.
    If it is better than humans at tracking what humans do, then it might be ok.
    It will also be a social experiment/test which will reveal whether it should be applied outside of games too.

    Law is a human invention. Yet humans are afraid if they cannot break the law.
    Kill the gatherer and take his loot, but say "sorry" to confuse that AI. Or complain that he didn't flag up. Tell that gatherer how bad it is to be corrupt because he didn't fight back. See if the AI bans him.
    September 12. 2022: Being naked can also be used to bring a skilled artisan to different freeholds... Don't summon family!