Azherae wrote: » It would be mostly fine, but the adaptation of toxicity would be quick and, more importantly, aimed at creating false positives. Humans will adapt their 'methods of being toxic' faster than most language programs can keep up, because humans can easily substitute terms. It's not that the AI wouldn't detect the new terms after a while; it's that people find ways to change the context so that 'regular conversation' causes things to flag.
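To make that cat-and-mouse dynamic concrete, here is a minimal Python sketch (not anything Intrepid has described; the blocklist and function are hypothetical) of why static term matching loses to substitution:

BLOCKLIST = {"scrub"}  # hypothetical flagged term

def naive_flag(message: str) -> bool:
    # Flags a message only if it contains an exact blocklisted token.
    return any(token.lower().strip(".,!?") in BLOCKLIST for token in message.split())

print(naive_flag("What a scrub."))  # True: the literal term is caught
print(naive_flag("What a scr*b."))  # False: a trivial respelling slips through

Retraining eventually catches the respelling, but by then the substitution has moved on, which is exactly the lag Azherae describes.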
daveywavey wrote: » Ashes will have in-game GMs. If all the AI does is flag potential breaches up to the in-game GMs, then that's fine.
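A flag-only pipeline like daveywavey suggests is straightforward to sketch. The threshold and names below are invented for illustration; the point is that the classifier escalates but never acts:

from dataclasses import dataclass, field
from typing import List

@dataclass
class GMQueue:
    threshold: float = 0.8  # hypothetical cutoff for escalation
    pending: List[str] = field(default_factory=list)

    def review(self, message: str, toxicity_score: float) -> None:
        # The model only queues; a human GM makes every actual decision.
        if toxicity_score >= self.threshold:
            self.pending.append(message)

queue = GMQueue()
queue.review("gg everyone", 0.05)           # below threshold, ignored
queue.review("flagged example text", 0.93)  # escalated to a human GM
print(len(queue.pending))  # 1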
Dolyem wrote: » No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication.
NishUK wrote: » Dolyem wrote: » No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication. Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they'll bend the words "unsafe" and "toxic" for all they're worth, given how vague those meanings are.
Dolyem wrote: » NishUK wrote: » Dolyem wrote: » No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication. Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they'll bend the words "unsafe" and "toxic" for all they're worth, given how vague those meanings are. You're not wrong. Honestly, I'd make it a remote job that can be done online, and screen people with psychological tests to vet them.
Neurath wrote: » Dolyem wrote: » NishUK wrote: » Dolyem wrote: » No thanks, AI isn't great with context as far as I can tell. Leave it to people with the least biases to moderate communication. Even Elon is struggling to do that atm with who he's currently hired, but at least he has the excuse that he's managing more than one thing. This company is based in California, so I have little faith that they can find, or even bother to utilize, smart people for the job who possess very little bias, and they'll bend the words "unsafe" and "toxic" for all they're worth, given how vague those meanings are. You're not wrong. Honestly, I'd make it a remote job that can be done online, and screen people with psychological tests to vet them. I disagree with psychological tests as a requirement for participating in a form of relaxation.
Kilion wrote: » I don't want anyone living out their Orwellian wet dreams, even in a freaking game.
Diamaht wrote: » Kilion wrote: » I don't want anyone living out their Orwellian wet dreams, even in a freaking game. This. No thanks. It always breaks down the same way. Q: "Well OK, but who decides?" A: "Well, me of course."
maouw wrote: » Diamaht wrote: » Kilion wrote: » I don't want anyone living out their Orwellian wet dreams, even in a freaking game. This. No thanks. It always breaks down the same way. Q: "Well OK, but who decides?" A: "Well, me of course." I mean, I agree this is 100% an issue right now, but I feel like it has a really simple solution: public logs of moderation decisions. My theory is that if you just leave everything in the light and give it fresh air, then it's much harder for mould to grow. Maybe I'm too naive?
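maouw's public-log idea could be as simple as an append-only record per decision. A minimal sketch, with invented field names and values, assuming decisions are published as JSON lines:

import json, time

def log_decision(path: str, action: str, rule: str, moderator: str) -> None:
    # Appends one public record per moderation decision; nothing is edited in place.
    record = {"ts": time.time(), "action": action, "rule": rule, "moderator": moderator}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("moderation_log.jsonl", "24h chat mute", "harassment", "GM_Aria")

Whether players would actually audit such a log is another question, but it at least makes "who decides" answerable.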
Diamaht wrote: » maouw wrote: » Diamaht wrote: » Kilion wrote: » I don't want anyone living out their Orwellian wet dreams, even in a freaking game. This. No thanks. It always breaks down the same way. Q: "Well OK, but who decides?" A: "Well, me of course." I mean, I agree this is 100% an issue right now, but I feel like it has a really simple solution: public logs of moderation decisions. My theory is that if you just leave everything in the light and give it fresh air, then it's much harder for mould to grow. Maybe I'm too naive? It's not a bad idea, really. The issue is that an auto-moderation function is the opposite of fresh and free: it's biased by the person creating the software. It's an instance where the few decide for the many, and it eliminates context. That's not as large an issue in a video game chat log as it is when applied to wider society, but the same concerns apply. You wouldn't be able to blame Intrepid for filtering out certain types of content/speech; an AI, however, implies that software is making decisions and (more importantly) taking actions against players based on its own human-created, flawed algorithm. If I'm facing in-game discipline from Intrepid, I want to hear from Intrepid in some capacity, not from software.