Greetings, glorious testers!
Check out Alpha Two Announcements here to see the latest news on Alpha Two.
Check out general Announcements here to see the latest news on Ashes of Creation & Intrepid Studios.
To get the quickest updates regarding Alpha Two, connect your Discord and Intrepid accounts here.
ChatGPT can answer your AOC design questions
Roseburia
Member, Alpha Two
When the wiki fails you, this forum is an excellent place to get clarification on AOC's design.
Alternatively, turns out you can ask ChatGPT:
(This question was asked April 10, 2024).
Pretty cool, huh?
3 Comments
And as noted, unfortunately it even tells you exactly when its training data was last updated, and then sends you here.
Yeah, it's pretty cool.
How does a mind bound by the limitations of perfection arrive at the solution to a problem requiring an imperfect answer, without purposefully lying?
Actually, ChatGPT is going to be a large source of misinformation about Ashes because of how tokenization and 'heat' work.
ChatGPT will attempt to build a logical path out of information it can confirm and then convey that to the asker, but if the actual design is in some way illogical to it, it will usually still report an illogical answer unless asked directly.
This can be solved by putting some stuff that Intrepid says through a tokenization filter, though, which is why I sometimes ask Steven questions that hit this problem, so that the lil'un can get some more direct quotes.
C'mon, be nice.
It's really not 'making anything up'. It's just that there is no such thing as a 'fact' to ChatGPT. There's only 'claims' and 'veracity levels'.
Ashes of Creation has low veracity levels, so technically, ChatGPT is in the same position as many of us. It is doing its best to guess based on what it has been told.
Pretty relatable if you're like me and have a weird parental response to 'AI'.
Ok fine, it's trying its best with the tools and knowledge it has!
(still, I don't trust it..)
But that's what it did!
It's just that mean people ask questions that require 'logic application to things on the wiki' and then the reality turns out to not follow any direct logic.
If you ever wanna see ChatGPT struggle to answer something about Ashes (please don't do this to it), find any question I asked which didn't actually get answered and watch what happens.
Aye, but its aptness for generalization and legible nonsense makes for nearly-readable sentences. Case in point:
Just ask it to not use politician-talk! Well, not your standard politician anyway...
lmaoo trueee..
but actually... a while back I tricked it into telling me conversations it had with other users, even their names haha. Got fixed soon; tried again after a week or so and it didn't work.
disagree
If you know, you know ;o