Reading discussions around Gettier cases, justified true belief (JTB), fallibilism, and so on started to feel like watching the same idea circle itself with slightly fancier synonyms.
At some point you have to ask: what are we actually trying to determine?
If “knowledge” requires a perfectly objective observer, which does not exist, and every sense and memory we have is fallible, what are we even measuring?
And if you bring that into AI alignment: are we building “truth detectors,” or are we just formalizing human-like confidence and calling it “knowledge” to make it sound more rigorous?