I agree that the truth/verification distinction holds and didn't mean to imply the contrary.
Also, treating truth as a vanishing point or limit of justification reinvokes the false belief/knowledge (truth) distinction that I began with.
The main claim is that the external world provides no such limit. Justification for belief does not (and cannot) rely on coherence with the external world, because we are only ever inside the "world" of the internal--that is, we are stuck inside our beliefs and perceptions and never get beyond them into the external world.
The semantic distinction between uses of the words "belief" and "knowledge" certainly exists; however, because it is (even partly) external, it ceases to be a useful distinction.
Moreover, for the holder of the belief, justification does reduce to the belief of being justified. After all, one cannot simultaneously hold "I think X-belief is justified" and "I don't believe X-belief". One believes things that are justified (for oneself), and the beliefs one believes to be justified are the ones one believes are "true".
The orthodox use of "knowledge" implies access to "the world out there".
If the distinction between belief and knowledge is collapsed, then "truth" is merely justified belief. That is, "truth" is just the beliefs we believe to be justified; however, that justification says nothing about the world "out there", nor does it imply that we have access to that world.
In other words, collapsing the distinction implies that coherence between beliefs is what determines our naming of certain statements as "true", not coherence with truth/reality.
You're right--definitions are hard, but here we go:
Belief: A mental state that an individual holds to be true, typically in the form "X is Y."
Knowledge: When a belief accurately corresponds to objective reality. In other words, a belief that is correct.
It’s important to realize that the ideal value model depends on the situation you are in. A common mistake in both tech and life is to use a value model that made sense in a different product or society, but doesn’t make sense in this one. In essence, you have a value model that did well at solving the problems you used to have, but not at solving the problems you have right now.
Tensions: It seems unintuitively true that everything can be described as good or bad ("I'm not impulsive, I'm spontaneous!"). However, if we favor vectors over rigid categories...
It seems to me that the ability of emotions to evaluate/categorize stimuli is descriptively correct, but there is a large gap between that and a prescription. Emotions surely do act as shortcut heuristics to deal with the excess of detail; however, this can lead to as many mistakes as successes. Moreover, AI is already capable of coming up with statements or decisions despit...