Aumann's Agreement Theorem was proved more than twenty years after Asch's experiments, but it only formalizes and strengthens an intuitively obvious point - other people's beliefs are often legitimate evidence.
No, other people's beliefs are often treated as evidence, and very powerful evidence at that.
Another person's belief is weak evidence at best when more direct evidence is available, yet people tend to reject the direct evidence in order to conform to the beliefs of others.
The human goal usually isn't to produce justified predictions of likelihood, but to ingratiate ourselves with others in our social group.
What are you attempting to do, Eliezer?
Unpack "local maxima". Maxima of what?
I'm thinking of being unable to reach a better solution to a problem because what you already believe conflicts with arriving at it.
Say your data leads you to an inaccurate initial conclusion, and everybody agrees on that conclusion. Wouldn't the conclusion itself then become data supporting further inaccurate conclusions?
So I thought there would need to be some bias injected into your reasoning so that occasionally you didn't go along with the inaccurate claim. That way, if some of the data is wrong, you still have rationalists who arrive at a more accurate map.
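The failure mode I'm describing looks like an information cascade. Here's a toy sketch (my own illustration, not anything from the post; the agent model and parameter names are hypothetical): each agent gets a noisy private signal of the truth, then announces the majority vote of its signal plus all earlier public announcements. A `contrarian_rate` parameter makes some agents ignore the public record, which is the "bias on your reasoning" I had in mind.

```python
import random

def simulate(truth, n_agents, signal_accuracy, contrarian_rate, seed=0):
    """Toy information cascade.

    truth            -- the true state, 0 or 1
    signal_accuracy  -- probability each private signal matches the truth
    contrarian_rate  -- probability an agent ignores prior announcements
                        and reports only its private signal
    Returns the list of public announcements in order.
    """
    rng = random.Random(seed)
    announcements = []
    for _ in range(n_agents):
        # Private evidence: correct with probability signal_accuracy.
        signal = truth if rng.random() < signal_accuracy else 1 - truth
        if rng.random() < contrarian_rate:
            # Contrarians rely on private evidence alone.
            announcements.append(signal)
            continue
        # Conformists take a majority vote over all earlier public
        # announcements plus their own signal, so an early wrong
        # consensus swamps later private signals.
        votes = announcements + [signal]
        announcements.append(1 if 2 * sum(votes) > len(votes) else 0)
    return announcements
```

With `contrarian_rate=0`, a few unlucky early signals can lock the whole population onto the wrong answer; raising it keeps some agents anchored to their own evidence, so the group's "map" can recover. This is only a cartoon of the dynamic, not a model of Aumann agreement.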
Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean is that I seem to have assumed you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?
I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.