Comment author: Vladimir_Nesov 30 January 2011 03:28:11AM 1 point [-]

Unpack "local maxima". Maxima of what?

Comment author: Joshua 12 February 2011 08:34:43PM *  0 points [-]

I'm thinking of being unable to reach a better solution to a problem because what you know conflicts with arriving at the solution.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought that there would need to be some bias that was put on your reasoning so that occasionally you didn't go with the inaccurate claim. That way if some of the data is wrong you still have rationalists who arrive at a more accurate map.

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean by that is that I seem to have assumed that you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Comment author: Caledonian2 26 December 2007 02:37:51PM 3 points [-]

Aumann's Agreement Theorem was proved more than twenty years after Asch's experiments, but it only formalizes and strengthens an intuitively obvious point - other people's beliefs are often legitimate evidence.

No, other people's beliefs are often treated as evidence, and very powerful evidence at that.

Belief is not suitable as any kind of evidence when more-direct evidence is available, yet people tend to reject direct evidence in order to conform with the beliefs of others.

The human goal usually isn't to produce justified predictions of likelihood, but to ingratiate ourselves with others in our social group.

What are you attempting to do, Eliezer?

Comment author: Joshua 12 February 2011 08:03:06PM 0 points [-]

Isn't this exactly what was said in Hug the Query? I'm not sure I understand why you were downvoted.

Comment author: Joshua 30 January 2011 03:15:28AM *  0 points [-]

While reading through this I ran into a problem. It seems intuitive to me that to be perfectly rational, you would have to have instances in which, given the same information, two rationalists disagreed. I think this because I presume that a lack of randomness leads to local maxima. Am I missing something?
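The local-maxima intuition here has a standard analogue in search algorithms: a deterministic hill climber always stops at the nearest peak, while adding randomness (e.g. random restarts) lets some runs find a better one. A minimal sketch of that idea — the test function, step size, and restart count below are illustrative assumptions, not anything from this thread:

```python
import random

def f(x):
    # Two peaks: a small local maximum near x = -1 and a
    # taller global maximum near x = 2.
    return 1.0 / (1 + (x + 1) ** 2) + 3.0 / (1 + (x - 2) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    # Deterministic greedy ascent: move to the better neighbor,
    # stop when neither neighbor improves on the current point.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

# Every deterministic climber started near -1.5 stops at the small peak.
stuck = hill_climb(-1.5)

# Injecting randomness via restarts: some runs start in the basin of
# the taller peak and reach it, so the best-of-many result escapes.
random.seed(0)
best = max((hill_climb(random.uniform(-5, 5)) for _ in range(20)), key=f)
```

The analogy to the comment's point: a population of identical reasoners all converge on `stuck`, while a population with some variation in starting assumptions has members who land on `best` instead.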

In response to Fake Causality
Comment author: Steve_Massey 23 August 2007 10:43:44PM 1 point [-]

What TGGP said. Also, would an AI really be better at determining the falsifiability of a theory? It seems to me that, given a particular theory, an algorithm for determining the set of testable predictions thereof isn't going to be easy to optimize. How does the AI prove that one algorithm is better than another? Test it against a set of random theories?

Comment author: Joshua 29 January 2011 05:00:38AM *  1 point [-]

When I read that, the "properly" part really stood out to me. I felt like I was reading about a "true" Scotsman, the sort that would never commit a crime.