Comments

I'm thinking of cases where you're unable to reach a better solution to a problem because something you already know conflicts with arriving at that solution.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought there would need to be some bias built into your reasoning so that occasionally you didn't go with the inaccurate claim. That way, if some of the data is wrong, you still have rationalists who arrive at a more accurate map.
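Here's a rough toy simulation of what I mean (my own sketch, not anything from the post): everyone sees the same biased data, but each reasoner occasionally sets it aside in favour of a private noisy observation, so at least a few of them land near the truth even though the consensus estimate is off.

```python
# Toy sketch (my construction): a shared, biased dataset plus reasoners who
# occasionally discount the consensus in favour of a private observation.
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0
# Shared data that everyone sees, systematically biased upward by 3.
SHARED_DATA = [TRUE_VALUE + 3.0 + random.gauss(0, 0.5) for _ in range(20)]

def estimate(deviation_rate):
    """One reasoner's estimate of TRUE_VALUE.

    With probability deviation_rate they set the shared data aside and
    trust a single noisy but unbiased private observation instead.
    """
    if random.random() < deviation_rate:
        return TRUE_VALUE + random.gauss(0, 1.0)  # private observation
    return statistics.mean(SHARED_DATA)           # consensus estimate (biased)

for rate in (0.0, 0.2):
    estimates = [estimate(rate) for _ in range(1000)]
    closest = min(estimates, key=lambda e: abs(e - TRUE_VALUE))
    print(f"deviation rate {rate}: closest estimate is off by {abs(closest - TRUE_VALUE):.2f}")
```

With no deviation everyone lands on the same biased answer; with a little deviation, some individuals end up much closer to the truth, which is all I'm gesturing at.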

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean is that I seem to have assumed you could build some sort of AI on top of this system that would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Isn't this exactly what was said in Hug The Query? I'm not sure I understand why you were downvoted.

While reading through this I ran into a problem. It seems intuitive to me that, to be perfectly rational, there would have to be instances in which, given the same information, two rationalists disagreed. I think this because I presume that a lack of randomness leads to a local maximum. Am I missing something?
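To make the local-maximum worry concrete, here is a toy example (mine, not from the post): a deterministic hill climber on a two-peaked function always stops at whichever peak is nearest its starting point, while the same climber with random restarts usually finds the higher one.

```python
# Toy illustration (my own): deterministic hill climbing gets stuck on a local
# peak; adding randomness via random restarts usually finds the global one.
import random

def f(x):
    # Two peaks: a small one near x = -2 and a taller one near x = 3.
    return -0.1 * (x + 2) ** 2 + 1 if x < 0.5 else -0.1 * (x - 3) ** 2 + 2

def hill_climb(x, step=0.1, iters=200):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:          # no neighbour is better: stuck on a peak
            return x
        x = best
    return x

random.seed(0)
deterministic = f(hill_climb(-2.5))  # fixed start, ends on the small peak
randomized = max(f(hill_climb(random.uniform(-5, 5))) for _ in range(10))
print(f"deterministic: {deterministic:.2f}, with random restarts: {randomized:.2f}")
```

That's the intuition I mean by a lack of randomness leading to a local maximum.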

When I read that, the "properly" part really stood out to me. I felt like I was reading about a "true" Scotsman, the sort who would never commit a crime.