Ratheka comments on My Wild and Reckless Youth - Less Wrong

Post author: Eliezer_Yudkowsky, 30 August 2007 01:52AM

Comment author: Joshua, 12 February 2011 08:34:43PM, 0 points

I'm thinking of cases where you're unable to reach a better solution to a problem because what you already know conflicts with arriving at it.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought there would need to be some bias injected into everyone's reasoning so that, occasionally, someone doesn't go along with the inaccurate claim. That way, if some of the shared data is wrong, you still have rationalists who arrive at a more accurate map.
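To make that concrete, here's a rough toy simulation of what I'm imagining, in Python. The coin-flip setup, the EPSILON parameter, and all the numbers are invented for illustration; it's a sketch of the intuition, not a real model of group epistemics.

```python
import random

random.seed(0)

TRUE_P_HEADS = 0.7             # the real state of the world
SHARED_DATA = [0, 0, 1, 0, 0]  # a small, unluckily misleading shared sample
N_REASONERS = 1000
EPSILON = 0.1                  # chance a reasoner also checks the world directly

def estimate(observations):
    """Simple frequency estimate of P(heads) from 0/1 observations."""
    return sum(observations) / len(observations)

# Everyone starts from the same (inaccurate) consensus conclusion.
consensus = estimate(SHARED_DATA)

errors_conformist = []
errors_with_noise = []
for _ in range(N_REASONERS):
    # Pure conformist: accepts the consensus conclusion as-is.
    errors_conformist.append(abs(consensus - TRUE_P_HEADS))

    # Occasionally-independent reasoner: with probability EPSILON,
    # gathers 20 fresh observations instead of leaning on the consensus.
    if random.random() < EPSILON:
        fresh = [1 if random.random() < TRUE_P_HEADS else 0 for _ in range(20)]
        belief = estimate(fresh)
    else:
        belief = consensus
    errors_with_noise.append(abs(belief - TRUE_P_HEADS))

print("mean error, everyone conforms:       %.3f" % (sum(errors_conformist) / N_REASONERS))
print("mean error, occasional independents: %.3f" % (sum(errors_with_noise) / N_REASONERS))
```

The second group does better on average only because a few of its members went and looked for themselves, which is the sort of "bias" I had in mind.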

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean by that is that I seem to have assumed you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Comment author: Ratheka, 21 January 2012 03:32:16AM, 1 point

I think even a perfect implementation of Bayes would not, in and of itself, be an AI. By itself, the math has nothing to work on and no direction in which to work. Agency is hard to build, I think.

As always, of course, I could be wrong.

Comment author: ata, 21 January 2012 06:23:22AM, 0 points

Would a "perfect implementation of Bayes", in the sense you meant here, be a Solomonoff inductor (or similar, perhaps modified to work better with anthropic problems), or something perfect at following Bayesian probability theory but with no prior specified (or a less universal one)? If the former, you are in fact most of the way to an agent, at least some types of agents, e.g. AIXI.

Comment author: Ratheka, 21 January 2012 08:07:19AM, 0 points

Well, I'm not personally capable of building AIs, and I'm not as deeply versed as I'm sure many people here are, but I see an implementation of Bayes' theorem as a tool for finding truth, whether in the mind of a human, an AI, or whatever sort of person you care to conceive of. The mind behind it is an agent, with a quality we might call directedness, or intentionality, or simply an interest in going out and poking the universe with a stick where it doesn't make sense. Bayes is in itself already math, easy to put into code, but we don't yet understand internally directed behavior well enough to model it.
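For instance, the update step itself is a couple of lines of Python (the function name and the test numbers here are made up, just to show how little of the work the math does on its own):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Invented example: a 90%-sensitive test with a 5% false-positive rate,
# applied to a hypothesis with a 1% prior.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.9,
                         p_evidence_given_not_h=0.05)
print(round(posterior, 3))  # ~0.154
```

The code will crank out posteriors all day, but nothing in it chooses which hypotheses are worth entertaining or what to go and measure next. That's the directedness part we can't write down yet.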