Comment author: PeerInfinity 20 January 2012 01:06:13AM *  4 points [-]

One obvious idea for an exercise is MBlume's Positive Bias Test, which is available online.

But of course everyone taking the course would probably already be familiar with the standard example implemented in the app. I would suggest updating the app to offer several different patterns of varying complexity, plus a way for the user to choose a difficulty level before starting. I would expect that to be not too hard to implement, and useful enough to be worth implementing.
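A minimal sketch of what the suggested change might look like, assuming a simple console version of the 2-4-6 task; the difficulty names and the specific hidden rules here are my own invention, not anything from MBlume's actual app:

```python
# Hidden rules keyed by a user-chosen difficulty level.
# Each entry maps a level to (description, predicate over a triple of ints).
RULES = {
    "easy":   ("strictly ascending numbers", lambda t: t[0] < t[1] < t[2]),
    "medium": ("all numbers are even",       lambda t: all(n % 2 == 0 for n in t)),
    "hard":   ("sum is greater than zero",   lambda t: sum(t) > 0),
}

def check_triple(level, triple):
    """Return True if the triple fits the hidden rule for this difficulty."""
    _, rule = RULES[level]
    return rule(triple)
```

The point of multiple rules is that a player who has seen "ascending numbers" once can still be surprised by a laxer or stricter rule on a replay, which is what keeps the positive-bias lesson alive.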

Comment author: Ratheka 25 January 2012 11:31:26AM 4 points [-]

What do you think of the idea of an RPG-type game where the quests are designed to trigger biases in people, and that requires clear thinking to win? I'd be a big fan of a game that required you to read quests and think about them, and moved away from the 'track arrow, kill everything en route' model that many have today. Of course, it still needs to be fun to entice people to play it. Functional edutainment seems to be a tricky balance to strike.

Comment author: NancyLebovitz 27 December 2011 03:56:48AM 5 points [-]

There are additional possibilities, like everyone agreeing on agnosticism or on some other religion.

Comment author: Ratheka 21 January 2012 10:51:57AM 0 points [-]

Can I vote Discordianism? Knowing how silly it all is is a property of the text; isn't that helpful?

Comment author: ata 21 January 2012 06:23:22AM *  0 points [-]

Would a "perfect implementation of Bayes", in the sense you meant here, be a Solomonoff inductor (or something similar, perhaps modified to work better with anthropic problems), or something that perfectly follows Bayesian probability theory but with no prior specified (or a less universal one)? If the former, then you are in fact most of the way to an agent, or at least to some types of agents, e.g. AIXI.

Comment author: Ratheka 21 January 2012 08:07:19AM 0 points [-]

Well, I'm not personally capable of building AIs, and I'm not as deeply versed as many people here surely are. But I see an implementation of Bayes' theorem as a tool for finding truth in the mind of a human, an AI, or whatever sort of person you care to conceive of, whereas the mind behind it is an agent with a quality we might call directedness, or intentionality, or simply an interest in going out and poking the universe with a stick where it doesn't make sense. Bayes' theorem is in itself already math, easy to put into code, but we don't yet understand internally directed behavior well enough to model it.
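As an aside, the "already math, easy to put into code" point really is a few lines; here is a sketch of a single Bayesian update for a binary hypothesis (the function name and the example numbers are mine, purely illustrative):

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """P(H|E) from P(H), P(E|H), and P(E|~H) via Bayes' theorem."""
    # Total probability of the evidence under both hypotheses.
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# e.g. a 1% prior, a test with a 90% hit rate and a 5% false-positive rate:
posterior = bayes_update(0.01, 0.9, 0.05)  # roughly 0.15
```

What the snippet doesn't contain is exactly the missing part: nothing in it decides which hypotheses to consider or which experiments to run. That's the agency.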

Comment author: Joshua 12 February 2011 08:34:43PM *  0 points [-]

I'm thinking of being unable to reach a better solution to a problem because what you know conflicts with arriving at the solution.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought that there would need to be some bias that was put on your reasoning so that occasionally you didn't go with the inaccurate claim. That way if some of the data is wrong you still have rationalists who arrive at a more accurate map.

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean by that is that I seem to have assumed that you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Comment author: Ratheka 21 January 2012 03:32:16AM 1 point [-]

I think even a perfect implementation of Bayes would not in and of itself be an AI. By itself, the math doesn't have anything to work on, or any direction to do so. Agency is hard to build, I think.

As always, of course, I could be wrong.

Comment author: TheOtherDave 18 January 2012 10:10:00PM 1 point [-]

I assume that there are other tests involved, both before and after, but I don't see the relevance of that. I may be missing your point.

Comment author: Ratheka 18 January 2012 10:40:25PM 1 point [-]

Perhaps I missed yours? Rationality requires the ability to resist social pressure, certainly. Are you questioning whether this procedure separates rationalists from nonrationalists? If so, then on its own I don't claim that it would; rather, it would probably be one member of a larger set of tests.

Comment author: TheOtherDave 05 November 2010 06:17:56PM 1 point [-]

(nods) Fair enough. Not knowing the goals, I'm in no position to judge this fictional selection procedure... I'd have to read more stories set in this world to be entitled to an opinion there.

Trivially, if what they want is a group that is good at mental arithmetic and resisting social pressure, they're going about it in a reasonable way.

More broadly, if they aren't claiming that their initiation procedure preferentially selects rationalists, then my concern becomes irrelevant.

Comment author: Ratheka 18 January 2012 09:46:38PM 1 point [-]

Nobody else seems to have added this response, so I will. We don't know that this moment, in the ritual room, is the only test they undergo. Perhaps passing a written exam is part of the public procedures. Perhaps the first stage is a great open exam, running near-continuously, that anyone who wants to may sit, and Brennan has spent months in a cloister-like environment within the public face of the conspiracy, where those who can study the sciences but not generate new true science do their studying.