Dr_Manhattan comments on Optimal Philanthropy for Human Beings - Less Wrong

Post author: lukeprog 25 July 2011 07:27AM




Comment author: [deleted] 26 July 2011 11:47:08AM 17 points

Just weighing in here:

SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.

The field of AI has been littered with (metaphorical) corpses since the 1960s. If an AI researcher tells you any theory, you have a very, very strong prior for believing it is false -- especially if it concerns "general" intelligence or "human-level" intelligence. So, Eliezer is probably wrong just like everyone else. That's not a particular criticism of him; it still puts him in august company.

So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.

What I don't like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like -- I've had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is playing fast and loose with the facts. And at least some of the SIAI people don't do that at ALL. You have to admire the honesty, even if you're skeptical (as I am) that research can succeed in such isolation from mainstream science. Eliezer is a good person. This is an honest and thoughtful attempt to do what he says he wants to do -- I am very, very confident of that.

Offer these people the respect (or charity, if you will) of judging their ideas on the merits -- or, if you don't have time to look into the ideas, mark that as ignorance on your part. You seem to be saying "They must be wrong because they're weird." The thing is, they're working in a field where even the experts are a little weird, and where even the mainstream academics have been wrong about a lot. You've got to revise your "Don't believe weirdos" prediction down a little bit. The more I learn about the world, the more I realize that the non-weirdos don't have it all sewn up.

Comment author: Dr_Manhattan 26 July 2011 02:45:43PM * 3 points

SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.

So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.

There are really three separate things SIAI is working on in the AI area: one is a decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves its original goals. Another is deciding what those goals should be (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer's time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).

Whether (or more likely, how much) these two successes contribute to reducing x-risk largely depends on the context, which is the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I'd prefer to get this data independently.

ETA: Personally I've given some money to SI, but that's largely based on their previous successes rather than on a clear agenda for future work. I'm OK with this, but it's possibly sub-optimal for getting others to contribute (or getting me to contribute more).

Comment author: [deleted] 26 July 2011 11:16:51PM 1 point

I should probably reread the papers. My brain tends to go "GAAAH" at the sight of game theory. I'm probably a bit biased because of that.