Perplexed comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: Perplexed 21 August 2010 07:23:11PM * 3 points

CEV is a bizarre wishlist, apparently made with minimal consideration of implementation difficulties ...

It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.

I think CEV is one proposal to consider, useful for focusing discussion. I hate it myself, and suspect that the majority of mankind would agree. I don't want some machine that I have never met and don't trust to be inferring my volition and acting on my behalf. The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen. But, seen as an attempt to stimulate discussion, I think that the paper is great. Maybe discussion will improve the proposal enough to alleviate my concerns, or show me that my concerns are baseless.

I sure hope EY isn't deluded enough to think that initiatives like LW can be scaled up enough to improve the analytic capabilities of a large enough fraction of mankind that proposals like CEV will not encounter significant opposition.

Comment author: timtyler 21 August 2010 07:25:09PM * 1 point

The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen.

That seems unlikely to help. Luddites have never had any power. Becoming a Luddite usually just makes you more xxxxxd.

Comment author: timtyler 21 August 2010 07:39:31PM 0 points

It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.

What - not at all? You want the moon-onna-stick - so that goes into your "preliminary requirements" document?

Comment author: Perplexed 21 August 2010 07:47:19PM 3 points

Yes. Because there is always the possibility that some smart geek will say "'moon-onna-stick', huh? I bet I could do that. I see a clever trick." Or maybe some other geek will say "Would you settle for Sputnik-on-a-stick?" and the User will say "Well, yes. Actually, that would be even better."

At least that is what they preach in the Process books.

Comment author: timtyler 21 August 2010 08:45:48PM -2 points

It sounds pretty surreal to me. I would usually favour some reality-imposed limits to fantasizing and wishful thinking from the beginning - unless there are practically no time constraints at all.

Comment author: timtyler 21 August 2010 07:29:01PM * 0 points

I sure hope EY isn't deluded enough to think that initiatives like LW can be scaled up enough to improve the analytic capabilities of a large enough fraction of mankind that proposals like CEV will not encounter significant opposition.

If there were ever any real chance of success, governments would be likely to object. Since they already have power, they are not going to want a bunch of geeks in a basement taking over the world with their intelligent machine - and redistributing all their assets for them.