lukeprog comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

Post author: lukeprog 01 February 2011 02:15PM


Comment author: lukeprog 03 February 2011 05:42:02PM 22 points

Awesome. Now your reaction here makes complete sense to me. The way I worded my original article above looks very much like I'm in either the 1st category or the 4th category.

Let me, then, be very clear:

  • I do not want to raise questions so that I can make a living endlessly re-examining philosophical questions without arriving at answers.

  • I want me, and rationalists in general, to work aggressively enough on these problems so that we have answers by the time AI+ arrives. As for the fact that I don't have answers yet, please remember that I was a fundamentalist Christian 3 years ago, with no rationality training at all, and a horrendous science education. And I didn't discover the urgency of these problems until about 6 months ago. I've had to make extremely rapid progress from that point to where I am today. If I can arrange to work on these problems full time, I think I can make valuable contributions to the project of dealing safely with Friendly AI. But if that doesn't happen, well, I hope to at least enable others who can work on this problem full time, like yourself.

  • I want to solve these problems in 15 years, not 20. This will make most academic philosophers, and most people in general, snort the water they're drinking through their noses. On the other hand, the time it takes to solve a problem expands to meet the time you're given. For many philosophers, the time available to answer these questions is... billions of years. For me, and people like me, it's a few decades.

Comment author: lukeprog 12 February 2011 02:51:23AM 3 points

Any response to this, Eliezer?

Comment author: Eliezer_Yudkowsky 12 February 2011 03:15:55AM 7 points

Well, the part about you being a fundamentalist Christian three years ago is damned impressive and does a lot to convince me that you're moving at a reasonable clip.

On the other hand, a good metaethical answer to the question "What sort of stuff is morality made out of?" is essentially a matter of resolving confusion; and people can get stuck on confusions for decades, or they can breeze past confusions in seconds. Comprehending the most confusing secrets of the universe is more like realigning your car's wheels than like finding the Lost Ark. I'm not entirely sure what to do about the partial failure of the metaethics sequence, or what to do about the fact that it failed for you in particular. But it does sound like you're setting out to heroically resolve confusions that, um, I kinda already resolved, and then wrote up, and then only some people got the writeup... but it doesn't seem like the sort of thing where you spending years working on it is a good idea. 15 years to a piece of paper with the correct answer written on it is for solving really confusing problems from scratch; it doesn't seem like a good amount of time for absorbing someone else's solution. If you plan to do something interesting with your life requiring correct metaethics then maybe we should have a Skype videocall or even an in-person meeting at some point.

The main open moral question SIAI actually does need a concrete answer to is "How exactly does one go about construing an extrapolated volition from the giant mess that is a human mind?", which takes good metaethics as a background assumption but is fundamentally a moral question rather than a metaethical one. On the other hand, I think we've basically got covered "What sort of stuff is this mysterious rightness?"

What did you think of the free will sequence as a template for doing naturalistic cognitive philosophy, where the first question is always "What algorithm feels from the inside like my philosophical intuitions?"