timtyler comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 Post author: lukeprog 01 February 2011 02:15PM


Comments (249)

Comment author: timtyler 04 February 2011 09:33:26AM *  0 points

Yudkowsky apparently counsels ignoring the ticking as well - here:

Until you can turn your back on your rivals and the ticking clock, blank them completely out of your mind, you will not be able to see what the problem itself is asking of you. In theory, you should be able to see both at the same time. In practice, you won't.

I have argued repeatedly that the ticking is a fundamental part of the problem - and that if you ignore it, you will (with high probability) simply lose to those who are paying more attention to their clocks. The "blank them completely out of your mind" advice seems an obviously bad way of approaching the whole area.

It is unfortunate that getting more time looks very challenging. If we can't manage that, we can't afford to dally around very much.

Comment author: Perplexed 04 February 2011 11:41:17AM 1 point

Yudkowsky apparently counsels ignoring the ticking as well

Yes, and that comment may be the best thing he has ever written. It is a dilemma. Go too slow and the bad guys may win. Go too fast, and you may become the bad guys. For this problem, the difference between "good" and "bad" has nothing to do with good intentions.

Comment author: timtyler 04 February 2011 09:36:16PM *  2 points

Another analysis is that there are at least two types of possible problem:

  • One is the "runaway superintelligence" problem - which the SIAI seems focused on;

  • Another type of problem involves the preferences of only a small subset of humans being respected.

The former problem has potentially more severe consequences (astronomical waste), but an engineering error of that kind seems pretty unlikely - at least to me.

The latter problem could still have some pretty bad consequences for many people, and seems much more probable - at least to me.

In a resource-limited world, too much attention on the first problem could easily contribute to running into the second problem.