jacob_cannell comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 points | Post author: lukeprog | 01 February 2011 02:15PM


Comment author: Perplexed 01 February 2011 05:55:05PM * 3 points

I think it is important to keep in mind that the approach currently favored here, in which your choice of metaethics guides your choice of decision theory, and in which your decision theory justifies your metaethics (in a kind of ouroborean epiphany of reflective equilibrium), is only one possible research direction.

There are other approaches that might be fruitful. In fact, it is far from clear to many people that the problem of preventing uFAI involves moral philosophy at all. (ETA: Or decision theory.)

To a small group, it sometimes appears that the only way of making progress is to maintain a narrow focus and to ruthlessly prune research subtrees as soon as they fall out of favor. But pruning in this way is gambling: it is an act of desperation by people made frantic by the ticking of the clock.

My preference (which may turn out to be a gamble too) is to ignore the ticking and to search the tree carefully with the help of a large, well-trained army of researchers.

Comment author: jacob_cannell 02 February 2011 07:09:51AM 2 points

Much depends, of course, on the amount of time we have available. If the market progresses to AGI on its own in 10 years, our energies are probably best spent on a narrow set of practical alternatives.

If we have a hundred years, then perhaps we can afford to entertain several new generations of philosophers.

Comment author: Vladimir_Nesov 04 February 2011 10:37:08AM * 1 point

If the market progresses to AGI on its own in 10 years, our energies are probably best spent on a narrow set of practical alternatives.

But the problem itself seems to suggest that if you don't solve it on its own terms, and instead try to mitigate the practical difficulties, you still lose completely. AGI is a universe-exploding A-bomb which the mad scientists are about to test experimentally in a few decades; you can't improve the outcome by building better shelters (or a better casing for the bomb).