
Eliezer_Yudkowsky comments on Nonsentient Optimizers - Less Wrong

Post author: Eliezer_Yudkowsky 27 December 2008 02:32AM



Comment author: Eliezer_Yudkowsky 28 December 2008 03:05:41PM 5 points

@Vassar: That which is popularly regarded in philosophy as a "mountain" is a foothill of AI. Of course there can be individual philosophers who've already climbed to the top of the "mountain" and moved on; the problem is that the field of philosophy as a whole is not strong enough to notice when an exceptional individual has solved a problem, or perhaps it has no incentive to declare the problem solved rather than treating an unsolvable argument "as a biscuit bag that never runs out of biscuits".

@Tyler: CEV runs once on a collection of existing humans and then overwrites itself; it has no need to consider cyborgs, and can afford to be inclusive with respect to Terri Schiavo or cryonics patients.