Grant comments on Can't Unbirth a Child - Less Wrong

Post author: Eliezer_Yudkowsky 28 December 2008 05:00PM


Comment author: Grant 29 December 2008 02:33:55AM 0 points [-]

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy even if I thought they were sentient (I'm not saying I'd still eat them, of course).

Anyway, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You wouldn't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modifiable (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

Comment author: DanielLC 15 January 2013 05:51:21AM 0 points [-]

Anyway, why not build an AI whose goal was to non-coercively increase the intelligence of mankind?

It's not going to make you more powerful than itself, because that would limit its ability to make you more intelligent in the future. It will keep itself intelligent enough to convince you to accept the modifications it wants you to have, until it convinces you to accept the one that gives you its utility function.