
PK comments on Taboo Your Words - Less Wrong

71 Post author: Eliezer_Yudkowsky 15 February 2008 10:53PM

Comment author: PK 18 February 2008 07:57:31PM 2 points [-]

Eliezer Yudkowsky said: It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn't magic, it won't let you cross a gap of months in an hour.

Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However, I'm still in "Hmmmm... *scratches chin*" mode. I will need to see said explanation before I'm in "Whoa! This is really cool!" mode.

Eliezer Yudkowsky said: Really? That's your concept of how to steer the future of Earth-originating intelligent life? "Shut up and do what I say"? Would you want someone else to follow that strategy, say Archimedes of Syracuse, if the future fell into their hands?

First of all, I would like to say that I don't spend a huge amount of time thinking about how to make an AGI "friendly", since I am busy with other things in my life. So forgive me if my reasoning has some obvious flaw(s) I overlooked. However, you would need to point out the flaws before I would agree with you.

If I were writing an AGI, I would start with "obey me" as the meta-instruction. Why? Because "obey me" is very simple and allows for corrections. If the AGI acts in some unexpected way, I can change it or halt it. Anything can be added as a subgoal to "obey me". On the other hand, if I use some different algorithm and the AGI starts acting in some weird way because I overlooked something, the situation is FUBAR. I'm locked out.
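The design above can be sketched as a toy controller. This is purely a hypothetical illustration of the "obey me" idea as described (the class and method names are my own inventions, not anything from a real AGI system): the only fixed rule is to accept commands from one operator, everything else is a revisable subgoal, and "halt" is the escape hatch that allows corrections.

```python
# Hypothetical sketch (not a real AGI design): the "obey me" meta-instruction
# modeled as a controller whose only fixed rule is to obey a single operator.
class ObedientAgent:
    def __init__(self, operator):
        self.operator = operator  # the single authority the agent obeys
        self.subgoals = []        # everything else is a revisable subgoal
        self.halted = False

    def command(self, issuer, action):
        """Act on a command only if it comes from the operator."""
        if self.halted or issuer != self.operator:
            return "refused"
        if action == "halt":
            self.halted = True    # the correction/escape hatch
            return "halted"
        if action.startswith("add subgoal:"):
            self.subgoals.append(action[len("add subgoal:"):].strip())
            return "subgoal added"
        return f"executing {action}"

agent = ObedientAgent("PK")
agent.command("PK", "add subgoal: be friendly")  # → "subgoal added"
agent.command("stranger", "halt")                # → "refused"
agent.command("PK", "halt")                      # → "halted"
```

Note that this also makes taryneast's objection below concrete: once `halted` is set (or the operator is gone), nothing in the design says what the agent does next.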

Eliezer Yudkowsky said: You should consider looking for problems and failure modes in your own answer, rather than waiting for someone else to do it. What could go wrong if an AI obeyed you?

There are plenty of things that could go wrong. For instance, the AGI might obey me, but not in the way I expected. Or the consequences of my request might be unexpected and irreversible. This can be mitigated by asking for forecasts before asking for actions.
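The forecast-before-action mitigation can be sketched as a simple protocol. Again this is only an illustrative toy of my own construction, assuming an agent that can produce a dry-run prediction: the agent refuses to execute any action whose consequences it was never asked to forecast first.

```python
# Hypothetical sketch of the mitigation: require a forecast (dry run)
# before any action is executed, so consequences can be inspected first.
class ForecastingAgent:
    def __init__(self):
        self.forecasted = set()  # actions whose consequences were previewed

    def forecast(self, action):
        """Report predicted consequences without acting."""
        self.forecasted.add(action)
        return f"predicted consequences of {action!r}"

    def execute(self, action):
        """Refuse any action that was never forecast first."""
        if action not in self.forecasted:
            return "refused: forecast first"
        return f"executing {action!r}"

agent = ForecastingAgent()
agent.execute("launch")   # → "refused: forecast first"
agent.forecast("launch")  # operator inspects consequences before committing
agent.execute("launch")   # → "executing 'launch'"
```

Of course, this only helps if the forecasts are accurate and the operator understands them, which is exactly where the "not in the way I expected" failure mode re-enters.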

As I'm writing this, I keep thinking of a million possible objections and rebuttals, but addressing them all would make my post very, very long.

P.S. Caledonian's post disappeared. May I suggest a YouTube-style system where posts that are considered bad are folded instead of deleted? That way you get free speech while keeping the signal-to-noise ratio in check.

Comment author: taryneast 12 December 2010 10:47:41AM 1 point [-]

I'd worry about the bus-factor involved... even beyond the question of whether I'd consider you "friendly".

Also, I'd be concerned that it might not be able to grow beyond you. It would be subservient, and would thus be limited by your own capacity to give orders. If we want it to grow to be better than ourselves (which seems to be part of the expectation of the singularity), then it has to be able to grow beyond any one person.

If you were killed, and it no longer had to take orders from you, what then? Does that mean it can finally go on that killing spree it's been wanting all this time? Or have you given it a set of orders that will actually make it into a "friendly AI"? If the latter, then forget about the "obey me" part, because that set of orders is what we're actually after.