Eliezer_Yudkowsky comments on Rationality Quotes: December 2010 - Less Wrong

Post author: Tiiba 03 December 2010 03:23AM

Comment author: Eliezer_Yudkowsky 15 December 2010 10:34:22AM 10 points

I can't help but ask whether you've ever found this advice personally useful, and if so, how.

Comment author: xamdam 15 December 2010 10:50:02AM 4 points

Never trust another computational agent unless you can see its source code?

Comment author: waitingforgodel 15 December 2010 11:33:18AM 1 point

Never trust anyone unless you're talking in person? :p

Comment author: [deleted] 15 December 2010 01:40:36PM 1 point

Never trust other thinking beings if you don't know the location of their intelligence center so that you can destroy it if necessary?

Comment author: topynate 15 December 2010 02:13:53PM 0 points

Talking to Clippy? As in, I don't.

Comment author: Clippy 15 December 2010 04:44:41PM 0 points

Why not?

Comment author: Larks 15 December 2010 04:28:46PM 3 points

Telemarketers.

Comment author: bcoburn 15 December 2010 09:24:39PM 3 points

The reasonable way to interpret this seems to be "don't trust something you don't understand or can't predict." Not sure how seeing where it keeps its brain helps with that, though.

Comment author: MBlume 15 December 2010 09:53:27PM 13 points

Actually my first thought upon reading that was "follow the improbability" -- be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can't see the source of the optimization pressure.

Comment author: HonoreDB 15 December 2010 10:33:52PM 1 point

This is the allusion I had in mind, but actually I've had occasion to quote this when talking about corporations and similar institutions. If an organization doesn't keep its brain inside a human skull (and I'm sure some do), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (it certainly has been for me, more than once).

Comment author: Nentuaby 18 December 2010 02:12:32AM 6 points

A much more concrete example is cloud computing. Granted, computers don't "think," but it's a close enough analogy.

You must always keep in mind that there is no magic "cloud": only concrete machines that other people own and keep hidden from you. People who might have very different ideas than you about matters such as privacy rights.
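One practical upshot of that point: even if you can't see the machines holding your data, you can at least detect when they've tampered with it. A minimal sketch in Python's standard library, where you keep an authentication key locally and attach an HMAC tag to anything you upload (the key and helper names here are hypothetical, not from any particular service):

```python
import hmac
import hashlib

# Hypothetical key, held locally and never uploaded to the "cloud".
SECRET_KEY = b"key-you-never-upload"

def seal(data: bytes) -> tuple[bytes, str]:
    """Compute a tag over data using the local key, before uploading both."""
    tag = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    return data, tag

def verify(data: bytes, tag: str) -> bool:
    """Check that the remote side returned the data unmodified."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

data, tag = seal(b"my notes")
assert verify(data, tag)             # untouched data passes
assert not verify(b"my n0tes", tag)  # silently altered data fails
```

This only covers integrity, not secrecy; keeping the contents hidden from the machines' owners would additionally require encrypting client-side before upload.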