Eliezer_Yudkowsky comments on Rationality Quotes: December 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I can't help but ask whether you've ever found this advice personally useful, and if so, how.
A much more concrete example is cloud computing. Granted, computers don't "think," but it's a close enough analogy.
You must always keep in mind that there is no magic "cloud", only concrete machines that other people own and keep hidden from you. People who might have very different ideas from yours on such matters as, for example, privacy rights.
Actually my first thought upon reading that was "follow the improbability" -- be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can't see the source of the optimization pressure.
The reasonable way to interpret this seems to be "don't trust something you don't understand/cannot predict." Not sure how seeing where it keeps its brain helps with that, though.
This is the allusion I had in mind, but I've actually had occasion to quote this when talking about corporations and similar institutions. If an organization doesn't keep its brain inside a human skull (and I'm sure some do), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (it certainly has been for me, more than once).
Telemarketers.
Never trust another computational agent unless you can see its source code?
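The "trust an agent whose source code you can see" idea has a toy formalization sometimes called program equilibrium: agents exchange source code and cooperate only with agents whose code they recognize. A minimal sketch (illustrative only, not from the thread; the agent name and strategy are assumptions) could look like:

```python
# A "clique bot": an agent that is handed its opponent's source code
# and cooperates ("C") only if that source is byte-for-byte identical
# to its own; otherwise it defects ("D").
CLIQUE_BOT_SOURCE = '''
def clique_bot(opponent_source):
    return "C" if opponent_source == CLIQUE_BOT_SOURCE else "D"
'''

# Define the agent from its own source string, so it can compare
# opponents against exactly the code it runs.
exec(CLIQUE_BOT_SOURCE)

print(clique_bot(CLIQUE_BOT_SOURCE))    # "C": cooperates with an exact copy
print(clique_bot("def rival(): pass"))  # "D": defects against anything else
```

The point of the sketch is the asymmetry the commenter is gesturing at: with source access, trust can be verified mechanically; without it, you're back to guessing at hidden internals.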
Never trust other thinking beings if you don't know the location of their intelligence center so that you can destroy it if necessary?
Never trust anyone unless you're talking in person? :p
Talking to Clippy? As in, I don't.
Why not?