HonoreDB comments on Rationality Quotes: December 2010 - Less Wrong

6 Post author: Tiiba 03 December 2010 03:23AM


Comment author: HonoreDB 15 December 2010 09:51:22AM 5 points [-]

Never trust anything that can think for itself if you can't see where it keeps its brain.

--J. K. Rowling, Harry Potter and the Chamber of Secrets

Comment author: Eliezer_Yudkowsky 15 December 2010 10:34:22AM 10 points [-]

I can't help but ask whether you've ever found this advice personally useful, and if so, how.

Comment author: Nentuaby 18 December 2010 02:12:32AM 6 points [-]

A much more concrete example is cloud computing. Granted, computers don't "think," but it's a close enough analogy.

You must always keep in mind that there is no magic "cloud", only concrete machines that other people own and keep hidden from you. People who might have very different ideas than you on such matters as, for example, privacy rights.

Comment author: MBlume 15 December 2010 09:53:27PM 13 points [-]

Actually my first thought upon reading that was "follow the improbability" -- be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can't see the source of the optimization pressure.

Comment author: bcoburn 15 December 2010 09:24:39PM 3 points [-]

The reasonable way to interpret this seems to be "don't trust something you don't understand/cannot predict." Not sure how seeing where it keeps its brain helps with that, though.

Comment author: HonoreDB 15 December 2010 10:33:52PM 1 point [-]

This is the allusion I had in mind, but actually I've had occasion to quote this when talking about corporations and similar institutions. If an organization doesn't keep its brain inside a human skull (though I'm sure some do), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (it certainly has been for me more than once).

Comment author: Larks 15 December 2010 04:28:46PM 3 points [-]

Telemarketers.

Comment author: xamdam 15 December 2010 10:50:02AM 4 points [-]

Never trust another computational agent unless you can see its source code?

Comment author: [deleted] 15 December 2010 01:40:36PM 1 point [-]

Never trust other thinking beings if you don't know the location of their intelligence center so that you can destroy it if necessary?

Comment author: waitingforgodel 15 December 2010 11:33:18AM 1 point [-]

Never trust anyone unless you're talking in person? :p

Comment author: topynate 15 December 2010 02:13:53PM 0 points [-]

Talking to Clippy? As in, I don't.

Comment author: Clippy 15 December 2010 04:44:41PM 0 points [-]

Why not?

Comment author: ata 15 December 2010 03:59:03PM 0 points [-]

That is racist against entities that think with things other than what we'd call brains.

Comment author: wedrifid 16 December 2010 10:16:06AM 4 points [-]

That is racist against entities that think with things other than what we'd call brains.

Don't you mean sexist? ;)

Comment author: nshepperd 18 December 2010 02:44:06AM 5 points [-]

Come now, that was below the belt.

Comment author: [deleted] 15 December 2010 04:51:49PM 3 points [-]

It isn't racist, it's realistic. If an entity thinks with something that we don't even call a brain, we shouldn't trust it because we have no way of knowing its motivations.

Clippy is a perfect example. How can I trust it to be a paperclip maximizer rather than an entity that merely claims to be one? (I estimate that over 50% of LessWrong members do not.) If Clippy were human, I could easily assess whether it is telling the truth (in this particular instance, the answer would probably be "no", because most humans I know do not make very good paperclip maximizers). If Clippy is not human, then I have no way to judge which points in mindspace would make its actions most likely.

Comment author: wedrifid 16 December 2010 10:18:54AM *  2 points [-]

It isn't racist, it's realistic.

That category of things that we call racist does not exclude things simply because they are realistic. Political correctness isn't about being fair.

Comment author: [deleted] 16 December 2010 06:34:45PM 1 point [-]

I would actually call a statement racist if it's primarily justified by racism (in which case it will be realistic only if it happens to be so accidentally). Since "racist" has a lot of negative connotations, it isn't useful to call something racist if you plan to agree with it; so if I had to make a racially-based realistic statement, I'd call it something dumb like a "racially-based realistic statement."

Comment author: ata 15 December 2010 05:28:28PM *  11 points [-]

It isn't racist, it's realistic. If an entity thinks with something that we don't even call a brain, we shouldn't trust it because we have no way of knowing its motivations.

Yes, but it says "never trust", not "don't trust by default". It should be possible for non-brain-based beings to demonstrate their trustworthiness.

Edit: Also, you can't spell "REALISTIC" without "RACIST LIE". Proof by anagram. So there.

Comment author: wedrifid 16 December 2010 10:26:09AM 0 points [-]

Yes, but it says "never trust", not "don't trust by default". It should be possible for non-brain-based beings to demonstrate their trustworthiness.

If we were going to be technical we'd have to start by considering whether or not race is involved at all. It is potentially prejudiced, but not racist.

Comment author: TheOtherDave 15 December 2010 05:04:20PM 4 points [-]

Talk about underconfidence!

I estimate a 99.9+% likelihood that nobody on this site trusts Clippy to be a paperclip maximizer.

In fact, I'm pretty much incorrigible on this point... that is, I estimate the likelihood that people will misstate their beliefs about Clippy to be significantly higher than the likelihood that they actually trust Clippy to be a paperclip maximizer.

I do understand that this is epistemically problematic, and I sort of wish it weren't so... I don't like to enter incorrigible states... but there it is.

Comment author: [deleted] 15 December 2010 07:20:09PM 0 points [-]

What is your estimation of the likelihood that I was understating my beliefs about Clippy?

Comment author: TheOtherDave 15 December 2010 08:56:11PM 0 points [-]

You haven't actually stated any beliefs about Clippy; you stated a belief about the readership of Less Wrong.

Regarding your beliefs about Clippy: as I said, I am incorrigibly certain that you believe Clippy to be human.

As for the likelihood that you were understating your beliefs about LW readers... hm. I don't have much of a model of you, but treating LW-members as a reference class, I'd give that ~85% confidence.

The remaining ~15% is mostly that you weren't understating them so much as not bothering to think explicitly about them at all, and used "over 50%" as a generic cached formula for "more confident than not." Arguably that's a distinction that makes no difference.

I estimate the likelihood that you actually disagree with me about LW readers, upon thinking about it, as ~0%.

Comment author: Clippy 15 December 2010 04:46:56PM 1 point [-]

Or a suggestion to generalize the concept of a "brain" for non-biological intelligences, such as paperclip maximizers.