wedrifid comments on What I've learned from Less Wrong - Less Wrong
Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous "how could they be that stupid?" Because, of course, it contains an implicit "I could never be that stupid" and "poor victim, I am of course far more rational". All this really means is that your mind - considered as a general-purpose operating system that runs memes - does not have that particular vulnerability.
I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn't any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.
My message is: it can happen to you, and thinking it can't happen to you is more dangerous than having no defences at all. Here are some defences against the dark arts.
[That's the thing I'm working on. Thankfully, the commonest delusion seems to be "it can't happen to me", so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinking.]
This sort of thing makes me hope that the friendly AI designers are thinking like OpenBSD-level security researchers. And frankly, they need Bruce Schneier and Ed Felten and Dan Bernstein and Theo de Raadt on the job. We can't design a program not to have bugs - only not to have the ones we know about. As a subset of that, we can't design a constructed intelligence not to have cognitive biases - only not to have the ones we know about. And predatory memes evolve, rather than being designed from scratch. I'd just like you to picture a superintelligent AI catching the superintelligent equivalent of Scientology.
With the balancing message: some people are a lot less vulnerable to believing bullshit than others. For many on Less Wrong, their brains are biased, relative to the population, towards devoting resources to bullshit prevention at the expense of engaging in optimal signalling. For these people, actively focusing on second-guessing themselves is a dangerous waste of time and effort.
Sometimes you are just more rational, and pretending that you are not is humble but neither rational nor practical.
I can see that I've failed to convince you and I need to do better.
In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that", and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but there it is.
You really aren't running OpenBSD while those less rational people run Windows.
I do think being able to make such statements of confidence in one's immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience - but that isn't what you said.
I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that has actually put it to the test?
Put it this way: I have knowledge and experience of this stuff, and I still bother second-guessing myself.
(I can see that this bit is going to have to address the standard objection more.)
This is a failure mode common when other-optimising. You assume that I need to be persuaded, put that as the bottom line, and then work from there. There is no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don't even have significant familiarity with the wedrifid online persona, let alone me.
It isn't so much that I dislike what you are saying as that it seems trivial and poorly calibrated to the context. Are you really telling a Less Wrong frequenter that they may have security holes, as though you are making some kind of novel suggestion that could trigger insecurity or offence?
I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is helpful, and under-confidence is a failure of epistemic rationality even if it is encouraged socially. This is a point that you either do not understand or have been careful to avoid acknowledging for the purpose of presenting your position.
I would be more inclined to answer such questions if they didn't come with explicitly declared rhetorical intent.
No, I'm actually interested in knowing. If "nothing", say that.