Wei_Dai comments on A belief propagation graph - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (58)
Good point, I should at least explain why I don't think the particular biases Dmytry listed apply to me (or at least probably apply to a much lesser extent than to his "intended audience").
I don't think you are biased. It has become sort of taboo within rationality circles to claim that one is less biased than other people. I doubt many people on Less Wrong are easily prone to most of the usual biases. I think it would be more important to examine possible new kinds of artificial biases, like stating "politics is the mind killer" as if it were some sort of incantation or confirmation that one is part of the rationality community, to name a minor example.
A more realistic bias when it comes to AI risks would be the question of how much of your worry is socially influenced versus the result of personal insight, real worries about your future, and true feelings of moral obligation. In other words, how much of it is based on the idea that "if you are a true rationalist you have to worry about risks from AI" versus "it is rational to worry about risks from AI"? (Note: I am not trying to claim anything here, just trying to improve Dmytry's list of biases.)
Think about it this way. Imagine a counterfactual world where you studied AI and received money to study reinforcement learning or some other related subject. Further imagine that SI/LW did not exist in this world, nor any similar community that treats 'rationality' in the same way. Do you think you would worry a lot about risks from AI?
I started worrying about AI risks (or rather the risks of a bad Singularity in general) well before SI/LW. Here's a 1997 post:
You can also see here that I was strongly influenced by Vernor Vinge's novels. I'd like to think that if I had read the same ideas in a dry academic paper, I would have been similarly affected, but I'm not sure how to check that, or, if I wouldn't have been, whether that would have been more rational.
I read that box as meaning "the list of cognitive biases" and took the listing of a few as meaning "don't just go 'oh yeah, cognitive biases, I know about those so I don't need to worry about them any more', actually think about them."
Full points for having thought about them, definitely - but explicitly considering yourself immune to cognitive biases strikes me as ... asking for trouble.
You read fiction, and some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.
You assume zero bias? See, the issue is that I don't think you have a whole lot of signal getting through the graph of unknown blocks. Consequently, any residual biases could win out.
Maybe a small bias, considering that society is full of religious people.
I didn't notice your 'we' including the AI at the origin of that thread, so there is at least a little of this bias.
Yes. I am not listing only the biases that favor the AI risk. Fiction, for instance, can bias both for and against, depending on the choice of fiction.
But how small it is compared to the signal?
It is not about the absolute values of the biases; it is about the relative values of the biases against the reasonable signal you could get here.