David_Gerard comments on A belief propagation graph - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You mean, applying it to something important but uncontroversial like global warming? ;-)
I find it hard to think of an issue that's both important enough to think about and well-known enough to discuss that won't be controversial. (I wouldn't class AI risk as well-known enough to be controversial except locally - it's all but unknown in general, except on a Hollywood level.)
None at all? I thought one of the first steps to rationality in corrupt human hardware was the realisation "I am a stupid evolved ape and do all this stuff too." Humans are made of cognitive biases, the whole list. And it's a commonplace that everyone sees cognitive biases in other people but not themselves until they get this idea.
(You are way, way smarter than me, but I still don't find a claim of freedom from the standard cognitive bias list credible.)
It strikes me as particularly important, when building long inductive chains on topics that lack mathematical rigour, to explicitly and seriously run the cognitive-biases check at each step, not just as a tick-box item at the start. That's the message that box with the wide arrow sends me.
My point was that when introducing a new idea, the initial examples ought to be optimized to clearly illustrate the idea, not for "important to discuss".
I guess you could take my statement as an invitation to tell me the biases that I'm overlooking. :) See also this explicit open invitation.
On the assumption that you're a human, I don't feel the burden of proof is on me to demonstrate that you are cognitively similar to humans in general.
Good point, I should at least explain why I don't think the particular biases Dmytry listed apply to me (or at least probably apply to a much lesser extent than to his "intended audience").
I don't think you are biased. It has become somewhat taboo within rationality circles to claim that one is less biased than other people. I doubt many people on Less Wrong are easily prone to most of the usual biases. I think it would be more important to examine possible new kinds of artificial biases, like stating "politics is the mind killer" as if it were some sort of incantation or confirmation that one is part of the rationality community, to name a minor example.
A more realistic bias when it comes to AI risks would be the question of how much of your worries are socially influenced versus the result of personal insight, real worries about your future, and true feelings of moral obligation. In other words, how much of it is based on the idea that "if you are a true rationalist you have to worry about risks from AI" versus "it is rational to worry about risks from AI"? (Note: I am not trying to claim anything here, just trying to improve Dmytry's list of biases.)
Think about it this way. Imagine a counterfactual world where you studied AI and received money to study reinforcement learning or some other related subject. Further imagine that SI/LW did not exist in this world, nor any similar community that treats 'rationality' in the same way. Do you think that you would worry a lot about risks from AI?
I started worrying about AI risks (or rather the risks of a bad Singularity in general) well before SI/LW. Here's a 1997 post:
You can also see here that I was strongly influenced by Vernor Vinge's novels. I'd like to think that if I had read the same ideas in a dry academic paper, I would have been similarly affected, but I'm not sure how to check that, or whether, if I hadn't been, that would have been the more rational response.
I read that box as meaning "the list of cognitive biases" and took the listing of a few as meaning "don't just go 'oh yeah, cognitive biases, I know about those so I don't need to worry about them any more', actually think about them."
Full points for having thought about them, definitely - but explicitly considering yourself immune to cognitive biases strikes me as ... asking for trouble.
You read fiction, and some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.
You assume zero bias? See, the issue is that I don't think you have a whole lot of signal getting through the graph of unknown blocks. Consequently, any residual biases could win the battle.
Maybe a small bias, considering that society is full of religious people.
I didn't notice your 'we' including the AI in the origin of that thread, so there is at least a little of this bias.
Yes. I am not listing only the biases that are in favour of AI risk. Fiction, for instance, can bias both for and against, depending on the choice of fiction.
But how small is it compared to the signal?
It is not about the absolute values of the biases; it is about their values relative to the reasonable signal you could get here.
Probably quite a few biases that have been introduced by methods of rationality that provably work given unlimited amounts of resources but which exhibit dramatic shortcomings when used by computationally bounded agents.
Unfortunately I don't know what the methods of rationality are for computationally bounded agents, or I'd use them instead. (And it's not for lack of effort to find out either.)
So failing that, do you think studying decision theories that assume unlimited computational resources has introduced any specific biases into my thinking that I've failed to correct? Or any other advice on how I can do better?
Let me answer with a counter-question. Do you think that studying decision theories increased your chance of "winning"? If yes, then there you go. Because I haven't seen any evidence that it is useful, or will be useful, beyond the realm of philosophy. And most of it will probably be intractable or useless even for AIs.
That's up to how you define "winning". If you define "winning" in relation to "solving risks from AI", then it will be almost impossible to do better. The problem is that you don't know what to anticipate, because you don't know the correct time frame and you can't tell how difficult any possible subgoals are. That uncertainty allows you to retrospectively claim that any failure is not because your methods are suboptimal but because the time hasn't come or the goals were much harder than you could possibly have anticipated, and thereby fool yourself into thinking that you are winning when you are actually wasting your time.
For example: 1) taking ideas too seriously; 2) believing that you can approximate computationally intractable methods and use them under real-life circumstances or to judge predictions like risks from AI; 3) believing in the implied invisible without appropriate discounting.
A part of me wants to be happy, comfortable, healthy, respected, not work too hard, not bored, etc. Another part wants to solve various philosophical problems "soon". Another wants to eventually become a superintelligence (or help build a superintelligence that shares my goals, or the right goals, whichever makes more sense), with as much resources under my/its control as possible, in case that turns out to be useful. I don't know how "winning" ought to be defined, but the above seem to be my current endorsed and revealed preferences.
Well, I studied it in order to solve some philosophical problems, and it certainly helped for that.
I don't think I've ever claimed that studying decision theory is good for making oneself generally more effective in an instrumental sense. I'd be happy as long as doing it didn't introduce some instrumental deficits that I can't easily correct for.
Suboptimal relative to what? What are you suggesting that I do differently?
I do take some ideas very seriously. If we had a method of rationality for computationally bounded agents, it would surely do the same. Do you think I've taken the wrong ideas too seriously, or have spent too much time thinking about ideas generally? Why?
Can you give some examples where I've done 2 or 3? For example, here's what I've said about AI risks:
Do you object to this? If so, what should I have said instead?
This comment of yours, among others, gave me the impression that you take ideas too seriously.
You wrote:
This is fascinating for sure. But if you have a lot of confidence in such reasoning then I believe you do take ideas too seriously.
I agree with the rest of your comment and recognize that my perception of you was probably flawed.
Yeah, that was supposed to be a joke. I usually use smiley faces when I'm not being serious, but thought the effect of that one would be enhanced if I "kept a straight face". Sorry for the confusion!
I see, my bad. Until now I believed I was usually pretty good at detecting when someone is joking. But given what I have encountered on Less Wrong in the past, including serious treatments and discussions of the subject, I thought you actually meant what you wrote there. Although now I am not so sure anymore whether people were actually serious on those other occasions :-)
I am going to send you a PM with an example.
Under normal circumstances I would actually regard the following statements by Ben Goertzel as sarcasm:
or
I guess what I encountered here messed up my judgement by going too far in suppressing the absurdity heuristic.
The absurd part was supposed to be that Ben actually came close to building an AGI in 2000. I thought it would be obvious that I was making fun of him for being grossly overconfident.
BTW, I think some people around here do take ideas too seriously, and reports of nightmares probably weren't jokes. But then I probably take ideas more seriously than the average person, and I don't know on what grounds I can say that they take ideas too seriously, whereas I take them just seriously enough.
I'm pretty sure the bit about the stock market crash was a joke.
To be fair I think Wei_Dai was being rather whimsical with respect to the anthropic tangent!
Not a new idea. Basic planning of effort. Suppose I am trying to predict how much income a new software project will bring, knowing that I have bounded time for making this prediction, much shorter than the time it takes to produce the software that is to make the income. This rules out a direct rigorous estimate, leaving you with 'look at available examples of similar projects, do a couple of programming contests to see if you're up to the job', etc. Perhaps I should have used this as the example, but some abstract corporate project does not make people think concrete thoughts. Most awfully, even when the abstract corporate project is a company of their own (those are known as failed startup attempts).
Do you define rationality as winning? Then it is a maximize-winning-in-limited-computational-time task (perhaps win per unit time, or something similar). That requires effort planning that takes into account the time it takes to complete the effort. Jumping on an approximation to the most rigorous approach you can think of is cargo cult, not rationality. Bad approximations to good processes are usually entirely ineffective. And on the 'approximation' of the hard path, there are so many unknowns as to make those approximations entirely meaningless, regardless of whether they are 'biased' or not.
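The "win per unit time" point can be made concrete as a toy calculation: given a bounded time budget for making a prediction, pick the prediction method with the best expected accuracy among those that fit the budget. A minimal sketch, where every method name, time cost, and accuracy figure is made up purely for illustration:

```python
# Toy effort-planning sketch: choose the best prediction method
# that fits within a bounded time budget. All figures below are
# hypothetical illustrations, not real estimates.
methods = [
    # (name, time cost in days, expected accuracy in [0, 1])
    ("rigorous bottom-up income model", 200, 0.8),
    ("reference class of similar projects", 5, 0.6),
    ("gut feeling", 0.1, 0.3),
]

budget_days = 10  # time available for the prediction itself

# Discard methods that cost more time than we have, then take
# the most accurate of what remains.
feasible = [m for m in methods if m[1] <= budget_days]
best = max(feasible, key=lambda m: m[2])
print(best[0])  # prints: reference class of similar projects
```

The rigorous model "wins" on accuracy but is infeasible within the budget, so the cheap reference-class estimate is the rational choice, which is the comment's point about ruling out the direct rigorous estimate.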
Also, having fiction as a source of bias brings in all the other biases, because fiction is written to entertain and is biased by design. On top of that, fiction is people working hard to find a hypothesis to privilege. A hypothesis can easily be privileged at odds of 1 to 10^100 or worse when you are generating something (see religion).
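To put a number on that last figure: a hypothesis at odds of 1 to 10^100 against would need roughly log2(10^100) ≈ 332 bits of evidence just to reach even odds. A minimal sketch of that arithmetic (the odds value is taken from the comment above, purely as an illustration):

```python
import math

def bits_to_even_odds(prior_odds):
    """Bits of evidence needed to raise a hypothesis at
    `prior_odds` (e.g. 1e-100 means odds of 1 : 10^100 against)
    up to even odds (1 : 1)."""
    return -math.log2(prior_odds)

# A hypothesis privileged at 1 to 10^100:
bits = bits_to_even_odds(1e-100)
print(round(bits))  # prints: 332
```

So promoting such a hypothesis to serious attention, without that much evidence, is exactly the privileging the comment describes.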