
cousin_it comments on Crazy Ideas Thread, Aug. 2015 - Less Wrong Discussion

7 Post author: polymathwannabe 11 August 2015 01:24PM


Comment author: cousin_it 12 August 2015 01:47:14PM *  18 points [-]

This is a crazy idea that I'm not at all convinced about, but I'll go ahead and post it anyway. Criticism welcome!

Rationality and common sense might be bad for your chances of achieving something great, because you need to irrationally believe that it's possible at all. That might sound obvious, but such idealism can make the difference between failure and success even in science, and even at the highest levels.

For example, Descartes and Leibniz saw the world as something created by a benevolent God and full of harmony that can be discovered by reason. That's a very irrational belief, but they ended up making huge advances in science by trying to find that harmony. In contrast, their opponents Hume, Hobbes, Locke etc. held a much more LW-ish position called "empiricism". They all failed to achieve much outside of philosophy, arguably because they didn't have a strong irrational belief that harmony could be found.

If you want to achieve something great, don't be a skeptic about it. Be utterly idealistic.

Comment author: hosford42 12 August 2015 09:39:17PM 4 points [-]

In brainstorming, a common piece of advice is to let down your guard and just let the ideas flow without any filters or critical thinking, and then follow up with a review to select the best ones rationally. The concept here is that your brain has two distinct modes of operation, one for creativity and one for analysis, and that they don't always play well together, so by separating their activities you improve the quality of your results. My personal approach mirrors this to some degree: I rapidly alternate between these two modes, starting with a new idea, then finding a problem with it, then proposing a fix, then finding a new problem, etc. Mutation, selection, mutation, selection... Evolution, of a sort.
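The rapid alternation between creative and critical modes described above can be sketched as a tiny hill-climbing loop. The `quality` landscape, step size, and round count below are purely illustrative assumptions, not anything from the comment:

```python
import random

def alternate_modes(score, start, rounds=500, step=0.1):
    """Alternate between a generative step (propose a mutation of the
    current idea) and a critical step (keep it only if it scores better)."""
    best = start
    for _ in range(rounds):
        candidate = best + random.gauss(0, step)   # creative mode: propose
        if score(candidate) > score(best):         # analytic mode: select
            best = candidate
    return best

# Toy "idea quality" landscape with a single peak at x = 3.
def quality(x):
    return -(x - 3) ** 2

result = alternate_modes(quality, start=0.0)
```

Separating the two modes entirely (generate everything first, filter afterwards) would be classic brainstorming; interleaving them per candidate, as here, is the rapid alternation the comment describes.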

Understanding is an interaction between an internal world model and observable evidence. Every world model contains "behind the scenes" components which are not directly verifiable and which serve to explain the more superficial phenomena. This is a known requirement to be able to model a partially observable environment. The irrational beliefs of Descartes and Leibniz which you describe motivated them to search for minimally complex indirect explanations that were consistent with observation. The empiricists were distracted by an excessive focus on the directly verifiable surface phenomena. Both aspects, however, are important parts of understanding. Without intangible behind-the-scenes components, it is impossible to build a complete model. But without the empirical demand for evidence, you may end up modeling something that isn't really there. And the focus on minimal complexity as expressed by their search for "harmony" is another expression of Occam's razor, which serves to improve the ability of the model to generalize to new situations.

A lot of focus is given to the scientific method's demand for empirical evidence of a falsifiable hypothesis, but very little emphasis is placed on the act of coming up with those hypotheses in the first place. You won't find any suggestions in most presentations of the scientific method as to how to create new hypotheses, or how to identify which new hypotheses are worth pursuing. And yet this creative part of the cycle is every bit as vital as the evidence gathering. Creativity is poorly understood compared to rationality, despite being one of the two pillars of scientific and technological advancement (verification with evidence being the other). By searching for harmony in nature, Descartes and Leibniz were engaging in a pattern-matching process, searching the hypothesis space for good candidates for scientific evaluation. They were supplying the fuel that runs the scientific method. With a surplus of fuel, you can go a long way even with a less efficient engine. You might even be able to fuel other engines, too.

I would love to see some meta-scientific research as to which variants of the scientific method are most effective. Perhaps an artificial, partially observable environment whose full state and function are known to the meta researchers could be presented to non-meta researchers, as objects of study, to determine which habits and methods are most effective for identifying the true nature of the artificial environment. This would be like measuring the effectiveness of machine learning algorithms on a set of benchmark problems, but with humans in place of the algorithms. (It would be great, too, if the social aspect of scientific research were included in the study, effectively treating the scientific community as a distributed learning algorithm.)
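A minimal sketch of the proposed benchmark, with the humans replaced by two toy "research strategies". Everything here is a hypothetical illustration: the environment (a hidden linear parameter seen only through noisy observations), the two strategies, and all names are assumptions, not anything from the comment:

```python
import random

random.seed(0)

def make_environment():
    """A partially observable 'world': the true parameter is hidden from
    the learner; only noisy observations at chosen inputs are available."""
    theta = random.uniform(-5, 5)                  # hidden ground truth
    def observe(x):
        return theta * x + random.gauss(0, 0.5)    # noisy measurement
    return theta, observe

def strategy_passive(observe, budget):
    """Spread observations over fixed points, then fit by least squares."""
    data = [(x, observe(x)) for x in [i / budget for i in range(1, budget + 1)]]
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def strategy_extreme(observe, budget):
    """Spend the whole budget at x = 1, where the signal is strongest."""
    return sum(observe(1.0) for _ in range(budget)) / budget

def benchmark(strategy, trials=200, budget=20):
    """Average |estimate - truth| over many freshly generated environments."""
    err = 0.0
    for _ in range(trials):
        theta, observe = make_environment()
        err += abs(strategy(observe, budget) - theta)
    return err / trials

err_passive = benchmark(strategy_passive)
err_extreme = benchmark(strategy_extreme)
```

`benchmark` plays the role of the meta-researchers, who know the full state and function of the environment; the strategies only ever see noisy observations, just as the non-meta researchers would.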

Comment author: Sarunas 12 August 2015 02:01:01PM *  4 points [-]

Am I correct to paraphrase you this way: maximizing EX and maximizing P(X > a) are two different problems?

Comment author: cousin_it 12 August 2015 02:09:16PM *  3 points [-]

Yeah, that's one part of it. Another part is that some irrational beliefs can be beneficial even on average, though of course you need to choose such beliefs carefully. Believing that the world makes sense, in the context of doing research, might be one such example. I don't know if there are others. Eliezer's view of Bayesianism ("yay, I've found the eternal laws of reasoning!") might be related here.

Comment author: [deleted] 12 August 2015 03:10:24PM 2 points [-]

What are the meanings of these symbols "EX", "P(X>a)"?

Comment author: cousin_it 12 August 2015 03:26:38PM *  7 points [-]

X is a random variable, E is expected value (a.k.a. average), P is probability. For example, if X is uniformly distributed between 0 and 1, then EX=0.5 and P(X>0.75)=0.25.

Sarunas is saying that some action might not affect the average value, but strongly affect the chances of getting a very high or very low value ("swing for the fences" so to speak). For example, if we define Y as X rounded to the nearest integer (i.e. Y=0 if X<0.5 and Y=1 if X>0.5), then EY=0.5 and P(Y>0.75)=0.5. The average of Y is the same as the average of X, but the probability of getting an extreme value is higher.
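A quick Monte Carlo check of those numbers (the sample size and seed are arbitrary choices):

```python
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]      # X ~ Uniform(0, 1)
ys = [0 if x < 0.5 else 1 for x in xs]        # Y = X rounded to nearest integer

ex = sum(xs) / N                              # estimates EX = 0.5
ey = sum(ys) / N                              # estimates EY = 0.5 (same average)
px = sum(x > 0.75 for x in xs) / N            # estimates P(X > 0.75) = 0.25
py = sum(y > 0.75 for y in ys) / N            # estimates P(Y > 0.75) = 0.5
```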

Comment author: username2 05 September 2015 11:08:13AM 0 points [-]

This is probably obvious to others, but it wasn't obvious to me that by paying 0.1 to go from the first game to the second one you both decrease your average earnings and increase the probability of high earnings.
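Spelling out the arithmetic behind that observation (reading "first game" as receiving X ~ Uniform(0, 1) and "second game" as paying a 0.1 fee to receive Y = round(X) instead, per the parent comments):

```python
from fractions import Fraction

fee = Fraction(1, 10)
e_game1 = Fraction(1, 2)        # E[X]
p_game1 = Fraction(1, 4)        # P(X > 0.75)
e_game2 = Fraction(1, 2) - fee  # E[Y] - fee = 0.4: lower average earnings
p_game2 = Fraction(1, 2)        # P(Y - fee > 0.75) = P(Y = 1): higher chance of a big payoff
```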

Comment author: btrettel 14 August 2015 03:10:10PM *  1 point [-]

Good point. It's worth noting that you can use Markov's inequality to relate the two.
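For nonnegative X, Markov's inequality bounds the tail by the mean: P(X >= a) <= EX / a. Checking it against the uniform example upthread:

```python
# X ~ Uniform(0, 1): E[X] = 0.5, and exactly P(X > 0.75) = 0.25.
a, ex = 0.75, 0.5
markov_bound = ex / a      # 0.5 / 0.75, about 0.667
actual_tail = 0.25
assert actual_tail <= markov_bound
```

The bound is loose here, which is typical: Markov's inequality uses only the mean, so it relates EX and P(X > a) without pinning either one down.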

Comment author: Squark 12 August 2015 07:59:07PM *  2 points [-]

I think it is more interesting to study how to be simultaneously supermotivated about your objectives and realistic about the obstacles. This probably requires some dark-arts techniques (e.g. compartmentalization). Personally, I find occasional mental invocations of quasireligious imagery useful.

Comment author: Gunnar_Zarncke 12 August 2015 10:56:05PM 1 point [-]

Isn't this the same or related to mental contrasting?

Comment author: DanielLC 12 August 2015 05:47:35PM 2 points [-]

In other words, laziness and overconfidence bias cancel each other out, and getting rid of the second without getting rid of the first will cause problems?

Comment author: cousin_it 12 August 2015 06:20:48PM *  1 point [-]

Yes, if you think Hume's problem was laziness :-)

Comment author: [deleted] 12 August 2015 03:08:27PM *  2 points [-]

Isn’t that growth mindset? (Is growth mindset not rational?)

Comment author: cousin_it 12 August 2015 03:32:35PM *  1 point [-]

Yeah, it's a bit similar to growth mindset. Is it rational to believe in growth mindset if you know that believing in it with probability 100% makes it work with probability 50%? :-) I guess it only works if you're good at compartmentalizing, which is itself an error by LW standards.

Comment author: Wei_Dai 12 August 2015 08:09:22PM *  1 point [-]

I wrote a post arguing that what is irrational overconfidence for an individual can be good for society. (In short, scientific knowledge is a public good, individual motivation to produce it is likely too low from a group perspective, and overconfidence increases individual motivation, so it's good.)

To extend this a bit, if society pays people to produce scientific knowledge (in money and/or status), then overconfident people would be willing to accept a lower "salary" and outcompete more rational individuals for the available positions, so we should expect that most science is produced by overconfident people. (This also applies to any other attribute that increases motivation to work on scientific problems, like intellectual curiosity.) As a corollary, people who produce science about rationality (e.g., decision theorists) are probably more overconfident than average, people who work at MIRI are probably more overconfident than average, etc.

Comment author: Lumifer 12 August 2015 08:22:48PM 4 points [-]

This starts to look like Lake Wobegon.

The argument that overconfident people will be willing to accept lower compensation and so outcompete "more rational individuals" seems to be applicable very generally, from running a pizza parlour to working as a freelance programmer. So, is most everyone "more overconfident than average"?

Comment author: Wei_Dai 12 August 2015 09:01:05PM *  2 points [-]

Good point. :) I guess it actually has to be something more like "comparative overconfidence", i.e., confidence in your own scientific ideas or assessment of your general ability to produce scientific output, relative to confidence in your other skills. Theoretical science (including e.g., decision theory, FAI theory) has longer and weaker feedback cycles than most business fields like running a pizza parlor, so if you start off overconfident in general, you can probably keep your overconfidence in your scientific ideas/skills longer than your business ideas/skills.

Comment author: ChristianKl 13 August 2015 06:47:38PM 1 point [-]

I think that very much depends on what you mean by rationality. The kind of rationality this community practices leads, for better or worse, to a bunch of people holding the contrarian belief that certain things are possible which general society doesn't consider possible.

In HPMOR you have "do the impossible", "heroic responsibility" and "having something to protect" all as part of the curriculum.

Comment author: [deleted] 12 August 2015 07:04:06PM 1 point [-]

As a data point to support this, overconfidence correlates positively with income.

Comment author: Lumifer 12 August 2015 03:59:53PM *  1 point [-]

Rationality and common sense might be bad for your chances of achieving something great, because you need to irrationally believe that it's possible at all.

That is true.

If you want to achieve something great, don't be a skeptic about it. Be utterly idealistic.

Well, umm... there is the slight issue of cost. If you are deliberately choosing a high-risk strategy to give yourself a chance of a huge payoff, you need to realize that the modal outcome is failure. Convincing yourself that you are destined to become a famous actress does improve your chances of getting into the movies, but most people who believe this will end up as waitresses in LA.

It's like "If you want to become a millionaire, you need to buy lottery tickets" :-/

Comment author: cousin_it 12 August 2015 04:15:18PM 1 point [-]

Yeah. I actually wrote a post about that :-)

Comment author: IlyaShpitser 13 August 2015 08:54:05PM *  0 points [-]

I don't know if I would hate on Hume this much. Hume is pretty big.


I agree with your broader point, though, I think. At the highest levels, EVERYTHING has to go right: having the hardware, having a super work ethic, and having a synergistic morale (an irrationally huge view of your own importance, etc.)

Comment author: Raiden 12 August 2015 06:54:44PM 0 points [-]

There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, some of the people with blind idealism may latch onto an idea they can actually accomplish, but most of them fail. The point of trying to be rational isn't to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.

Comment author: polymathwannabe 12 August 2015 03:11:08PM 0 points [-]

Achieving something great may require your confidence in its possibility, but the reasonableness of that confidence is only discovered in hindsight. It's not uncommon to stumble upon a true belief while having begun with wrong evidence.

Comment author: Vaniver 12 August 2015 02:34:04PM 0 points [-]

Well, or find a way to bottle mania.

Comment author: NancyLebovitz 13 August 2015 01:15:33AM 0 points [-]

Hypomania would probably be better.

I would give a pretty penny for an SF novel about a society where hypomania/depression cycles were considered normal and accommodated.