Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: rysade 10 March 2011 10:17:54PM *  0 points [-]

I hold the opinion that one should be curious about everything, but about some things only superficially. If you dig deep into something, it changes the experience of it. There's something to be said for being intimately familiar with a subject.

Forcing yourself to be curious about every single thing that crosses your path is a good way to make yourself uncomfortable. I consider discomfort of that kind to be good practice when it comes to confronting the possibility I may be wrong about something.

I guess I have trouble living up to that ideal, but at the same time I have learned to be uncomfortable with being too comfortable. I worry that too much anti-curiosity would lead to too much comfort.

Comment author: Hook 12 March 2011 06:54:27PM 0 points [-]

One should also know everything, but clearly that's impossible.

There are some areas of knowledge that are so unlikely to yield anything useful that it's not worth spending any time being curious about them. For humanity in general, psi phenomena now fall into this category. There was a time when they didn't, but it's safe to say that time is over. For me as an individual, string theory falls into that category. I'm glad there are some people investigating it, but the effort required for me to have anything but a superficial understanding of the topic is extremely unlikely to help me achieve anything.

In response to Positive Thinking
Comment author: JGWeissman 07 March 2011 10:24:03PM 17 points [-]

but it seems that according to lesswrong doctrine, they are above the sanity waterline while my first friend group is below.

No. Having religious beliefs places an upper bound on how rational a person can be: past a certain level of rationality, a person will necessarily discard religion. But this does not mean that any particular atheist became an atheist by achieving that level of rationality. Most have not.

The article Raising the Sanity Waterline proposes not arguing directly against religion, but instead teaching the skills that would enable people to level up to the point where they systematically reject religion on their own, in part because just getting someone to reject religion does not actually make them more rational.

Comment author: Hook 08 March 2011 02:13:19AM 0 points [-]

I think you are approximately right here, but it's important to think about just how high that upper bound is, and what activities can only be accomplished by people above that bound. It might help to think in more concrete terms about what someone who believes in religion cannot achieve, that a non-believer can.

With sufficient compartmentalization of religious beliefs, I would venture to say the answer is a pretty small subset of activities. They may be important activities on a global scale, but mostly unimportant in people's day-to-day functioning.

It's very easy to imagine, or better yet, meet, theists who are far more rational in achieving their goals than even many of the people on this board.

Comment author: CronoDAS 13 February 2011 07:44:11AM 6 points [-]

For the record, I heard that, outside of chess, Bobby Fischer was a grade-A nutcase.

Comment author: Hook 20 February 2011 07:36:56PM 0 points [-]

Bobby Fischer, and a chess-playing computer, highlight the difference between rationality and talent. Talent is simply the ability to do a particular task well. I tend to think of rationality as the ability to successfully apply one's talents to achieving one's reasonably complex goals. ("Reasonably complex" so the computer doesn't score very high on rationality for achieving its one goal of winning chess games.)

Someone with limited talent could still be rational if he was making the best use of what strengths he did have. In a very real sense, we are all in that situation. It's easy to imagine possessing particular talents that would make achieving our goals much more likely.

That said, certain talents will be correlated with rationality, and it's an interesting question to what extent chess is one of those talents.

Comment author: Jonathan_Graehl 09 February 2011 02:43:04AM 2 points [-]

My brother has used Dvorak for the past 10 years.

It's easy to learn. You can still retain qwerty proficiency. It does feel nicer for typing English. It doesn't help programming. It's annoying to use multiple/public computers.

There are quite a few layouts that may be better than Dvorak. But probably not by enough to justify the extra effort of choosing one.

Comment author: Hook 10 February 2011 01:17:19AM 2 points [-]

I first learned how to touch type on Dvorak, but switched to qwerty when I went to college so I wouldn't have issues using other computers. I found that I could not maintain proficiency with both layouts. One skill just clobbered the other.

Comment author: timtyler 15 July 2010 08:03:21PM *  0 points [-]

Not without some changes; yes - and: not part of the human economy.

Various machines certainly behave in goal-directed ways - and so have what can usefully be described as "vested interests" - along the lines described here:


Can you say what you mean by "interests"? Probably any difference of opinion here is a matter of differing definitions - and so is not terribly interesting.

Re: "The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values."

That wasn't what I meant - what I meant is that they don't completely share human values - not that they don't fulfill them.

Comment author: Hook 16 July 2010 12:33:36PM *  0 points [-]

By interests, I mean concerns related to fulfilling values. For the time being, I consider human minds to be the only entities complex enough to have values. For example, it is very useful to model a cancer cell as having the goal of replicating, but I don't consider it to have replicating as a value.

The cancer example also shows that our own cells don't fulfill or share our values, and yet we still model the consumption of cancer cells as part of the human being's consumption.

If you really want to ignore direct consumption by machines - and pretend that the machines are all working exclusively for humans, doing our bidding precisely - then you have GOT to account for people and companies buying things for the machines that they manage - or your model badly loses touch with reality.

I think I might have the biggest issue with this line. Nobody is pretending that machines are all working exclusively for humans, any more than we pretend our cells are working exclusively for us. The idea is that we account for machine consumption the same way we account for the consumption of our own cells: by attributing it to the human consumers.

Comment author: Roko 13 July 2010 05:57:18PM *  1 point [-]

Retroviral genetic engineering once we know what genes control rationality.

It has the advantage that it would be in people's self-interest to do this. I suspect that some kind of individually beneficial modification is the solution.

Comment author: Hook 15 July 2010 01:18:02PM 2 points [-]

Psychosurgery or pharmaceutical intervention to encourage some of the more positive autistic spectrum cognitive traits seems more likely to work than this. We are far from identifying the genetic basis of intelligence or exceptional intelligence, never mind an aspect as specific as rationality.

It's also not clear that it is in someone's self-interest to do this. I know you said retroviral genetic engineering, but for now I'll assume that it would only be possible on embryos. In that case, if someone really wanted grandchildren, it is not clear that making these alterations in her children would be the best way to achieve that goal.

Comment author: timtyler 15 July 2010 07:22:09AM *  0 points [-]

Re: "before we were assuming that the Robots (ems) were consumers. Here we're assuming the opposite, that humans and only humans consume."

More accurately, Martin Ford was assuming that - and I was pointing out that trucks, fridges, washing machines, etc. are best modelled as consumers too - since they consume valuable low-entropy resources - and spit out useless waste products.

The idea that machines don't participate in the economy as consumers is not a particularly useful one. Machines - and companies - buy things, sell things, consume things - and generally do participate. Those machines that don't buy things have things bought for them on their behalf (by companies or humans) - and the overall effect on economic throughput is much the same as if the machines were buying things themselves.

If you really want to ignore direct consumption by machines - and pretend that the machines are all working exclusively for humans, doing our bidding precisely - then you have GOT to account for people and companies buying things for the machines that they manage - or your model badly loses touch with reality.

In practice, it is best to just drop the assumption. Computer viruses / chain letters are probably the most obvious illustration of the problem with the idea that machines are exclusively "on our side", labour on our behalf, and have no interests of their own.

The mis-handling of this whole issue is one of the problems with "The Lights in the Tunnel".

Comment author: Hook 15 July 2010 12:48:24PM 0 points [-]

Would this analysis apply to the ecosystem as a whole? Should we think of fungus as consuming low entropy plant waste and spitting out higher entropy waste products? Is a squirrel eating an acorn part of the economy?

Machines, as they currently exist, have no interests of their own. Any "interests" they may appear to have are as real as the "interest" gas molecules have in occupying a larger volume when the temperature increases. Computer viruses are simply a way that machines malfunction. The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values. Nothing does.

Comment author: timtyler 14 July 2010 04:54:10PM *  2 points [-]

Re: "If the average consumer is unemployed and has no income, he is obviously not going to be purchasing stuff for his machines. In fact, ownership of the machines will concentrate into a shrinking elite as machines take the jobs of average people."

Sure - but that was not the point of yours that I was criticising:

You argued that unemployment would mean that spending would decline - and the economy would plummet or crash.

Whereas a more accurate analysis suggests that those in charge of the machines will ramp up their spending to feed the demands of their machines - thereby contributing to expenditure in the economy. In other words, the machines will buy and sell things - if not directly, then via companies or humans. This is not really a "jump into the stuff of science fiction" - people regularly buy things to "feed" their machines today.

The machines act as consumers - in that they consume things - even if there is a human or corporation who is buying the things involved somewhere. So: the whole idea that the economy will collapse because nobody is earning money and buying things falls down. Machines will still be earning money and buying things - even if they will be doing it by proxy agents in the form of corporate or human masters.

This idea is a fairly central one in "The Lights in the Tunnel" - and it is based on unsound thinking and bad economics :-(

Re: "Everything produced by the human economy is ultimately consumed by individual human beings."

That seems like a rather muddled way of looking at the situation. Machines have needs too. They slurp up gas, oil, electricity, raw materials. They consume - and excrete - along with all other agents in the biosphere. Companies act as non-human consumers too. It could be argued that machines and companies are slaves to humanity (to some extent - though the inverse perspective - that they are using us to manipulate the machine world into existence - also has considerable validity) - but that doesn't mean that we consume their waste products.

Re: "If too few people have the ability to purchase END products, the mass market economy will collapse."

No: the issue is not the number of human consumers, but their total spending power. Rich minorities commanding huge squads of machine minions could generate a large demand for resources - and also the ability to produce those resources - thus lubricating the economy very effectively.

Of course in a human democracy, voters would try hard to ramp up corporation taxes - in order to resist such huge inequality - but that is another issue entirely.

Comment author: Hook 14 July 2010 07:21:50PM 0 points [-]

So if, say, a million people owned all of the machines in the world, and they had no use for the human labor of the other billions of people in the world, you would still classify the economy as very effective?

I guess the question is what counts as an economic crash? A million extremely well off people with machines to tend to their every need and billions with no useful skills to acquire capital seems like a crash to most of the people involved.

Comment author: Roko 14 July 2010 02:09:16PM 0 points [-]

To test this, you'd need to somehow identify a group of patients that were going to receive some kind of very specific brain surgery, and give them a pre- and post- rationality test.

Comment author: Hook 14 July 2010 02:57:12PM 1 point [-]

At this point I was mostly wondering if there were any motivating anecdotes such as Phineas Gage or gourmand syndrome, except with a noticeable personality change towards rationality. Someone changing his political orientation, becoming less superstitious, or gambling less as a result of an injury could be useful (and, as a caveat, all could be caused by damage that has nothing to do with rationality).

Comment author: Roko 13 July 2010 11:09:55PM 2 points [-]

Rationality seems more like a way of reasoning and a higher level trait than these 'specialized' forms of intelligence however.

Maybe. Actually, I think that the dominant theory around here is that rationality is actually the result of an atrophied motivated-cognition module, so perfect rationality is not a question of creating a new brain module, but subtracting off the distorting mechanisms that we are blighted with.

Comment author: Hook 14 July 2010 02:00:30PM 0 points [-]

I realize that "brain module" != "distinct patch of cortex real estate", but have there been any cases of brain damage that have increased a person's rationality in some areas?
I am aware that depression and certain autism spectrum traits have this property, but I'm curious if physical trauma has done anything similar.
