Comment author: RomeoStevens 27 February 2014 11:00:38PM 3 points [-]

A mean decrease in systolic pressure of 16 mm Hg across 8 studies kicks the crap out of a lot of interventions, including some prescription ones.

Comment author: PeterDonis 01 March 2014 04:09:55AM 0 points [-]

Are there any theories about the mechanism involved here? I've done a fair bit of Googling about this but haven't found any discussion of underlying mechanisms, only the statistics. I know that CoQ10 is critical in the metabolic cycle that produces ATP, and therefore is involved in energy production everywhere in the body; but I'm not sure how to get from that to the specific result of lowering blood pressure (rather than something more general like "feel more energetic").

Comment author: Eliezer_Yudkowsky 05 June 2013 10:03:12PM 1 point [-]

Monopolies which efficiently reinvest their producer surplus in improving the product tend to be monopolies to which I object very little. E.g. Google.

Comment author: PeterDonis 27 February 2014 10:41:06PM 0 points [-]

Sorry for the late comment, but I'm just running across this thread.

The question is not whether Google reinvests their producer surplus better than other monopolies. The question is whether Google reinvests their producer surplus more efficiently, i.e., for greater total benefit to society as a whole, than would all the consumers who would otherwise get that surplus as consumer surplus. That seems highly unlikely since the options for reinvestment open to even a large company like Google will cover a much smaller range of possibilities than the options open to the entire set of consumers who would otherwise receive the surplus.

(Admittedly, there is an effect here in the other direction: Google has much more leverage than the average consumer. But I don't think that outweighs the effect I referred to above, because Google is not being compared to the average consumer; they are being compared to the sum total of activities of all consumers--more precisely, all consumers who would otherwise receive the surplus Google is getting.)

Comment author: PeterDonis 26 February 2014 09:18:30PM 2 points [-]

Even if we had teleporters, would future Tyler Cowens be writing that they're not as innovative as the car - and would they be correct, in that a teleporter is just a more efficient way of solving a problem that cars and airplanes had already partially solved?

I don't think so, because there are threshold effects. For example, consider the airplane vs. the car: having airplane travel available doesn't just mean your trips are shorter; it enables many trips that otherwise would not even be considered, and therefore enables many kinds of activities that otherwise would not be considered. If I can fly to a distant city in a few hours, that enables me to have relationships, both business and personal, with people in that city that I couldn't have if I had to take days to drive there. If things can be shipped across country overnight on an airplane, many more economic activities requiring "just in time" delivery become possible. And so on.

Comment author: thomblake 27 July 2012 06:43:22PM 1 point [-]

Hence, the intelligent person would be at least as good as you.

Yes, this reminds me of someone I talked to some years back, who insisted that she trusted people's intuitions about weather more than the forecasts of the weatherman.

It was unhelpful to point out that the weatherman also has intuitions, and would report using those if they really had better results.

Comment author: PeterDonis 27 July 2012 07:30:14PM 0 points [-]

In this particular case, I agree with you that the weatherman is far more likely to be right than the person's intuitions.

However, suppose the weatherman had said that since it's going to be sunny tomorrow, it would be a good day to go out and murder people, and gives a logical argument to support that position? Should the woman still go with what the weatherman says, if she can't find a flaw in his argument?

Comment author: [deleted] 27 July 2012 06:38:44PM *  2 points [-]

Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer.

This doesn't make sense to me. The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate. And, by virtue of being more intelligent, they presumably have better/faster System 2 (deliberate) thinking, so if the particular problem being worked on does end up favoring careful thinking, they would be more accurate. Hence, the intelligent person would be at least as good as you.

Moreover, if the claim "the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions" actually implied that intuitions were orders of magnitude better, people would never use anything but their intuitions, because their intuitions would always be more accurate. This obviously is not how things work in practice.

I am reminded of a saying in programming (often attributed to Brian Kernighan) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.

Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren't able to debug yours. It doesn't matter if the intelligent person can't debug your program if they can write a buggy program that works better than your buggy program.

Comment author: PeterDonis 27 July 2012 07:15:19PM *  1 point [-]

The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate.

But by hypothesis, we are talking about a scenario where the intelligent person is proposing something that violently clashes with an intuition that is supposed to be common to everyone. So we're not talking about whether the intelligent person has an advantage in all situations, on average; we're talking about whether the intelligent person has an advantage, on average, in that particular class of situations.

In other words, we're talking about a situation where something has obviously gone wrong; the question is which is more likely to have gone wrong, the intuitions or the intelligent person. It doesn't seem to me that your argument addresses that question.

if the claim "the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions" actually implied that intuitions were orders of magnitude better

That's not what it implies; or at least, that's not what I'm arguing it implies. I'm only arguing that, if we already know something has gone wrong--if we have an obvious conflict between the intelligent person's arguments and the intuitions built up over the course of human evolution--it's more likely that the intelligent person's arguments contain some mistake.
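(To make the likelihood point concrete, here's a toy Bayes-style sketch. All the probabilities are made up purely for illustration; the only point is the shape of the calculation: if the intuition's error rate is much lower than the argument's, a conflict between them should usually be blamed on the argument.)

```python
def posterior_argument_wrong(p_intuition_wrong, p_argument_wrong):
    """P(the argument is wrong | the two conflict).

    Toy model: assume a conflict tells us only that at least one of
    the two is wrong, and that the two err independently.
    """
    p_at_least_one_wrong = (p_intuition_wrong + p_argument_wrong
                            - p_intuition_wrong * p_argument_wrong)
    return p_argument_wrong / p_at_least_one_wrong

# Hypothetical numbers: an intuition vetted by long evolutionary and
# cultural selection errs rarely; a clever novel argument errs far
# more often.
print(posterior_argument_wrong(0.01, 0.2))  # ~0.96: blame the argument
```

With symmetric error rates the calculation gives no such asymmetry, which is the sense in which the claim depends on the intuition actually having survived far more testing than the argument.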

Also, there seems to be a bit of confusion about how the word "intuition" is being used. I'm not using it, and I don't think the OP was using it, just to refer to "unexamined beliefs" or something like that. I'm using it to refer specifically to beliefs like "mass murder is wrong", which have obvious reasonable grounds.

Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren't able to debug yours. It doesn't matter if the intelligent person can't debug your program if they can write a buggy program that works better than your buggy program.

We're not talking about the intelligent person being able to debug "your" program; we're talking about the intelligent person not being able to debug his own program. And if he's smarter than you, then obviously you can't either. Also, we're talking about a case where there is good reason to doubt whether the intelligent person's program "works better"--it is in conflict with some obvious intuitive principle like "mass murder is wrong".

Comment author: [deleted] 27 July 2012 04:25:10AM *  4 points [-]

we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions and secular rules, that are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear testable understanding of why they were valid where they are known to have worked, and why they would cease to be in another context.

This seems to be the fulcrum point of your essay, the central argument that your anecdote builds up to and that all of your conclusions depend on. But it is lacking in support--why should we stand by our intuitions and disregard the opinions of more intelligent people? Can you explain why this is true? Or at the very least, link to Hayek explaining it? Sure, there are obvious cases where one's intuition can win over a more intelligent person's arguments, such as when your intuition has been trained by years of domain-specific experience and the more intelligent person's intuition has not, or if the intelligent person exhibits some obvious bias. But ceteris paribus, when thinking about a topic for the first time, I'd expect the more intelligent person to be at least as accurate as I am.

Comment author: PeterDonis 27 July 2012 06:09:45PM 2 points [-]

why should we stand by our intuitions and disregard the opinions of more intelligent people?

Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer. So if there is a conflict, it's far more probable that the intelligent people have made some mistake that we haven't yet spotted.

I am reminded of a saying in programming (often attributed to Brian Kernighan) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.
