Comment author: Douglas_Knight 19 September 2016 06:34:47PM *  6 points [-]

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise at the high end, and this would be mainly noise even if he reported the results correctly, which he doesn't.

Second, there are some careful studies of high IQ (SMPY etc.) that take the well-designed SAT, which doesn't have a very high ceiling for adults, and give it to children below the age of 13. By giving the test to representative samples, they can characterize the threshold for the top 3% well. Using self-selected samples, they think they can characterize up to 1 in 10,000. In any event, within that top 3% they find that increasing SAT score predicts increasing probability of accomplishments of all kinds, in direct contradiction of these claims.

Comment author: Val 19 September 2016 11:05:57PM 1 point [-]

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this is probably mainly noise.

Indeed. If an IQ test claims to provide accurate scores outside of the 70 to 130 range, you should be suspicious.

There are so many misunderstandings about IQ in the general population, ranging from claims like "the average IQ is now x" (where x is different from 100), to claims that a famous scientist had an IQ score over 200, to claims of "some scientists estimating" the IQ of a computer, an animal, or a fictional alien species. Or things as simple as claiming to calculate an IQ score from a handful (usually fewer than 10) of trivia questions about basic geography and celebrity names.

Comment author: buybuydandavis 12 September 2016 02:55:40AM 1 point [-]

I was surprised at how popular Basic Income was in the recent survey.

I suspect one reason for that is that some see it as an alternative to current programs while others see it as an additional program, and I don't believe the question specified which.

Comment author: Val 13 September 2016 03:23:03PM 0 points [-]

Also, many people on this site seem to come from a liberal / libertarian background, where belief in it is a popular trend. The survey supports this by breaking down support for BI by political group.

Comment author: Val 09 September 2016 09:23:13PM 0 points [-]

Isn't the "Do I live in a simulation?" question practically indistinguishable from the question "does God exist?", for a sufficiently flexible definition of "God"?

For the latter, there are plenty of ethical frameworks, as well as incentives for altruism, developed during the history of mankind.

In response to Inefficient Games
Comment author: Gram_Stone 23 August 2016 07:15:56PM *  13 points [-]

It's nice to see that someone else has thought about this.

It's a popular rationalist pastime to try coming up with munchkin solutions to social dilemmas. A friend posed one such munchkin solution to me, and I thought he had an unrealistic idea of why regulations work, so I said to him:

Even though it's what you really want, I don't think the fact that you know everyone else will cooperate is the interesting thing per se about regulations, but that this is a consequence of the fact that you have decreased what was once the temptation payoff and thus constructed a different game. You have functionally reduced the expected payoff of the option "Don't pay taxes," by law. If you don't pay taxes, then you get fined or jailed. Now all players are playing a game where the Nash equilibrium is also Pareto optimal: Pay taxes or be fined or jailed. Clearly, one should pay taxes.
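The payoff-shifting point above can be sketched with a toy 2x2 game; the specific payoff numbers and the fine of 6 are illustrative assumptions, not anything from the comment:

```python
from itertools import product

def nash_equilibria(payoffs):
    """Return pure-strategy Nash equilibria of a two-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    actions = sorted({a for a, _ in payoffs}), sorted({b for _, b in payoffs})
    equilibria = []
    for r, c in product(*actions):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions[0])
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions[1])
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# A stylized tax game: "pay" vs "evade". Without enforcement,
# evading dominates paying -- a Prisoner's Dilemma shape.
base = {
    ("pay", "pay"): (3, 3),
    ("pay", "evade"): (0, 5),
    ("evade", "pay"): (5, 0),
    ("evade", "evade"): (1, 1),
}
print(nash_equilibria(base))  # [('evade', 'evade')]

# Subtract a fine of 6 from any evader's payoff: the temptation payoff
# drops, and the (Pareto-optimal) equilibrium becomes mutual paying.
fine = 6
fined = {k: (u - (fine if k[0] == "evade" else 0),
             v - (fine if k[1] == "evade" else 0))
         for k, (u, v) in base.items()}
print(nash_equilibria(fined))  # [('pay', 'pay')]
```

Nothing about the law's "coerciveness" appears anywhere in the code; the only thing that changed between the two games is the payoff table.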

Now, ironically, this is good news if we want to cause better outcomes with less or no coercion, because it suggests that it is not coercion in itself that does the good work, but the fact that we have changed the payoffs to construct a different game; we can interpret coercion as just one instantiation of the general process by which 'inefficient games' become 'efficient games'. Coercion is perhaps a simple way to do the thing that all possible solutions to this problem seem to have in common, but there may be others that we can assume to syntactically change the payoffs in the way that coercion does, but which we may semantically interpret as something other than coercion.

A different time, a friend noticed that people building up trust seemed qualitatively similar to a Prisoner's Dilemma but couldn't see exactly how. I was like, "Have you heard of the Stag Hunt? That's the whole reason Rousseau came up with it!" The PD is just one member of a larger family of 2x2 games.

More generally, isn't it weird that the central objects of study in game theory, despite all of the formalization that has taken place since the beginning of the field, are remembered in the form of anecdotes?! You learn about the Stag Hunt and the Prisoner's Dilemma and Chicken and all other sorts of game, but there doesn't really seem to be any systematic notion of how different games are connected, or if any games are 'closer' to others in some sense (as our intuitions might suggest).

Meditations on Moloch was pretty, but from the audience I coughed the words 'mechanism design'. Pointing out the mainstream academic work just makes you seem boring when you're commenting on something poetic. You also might like Robinson and Goforth's Topology of the 2x2 Games. The math isn't that complex, and it provides more insight than a barrage of anecdotes. To my knowledge this is not taught in traditional game theory courses, but it probably should be one day. They refer to this general class of games as the 'social dilemmas', if I recall correctly.
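One concrete example of games being "close" in Robinson and Goforth's sense: swapping just two ordinal payoffs turns a Prisoner's Dilemma into a Stag Hunt. A minimal sketch, where the ordinal values follow the usual textbook T/R/P/S convention rather than anything from the comment:

```python
# Ordinal payoffs for the row player (4 = best, 1 = worst):
# T = defect against a cooperator, R = mutual cooperation,
# P = mutual defection, S = cooperate against a defector.
pd        = {"T": 4, "R": 3, "P": 2, "S": 1}  # PD:        T > R > P > S
stag_hunt = {"T": 3, "R": 4, "P": 2, "S": 1}  # Stag Hunt: R > T > P > S

def swap(game, a, b):
    """Swap the ordinal ranks of two outcomes -- one 'step' between games."""
    g = dict(game)
    g[a], g[b] = g[b], g[a]
    return g

# A single swap of two adjacent payoffs connects the two games:
print(swap(pd, "T", "R") == stag_hunt)  # True
```

This is the kind of systematic adjacency notion the comment asks for: in the Robinson-Goforth topology, games that differ by one such swap are neighbors.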

Comment author: Val 24 August 2016 02:39:16PM 1 point [-]

In this case, we should really define "coercion". Could you please elaborate on what you meant by that word?

One could argue that if someone holds a gun to your head and demands your money, it's not coercion, just a game where the expected payoff of not giving the money is smaller than the expected payoff of handing it over.

(Of course, I completely agree with your explanation about taxes. It's just the usage of "coercion" in the rest of your comment that seems a little odd.)

Comment author: NancyLebovitz 12 August 2016 04:50:13PM 2 points [-]

Now that I'm thinking about it, psychological papers probably have more effect in the LW-sphere than in the world generally. Are you counting nutrition as part of medicine?

Comment author: Val 12 August 2016 10:32:09PM 2 points [-]

Parenting might be even worse, with plenty of contradictions between self-proclaimed experts: one claims something is very important to do, another that you must never do it under any circumstances.

Comment author: Val 12 August 2016 09:00:54PM 1 point [-]

Has anyone heard of the book "The Egg-Laying Dog" by Beck-Bornholdt? I don't know of an English translation; I freely translated the title from German. It is a book about fallacies in statistics and research, especially in medicine, written in a style comprehensible to the layman.

It discusses at great length the problems plaguing modern research (well, the research of the 1990s, when the book was written, but I doubt that very much has changed). For example, the statistical significance required for publication is much more relaxed than it was long ago. Often a p-value of 5% is enough, so even with perfectly unbiased researchers, without p-hacking or other unethical tricks, there is a huge number of accepted publications that are utter rubbish. This is all made much worse by the fact that everyone wants new results: few researchers can get funding to repeat and verify already published results (unless the publication in question is in every headline), and few are inclined (or supported by the system) to publish negative results.
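The "unbiased researchers still publish rubbish" point is easy to simulate: if every tested hypothesis is in fact null, a p < 0.05 threshold still lets through about one study in twenty. A rough sketch, where the sample size and the choice of a simple z-test are arbitrary assumptions:

```python
import random

random.seed(0)

def significant_at_05(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test of mean == mu0 with known sigma.

    Returns True when p < 0.05, i.e. |z| > 1.96.
    """
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.96

# 10,000 "studies" of a treatment with NO real effect at all.
trials = 10_000
false_positives = sum(
    significant_at_05([random.gauss(0, 1) for _ in range(30)])
    for _ in range(trials)
)
print(false_positives / trials)  # close to 0.05: ~1 in 20 null studies "succeeds"
```

No bias or p-hacking is modeled here; the false positives come purely from the 5% threshold itself.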

Comment author: Val 05 August 2016 09:56:06PM 0 points [-]

Let's be conservative and say the ratio is 1 in a billion.

Why?

Why not 1 in 10? Or 1 in 3^^^^^^^^3?

Choosing an arbitrary probability can easily lead us, unknowingly, into circular reasoning. I've seen too many cases of, for example, Bayesian reasoning applied to something we have no information about, which went like "assuming the initial probability was x", produced some result after a lot of calculation, and then defended the result as accurate because Bayes' rule was applied, so it must be infallible.
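The sensitivity to that "assuming the initial probability was x" step is easy to demonstrate: the same evidence pushed through Bayes' rule yields wildly different posteriors depending on the assumed prior. A sketch with made-up likelihoods:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) from the prior P(H) and the likelihoods
    P(E | H) and P(E | not-H)."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Identical evidence (a 10:1 likelihood ratio in favor of H),
# three different assumed priors, three very different conclusions:
for prior in (0.5, 1e-3, 1e-9):
    print(prior, "->", posterior(prior, 0.10, 0.01))
# 0.5  -> ~0.909
# 1e-3 -> ~0.0099
# 1e-9 -> ~1e-8
```

The machinery is correct in every case; the conclusion is only as good as the prior fed into it, which is the circularity being complained about.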

Comment author: Liron 22 June 2016 12:21:02PM 1 point [-]

You're right that "those values are irrational" is a category mistake, if we're being precise. But Houshalter has an important point...

Any time you violate the axioms of a coherent utility-maximization agent, e.g. falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.

Like, "Yes the money pump just took some of my money, but you haven't considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me."

While that may be a coherent response, we know that humans start out some distance from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by getting them somewhat closer to the ideal than where they started.

A "rationality test" is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.

Having so grounded all the terms, I mostly agree with pwno and Houshalter.

Comment author: Val 23 June 2016 07:07:19PM 0 points [-]

And why should we be utility-maximizing agents?

Assume the following situation: you are very rich. You meet a poor old lady in a dark alley carrying a purse with some money in it, which is a lot from her perspective. Maybe it's all her savings; maybe she got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out, and you get to keep that money. Would you do it? As a utility-maximizing agent, based on what you just wrote, you should.

Would you?

Comment author: ChristianKl 17 June 2016 01:19:32PM 1 point [-]

What kind of hourly wage do you have that you think you should vote for 10 cents?

Comment author: Val 23 June 2016 06:59:44PM 0 points [-]

There are some people who think punishment and reward work linearly.

If I remember correctly (please correct me if I'm wrong), even Eliezer himself believes that if we assign a pain value in the single digits to very slightly pinching someone so they barely feel anything, and a pain value in the millions to torturing someone with the worst possible torture, then you should choose torturing a thousand people over slightly pinching half of the planet's inhabitants, if your goal is to minimize suffering. With such logic, you could assign rewards and punishments to anything and calculate pretty strange things from that.
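The arithmetic behind that claim, using the comment's own rough magnitudes (the exact figures are illustrative assumptions):

```python
pinch_pain   = 5             # "single digits" per pinch (assumed)
torture_pain = 5_000_000     # "in the millions" per person tortured (assumed)

pinched  = 3_500_000_000     # half the planet's inhabitants
tortured = 1_000

total_pinch   = pinch_pain * pinched      # 17.5 billion pain units
total_torture = torture_pain * tortured   # 5 billion pain units

# Under purely linear aggregation, torturing a thousand people
# produces less total suffering than pinching half the planet:
print(total_pinch > total_torture)  # True
```

The conclusion is entirely driven by the linearity assumption: multiply tiny harms by a large enough population and they outweigh any bounded number of severe harms.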

Comment author: pianoforte611 18 June 2016 05:34:52PM 2 points [-]

I don't understand why you would want this. It doesn't take exactly X times as much effort to provide X times as much productivity, but it's a far better approximation than a log scale. Is the goal to discourage commerce and promote self-sufficiency?

Comment author: Val 19 June 2016 12:59:01AM 0 points [-]

Another problem would be that unless this system suddenly and magically got applied to the whole world, it would not be competitive. It can't grow from a small set of members, because the limits it imposes would hinder those who would have contributed the most to the size and power of the economy. By shrinking your economy, you become less competitive against those who don't adopt the new system.
