Related to: Practical Rationality Questionnaire

For a community of prior-using, Aumann-believing rationalists, it is a bit strange that we have no good measure of what the community actually thinks about certain things.

I no longer place much credence in raw majoritarianism: the majority is too uneducated, too susceptible to the Dark Arts, and too vulnerable to cognitive biases. If I had to choose the people whose mean opinion I trusted most, it would be - all of you.

So, at the risk of people getting surveyed-out, I'd like to run a survey on the stuff Anna Salamon didn't. Part on demographics, part on opinions, and part on the interactions between the two.

I've already put up an incomplete rough draft of the survey I'd like to use, but I'll post it here again. Remember, this is an incomplete rough draft survey. DO NOT FILL IT OUT YET. YOUR SURVEY WILL NOT BE COUNTED.

Incomplete rough draft of survey

Right now what I want from people is more interesting questions that you want asked. Any question that you want to know the Less Wrong consensus on. Please post each question as a separate comment, and upvote any question that you're also interested in. I'll include as many of the top-scoring questions as I think people can be bothered to answer.

No need to include questions already on the survey, although if you really hate them you can suggest their un-inclusion or re-phrasing.

Also important: how concerned are you about privacy? I was thinking about releasing the raw data later in case other people wanted to perform their own analyses, but it might be possible to identify specific people if you knew enough about them. Is there anyone who would be comfortable giving such data if only one person were to see it, but uncomfortable with it being publicly accessible?

What is your opinion on sentience? What features must a given computational process have to be considered morally significant?

What is the moral significance of:

  • Healthy adult humans
  • Infants
  • Fetuses
  • Higher primates
  • Mammals
  • Humans declared brain-dead

etc?

Some little things:

  • "Professional field" should be multiple-choice.
  • What do you mean by "spiritual" under "religious views" – believe in the supernatural? take mysticism seriously in a way compatible with naturalism?
  • On p(Aliens), does "the Universe" mean past light cone, present surface of observable universe, or entire (potentially infinite) continuum? How about other Everett branches?
  • A definition of "supernatural" before the p(God) question would be nice.
  • "Three Worlds Ending" might benefit f
... (read more)
1robzahra
Agreed with Tarleton, the prisoner's dilemma questions do look under-specified... e.g., Eliezer has said something like "cooperate if he thinks his opponent one-boxes on Newcomb-like problems". Maybe you could have some write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.
-1cousin_it
Going slightly offtopic: Eliezer's answer has irked me for a long time, and only now have I gotten a handle on why. To reliably win by determining whether the opponent one-boxes, we need to be Omega-superior relative to them, almost by the definition of Newcomb's problem. But such powers would allow us to just use the trivial solution: "cooperate if I think my opponent will cooperate".
0Vladimir_Nesov
If you know that no matter what you do, the other one will cooperate, then you should defect.
0cousin_it
Eliezer wants to cooperate against a cooperating opponent, as depicted in the beginning of "Three Worlds Collide". What I "should" do is quite another matter.
1Vladimir_Nesov
You don't cooperate against a paperclip maximizer if you know it'll cooperate even if you defect. If you cooperate in this situation, it's the murder of a billion people. I'm quite confident that if you disagree with this statement, you misunderstand the problem.
1cousin_it
Oh. Time to change my opinion - now I finally see what you and Eliezer mean by "general theory". It reduces to something like this: my source code contains a rule M that overrides everything else and is detectable by other agents. It says: I will precommit to cooperating (playing the Pareto-optimal outcome) if I can verify that the opponent's source code contains M. Like a self-printing program (quine), no infinite recursion in sight. And, funnily enough, this statement can persuade other agents to modify their source code to include M - there's no downside. Funky! But I still have no idea what Newcomb's problem has to do with that. Maybe I should give myself time to think some more...
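For concreteness, here's a toy sketch of that quine-like rule in Python - my illustration only, not Eliezer's or Nesov's actual formalism, and the marker name and agents are invented. An M-agent cooperates exactly when it can verify the marker in the opponent's source, so there is no simulation of the opponent and no recursion:

```python
import inspect

M_MARKER = "RULE_M"  # hypothetical marker; any token both agents agree to scan for

def m_agent(opponent_source: str) -> str:
    """RULE_M: cooperate iff the opponent's source visibly contains the marker."""
    return "C" if M_MARKER in opponent_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Ignores the opponent entirely and always defects."""
    return "D"

m_src = inspect.getsource(m_agent)
d_src = inspect.getsource(defect_bot)

print(m_agent(m_src), m_agent(m_src))     # C C -- two M-agents cooperate
print(m_agent(d_src), defect_bot(m_src))  # D D -- an M-agent defects against DefectBot
```

The obvious weakness, as the replies note, is verification: this sketch trusts a string match, while real agents would have to verify what the opponent's code actually does.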
1Psy-Kosh
Or, more generally: "If, for whatever reason, there's sufficiently strong correlation between my cooperation and my opponent's cooperation, then cooperation is the correct answer"
1Vladimir_Nesov
You need causation, not correlation. Correlation considers the whole state space, whereas you need to look at correlation within each conditional area of state space, given one action (your cooperation), or another (your defection), which in this case corresponds to causation. If you only look for unconditional correlation, you are inadvertently asking the same circular question: "what will I do?". When you act, you determine which parts of the state space are to be annihilated, become not just counterfactual, but impossible, and this is what you can (ever) do. Correlation depends on that, since it's computed over what remains. So you can't search for that information, and use it as a basis for your decisions.
1Psy-Kosh
If you know the following fact: "the other guy will cooperate iff I cooperate", even if you know nothing about the nature of the cause of the correlation, that's still a good enough reason to cooperate. You ask yourself "If I defect, what will the outcome be? If I cooperate, what will the outcome be?" Taking into account the correlation, one then determines which one prefers. And there you go. For example, imagine that two AIs created with the same underlying architecture (though possibly with different preferences) meet up. They also know this fact about their similarity. Then they may reason something like: "Hrmm... The same underlying algorithms running in me are running in my opponent. So presumably they are reasoning the exact same way as I am, even at this moment. So whichever way I happen to decide, cooperate or defect, they'll probably decide the same way. So the only reasonably possible outcomes would seem to be 'both of us cooperate' or 'both of us defect', therefore I choose the former, since it has a better outcome for me. Therefore I cooperate." In other words, what I choose is also lawful. That is, physics underlies my brain. My decision is not just a thing that causes future things, but a thing that was caused by past things. If I know that the same past things influenced my opponent's decision in the same way, then I may be able to infer "whatever sort of reasoning I'm doing, they're also doing, so..." Or did I completely fail to understand your objection?
1Vladimir_Nesov
Something like this. Referring to an earlier discussion, "Cooperator" is an agent that implements M. Practical difficulties are all in signaling that you implement M, while actually implementing it may be easy (but pointless if you can't signal it and can't detect M in other agents). The relation to Newcomb's problem is that there is no need to implant a special-purpose algorithm like M you described above, you can guide all of your actions by a single decision theory that implements M as a special case (generalizes M if you like), and also solves Newcomb's problem. One inaccuracy here is that there are many Pareto optimal global strategies (in PD there are many if you allow mixed strategies), with different payoffs to different agents, and so they must first agree on which they'll jointly implement. This creates a problem analogous to the Ultimatum game, or the problem of fairness.
1cousin_it
Didn't think about that. Now I'm curious: how does this decision theory work? And does it give incentive to other agents to adopt it wholesale, like M does?
1Vladimir_Nesov
That's the idea. I more or less know how my version of this decision theory works, and I'm likely to write it up in the next few weeks. I wrote a little bit about it here (I changed my mind about causation, it's easy enough to incorporate it here, but I'll have to read up on Pearl first). There is also Eliezer's version, that started the discussion, and that was never explicitly described, even on a surface level. Overall, there seem to be no magic tricks, only the requirement for a philosophically sane problem statement, with inevitable and long-known math following thereafter.
0cousin_it
OK, I seem to vaguely understand how your decision theory works, but I don't see how it implements M as a special case. You don't mention source code inspection anywhere.
0Vladimir_Nesov
What matters is the decision (and its dependence on other facts). Source code inspection is only one possible procedure for obtaining information about the decision. The decision theory doesn't need to refer to a specific means of getting that information. I talked about a related issue here.
0[anonymous]
Forgive me if I'm being dumb, but I still don't understand. If two similar agents (not identical to avoid the clones argument) play the PD using your decision theory, how do they arrive at C,C? Even if agents' algorithms are common knowledge, a naive attempt to simulate the other guy just falls into bottomless recursion as usual. Is the answer somehow encoded in "the most general precommitment"? What do the agents precommit to? How does Pareto optimality enter the scene?
0robzahra
Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm who one-boxes / cooperates / whom one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, such that you decide to cooperate or defect based on your degree of uncertainty about your opponent. However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes. For example, if you're facing a clone of yourself, you can just ask yourself what you would do, and you know the answer. There may be some class of algorithms non-identical to you but still close enough that this self-reflection is increased evidence that your opponent will cooperate if you do.
0Vladimir_Nesov
No, you can't ask yourself what you'll do. It's like a calculator that seeks the answer to the question "what is 2+2?" in the form "what will I answer to the question 'what is 2+2'?", in which case the answer 57 will be perfectly reasonable. If you are cooperating with your copy, you only know that the copy will perform the same action, which is a restriction on your joint state space. Given this restriction, the expected utility calculation for your actions will return a result different from what other restrictions may force. In this case, you are left with only 2 options: (C,C) and (D,D), of which (C,C) is better.
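As a toy illustration of that restriction (standard PD payoffs assumed here, not taken from the thread): with the symmetry constraint in place, the expected-utility comparison runs over only two joint outcomes.

```python
# Payoffs to the row player in a standard Prisoner's Dilemma (assumed numbers).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Playing an exact copy, the restriction on the joint state space removes the
# asymmetric outcomes (C,D) and (D,C); only the diagonal remains reachable.
reachable = [("C", "C"), ("D", "D")]
print(max(reachable, key=lambda outcome: PAYOFF[outcome]))  # ('C', 'C')
```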
0robzahra
You're right. Speaking more precisely, by "ask yourself what you would do", I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note though that even when the reduction is not complete (for example, because you're fighting a similar but inexact clone), there can still be added incentive to cooperate...

interesting questions that you want asked... post each as a separate comment

Select the existential risk you judge most likely to occur this century:

  • Nuclear holocaust
  • Badly programmed superintelligence
  • Genetically engineered biological agent
  • Accidental misuse of nanotechnology (“gray goo”)
  • Environmental catastrophe (e.g. runaway global warming)
  • etc
2Mike Bishop
Why not ask for probabilities for each, and confidence intervals as well?
0hirvinen
Related, but different: which of these world-saving causes should receive the most attention? (Maybe place these in order.)

  • Avoiding nuclear war
  • Creating a Friendly AI, including preventing the creation of AIs you don't think are Friendly
  • Creating AI, with no need for it to be Friendly
  • Preventing the creation of AIs until humans are a lot smarter
  • Improving human cognition (should this include uploading capabilities?)
  • Defending against biological agents
  • Delaying nanotechnology development until we have sufficiently powerful AIs to set up defenses against gray goo
  • Creating and deploying anti-gray-goo nanotechnology
  • Avoiding environmental hazards
  • Space colonization
  • Fighting diseases
  • Fighting aging
  • Something else?
0Jack
"Most attention" is ambiguous, particularly when some of the options are phrased as proactive and others reactive/preventative. Do you man funding? Public awareness? Plus there some issues might be incredibly important but require relatively little "attention" to solve while others might be less important but take a lot more resources to solve. I wouldn't know how to answer this question accept to say I don't think any effort should be spent on creating and deploying anti- grey goo nanotech.
-1CannibalSmith
You must also ask country of residence for this to be valid.
1hirvinen
I think we mean here by existential risks something along the lines of, in Bostrom's words, "… either annihilate Earth-originating intelligent life or drastically and permanently curtail its potential", making countries irrelevant.
0CannibalSmith
Oops, I misread "century" as "country".

I wonder if you'd get better probability values if you used AJAX slider controls for a continuous value between 0 and 1. Less chance of anchoring percentages on multiples of 10 and 5.

0Zvi
In a survey, is an increase in rounding errors in the estimates a problem? As long as there's no bias in how they get rounded, we should be fine. If there is such a bias, I'm curious what it is and what causes it.
0JulianMorrison
I suspect it would have a strong bias towards obvious fractions and obvious multiples. That isn't a directional bias, but it's an anti-precise bias.
0gjm
That would make it effectively impossible to distinguish between 1 and 0.001, or 99 and 99.999. To get around that we'd need to work with something like log(odds ratio), but then there isn't any natural choice of endpoints, people's intuition for what goes where will generally be poor, etc.
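One way to implement a slider along these lines, sketched here under an assumed clamp (the endpoints really are an arbitrary choice, which is exactly gjm's point): let the slider move linearly in log-odds space rather than probability space.

```python
LOG10_ODDS_MIN, LOG10_ODDS_MAX = -4.0, 4.0  # assumed clamp; no natural choice exists

def slider_to_probability(t: float) -> float:
    """Map slider position t in [0, 1] linearly through log10 odds to a probability."""
    log_odds = LOG10_ODDS_MIN + t * (LOG10_ODDS_MAX - LOG10_ODDS_MIN)
    odds = 10.0 ** log_odds
    return odds / (1.0 + odds)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, slider_to_probability(t))
# 0.0 -> ~0.0001 and 1.0 -> ~0.9999; 0.001 and 0.01 are now about as far
# apart on the slider as 0.9 and 0.99
```

This distinguishes 1% from 0.001% easily, at the cost of the arbitrary endpoints and a scale many respondents will find unintuitive.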
4gjm

I'd like to see a question on the best level of aid to the Third World (say, an estimated optimum as a fraction of GDP in affluent Western countries). The current level is nonzero but rather low (especially if you exclude things like military aid to allies); some people say it's scandalously low, others that such aid is actively harmful and the level should therefore be zero or very close. (I assume plenty of people also say that the level should be zero because someone in the US has no obligations to someone in sub-Saharan Africa, but that opinion isn't expressed so often in public.)

3Eliezer Yudkowsky
It seems a bit transparent to me that there's no such thing as a "best level of aid to the Third World". That's asking "How much money do you have to throw at the problem to stop feeling guilty?" There are only marginal efficiencies which determine how much resources you would want to flow in that direction. In the case of Africa, African economists are pleading with us to stop the aid because it's destroying their continent. I don't know about the rest of the Third World. In any case it has to go project by project.
0gjm
1. A question posed simply in terms of "the best level" would be measuring some sort of tangled-up combination of respondents' values and their opinions about facts. That might be a bad thing (though I note that the question about political affiliation, at least, has the same feature). Instead, one could ask something like "what level of aid do you think would maximize Africa's GDP after 20 years?" or "what level of aid do you think would maximize average expected QALYs at birth over the whole human population?"

2. When considering an individual's charitable activity, of course we should think in terms of marginal efficiencies. That's not so clear when considering the question of the total amount of aid that might go from the affluent West to the Third World.

3. You mean (unless you have relevant information I don't, which is eminently possible) that some African economists are saying that the aid is harmful. It would be much more interesting to know typical African economists' opinions. If nothing else, there is obvious sampling bias here: if two African economists approach an American publisher, one proposing to write a book saying "Aid is actively harmful; stop it now" and one proposing to write one saying "Aid is useful; please do a bit more of it", which one is going to get the contract? It seems to me that there are multiple different factors making it far more likely to be the first one, factors that have scarcely any correlation with the actual truth of the matter.

4. Yes, of course, actual decisions need to be made project by project. That doesn't mean that one can't hold an opinion about the approximate gross amount of aid there should be. (Such as, for instance, "none", which is an opinion you don't seem to object to even though it's the ultimate in not-project-by-project answers, since it necessarily returns the same answer for every project.)
3Scott Alexander
How would everyone feel about a question phrased something like: "True or false: the marginal effect of extra money being given to aid in Africa through a charity like UNICEF is generally positive."
0Jack
AIUI, it matters immensely what type of aid you're talking about, the processes by which it is distributed, anti-corruption mechanisms, etc. Giving away food grown in Western countries is disastrous; microcredit, vaccinations, educating women, etc., not so much. In any case, I took the question to be trying to ascertain community positions on distributive justice issues and Western obligations to the developing world, rather than distributive efficiency. So if there is really a widespread sense that a question about aid wouldn't reflect those sorts of positions, maybe a more theoretical question would be better.
2AlanCrowe
Implicit in the question is the idea that aiding the Third World costs money. The World Bank claims that America's three-billion-dollar-a-year subsidy to its own cotton farmers has knock-on effects that make African cotton farmers three hundred million dollars a year worse off. But the American subsidy is a very wasteful internal transfer. If America wants to give African cotton farmers three hundred million dollars in aid, it need only scrap its subsidy, at a net benefit to America of perhaps two billion dollars.

Notice that I'm saying something different from "aid is actively harmful". I'm saying that we haven't plucked the low-hanging fruit of passive win/win, where we stop doing dumb shit and every nation is better off. After that comes active win/win, such as building harbours and roads that increase the value of African products by making it cheaper to transport them to First World markets (win for Africans) while making African products more available to First World markets (win for the First World). Mobile phones have reduced Third World poverty by letting farmers and fishermen direct their produce to the best markets, even while the mobile phone operators have profited by providing services.

Fostering a zero-sum mentality with questions that assume that aiding the Third World costs the same amount of money as the benefit provided is misleading.
0mattnewport
Indeed, in the 'most important world saving causes' list earlier, ending agricultural subsidies wasn't even mentioned, but it would probably be top of my list (battling with greatly relaxing immigration restrictions for the top spot).
0Mike Bishop
Have people answer two ways: 1) assume essentially no change in the type and quality of projects funded; 2) assume some wise politicians make some realistic improvements in transparency and accountability. The equivalent of No Child Left Behind for foreign aid.
0Zvi
Rather than argue over whether such a thing is possible, I think that assuming the aid would be spent on whatever would do the most good is the least convenient possible world, and the one that gives us the opinion we're after here. Together with the opinion on the realistic case, this tells us both what we think of the concept of aid if it works and what we think of it in practice.
0mattnewport
Why not just assume magical space fairies come down to earth and solve poverty? It's a more realistic expectation.
2MrShaggy
"Why not just assume magical space fairies come down to earth and solve poverty? It's a more realistic expectation." Right, like with the No Child Left Behind system, "still waiting for the magical space fairies to wisely make schools accountable since 2001."
4gjm

For the IQ question, you should clarify what level of precision you're after. Exact results or rounded ones? Only from professionally-conducted tests, or not? Include ones taken in childhood, or not? And, though you hardly need this pointed out to you, whatever form the question ends up taking you should expect substantial sampling bias in the (non-blank) answers.

0Mike Bishop
In the U.S., more of us will probably know our SAT or GRE scores. We could also ask about G.P.A.

interesting questions...

What's your take on the simulation argument? If you've no strong opinion, pick the most likely:

  • The human species is very likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).
  • We are almost certainly living in a computer simulation.
  • I deny that at least one of the above propositions is true.
  • I'm unfamiliar with the simulation argument.
2gjm
I would prefer a more general question about arguments of this form (doomsday argument, "thirdism" in the Sleeping Beauty problem, etc.). I know of very intelligent people who think such arguments are obviously sound and very intelligent people who think they are obviously unsound.
3mattnewport
Count me in the camp that thinks they are obviously pointless.
1MichaelHoward
One reason I'm interested in this is that people's choices vary but most people, myself included, believe their choice is clearly correct. In fact, I'd ask for another question beneath it: If you picked one of the first 4 options, how confident are you that you're correct?
0Zvi
The question as phrased assumes that the simulation argument is valid if you accept the priors; you can say you're not familiar with the simulation argument but you can't say that you think it is wrong. This seems like another sign that opinions on this are strong - as stated this question reminds me of a push poll.
0MichaelHoward
Yes you can: option 4. But if that isn't clear, it should be written as something like: "I disagree with the simulation argument - none of the first 3 propositions are true."

Vladimir_Nesov
  • "Time per day on OB/LW" is hard to measure, since I'm just being online, studying and working in parallel.
  • "Political views" -- I'd like "not commited" as an option.
  • "Santa" -- understanding your position is a process, so e.g. clear-cut "yes/no" doesn't map on my "I contemplated the notion, and was unsure before growing old/perceptive enough to realize it's a running joke".
  • Before the questions on probabilities, it'd be nice to ask about the position on interpretation of probability.
  • There shou
... (read more)
5Scott Alexander
I specifically excluded "not committed" as an option on the political views section, because a lot of rationalists have a tendency to go towards "not committed" to signal how they're not blind followers of a party, when really they have very well defined political views. I, for example, absolutely refuse to register with a political party, answer "independent" to any questions about my political affiliation, talk a good talk about how both parties are equally crooks, and then proceed to vote for the Democrat nine times out of ten. I would kind of like to force people like me to put "Democrat" on there so that we get more data. I will change this if enough people agree with Vladimir.
2thomblake
I agree with Vladimir. Parties are not ideologies - they're coalitions (at least in the US). I see no reason to assume people are affiliated with a particular coalition, especially one in possibly a foreign country.
2Vladimir_Nesov
The problem is that in Russia there is only one Party, and studying what the classical options are, or what the little parties are, doesn't seem to be worth my time given the current situation.
-3CronoDAS
Do they have primary elections?
1Jack
I agree that there should be no "not committed" option, but asking non-Americans to identify with an American political party seems kind of unhelpful. Do we think more traditional ideological terms are too vague to be helpful? Maybe: Conservative, Classical Liberal, Welfare State Liberal, Marxist/Post-Marxist, etc.?
0infotropism
I agree with Vladimir too; you can't always pinpoint people like that. I'd say I'm uncommitted too. By that I mean to encompass the general idea that I agree with a lot of the ideas that come from, for instance, libertarianism, and at the same time with a lot of the ideas behind communism. As I've never heard of a good synthesis between the two, I stand uncommitted.
1hrishimittal
You can measure time per day on OB/LW or any other app/site using Rescuetime. http://www.rescuetime.com/
1outlawpoet
I use ManicTime, myself http://www.manictime.com/
1Mike Bishop
Anybody tried both of these? I think everyone should use similar software. It's an incredibly low-cost route to more self-knowledge and discipline.

When looking for a long-term romantic partner: How important is intelligence? How important is a consequentialist moral outlook? How important is rationality?

3Alicorn
What about people who would find a consequentialist moral outlook in a potential partner negatively motivating? We'd give the same "how important" answer as someone who found it a positive trait.
0Mike Bishop
good point

"Do you think that the so called Dark Arts are inherently evil and should not be taught, learned and used by us? Why?"

1Zvi
I might offer options here beyond a yes/no: we should not learn them; we should learn them but not use them; we should use them only when we have no choice; or we should use them freely.

Are you an active member of other communities built around X, where X is one of the following? (check all that apply)

  • Atheism / Skepticism
  • Pick-up artists
  • Fandom (SF, anime, webcomics, whatever)
  • Political debate
  • Political activism
  • Technology (programming, electronics ...)
  • Transhumanism / the singularity
  • Entrepreneurship
  • Wikipedia
  • Free software
  • Environmentalism
  • Self-help / self-improvement
  • Religion
  • Science
  • Other

(Does someone have a better way of formulating this question?)

nutritional supplements, diet, exercise habits, height and weight

0Scott Alexander
Please tell me the exact questions you want. Also, why height and weight? I appreciate wanting to know what effect diet has on weight, but that's way beyond the scope of this survey.
0Mike Bishop
We don't want the survey to take too long, so maybe height and weight shouldn't be priorities, but there are a lot of reasons, none of them of overwhelming importance, that people might be interested:

1. As you said, the effect of diet on BMI.
2. Simply to describe who we are.
3. BMI as a very, very imperfect measure of akrasia.

Many questions pose a low risk of identity disclosure. For the few questions that pose a high risk, let people check a box which says "turn this answer into missing data before placing it in the public domain."

On the belief in god question, rule out simulation scenarios explicitly...I assume you intend "supernatural" to rule out a simulation creator as a "god"?

On marital status, distinguish "single and looking for a relationship" versus "single and looking for people to casually romantically interact with"

3gjm

The "inapplicable" option for the Robin-versus-Eliezer singularity question should be phrased in stronger terms than "don't believe in"; someone could easily (say) think that there's a 5% chance of a technological singularity some time, of which 4% is accounted for by one option and 0.5% by the other (and another 0.5% by all others together). But wouldn't it be better just to ask for probability estimates for (1) Robin-style singularity, (2) Elizer-style singularity, and (3) any singularity?

For this, and also for the Three Worlds Collide question, there should be at least one URL.

The survey should ask country of residence.

Perhaps there should be a short survey and a full survey? Or every question (or most, other than demographics) could have "no answer" as an already-marked default? It's a pretty intensive survey unless you spend a lot of time here, I think.

0Mike Bishop
Agreed: as survey length rises, survey response rate falls. I recommend making two or more surveys. The first one should take less than 5 minutes, and we should push everyone, including non-commenters, to fill it out. We should use our handles, or another ID, to link the data from multiple surveys.

The likelihood of an existential risk actualizing during this century.

Looking into U.S. political parties, especially beyond the big two, doesn't look like a good use of my time. Consider replacing that with the scores from the World's Smallest Political Quiz.

4Jack
Those sorts of questions aren't bad ideas, but I've become fairly confident that that quiz is designed to recruit more libertarians, not to accurately place anyone's political views. This is a better, though longer, political view quiz.
6hirvinen
Strongly disagree on Political Compass being better. The questions are heavily loaded, starting with the very first one, and many of them aren't at all about what should be done or what should be the state of things. What are you going to infer about my political beliefs based on my answers to those? (Edited to fix formatting.)
1Jack
Questions are loaded in different directions (in comparison to the World's Smallest, where all the questions are loaded in the libertarian direction), so the results balance each other out. Admittedly there are some questions that I wouldn't immediately think would indicate anything about my political beliefs, but it seems to accurately place people - at least those I've talked to. Have you taken it and felt that your placement was wrong? I have no doubt we could come up with a quiz better than either of these if we wanted to put in the time.
1hirvinen
The Political Compass seems to me, based on my own and friends' experiences, to have a strong pressure towards the lower left corner. As one of them said, "you would have to want to sacrifice babies to corporations to end up in the upper right corner." The World's Smallest Political Quiz isn't entirely neutral, but to me it would seem to spread people much more evenly, and importantly, all questions are clearly on the two axes along which it measures political stance.
1Jack
Pressure in that direction is definitely possible, given that that's where most of my friends think they belong anyway. Though it's strange, then, that they place the entire American political spectrum in the upper right. I'll reconsider my position on it. But the World's Smallest isn't a suitable alternative. It's just packed with weasel words, and it's going to obscure a lot of differences just because a lot of people are going to answer "Maybe sometimes" even if they lean heavily in one direction or the other. Also, the fact that this community is probably skewed libertarian anyway is just going to make it harder to interpret the results. The last thing we need is a poll that will automatically confirm our assumptions about this group's political views.
4Gordon Seidoh Worley
Although it is designed as part of libertarian recruitment, and is used to start discussions with people about freedoms they already support and then "draw them in" by gradually exposing them to other ideas, the reality of the data is that not many people score libertarian (the Web data isn't very accurate because you get a lot more libertarians visiting the site). In my younger years I did some tabling for the Libertarian Party: giving the quiz, then letting people place a dot on a blow-up of the quiz grid to mark how they scored and compare with others. And I have to say, in all that time, I did not encounter one person out of several hundred who scored libertarian and was not already a card-carrying member. In fact, if anything, most people score down in the authoritarian range. This is just the data set I've collected, though. Maybe there is a better one out there than the one you can find from the online version of the quiz.
0Jack
Maybe. But they did a version that was less obviously biased through an actual polling firm and got these results, which represent libertarians as eight times more common than the number of people who identify as libertarian. Now maybe there really are those kinds of sympathies for the libertarian position; I'm not sure. But it doesn't give me a lot of confidence, since the one online is worse. And even if it doesn't skew libertarian, it still lumps way too many people as centrist for it to be particularly useful. And any test that labels people as "authoritarian" (something usually reserved for totalitarian regimes) is pretty ridiculous.

You should survey superstition (astrology, bad luck avoidance, complementary medicine, etc).

1dclayh
Is "complementary" medicine the new euphemism for alternative/natural/Eastern/not-tested-with-science medicine? I haven't heard of it before.
1JulianMorrison
Not so new, but yup.
2gjm

Robin Hanson notoriously thinks that most medicine does little or no good. I'd guess that he opposes large-scale socialized medicine on these grounds, though that's not a foregone conclusion and I don't think I've seen an explicit statement from him about this. It's probably more usual to think that medicine is great and we should all have easier access to more of it. How about a question somewhere in this vicinity?

1Scott Alexander
Yes, but how can we phrase this rigorously? "Medicine does little good" seems too open to interpretation.
0Zvi
There are a few options that come to mind, none of them perfect. One basic one is to ask how much we should be spending on health care; the risk here arises if you think there is counterfactual effective medical spending. Another is to ask what we feel is the marginal cost, given current medicine, of an additional year of life or healthy life, which could then be compared to what people think that year, or a life saved, is worth. Asking "what percentage of the current investment in medicine has a substantial benefit to the patient?" is a way to try to measure this directly rather than indirectly.
1SoullessAutomaton
I vaguely recall Robin noting that socialized medicine (as implemented in other countries) tends to reduce both supply of medical treatment and money spent on such, so I'd actually expect that he would weakly support it in the sense of "more optimal than the current system". I could be wrong, though. However, I'm pretty sure he feels that other options would be superior.
2gjm

You have a question about the Singularity, but none about the more general question of artificial general intelligence. It is at least possible to expect (e.g.) that humanish-level AI will become possible in the next century but that it will not lead to a technological singularity; for instance, someone who expects that the days of exponential performance improvements in computing are almost behind us and that the road to AI will be via full-brain simulation with rather little understanding might well have that expectation. So there would be some value -- I don't know whether enough to justify the extra length of the survey -- in having a question about AI as well as one about the Singularity.

Why ask for political parties? Political views are complicated; if all you can do is pick a party, this complexity is lost. All those not from the US (like myself) might additionally have a hard time picking a party.

Those are not easy problems to solve, and it is certainly questionable whether thinking up some more specific questions about political views and throwing them all together will get you meaningful, reliable, and valid results. As long as you cannot do better than that, asking just for preferred political parties is certainly good enough.

1outlawpoet
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
0JulianMorrison
Those won't divide the parties outside the US. Every political party in Britain aside from the extreme fringe is for the availability of abortion and government provision of free healthcare, for example. And things that do divide the parties here, like compulsory ID cards, don't divide the parties in the US.
2outlawpoet
I'm not really interested in actual party divisions so much as I am interested in a survey of beliefs. Affiliation seems like much less useful information, if we're going to use Aumann-like agreement processes on this survey stuff.

I'd be interested in music taste and sports participation as well...maybe on a secondary survey which asks about hobbies.

I'd suggest framing the "How religious was your family" question in a specific cultural context. For example, my family was 'Average Religious' for Canada, but from what I've gathered about the United States, that would make them less religious than normal.

Also, I'd be interested to learn what percentage of the members here own weapons for self defense (as opposed to decorative, or other purposes). I'd also suggest the term 'weapons' over 'guns,' once again due to many members living outside of the United States.

1[anonymous]

When considering the impact on your success and quality of life, how useful to you is a dedicated emphasis on improving rationality?

  • Dramatically improved my life
  • Somewhat useful
  • Irrelevant
  • Somewhat of a hindrance
  • A significant disadvantage
0Scott Alexander
I thought Anna already covered that very well. Is there some reason you want to know this as part of an interaction with the other questions on the survey?

How educated do you consider yourself on the following topics:

  • Economics
  • World affairs / political geography
  • Energy / climate change / pollution / ecology
  • Law
  • Psychology / neurology
  • Biology / medical science
  • Artificial intelligence
  • Maths and Physics

(Any others of interest? These are biased towards important or frequent topics here.)

1Mike Bishop
Split psychology from neuroscience
1Paul Crowley
Maths and physics deserve finer classification; specific areas of interest might include:

  • Probability and statistics
  • QM
  • Complexity theory
  • Not sure what the word is for the field that would cover Gödel, Turing, Kolmogorov, Chaitin, etc.
1timtyler
Kolmogorov and Chaitin is probably "algorithmic information theory".
1Emile
Theoretical Computer science? I agree that it would be worth splitting out probability and statistics ... as for the rest, I'm not sure, it might be getting too specific. QM is interesting for Yvain's question about MWI.
0Mike Bishop
The possible responses should be fairly concrete, ranging from a) "I know less on this topic than an average undergraduate major at an average U.S. university, e.g. Michigan State" to f) "I am making research contributions at the cutting edge of this field".
0Mike Bishop
Make a general category for Humanities, and make Applied Statistics distinct from Math. Added: Make a separate category for Philosophy as well.
2Alicorn
Why a general category for humanities? "Humanities" could mean anything from art to philosophy to literary criticism. Philosophy, at least, should be its own topic.

The likelihood of the creation of an AGI leading to an intelligence explosion?

ETA: The likelihood of human uploads leading to an intelligence explosion?

Does the first AGI have to be Friendly, or are we screwed?

0Paul Crowley
We'll probably be discussing that from Friday on - there's a bar on such discussions before then...
1gjm

As others have said, (1) party affiliation is an oversimplification of political beliefs, but (2) many, many people do broadly hew to one or another party line. But precisely because #2 is true, you can get much the same information as you do from "Democrat or Republican?" by asking one or two questions addressed more directly to the issues.

At least once in the past (probably much more often) researchers have done a political survey, done a principal-component analysis (or something similar) on the results, and published their conclusions about the... (read more)

1gjm

Have a note explicitly inviting people to add noise to their karma scores. Noisy karma scores are less useful for identification, though of course if anyone says "about 3000" and you believe them then it doesn't leave much room for doubt about who it is.
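One simple way to operationalize this, as a sketch (my illustration, not gjm's specification; the 20% spread and the rounding are arbitrary choices): perturb the reported karma multiplicatively and round it to a coarse bucket.

```python
import random

def noisy_karma(karma: int, spread: float = 0.2) -> int:
    """Report karma after a random +/-20% perturbation, rounded to the nearest 10."""
    factor = 1.0 + random.uniform(-spread, spread)
    return int(round(karma * factor, -1))

print(noisy_karma(3000))  # e.g. 2870 -- still informative, but less identifying
```

As gjm notes, self-reported noise only helps if people actually add it; a very distinctive answer like "about 3000" stays identifying regardless.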

Occupation, income, self-perceived success in relationships and career, life satisfaction, experience of depression. Parents' education, income, and rationality. High school popularity; comfort and success interacting with people from different social/cultural groups.

On the Newcomb question I think you should have an option for Wittgenstein-like positions, i.e. "the premise of the question contains hidden contradictions". I'd offer the same option for the other similarly-formatted questions, although I'm not aware of anyone making any such assertion about PD.

0[anonymous]

In what aspect of life is improving your rational thinking most useful?

  • Professional success
  • Scientific and/or technical achievement
  • Social life
  • Personal development (self-improvement, Cognitive Behavioral Therapy, etc.)
  • Important life decisions

0gjm

It would be interesting to have at least one question in the general domain of economic/political forecasting, for two reasons: (1) such questions have some practical interest; (2) they can be tested later. (Especially valuable if we ask for confidence limits or something, so that calibration as well as accuracy can be tested.)

I don't have strong feelings about which such questions would be best; anyone who agrees with me that there should be such questions, and does have strong feelings about which, might put them in replies to this comment. Random examples: length and depth of the current recession; likely relative importance of (say) the US, China, Europe, and India in 20 years' time.

I would like to see the results made public, as well as seeing more surveys in general.

I don't have a good indicator of how many people would worry about public data, but as the survey-taking group's size increases (as I presume will happen over time on LW), it should become easier to remain unidentifiable.

Plenty of people voluntarily fill out surveys about themselves on social networking sites, and those of us concerned with anonymity probably wouldn't be filling them out either way.

0byrnema
Some people are easier to identify than others (for example, if you're female or from a particular country), and any person may feel uncomfortable about a particular question, so even marginal concern about being identified with an odd view may skew results. Consider making the data public in a way that gives the complete set of answers to each question, but doesn't allow comparison of how one person answered multiple questions. (I'm sure there's an easy way to say this; I don't know it.) So, in other words, you can't tell that the person who answered "karma = -16" also answered "yes" to "superstitious". Any cross-correlations, of course, would need to be computed using the original, publicly unavailable data.
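A minimal sketch of what byrnema describes (sometimes called releasing only the marginal distributions; the field names below are invented for illustration): shuffle each question's answer column independently, so every individual answer is published but rows no longer correspond to respondents.

```python
import random

responses = [  # hypothetical raw survey rows
    {"karma": -16, "superstitious": "yes", "country": "US"},
    {"karma": 250, "superstitious": "no", "country": "UK"},
    {"karma": 3, "superstitious": "no", "country": "FI"},
]

def release_marginals(rows):
    """Publish each column separately, shuffled, breaking links between answers."""
    released = {}
    for question in rows[0]:
        column = [row[question] for row in rows]
        random.shuffle(column)
        released[question] = column
    return released

print(release_marginals(responses))
```

Cross-correlations would, as byrnema says, have to be computed from the original private table before shuffling.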

There is plenty of literature out there about how groups can go wrong. We need to make sure we do not fall victim to those traps. What are ways we can identify and avoid known pitfalls?

Perhaps we should include some questions about perceptions of the community: diversity of viewpoints, strength of conformity, how much you personally identify with the group, things of that nature. These answers could be useful for self-diagnostic purposes, both for the group itself and for individual members comparing their answers against others in the group.

0Scott Alexander
I'm not sure what you mean. Can you suggest some?

I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?

-1Emile

MBTI type (may not be the most "scientifically valid", but it's probably the one the most people would know).

1dfranke
Useless, I think. For any site that caters to the hacker/libertarian/technophile cluster, the results are invariably dominated by INTJs and INTPs with a few ENTJs, and everything else put together being in the single digits. The meaning of the specific numbers we get for this site will be completely drowned out by sampling bias and the general imprecision of the test.
1MBlume
Actually, the test I took identified me as a feeling type, ENFP. ETA: Though it could be relevant that I took the test since reading Feeling Rational, which has made a lot of standard quiz questions about thinking vs. feeling read like nonsense to me.