Emile comments on 2011 Survey Results - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.
Which parts, specifically?
(it didn't have an effect like that on me, I didn't see that many surprising things)
I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here... In short, to be like the top posters seem to (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.
Alicorn (top-poster) doesn't agree with Eliezer about ethics. PhilGoetz (top-poster) doesn't agree with Eliezer. Wei_Dai (top-poster) doesn't agree with Eliezer on AI issues. wedrifid (top-poster) doesn't agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.
I am pretty sure Yvain doesn't agree with Eliezer on quite a few things too (too lazy to look it up now).
Generally there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez or Katja Grace who has been a visiting fellow.
But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.
Holden Karnofsky has read all of the Sequences?
I wrote him an email to make sure. Here is his reply:
IIRC Peter Thiel can't give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.
Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501(c)(3) requirement that it "receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families."
I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples... I'm not sure what's going on, but I'm sure that whatever it is it's my fault and extremely unflattering to my ability as a rationalist.
How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I'm too lazy to form my own opinions! :p
I just wait, especially considering that which interpretation of QM is correct doesn't have urgent practical consequences.
We just learned that neutrinos might travel faster than light in certain circumstances. While this result doesn't give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.
Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences.
It isn't obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.
I share that impression :) Plus it's very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.
Sounds like plain old accidental compartmentalization. You didn't join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)
That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.
Surely it depends on which questions are meant by "important things".
Granted.
The most salient one would be religion.
What surprised you about the survey's results regarding religion?
That there are theists around?
Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.
Since you seem to have a sense of the community, your surprise surprises me. Will_Newsome's contrarian defense of theism springs to mind immediately, and I know we have several people who are theists, or were when they joined LW.
Also, many people could have answered the survey who are new here.
It's also fairly unlikely that all the theists and quasitheists on LW have outed themselves as such.
Nor is there any particular reason they should.
I assumed those were rare exceptions.
Why? Don't you encounter enough contrarians on LW?
You may think you encounter a lot of contrarians on LW, but I disagree - we're all sheep.
But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? ('Collapse, MWI, uh...' - much less could discuss why they dislike pilot waves or whatever.) I doubt many fewer could do so than plumped for MWI - because Eliezer is such a fan...
I know I am a sheep and hero worshipper, and then the typical mind fallacy happened.
Heh. The original draft of my comment above included just this example.
To be explicit, I don't believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.
I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as "I don't think I would like it if MWI were true. Plus I find it a worn-out trope in science fiction. Also, it feels like we live in a single world." or misapplications of rational ideas like "Wouldn't Occam's razor favor a single world?"
I still don't know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.
It seems like doing this would only restore you to a non-informative prior, which still doesn't cohere with the survey result. What positive evidence is there in the QM sequence for MWI?
The positive evidence for MWI is that it's already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!
MWI, as beautiful as it is, won't fully convince me until it can explain the Born probabilities. Other interpretations don't do any better, so it's not a point "against" MWI, but it's still an additional rule you need to make the "jump" between QM and what we actually observe. As long as you need that additional rule, I have a deep feeling we haven't reached the bottom.
I still had in my mind the arguments in favour of many-worlds, like "lots of scientists seem to take it seriously", and the basic argument that works for ever-increasing the size of reality: the more reality there is out there for intelligence to evolve in, the greater the likelihood that intelligence evolves.
Well, it mentions some things like "it's deterministic and local, like all other laws of physics seem to be". Does that count?
Its determinism is of a very peculiar kind, not like that of other laws of physics.
Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don't actually know if that characterizes Eliezer.
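As a back-of-the-envelope check on those figures (a hypothetical sketch; only the numbers 389 and 42% come from the comment above, the implied total is derived, not taken from the survey):

```python
# If 389 respondents make up 42% of the sample, the implied total
# number of survey respondents is 389 / 0.42.
cluster_size = 389
cluster_share = 0.42

implied_total = round(cluster_size / cluster_share)  # implied sample size
outside_cluster = implied_total - cluster_size       # everyone not in the cluster

print(implied_total)    # ~926
print(outside_cluster)  # ~537, i.e. a majority outside the cluster
```

This is consistent with the reply below: with a 42% cluster, roughly 58% of respondents fall outside it in at least one dimension.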
It's slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.