Will_Newsome comments on I Stand by the Sequences - Less Wrong

Post author: Grognor 15 May 2012 10:21AM


Comment author: Will_Newsome 15 May 2012 08:01:38PM 25 points [-]

You seem to mostly disagree in spirit with all Grognor's points but the last, though on that point you didn't share your impression of the H&B literature.

I'll chime in and say that at some point about two years ago I would have more or less agreed with all six points. These days I disagree in spirit with all six points and with the approach to rationality that they represent. I've learned a lot in the meantime, and various people, including Anna Salamon, have said that I seem like I've gained fifteen or twenty IQ points. I've read all of Eliezer's posts maybe three times over and I've read many of the cited papers and a few books, so my disagreement likely doesn't stem from not having sufficiently appreciated Eliezer's sundry cases. Many times when I studied the issues myself and looked at a broader set of opinions in the literature, or looked for justifications of the unstated assumptions I found, I came away feeling stupid for having been confident of Eliezer's position: often Eliezer had very much overstated the case for his positions, and very much ignored or fought straw men of alternative positions.

His arguments and their distorted echoes lead one to think that various people or conclusions are obviously wrong and thus worth ignoring: that philosophers mostly just try to be clever and that their conclusions are worth taking seriously more-or-less only insofar as they mirror or glorify science; that supernaturalism, p-zombie-ism, theism, and other philosophical positions are clearly wrong, absurd, or incoherent; that quantum physicists who don't accept MWI just don't understand Occam's razor or are making some similarly simple error; that normal people are clearly biased in all sorts of ways, and that this has been convincingly demonstrated such that you can easily explain away any popular beliefs if necessary; that religion is bad because it's one of the biggest impediments to a bright, Enlightened future; and so on. It seems to me that many LW folk end up thinking they're right about contentious issues where many people disagree with them, even when they haven't looked at their opponents' best arguments, and even when they don't have a coherent understanding of their opponents' position or their own position. Sometimes they don't even seem to realize that there are important people who disagree with them, like in the case of heuristics and biases. Such unjustified confidence and self-reinforcing ignorance is a glaring, serious, fundamental, and dangerous problem with any epistemology that wishes to lay claim to rationality.

Comment author: Emile 15 May 2012 08:14:09PM 7 points [-]

normal people are clearly biased in all sorts of ways

Does anybody actually dispute that?

religion is bad because it's one of the biggest impediments to a bright, Enlightened future;

For what it's worth, I don't hold that position, and it seems much more prevalent in atheist forums than on LessWrong.

Comment author: JoshuaZ 15 May 2012 08:25:22PM *  4 points [-]

it seems much more prevalent in atheist forums than on LessWrong.

Is it less prevalent here or is it simply less vocal because people here aren't spending their time on that particular tribal demonstration? After all, when you've got Bayesianism, AI risk, and cognitive biases, you have far more effective methods of signaling allegiance to this narrow crowd.

Comment author: Eugine_Nier 16 May 2012 04:31:19AM 2 points [-]

Well we have openly religious members of our 'tribe'.

Comment author: JoshuaZ 16 May 2012 04:36:33AM 5 points [-]

Clear minority, and most comments defending such views are voted down. With the exception of Will, no one in that category is what would probably be classified as high status here, and even Will's status is... complicated.

Comment author: Eugine_Nier 16 May 2012 05:32:30AM 2 points [-]

Well this post is currently at +6.

Comment author: Will_Newsome 16 May 2012 07:34:04PM 2 points [-]

Also I'm not religious in the seemingly relevant sense.

Comment author: Will_Newsome 15 May 2012 08:38:03PM 5 points [-]

Does anybody actually dispute that?

Depends on what connotations are implied. There are certainly people who dispute, e.g., the (practical relevance of the) H&B results on confirmation bias, overconfidence, and so on that LessWrong often brings up in support of the "the world is mad" narrative. There are also people like Chesterton who placed much faith in the common sense of the average man. But anyway I think the rest of the sentence needs to be included to give that fragment proper context.

For what it's worth, I don't hold that position, and it seems much more prevalent in atheist forums than on LessWrong.

Granted.

Comment author: CuSithBell 15 May 2012 09:25:17PM 3 points [-]

Could you point towards some good, coherent arguments for supernatural phenomena or the like?

Comment author: Will_Newsome 15 May 2012 09:53:07PM 2 points [-]

Analyzing the sun miracle at Fatima seems to be a good starting point. This post has been linked from LessWrong before. Not an argument for the supernatural, but a nexus for arguments: it shows what needs to be explained, by whatever means. Also worth keeping in mind is the "capricious psi" hypothesis, reasonably well-explicated by J. E. Kennedy in a few papers and essays. Kennedy's experience is mostly in parapsychology. He has many indicators in favor of his credibility: he has a good understanding of the relevant statistics, he exposed some fraud going on in a lab where he was working, he doesn't try to hide that psi, if it exists, would seem to have weird and seemingly unlikely properties, et cetera.

But I don't know of any arguments that really go meta and take into account how the game theory and psychology of credibility might be expected to affect the debate, e.g., emotional reactions to people who look like they're trying to play psi-of-the-gaps, both sides' frustration with incommunicable evidence or even the concept of incommunicable evidence, and things like that.

Comment author: CuSithBell 15 May 2012 11:27:51PM 6 points [-]

Hm. This... doesn't seem particularly convincing. So it sounds like whatever convinced you is incommunicable - something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?

Comment author: Will_Newsome 15 May 2012 11:56:35PM 6 points [-]

Hm. This... doesn't seem particularly convincing.

Agreed. The actually-written-up-somewhere arguments that I know of can at most move supernaturalism from "only crazy or overly impressionable people would treat it as a live hypothesis" to "otherwise reasonable people who don't obviously appear to have a bottom line could defensibly treat it as a Jamesian live hypothesis". There are arguments that could easily be made that would fix specific failure modes, e.g. some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism, and Randi-style skeptics seem to like fully general explanations/counterarguments too much. But once those basic hurdles are overcome there still seems to be a wide spread of defensible probabilities for supernaturalism based solely on communicable evidence.

So it sounds like whatever convinced you is incommunicable - something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?

Essentially, yes.

Comment author: steven0461 16 May 2012 12:47:22AM 3 points [-]

some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism

Is the point here that supernatural entities that would be too complex to specify into the universe from scratch may have been produced through some indirect process logically prior to the physics we know, sort of like humans were produced by evolution? Or is it something different?

Comment author: Will_Newsome 16 May 2012 01:09:49AM 0 points [-]

Alien superintelligences are less speculative and emerge naturally from a simple universe program. More fundamentally the notion of simplicity that Eliezer and Luke are using is entirely based off of their assessments of which kinds of hypotheses have historically been more or less fruitful. Coming up with a notion of "simplicity" after the fact based on past observations is coding theory and has nothing to do with the universal prior, which mortals simply don't have access to. Arguments should be about evidence, not "priors".
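For context on that last point, the universal prior under discussion is Solomonoff's algorithmic probability. One standard way of writing it (assuming the usual setup of a prefix-free universal Turing machine U; the notation is supplied here, not taken from the thread) is:

```latex
% Discrete universal (Solomonoff) prior on a string x:
% the sum ranges over all self-delimiting programs p
% that make the universal machine U output x.
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
```

Because deciding which programs halt and output x is undecidable, m is only lower semi-computable; that is the formal sense in which mortals "simply don't have access to" it.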

Comment author: siodine 16 May 2012 04:00:44AM *  -1 points [-]

More fundamentally the notion of simplicity that Eliezer and Luke are using is entirely based off of their assessments of which kinds of hypotheses have historically been more or less fruitful

...

Coming up with a notion of "simplicity" after the fact based on past observations is coding theory and has nothing to do with the universal prior. Arguments should be about evidence, not "priors".

It isn't technically a universal prior, but it counts as evidence because it's historically fruitful. That leaves you with a nitpick rather than showing "LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism."

Comment author: Will_Newsome 16 May 2012 04:17:07AM 1 point [-]

I don't think it's nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability. Bringing in Kolmogorov complexity is needlessly confusing, and even Bayesian probability isn't necessary because all we're really concerned with is the likelihood ratio. The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons. Such errors harm group epistemology.
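For reference, the likelihood-ratio framing invoked here is just the odds form of Bayes' theorem: whatever prior odds one starts with, the evidence acts only through the ratio of likelihoods:

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
```

On this view, an argument about evidence is an argument about the right-hand factor, independent of where the prior odds came from.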

Comment author: siodine 16 May 2012 12:35:20PM *  0 points [-]

I don't think it's nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability.

I don't see how you've done that. If KC isn't technically a universal prior, in the same way that "objective" isn't technically objective but inter-subjective, you can still use KC as evidence for a class of propositions (and probably the only meaningful class of propositions). For that class of propositions you have automatic evidence for or against them (in the form of KC), and so it's basically a ready-made prior because it passes from the posterior to the prior immediately anyway.

The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons.

So a) you think LWers' reasons for not believing in supernaturalism have nothing to do with KC, and b) you think supernaturalism exists outside the class of propositions KC can count as evidence for or against?

I don't care about A, but if B is your position, I wonder: why?

Comment author: CuSithBell 16 May 2012 12:15:17AM 2 points [-]

That's a shame. Any chance you might have suggestions on how to go about obtaining such evidence for oneself? Possibly via PM if you'd be more comfortable with that.

Comment author: Will_Newsome 16 May 2012 01:37:52AM *  5 points [-]

I have advice. First off, if psi's real then I think it's clearly an intelligent agent-like or agent-caused process. In general you'd be stupid to mess around with agents with unknown preferences. That's why witchcraft was considered serious business: messing with demons is very much like building mini uFAIs. Just say no. So I don't recommend messing around with psi, especially if you haven't seriously considered what the implications of the existence of agent-like psi would be. This is why I like the Catholics: they take things seriously, it's not fun and games. "Thou shalt not tempt the Lord thy God." If you do experiment, pre-commit not to tell anyone about at least some predetermined subset of the results. Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi's capricious then pre-committing not to blab increases likelihood of significant effects.

Comment author: Eugine_Nier 16 May 2012 04:45:18AM 5 points [-]

Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi's capricious then pre-committing not to blab increases likelihood of significant effects.

I just thought of something. What you're saying is that psi effects are anti-inductive.

Comment author: bogus 16 May 2012 08:52:56PM 0 points [-]

The capricious-psi literature actually includes several proposed mechanisms which could lead to "anti-inductive" psi. Some of these mechanisms are amenable to mitigation strategies (such as not trying to use psi effects for material advantage, and keeping one's experiments confidential); others are not.

Comment author: Will_Newsome 16 May 2012 04:59:21AM 0 points [-]

Indeed.

Comment author: Eugine_Nier 16 May 2012 05:34:49AM 6 points [-]

Ok, I feel like we should now attempt to work out a theory of psi caused by some kind of market-like game theory among entities.

Comment author: CuSithBell 18 May 2012 03:01:51AM 2 points [-]

Thanks for the advice! Though I suppose I won't tell you if it turns out to have been helpful?

Comment author: r_claypool 16 May 2012 10:15:10PM 0 points [-]

LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism

As lukeprog says here.

Comment author: Eugine_Nier 16 May 2012 04:39:35AM 1 point [-]

I don't entirely agree with Will here. My issue is that there seem to be some events, e.g., Fatima, where the best "scientific explanation" is little better than the supernatural wearing a lab-coat.

Comment author: CuSithBell 18 May 2012 03:01:13AM 1 point [-]

Are there any good supernatural explanations for that one?! Because "Catholicism" seems like a pretty terrible explanation here.

Comment author: Eugine_Nier 18 May 2012 05:57:26AM 3 points [-]

Because "Catholicism" seems like a pretty terrible explanation here.

Why? Do you have a better one? (Note: I agree "Catholicism" isn't a particularly good explanation; it's just that it's not noticeably worse than any other.)

Comment author: CuSithBell 19 May 2012 01:17:32AM 1 point [-]

I mentioned Catholicism only because it seems like the "obvious" supernatural answer, given that it's supposed to be a Marian apparition. Though, I do think of Catholicism proper as pretty incoherent, so it'd rank fairly low on my supernatural explanation list, and well below the "scientific explanation" of "maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years". I haven't really investigated the phenomenon myself, but I think, say, "the ghost-emperor played a trick" or "mass hypnosis to cover up UFO experiments by the lizard people" rank fairly well compared to Catholicism.

Comment author: Eugine_Nier 19 May 2012 03:51:30AM 2 points [-]

"maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years".

This isn't really an explanation so much as clothing our ignorance in a lab coat.

Comment author: JoshuaZ 19 May 2012 03:53:46AM 1 point [-]

It does a little more than that. It points to a specific class of hypotheses where we have evidence that in similar contexts such mechanisms can have an impact. The real problem here is that without any ability to replicate the event, we're not going to be able to get substantially farther than that.

Comment author: siodine 15 May 2012 09:45:16PM *  0 points [-]

Disagree in spirit? What exactly does that mean?

(I happen to mostly agree with your comment while mostly agreeing with Grognor's points--hence my confusion in what you mean, exactly.)

Comment author: Will_Newsome 15 May 2012 10:27:48PM *  6 points [-]

Hard to explain. I'll briefly go over my agreement/disagreement status on each point.

MWI: Mixed opinion. MWI is a decent bet, but then again that's a pretty standard opinion among quantum physicists. Eliezer's insistence that MWI is obviously correct is not justified given his arguments: he doesn't address the most credible alternatives to MWI, and doesn't seem to be cognizant of much of the relevant work. I think I disagree in spirit here even though I sort of agree at face value.

Cryonics: Disagree, nothing about cryonics is "obvious".

Meh science, Yay Bayes!: Mostly disagree, too vague, little supporting evidence for face value interpretation. I agree that Bayes is cool.

Utilitarianism: Disagree, utilitarianism is retarded. Consequentialism is fine, but often very naively applied in practice, e.g. utilitarianism.

Eliezer's metaethics: Disagree, especially considering Eliezer's said he thinks he's solved meta-ethics, which is outright crazy, though hopefully he was exaggerating.

"'People are crazy, the world is mad' is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature": Mostly disagree, LW is much too confident in the heuristics and biases literature and it's not nearly a sufficient explanation for lots of things that are commonly alleged to be irrational.

Comment author: steven0461 15 May 2012 11:51:31PM *  11 points [-]

Disagree, utilitarianism is retarded.

When making claims like this, you need to do something to distinguish yourself from most people who make such claims, who tend to harbor basic misunderstandings, such as an assumption that preference utilitarianism is the only utilitarianism.

Utilitarianism has a number of different features, and a helpful comment would spell out which of the features, specifically, is retarded. Is it retarded to attach value to people's welfare? Is it retarded to quantify people's welfare? Is it retarded to add people's welfare linearly once quantified? Is it retarded to assume that the value of structures containing more than one person depends on no features other than the welfare of those persons? And so on.
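For concreteness, the "add linearly" feature being isolated above is the aggregation rule of classical utilitarianism (the symbols here are illustrative, not drawn from the comment): given per-person welfare values u_i, outcomes are ranked by

```latex
U(\text{outcome}) = \sum_{i=1}^{n} u_i(\text{outcome})
```

Each question above targets a different ingredient of this formula: whether the u_i are meaningful, whether anything besides them matters, and whether plain summation is the right way to combine them.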

Comment author: Will_Newsome 16 May 2012 05:10:04AM 6 points [-]

I suppose it's easiest for me to just make the blanket metaphilosophical claim that normative ethics without well-justified meta-ethics just aren't a real contender for the position of actual morality. So I'm unsatisfied with all normative ethics. I just think that utilitarianism is an especially ugly hack. I dislike fake non-arbitrariness.

Comment author: Will_Newsome 16 May 2012 12:49:10AM *  0 points [-]

Perhaps I show my ignorance. Pleasure-happiness and preference fulfillment are the only maximands I've seen suggested by utilitarians. A quick Google search hasn't revealed any others. What are the alternatives?

I'm unfortunately too lazy to make my case for retardedness: I disagree with enough of its features and motivations that I don't know where to begin, and I wouldn't know where to end.

Comment author: steven0461 16 May 2012 12:55:01AM *  4 points [-]

What are the alternatives?

Eudaimonia. "Thousand-shardedness". Whatever humans' complex values decide constitutes an intrinsically good life for an individual.

It's possible that I've been mistaken in claiming that, as a matter of standard definition, any maximization of linearly summed "welfare" or "happiness" counts as utilitarianism. But it seems like a more natural place to draw the boundary than "maximization of either linearly summed preference satisfaction or linearly summed pleasure indicators in the brain but not linearly summed eudaimonia".

Comment author: Will_Newsome 16 May 2012 01:58:34AM 2 points [-]

That sounds basically the same as what I'd been thinking of as preference utilitarianism. Maybe I should actually read Hare.

What's your general approach to utilitarianism's myriad paradoxes and mathematical difficulties?

Comment author: Will_Newsome 26 May 2012 11:02:57AM *  -1 points [-]

You went into the kitchen cupboard
Got yourself another hour, and you gave
Half of it to me
We sat there looking at the faces
Of the strangers in the pages
'til we knew 'em mathematically

They were in our minds
Until forever
But we didn't mind
We didn't know better

So we made our own computer
Out of macaroni pieces
And it did our thinking
While we lived our lives
It counted up our feelings
And divided them up even
And it called our calculation
Perfect love [lives?]

Didn't even know
That love was bigger
Didn't even know
That love was so, so
Hey hey hey

Hey this fire, this fire
It's burning us up
Hey this fire, It's burning us
Oh, oo oo oo, oo oo oo oo

So we made the hard decision
And we each made an incision
Past our muscles and our bones
Saw our hearts were little stones

Pulled 'em out, they weren't beating
And we weren't even bleeding
As we laid them on the granite counter top

We beat 'em up against each other
We beat 'em up against each other
We struck 'em hard against each other
We struck 'em so hard, so hard until they sparked

Hey this fire, this fire
It's burning us up
Hey this fire
It's burning us up
Hey this fire It's burning us
Oh, oo oo oo, oo oo oo oo
Oo oo oo oo oo oo

— Regina Spektor, The Calculation

Comment author: steven0461 16 May 2012 12:16:12AM 5 points [-]

he doesn't address the most credible alternatives to MWI

I don't think you need to explicitly address the alternatives to MWI to decide in favor of MWI. You can simply note that all interpretations of quantum mechanics either 1) fail to specify which worlds exist, 2) specify which worlds exist but do so through a burdensomely detailed mechanism, or 3) admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics. Am I missing something?

Comment author: Will_Newsome 16 May 2012 01:52:07AM *  0 points [-]

Is that a response to my point specifically or a general observation? I don't think "simply noting" is nearly enough justification to decide strongly in favor of MWI—maybe it's enough to decide in favor of MWI, but it's not enough to justify confident MWI evangelism nor enough to make bold claims about the failures of science and so forth. You have to show that various specific popular interpretations fail tests 1 and 2.

ETA: Tapping out because I think this thread is too noisy.

Comment author: steven0461 16 May 2012 01:58:52AM 0 points [-]

I suppose? It's hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome. But maybe you have something in mind?

Comment author: Eugine_Nier 16 May 2012 04:57:15AM 2 points [-]

It's hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome.

It always seems that way until someone proposes a new theoretical framework; afterwards it seems like people were insane for not coming up with said framework sooner.

But maybe you have something in mind?

Well the Transactional Interpretation for example.

Comment author: steven0461 16 May 2012 05:19:02AM 2 points [-]

That would have been my guess. I don't really understand the transactional interpretation; how does it pick out a single world without using a burdensomely detailed mechanism to do so?