Jayson_Virissimo comments on I Stand by the Sequences - Less Wrong
Why must we stand by or stand away from? Personally, I lean towards the Sequences. Do you really need to feel lonely unless others affirm every single doctrine?
I accept the MWI of QM as "empirically adequate"; no more, no less.
Cryonics is interesting and worth considering, but the probabilities involved are so low that it is not at all obvious it is a net win after factoring in signalling costs.
"Science" is so many different things that I think it is much more responsible to divide it up into smaller sets (some of which could really use some help from LessWrongists and others which are doing just fine, thank-you-very-much) before making such blanket generalizations.
This is a point on which I side with the mathematical economists (and not with the ethicists) and just say that there is no good way to make interpersonal utility comparisons when you are considering large diverse populations (or, for that matter, the "easy" case of a genetically related nuclear family).
I am confused about Eliezer's metaethics. If you ask 10 LessWrongers what Eliezer's metaethical theory is, you get approximately 10 distinct positions. In other words, I don't know how high a probability to assign to it, because I'm very unsure of what it even means.
I agree. The world really is mad. I seriously considered the hypothesis that it was I who was mad, but rejected this proposition, partly because my belief-calibration seems to be better than average (precisely the opposite of what you would expect of crazy people). Of course, "madness" is relative, not absolute. I am no doubt insane compared to super-human intelligences (God, advanced-AI, Omega, etc...).
You seem to mostly disagree in spirit with all Grognor's points but the last, though on that point you didn't share your impression of the H&B literature.
I'll chime in and say that at some point about two years ago I would have more or less agreed with all six points. These days I disagree in spirit with all six points and with the approach to rationality that they represent. I've learned a lot in the meantime, and various people, including Anna Salamon, have said that I seem like I've gained fifteen or twenty IQ points. I've read all of Eliezer's posts maybe three times over and I've read many of the cited papers and a few books, so my disagreement likely doesn't stem from not having sufficiently appreciated Eliezer's sundry cases. Many times when I studied the issues myself and looked at a broader set of opinions in the literature, or looked for justifications of the unstated assumptions I found, I came away feeling stupid for having been confident of Eliezer's position: often Eliezer had very much overstated the case for his positions, and very much ignored or fought straw men of alternative positions.
His arguments and their distorted echoes lead one to think that various people or conclusions are obviously wrong and thus worth ignoring: that philosophers mostly just try to be clever and that their conclusions are worth taking seriously more-or-less only insofar as they mirror or glorify science; that supernaturalism, p-zombie-ism, theism, and other philosophical positions are clearly wrong, absurd, or incoherent; that quantum physicists who don't accept MWI just don't understand Occam's razor or are making some similarly simple error; that normal people are clearly biased in all sorts of ways, and that this has been convincingly demonstrated such that you can easily explain away any popular beliefs if necessary; that religion is bad because it's one of the biggest impediments to a bright, Enlightened future; and so on. It seems to me that many LW folk end up thinking they're right about contentious issues where many people disagree with them, even when they haven't looked at their opponents' best arguments, and even when they don't have a coherent understanding of their opponents' position or their own position. Sometimes they don't even seem to realize that there are important people who disagree with them, like in the case of heuristics and biases. Such unjustified confidence and self-reinforcing ignorance is a glaring, serious, fundamental, and dangerous problem with any epistemology that wishes to lay claim to rationality.
Does anybody actually dispute that?
For what it's worth, I don't hold that position, and it seems much more prevalent in atheist forums than on LessWrong.
Is it less prevalent here or is it simply less vocal because people here aren't spending their time on that particularly tribal demonstration? After all, when you've got Bayesianism, AI risk, and cognitive biases, you have a lot more effective methods of signaling allegiance to this narrow crowd.
Well we have openly religious members of our 'tribe'.
Clear minority, and most comments defending such views are voted down. With the exception of Will, no one in that category is what would probably be classified as high status here, and even Will's status is... complicated.
Well this post is currently at +6.
Also I'm not religious in the seemingly relevant sense.
Depends on what connotations are implied. There are certainly people who dispute, e.g., the (practical relevance of the) H&B results on confirmation bias, overconfidence, and so on that LessWrong often brings up in support of the "the world is mad" narrative. There are also people like Chesterton who placed much faith in the common sense of the average man. But anyway I think the rest of the sentence needs to be included to give that fragment proper context.
Granted.
Could you point towards some good, coherent arguments for supernatural phenomena or the like?
Analyzing the sun miracle at Fatima seems to be a good starting point. This post has been linked from LessWrong before. Not an argument for the supernatural, but a nexus for arguments: it shows what needs to be explained, by whatever means. Also worth keeping in mind is the "capricious psi" hypothesis, reasonably well-explicated by J. E. Kennedy in a few papers and essays. Kennedy's experience is mostly in parapsychology. He has many indicators in favor of his credibility: he has a good understanding of the relevant statistics, he exposed some fraud going on in a lab where he was working, he doesn't try to hide that psi if it exists would seem to have weird and seemingly unlikely properties, et cetera.
But I don't know of any arguments that really go meta and take into account how the game theory and psychology of credibility might be expected to affect the debate, e.g., emotional reactions to people who look like they're trying to play psi-of-the-gaps, both sides' frustration with incommunicable evidence or even the concept of incommunicable evidence, and things like that.
Hm. This... doesn't seem particularly convincing. So it sounds like whatever convinced you is incommunicable - something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?
Agreed. The actually-written-up-somewhere arguments that I know of can at most move supernaturalism from "only crazy or overly impressionable people would treat it as a live hypothesis" to "otherwise reasonable people who don't obviously appear to have a bottom line could defensibly treat it as a Jamesian live hypothesis". There are arguments that could easily be made that would fix specific failure modes, e.g. some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism, and Randi-style skeptics seem to like fully general explanations/counterarguments too much. But once those basic hurdles are overcome there still seems to be a wide spread of defensible probabilities for supernaturalism based off of solely communicable evidence.
Essentially, yes.
Is the point here that supernatural entities that would be too complex to specify into the universe from scratch may have been produced through some indirect process logically prior to the physics we know, sort of like humans were produced by evolution? Or is it something different?
Alien superintelligences are less speculative and emerge naturally from a simple universe program. More fundamentally the notion of simplicity that Eliezer and Luke are using is entirely based off of their assessments of which kinds of hypotheses have historically been more or less fruitful. Coming up with a notion of "simplicity" after the fact based on past observations is coding theory and has nothing to do with the universal prior, which mortals simply don't have access to. Arguments should be about evidence, not "priors".
...
It isn't technically a universal prior, but it counts as evidence because it's historically fruitful. That leaves you with a nitpick rather than showing "LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism."
I don't think it's nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability. Bringing in Kolmogorov complexity is needlessly confusing, and even Bayesian probability isn't necessary because all we're really concerned with is the likelihood ratio. The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons. Such errors harm group epistemology.
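The point about likelihood ratios can be made concrete with a toy sketch: Bayes' rule in odds form needs only prior odds and a likelihood ratio, with no uncomputable mathematics anywhere. All numbers below are made up for illustration, not estimates of anything real.

```python
# Minimal sketch: Bayes' rule in odds form. Posterior odds are just
# prior odds multiplied by the likelihood ratio of the evidence.
# Every number here is purely illustrative.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds back to a probability in [0, 1]."""
    return odds / (1.0 + odds)

prior_odds = 1 / 99        # i.e. a prior probability of 1%
lr = 5.0                   # evidence 5x likelier under the hypothesis
posterior = odds_to_probability(update_odds(prior_odds, lr))
print(round(posterior, 4))  # a ~1% prior moves to roughly 4.8%
```

Nothing in the update depends on a complexity prior; the prior odds come from wherever they come from, and the evidence does all the remaining work.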
That's a shame. Any chance you might have suggestions on how to go about obtaining such evidence for oneself? Possibly via PM if you'd be more comfortable with that.
I have advice. First off, if psi's real then I think it's clearly an intelligent agent-like or agent-caused process. In general you'd be stupid to mess around with agents with unknown preferences. That's why witchcraft was considered serious business: messing with demons is very much like building mini uFAIs. Just say no. So I don't recommend messing around with psi, especially if you haven't seriously considered what the implications of the existence of agent-like psi would be. This is why I like the Catholics: they take things seriously, it's not fun and games. "Thou shalt not tempt the Lord thy God." If you do experiment, pre-commit not to tell anyone about at least some predetermined subset of the results. Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi's capricious then pre-committing not to blab increases the likelihood of significant effects.
I just thought of something. What you're saying is that psi effects are anti-inductive.
The capricious-psi literature actually includes several proposed mechanisms which could lead to "anti-inductive" psi. Some of these mechanisms are amenable to mitigation strategies (such as not trying to use psi effects for material advantage, and keeping one's experiments confidential); others are not.
Indeed.
Thanks for the advice! Though I suppose I won't tell you if it turns out to have been helpful?
As lukeprog says here.
I don't entirely agree with Will here. My issue is that there seem to be some events, e.g., Fatima, where the best "scientific explanation" is little better than the supernatural wearing a lab-coat.
Are there any good supernatural explanations for that one?! Because "Catholicism" seems like a pretty terrible explanation here.
Why? Do you have a better one? (Note: I agree "Catholicism" isn't a particularly good explanation, it's just that it's not noticeably worse than any other.)
I mentioned Catholicism only because it seems like the "obvious" supernatural answer, given that it's supposed to be a Marian apparition. Though, I do think of Catholicism proper as pretty incoherent, so it'd rank fairly low on my supernatural explanation list, and well below the "scientific explanation" of "maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years". I haven't really investigated the phenomenon myself, but I think, say, "the ghost-emperor played a trick" or "mass hypnosis to cover up UFO experiments by the lizard people" rank fairly well compared to Catholicism.
This isn't really an explanation so much as clothing our ignorance in a lab coat.
Disagree in spirit? What exactly does that mean?
(I happen to mostly agree with your comment while mostly agreeing with Grognor's points--hence my confusion in what you mean, exactly.)
Hard to explain. I'll briefly go over my agreement/disagreement status on each point.

- MWI: Mixed opinion. MWI is a decent bet, but then again that's a pretty standard opinion among quantum physicists. Eliezer's insistence that MWI is obviously correct is not justified given his arguments: he doesn't address the most credible alternatives to MWI, and doesn't seem to be cognizant of much of the relevant work. I think I disagree in spirit here even though I sort of agree at face value.
- Cryonics: Disagree; nothing about cryonics is "obvious".
- Meh science, Yay Bayes!: Mostly disagree; too vague, and little supporting evidence for the face-value interpretation. I agree that Bayes is cool.
- Utilitarianism: Disagree; utilitarianism is retarded. Consequentialism is fine, but often very naively applied in practice, e.g. utilitarianism.
- Eliezer's metaethics: Disagree, especially considering Eliezer's said he thinks he's solved meta-ethics, which is outright crazy, though hopefully he was exaggerating.
- "'People are crazy, the world is mad' is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature": Mostly disagree; LW is much too confident in the heuristics and biases literature, and it's not nearly a sufficient explanation for lots of things that are commonly alleged to be irrational.
When making claims like this, you need to do something to distinguish yourself from most people who make such claims, who tend to harbor basic misunderstandings, such as an assumption that preference utilitarianism is the only utilitarianism.
Utilitarianism has a number of different features, and a helpful comment would spell out which of the features, specifically, is retarded. Is it retarded to attach value to people's welfare? Is it retarded to quantify people's welfare? Is it retarded to add people's welfare linearly once quantified? Is it retarded to assume that the value of structures containing more than one person depends on no features other than the welfare of those persons? And so on.
I suppose it's easiest for me to just make the blanket metaphilosophical claim that normative ethics without well-justified meta-ethics just isn't a real contender for the position of actual morality. So I'm unsatisfied with all normative ethics. I just think that utilitarianism is an especially ugly hack. I dislike fake non-arbitrariness.
Perhaps I show my ignorance. Pleasure-happiness and preference fulfillment are the only maximands I've seen suggested by utilitarians. A quick Google search hasn't revealed any others. What are the alternatives?
I'm unfortunately too lazy to make my case for retardedness: I disagree with enough of its features and motivations that I don't know where to begin, and I wouldn't know where to end.
Eudaimonia. "Thousand-shardedness". Whatever humans' complex values decide constitutes an intrinsically good life for an individual.
It's possible that I've been mistaken in claiming that, as a matter of standard definition, any maximization of linearly summed "welfare" or "happiness" counts as utilitarianism. But it seems like a more natural place to draw the boundary than "maximization of either linearly summed preference satisfaction or linearly summed pleasure indicators in the brain but not linearly summed eudaimonia".
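The definitional point here can be sketched in code: on this broad reading, what makes a theory "utilitarian" is the linear aggregation step, while the maximand (pleasure, preference satisfaction, eudaimonia) is a swappable welfare function. This is a toy illustration, not anyone's actual proposal; all names and numbers are hypothetical.

```python
# Toy sketch of the definitional boundary being discussed: any theory
# that linearly sums per-person welfare is "utilitarian" in the broad
# sense; only the welfare function (the maximand) differs.
from typing import Callable, Sequence

Person = dict  # stand-in for whatever data a welfare function reads

def total_utility(population: Sequence[Person],
                  welfare: Callable[[Person], float]) -> float:
    """Linear aggregation: sum welfare over persons, nothing else."""
    return sum(welfare(p) for p in population)

# Three interchangeable maximands (all toy stand-ins):
hedonic = lambda p: p["pleasure"]
preference = lambda p: p["satisfied_prefs"]
eudaimonic = lambda p: p["flourishing"]

pop = [{"pleasure": 2.0, "satisfied_prefs": 3.0, "flourishing": 1.5},
       {"pleasure": 1.0, "satisfied_prefs": 0.5, "flourishing": 2.5}]
print(total_utility(pop, hedonic))  # → 3.0; same aggregation either way
```

Drawing the boundary at the aggregation step, rather than at a particular maximand, is exactly the "more natural place" the comment describes.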
That sounds basically the same as was what I'd been thinking of as preference utilitarianism. Maybe I should actually read Hare.
What's your general approach to utilitarianism's myriad paradoxes and mathematical difficulties?
— Regina Spektor, The Calculation
I don't think you need to explicitly address the alternatives to MWI to decide in favor of MWI. You can simply note that all interpretations of quantum mechanics either 1) fail to specify which worlds exist, 2) specify which worlds exist but do so through a burdensomely detailed mechanism, 3) admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics. Am I missing something?
Is that a response to my point specifically or a general observation? I don't think "simply noting" is nearly enough justification to decide strongly in favor of MWI—maybe it's enough to decide in favor of MWI, but it's not enough to justify confident MWI evangelism nor enough to make bold claims about the failures of science and so forth. You have to show that various specific popular interpretations fail tests 1 and 2.
ETA: Tapping out because I think this thread is too noisy.
I suppose? It's hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome. But maybe you have something in mind?
It always seems that way until someone proposes a new theoretical framework; afterwards it seems like people were insane for not coming up with said framework sooner.
Well, the Transactional Interpretation, for example.
That would have been my guess. I don't really understand the transactional interpretation; how does it pick out a single world without using a burdensomely detailed mechanism to do so?
I'm even more confused that people seem to think it quite natural to spend years debating the ethical positions of someone watching the debate.
A little creative editing with Stirner makes for a catchy line in this regard:
Luke's sequence of posts on this may help. Worth a shot at least :)
Which he never finished. I've been waiting for Luke to actually get to some substantive metaethics right back since he was running Common Sense Atheism ;)