Comment author: Eitan_Zohar 25 May 2015 09:44:57AM *  4 points [-]

I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.

Comment author: Kaj_Sotala 25 May 2015 03:50:29PM 6 points [-]

Oh, huh, umm. I certainly didn't want to cause anyone panic attacks by writing that, though in retrospect I should have realized that it's a bit of an information hazard.

I'm sorry.

If it's any comfort, I feel that my arguments in that article are pretty plausible, but that predicting the future is such a difficult thing filled with unknown unknowns that the vast majority of "pretty plausible" predictions are going to be wrong.

Comment author: Dahlen 25 May 2015 02:07:04PM 11 points [-]

Seeing as, in terms of absolute as well as disposable income, I'm probably closer to being a recipient of donations than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It's something I know I couldn't participate in for at least 5 to 7 more years, even if I were so inclined (I expect to live for the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn't afford anything else. Maybe one day, when I stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel maybe a bit awkward, like I don't belong. If you ain't got no money, take yo' broke ass home!

Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good motives and consequences. On the one hand, the fact that there are people dedicated to donating large fractions of their income is a laudable thing in itself. On the other hand...

  • I don't believe for one second that effective altruism would have been nearly as big of a phenomenon on LessWrong, if the owners of LessWrong hadn't been living off people's donations. MIRI is a charity that wants money. Giving to charity is probably the biggest moral credential on LW. Coincidence? I think not.

  • Ensuring the flow of money in a particular direction may not be the very best effort one can put into making the world a better place. Sure, it's something, and at least in the short term a very vital something, but more than anything else it seems to be a way to patch up, or prop up, a part of the system that was shaky to begin with. The long-term end goal should be to make people less reliant on charity money. Sometimes there is a shortage of knowledge, or of power, or of good incentives, rather than of money. "Throwing money at a cause" is just one way to help -- although I suppose effective altruist organizations already incorporate the knowledge of this problem in their concept of "room for more funding".

  • We already have governments that take away a large portion of our incomes anyway, that have systems in place for allocating funds and efforts, and that purport to promote the same kinds of causes as charities, yet often function inefficiently and even harmfully. However, they're a lot more reliable in terms of actually ensuring the collection of "enough" funds. To pay taxes and to give to charity (yes, I'm aware that charitable giving unlocks tax deductions) is to contribute to two systems that are doing the same job, the second being there mostly because the first isn't doing its job as it should. In this way, and possibly assuming that EA would be a larger movement in the future than it is now, charity might work to mask government inefficiencies and damage or to clean up after them.

  • In the context of earning to give, participating in a particularly noxious industry as a way of earning your livelihood, and using part of that money to contribute to altruist causes, looks to me like a tax on the well-being you bring into the world. I'm not sure that tax is always smaller than 100%. And it's more difficult to quantify the negative externalities from your job than it is to quantify the positive effects of your donations, because the former are more causally distant.

To take the discussion back to the meta level, I'm but one user without much karma and probably a non-central example of a LessWronger, so I don't demand that anyone accommodate me and my preference not to discuss EA. However, knowing that other users basically come from an effective altruism mindset makes discussion with them somewhat difficult, since we don't have the same assumptions about the relationship between money and welfare. Most annoying of all is the very rare and very occasional display of charitable snobbery: a commitment not to aid first-world people who are not effective altruists, or who don't donate enough. (I've seen that, but Google seems to fail me at this moment.) It seems easier and more pleasant to discuss ethical matters with people who don't come from an EA worldview, and personally I'd like to see more of a plurality of approaches on the matter on LW.

tl;dr It's a rich people thing and therefore alien to me; as for objective merits, I've got mixed positive and negative feelings about it. But in the end, to each their own.

Comment author: Kaj_Sotala 25 May 2015 03:34:45PM 6 points [-]

I think that the image of EA on LW has been excessively donation-focused, but I'd like to point out that things like earning to give are only one part of EA.

EA is about having the biggest positive impact that you can have on the world, given your circumstances and personality. If your circumstances mean that you can't donate, or you disagree with donations being the best way to do good, that still leaves options like working directly for some organization (be it a non-profit or a for-profit) that has a positive impact on the world. Some time back I wrote the following:

Effective altruism says that, if you focus on the right career, you can have an even bigger impact! And the careers don't even need to be exotic, demanding ones that only a select few can do (even if some of them are). Some of the top potential careers that 80,000 Hours has identified so far include things as diverse as being an academic, civil servant, journalist, marketer, politician, or software engineer. Not only that, they also emphasize finding your fit. To have a big impact on the world, you don't need to shoehorn yourself into a role that doesn't suit you and that you hate - in fact, you're explicitly encouraged to find a high-impact career that fits you personally.

Analytic? Maybe consider research, in one form or another. Want to mostly support the cause from the side, not thinking about things too much? Let the existing charity evaluation organizations guide who you donate to and don't worry about the rest. Or help out other effective altruists. People person? Plenty of ways you could have an impact. There's always something you can do - and still be effective. It's not about needing to be superhuman, it's about doing the best that you can, given your personality, talents and interests.

Comment author: CellBioGuy 24 May 2015 07:39:28PM *  1 point [-]

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

Probably because they're unlikely to lead to anything special over and above general biology research.

Comment author: Kaj_Sotala 24 May 2015 11:21:50PM 0 points [-]

Funding for SENS might fund research that could be considered too speculative for more conventional bio funders, though.

Comment author: Mark_Friedenbach 24 May 2015 05:42:28PM *  1 point [-]

The academic field which is most conspicuously missing is artificial intelligence. I agree with Jacob that it is and should be concerning that the Machine Intelligence Research Institute has adopted a technical agenda which is non-inclusive of machine intelligence researchers.

Comment author: Kaj_Sotala 24 May 2015 11:18:20PM *  2 points [-]

I agree with Jacob that it is and should be concerning

That depends on whether you believe that machine intelligence researchers are the people who are currently the most likely to produce valuable progress on the relevant research questions.

One can reasonably disagree with MIRI's current choices about their research program, but I certainly don't think that those choices are concerning in the sense of suggesting irrationality on their part. (Rather, the choices only suggest differing empirical beliefs, which are arguable but still well within the range of non-insane beliefs.)

Comment author: RobbBB 24 May 2015 07:02:50PM *  4 points [-]

No, that is not how it works: I don't need to either accept or reject MWI. I can also treat it as a causal story lacking empirical content.

To say that MWI lacks empirical content is also to say that the negation of MWI lacks empirical content. So this doesn't tell us, for example, whether to assign higher probability to MWI or to the disjunction of all non-MWI interpretations.

Suppose your ancestors sent out a spaceship eons ago, and by your calculations it recently traveled so far away that no physical process could ever cause you and the spaceship to interact again. If you then want to say that 'the claim the spaceship still exists lacks empirical content,' then OK. But you will also have to say 'the claim the spaceship blipped out of existence when it traveled far enough away lacks empirical content'.

And there will still be some probability, given the evidence, that the spaceship did vs. didn't blip out of existence; and just saying 'it lacks empirical content!' will not tell you whether to design future spaceships so that their life support systems keep operating past the point of no return.

By that logic, if I invent any crazy hypothesis in addition to an empirically testable theory, then it inherits testability just on those grounds. You can do that with the word "testability" if you want, but that doesn't seem to be how people use words.

There's no ambiguity if you clarify whether you're talking about the additional crazy hypothesis, vs. talking about the conjunction 'additional crazy hypothesis + empirically testable theory'. Presumably you're imagining a scenario where the conjunction taken as a whole is testable, though one of the conjuncts is not. So just say that.

Sean Carroll summarizes collapse-flavored QM as the conjunction of these five claims:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.

  2. Wave functions evolve in time according to the Schrödinger equation.

  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.

  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.

  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).
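
For concreteness, claims 2 and 4 in standard textbook notation (my gloss, not Carroll's wording):

    i\hbar \frac{d}{dt} \lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle    % Schrodinger equation (claim 2)
    P(a_i) = \lvert\langle a_i \vert \psi \rangle\rvert^2                       % Born rule (claim 4)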

Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5 -- i.e., it's an affirmation of wave functions and their dynamics (which effectively all physicists agree about), plus a rejection of the 'collapses' some theorists add to keep the world small and probabilistic. (If you'd like, you could supplement 'not 5' with 'not Bohmian mechanics'; but for present purposes we can mostly lump Bohm in with multiverse interpretations, because Eliezer's blog series is mostly about rejecting collapse rather than about affirming a particular non-collapse view.)

If we want 'QM' to be the neutral content shared by all these interpretations, then we can say that QM is simply the conjunction of 1 and 2. You are then free to say that we should assign 50% probability to claim 5, and maintain agnosticism between collapse and non-collapse views. But realize that, logically, either collapse or its negation does have to be true. You can frame denying collapse as 'positing invisible extra worlds', but you can equally frame denying collapse as 'skepticism about positing invisible extra causal laws'.

Since every possible way the universe could be adds something 'extra' on top of what we observe -- either an extra law (e.g., collapse) or extra ontology (because there are no collapses occurring to periodically annihilate the ontology entailed by the Schrodinger equation) -- it's somewhat missing the point to attack any given interpretation for the crime of positing something extra. The more relevant question is just whether simplicity considerations or indirect evidence helps us decide which 'something extra' (a physical law, or more 'stuff', or both) is the right one. If not, then we stick with a relatively flat prior.

Claims 1 and 2 are testable, which is why we were able to acquire evidence for QM in the first place. Claim 5 is testable for pretty much any particular 'collapse' interpretation you have in mind; which means the negation of claim 5 is also testable. So all parts of bare-bones MWI are testable (though it may be impractical to run many of the tests), as long as we're comparing MWI to collapse and not to Bohmian Mechanics.

(You can, of course, object that affirming 3-5 as fundamental laws has the advantage of getting us empirical adequacy. But 'MWI (and therefore also 'bare' QM) isn't empirically adequate' is a completely different objection from 'MWI asserts too many unobserved things', and in fact the two arguments are in tension: it's precisely because Eliezer isn't willing to commit himself to a mechanism for the Born probabilities in the absence of definitive evidence that he's sticking to 'bare' MWI and leaving almost entirely open how these relate to the Born rule. In the one case you'd be criticizing MWI theorists for refusing to stick their neck out and make some guesses about which untested physical laws and ontologies are the real ones; in the other case you'd be criticizing MWI theorists for making guesses about which untested physical laws and ontologies are the real ones.)

I am not super interested in having Catholic theologians read about minimum descriptive complexity and then weave a yarn about their favorite hypotheses based on that.

Are you kidding? I would love it if theologians stopped hand-waving about how their God is 'ineffably simple no really we promise' and started trying to construct arguments that God (and, more importantly, the package deal 'God + universe') is information-theoretically simple, e.g., by trying to write a simple program that outputs Biblical morality plus the laws of physics. At best, that sort of precision would make it much clearer where the reasoning errors are; at worst, it would be entertainingly novel.
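
To gesture at what that might even look like: true Kolmogorov complexity is uncomputable, but compressed length gives a crude upper bound on description length, so competing specifications could at least be compared that way. A toy sketch in Python, where the strings are obviously hypothetical stand-ins rather than real formalizations:

    import zlib

    def description_length_bound(s):
        # Crude upper bound on description length: byte count after DEFLATE
        # compression. (Kolmogorov complexity itself is uncomputable; this is
        # only a rough, encoding-dependent proxy.)
        return len(zlib.compress(s.encode("utf-8"), 9))

    # Hypothetical stand-ins -- a serious attempt would plug in actual formal theories.
    universe_alone = "laws of physics: <compact formal specification>"
    god_plus_universe = "omniscient agent + Biblical morality + laws of physics: <specification>"

    # Compare the two upper bounds; the interesting question is whether adding
    # 'God' ever shortens the total description rather than lengthening it.
    print(description_length_bound(universe_alone))
    print(description_length_bound(god_plus_universe))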

Comment author: Kaj_Sotala 24 May 2015 11:04:55PM 1 point [-]

At one point I started developing a religious RPG character who applied theoretical computer science to his faith.

I forget the details, but among other things he believed that although the Bible prescribed the best way to live, the world is far too complex for any finite set of written rules to cover every situation. The same limitation applies to human reason: cognitive science and computational complexity theory have shown all the ways in which we are bounded reasoners, and can only ever hope to comprehend a small part of the whole world. Reason works best when it can be applied to constrained problems where a clear, objective answer can be found, but it easily fails once the number of variables grows.

Thus, because science has shown that both the written word of the Bible and human reason are fallible and easily lead us astray (though the word of the Bible is less likely to do so), the rational course of action for one who believes in science is to pray to God for guidance and trust the Holy Spirit to lead us to the right choices.

Comment author: jacob_cannell 24 May 2015 06:26:54AM 1 point [-]

AFAIK, part of why the technical agenda contains the questions it does is that they're problems that are of interest to mathematicians and logicians even if those people aren't interested in AI risk.

This is concerning if true - the goal of the technical agenda should be to solve AI risk, not appeal to mathematicians and logicians (by say making them feel important).

Comment author: Kaj_Sotala 24 May 2015 10:03:15AM 3 points [-]

That sounds like an odd position to me. IMO, getting as many academics from other fields as possible working on the problems is essential if one wants to make maximal progress on them.

Comment author: Mark_Friedenbach 23 May 2015 02:23:54PM *  3 points [-]

If you are not familiar with MIRI's current technical agenda, then you may wish to retract this claim.

I am familiar with MIRI's technical agenda and I stand by my words. The work MIRI is choosing for itself is self-isolating and not relevant to the problems at hand in practical AGI work.

Comment author: Kaj_Sotala 23 May 2015 09:11:36PM 2 points [-]

The work MIRI is choosing for itself is self-isolating

AFAIK, part of why the technical agenda contains the questions it does is that they're problems that are of interest to mathematicians and logicians even if those people aren't interested in AI risk. (Though of course, that doesn't mean that AI researchers would be interested in that work, but it's at least still more connected with the academic community than "self-isolating" would imply.)

Comment author: Mark_Friedenbach 21 May 2015 11:20:41PM *  12 points [-]

You have exhausted all of the examples that I can recall from the entire series. That's what's wrong.

The rest of the time Harry thinks up a clever explanation, and once the explanation is clever enough to solve all the odd constraints placed on it, (1) he stops looking for other explanations, and (2) he doesn't check to see if he is actually right.

Nominally, Harry is supposed to have learned his lesson from his first failed experimentation in magic with Hermione. But in reality, and in relation to the overarching plot, there was very little experimentation and much more "that's so clever it must be true!" type thinking.

"That's so clever it must be true!" basically sums up the sequence's justification for many-worlds, to tie us back to the original complaint in the OP.

Comment author: Kaj_Sotala 22 May 2015 12:49:50PM 18 points [-]

The rest of the time Harry thinks up a clever explanation, and once the explanation is clever enough to solve all the odd constraints placed on it, (1) he stops looking for other explanations, and (2) he doesn't check to see if he is actually right.

Examples:

Comed-tea in chap. 14

Hariezer decides in this chapter that comed-tea MUST work by causing you to drink it right before something spit-take worthy happens. The tea predicts the humor, and then magics you into drinking it. Of course, he does no experiments to test this hypothesis at all (ironic, given that just a few chapters ago he lectured Hermione about only doing 1 experiment to test her idea).

Wizards losing their power in chap. 22

Here is the thing about science: step 0 needs to be making sure you’re trying to explain a real phenomenon. Hariezer knows this - he tells the story of N-rays earlier in the chapter - but completely fails to understand the point.

Hariezer and Draco have decided, based on one anecdote (the founders of Hogwarts were the best wizards ever, supposedly), that wizards are weaker today than in the past. The first thing they should do is find out if wizards are actually getting weaker. After all, the two most dangerous dark wizards ever were both recent: Grindelwald and Voldemort. Dumbledore is no slouch. Even four students were able to make the Marauder's Map just one generation before Harry. (Incidentally, this is exactly where neoreactionaries often go wrong - they assume things are getting worse without actually checking, and then create elaborate explanations for non-existent facts.)

Anyway, for the purposes of the story, I’m sure it’ll turn out that wizards are getting weaker, because Yudkowsky wrote it. But this would have been a great chance to teach an actually useful lesson, and it would make the N-ray story told earlier a useful example, and not a random factoid.

Atlantis in chap. 24

Using literally the exact same logic that Intelligent Design proponents use (and doing exactly 0 experiments), Hariezer decides while thinking over breakfast:

Some intelligent engineer, then, had created the Source of Magic, and told it to pay attention to a particular DNA marker.

The obvious next thought was that this had something to do with “Atlantis”.

Gateway to the afterlife in chap. 39:

Here is Hariezer’s response to the gateway to the afterlife:

“That doesn’t even sound like an interesting fraud,” Harry said, his voice calmer now that there was nothing there to make him hope, or make him angry for having hopes dashed. “Someone built a stone archway, made a little black rippling surface between it that Vanished anything it touched, and enchanted it to whisper to people and hypnotize them.”

Do you see how incurious Hariezer is? If someone told me there was a LITERAL GATEWAY TO THE AFTERLIFE, I’d want to see it. I’d want to test it. Can we try to record and amplify the whispers? Are things being said?

Laws of magic in chap. 85:

No surprise, then, that the wizarding world lived in a conceptual universe bounded - not by fundamental laws of magic that nobody even knew - but just by the surface rules of known Charms and enchantments… Even if Harry’s first guess had been mistaken, one way or another it was still inconceivable that the fundamental laws of the universe contained a special case for human lips shaping the phrase ‘Wingardium Leviosa’… What were the ultimate possibilities of invention, if the underlying laws of the universe permitted an eleven-year-old with a stick to violate almost every constraint in the Muggle version of physics?

You know what would be awesome? IF YOU GOT AROUND TO DOING SOME EXPERIMENTS AND EXPLORING THIS IDEA. The absolute essence of science is NOT asking these questions, it’s deciding to try to find out the fucking answers! You can’t be content to just wonder about things, you have to put the work in! Hariezer’s wonderment never gets past the stoned-college-kid stage of wondering aloud and into ACTUAL exploration, and it’s getting really frustrating.

Vision and the invisibility cloak in chap. 95:

Harry had set the alarm upon his mechanical watch to tell him when it was lunchtime, since he couldn’t actually look at his wrist, being invisible and all that. It raised the question of how his eyeglasses worked while he was wearing the Cloak. For that matter the Law of the Excluded Middle seemed to imply that either the rhodopsin complexes in his retina were absorbing photons and transducing them to neural spikes, or alternatively, those photons were going straight through his body and out the other side, but not both. It really did seem increasingly likely that invisibility cloaks let you see outward while being invisible yourself because, on some fundamental level, that was how the caster had - not wanted - but implicitly believed - that invisibility should work.

This would be an excellent fucking question to explore, maybe via some experiments. But no. I’ve totally given up on this story exploring the magic world in any detail at all. Anyway, Hariezer skips straight from “I wonder how this works” to “it must work this way, how could we exploit it?”

Centaurs and astrology in chap. 101

Still in the woods, Hariezer encounters a centaur who tries to kill him, because he divines that Hariezer is going to make all the stars die.

There are some standard anti-astrology arguments, which again seem to be fighting the actual situation, because the centaurs successfully use astrology to divine things.

We get this:

“Cometary orbits are also set thousands of years in advance so they shouldn’t correlate much to current events. And the light of the stars takes years to travel from the stars to Earth, and the stars don’t move much at all, not visibly. So the obvious hypothesis is that centaurs have a native magical talent for Divination which you just, well, project onto the night sky.”

There are so, so many other hypotheses, Hariezer. Maybe starlight has a magical component that waxes and wanes as stars align into different magical symbols, or some such. The HPMOR scientific method:

observation -> generate 1 hypothesis -> assume you are right -> it turns out that you are right.

Comment author: Vaniver 21 May 2015 08:56:43PM 16 points [-]

On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to subject themselves to epistemic mistakes of significant consequence.

This part isn't clear to me. The researcher who goes into generic anti-cancer work, instead of SENS-style anti-aging work, probably has made an epistemic mistake with moderate consequences, because of basic replaceability arguments.

But to say that MIRI's approach to AGI safety is due to a philosophical mistake, and one with significant consequences, seems like it requires much stronger knowledge. Shooting very high instead of high is riskier, but not necessarily wronger.

Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

I think you underestimate how much MIRI agrees with FLI.

Why they do not get more play in the effective altruism community is beyond me.

SENS is the second largest part of my charity budget, and I recommend it to my friends every year (on the obvious day to do so). My speculations on why EAs don't favor them more highly mostly have to do with the difficulty of measuring progress in medical research vs. fighting illnesses, and possibly also the specter of selfishness.

Comment author: Kaj_Sotala 22 May 2015 11:57:10AM *  15 points [-]

I think you underestimate how much MIRI agrees with FLI.

Agreed - or, at least, he underestimates how much FLI agrees with MIRI. This is pretty obvious e.g. in the references section of the technical agenda that was attached to FLI's open letter. Out of a total of 95 references:

  • Six are MIRI's technical reports that've only been published on their website: Vingean Reflection, Realistic World-Models, Value Learning, Aligning Superintelligence, Reasoning Under Logical Uncertainty, Toward Idealized Decision Theory
  • Five are written by MIRI's staff or Research Associates: Avoiding Unintended AI Behaviors, Ethical Artificial Intelligence, Self-Modeling Agents and Reward Generator Corruption, Program Equilibrium in the Prisoner's Dilemma, Corrigibility
  • Eight are ones that tend to agree with MIRI's stances and which have been cited in MIRI's work: Superintelligence, The Superintelligent Will, The Singularity: A Philosophical Analysis, Speculations concerning the first ultraintelligent machine, The nature of self-improving AI, Space-Time Embedded Intelligence, FAI: the Physics Challenge, The Coming Technological Singularity

That means 19 of the 95 references (20%) were produced either directly by MIRI or by people closely associated with them, or have MIRI-compatible premises.

Comment author: Mark_Friedenbach 22 May 2015 02:24:49AM 8 points [-]

Oh, that's a great idea. I'm going to start suggesting that people who ask donate to one of my favorite charities on my birthday. It beats saying I don't need anything, which is what I currently do.

Comment author: Kaj_Sotala 22 May 2015 07:22:05AM 10 points [-]

Consider also doing an explicit birthday fundraiser. I did one on my most recent birthday and raised $500 for charitable causes.
