Comment author: RedMan 05 January 2018 01:32:59PM 1 point

Thank you for the detailed response!

I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: 'effort should be made to expend resources on preventing suffering, maximizing the ratio of suffering avoided to cost expended.'

I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, of all future variants of you, the copies enjoying 'heaven' vastly outnumber the copies suffering 'hell', then on balance uploading is a good. Based on your paper's citation of Omelas, I assert that you would weight 'all future heaven copies' in aggregate, and all future hell copies individually.

So if the probability that one or more hell copies of an upload come into existence, and persist for as long as any heaven copy, exceeds the probability that a single heaven copy exists long enough to outlast all the hell copies, then that person's future suffering will eventually exceed all the suffering previously experienced by biological humans. Under the EA philosophy described above, this creates a moral imperative to prevent that scenario, possibly with a blender.

If uploading tech takes the form of a common connection and uploading to an 'overmind', this problem can go away: if everyone is Borg, there's no way for a non-Borg to put the Borg into a hell copy; only the Borg can do that to itself, which is, at least from an EA standpoint, probably an acceptable risk.

At the end of the day, I was hoping to adjust my understanding of EA axioms, not to be talked down from chasing my friends around with a blender, but that isn't how things went down.

SF is a tolerant place, and EAs are sincere about having consistent beliefs, but I don't think my talk title "You helped someone avoid starvation with EA and a large grant. I prevented infinity genocides with a blender" would be accepted at the next convention.

Comment author: Kaj_Sotala 07 January 2018 04:17:53PM 1 point

I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: 'effort should be made to expend resources on preventing suffering, maximizing the ratio of suffering avoided to cost expended.'

I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, of all future variants of you, the copies enjoying 'heaven' vastly outnumber the copies suffering 'hell', then on balance uploading is a good. Based on your paper's citation of Omelas, I assert that you would weight 'all future heaven copies' in aggregate, and all future hell copies individually.

Well, our paper doesn't really endorse any particular moral theory: we just mention a number of them, without saying anything about which one is true. As we note, if one is e.g. something like a classical utilitarian, then one would take the view by Hanson that you mention. The only way to really "refute" this is to say that you don't agree with that view, but that's a statement of opinion rather than a refutation.

Similarly, some people accept the various suffering-focused intuitions that we mention, while others reject them. For example, Toby Ord rejects the Omelas argument, and gives a pretty strong argument for why, in this essay (under the part about "Lexical Threshold NU", which is his term for it). Personally I find the Omelas argument very intuitively compelling, but at the same time I have to admit that Ord also makes a compelling argument against it.

That said, it's still possible and reasonable to end up accepting the Omelas argument anyway; as I said, I find it very compelling myself.

(As an aside, I tend to think that personal identity is not ontologically basic, so I don't think that it matters whose copy ends up getting tortured; but that doesn't really help with your dilemma.)

If you do end up with that result, my advice would be for you to think a few steps forward from the brain-shredding argument. Suppose that your argument is correct, and that nothing could justify some minds being subjected to torture. Does that imply that you should go around killing people? (The blender thing seems unnecessary; just plain ordinary death already destroys brains quite quickly.)

I really don't think so. First, I'm pretty sure that your instincts tell you that killing people who don't want to be killed, when that doesn't save any other lives, is something you really don't want to do. That's something that's at least worth treating as a strong ethical injunction, to only be overridden if there's a really really really compelling reason to do so.

And second, even if you didn't care about ethical injunctions, it looks pretty clear that going around killing people wouldn't actually serve your goal much. You'd just get thrown in prison pretty quickly, and you'd also cause enormous backlash against the whole movement of suffering-focused ethics: anyone even talking about Omelas arguments would from that moment on get branded as "one of those crazy murderers", and everyone would try to distance themselves from them. That might just increase the risk of lots of people suffering from torture-like conditions, since a movement that was trying to prevent such conditions would have been discredited.

Instead, if you take this argument seriously, then what you should instead be doing is to try to minimize s-risks in general: if any given person ending up tortured would be one of the worst things that could happen, then large numbers of people ending up tortured would be even worse. We listed a number of promising-seeming approaches for preventing s-risks in our paper: none of them involve blenders, and several of them - like supporting AI alignment research - are already perfectly reputable within EA circles. :)

You may also want to read Gains from Trade through Compromise, for reasons to try to compromise and find mutually-acceptable solutions with people who don't buy the Omelas argument.

(Also, I have an older paper which suggests that a borg-like outcome may be relatively plausible, given that it looks like linking brains together into a borg could be relatively straightforward once we do have uploading - or maybe even before, if an exocortex prosthesis that could be used for mind-melding were also the primary uploading method.)

Comment author: RedMan 05 January 2018 12:15:25AM 1 point

Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/ Awesome paper.

Comment author: Kaj_Sotala 05 January 2018 06:55:15AM 1 point

Awesome paper.

Thank you very much!

Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/

So, I agree that mind uploads being tortured indefinitely is a very scary possibility. And it seems very plausible that some of that is going to happen in a world with mind uploads, especially since it's going to be impossible to detect from the outside, unless you are going to check all the computations that anyone is running.

On the other hand, we don't know for sure what that world is going to be like. Maybe there will be some kind of AI in charge that does check everyone's computations, maybe all the hardware that gets sold is equipped with built-in suffering-detectors that disallow people from running torture simulations, or something. I'll admit that both of these seem somewhat unlikely or even far-fetched, but then again, someone might come up with a really clever solution that I just haven't thought of.

Your argument also seemed to me to have some flaws:

Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be 'free',

You can certainly make the argument that any event with a non-zero probability will, given a sufficiently long lifetime, happen at some point. But if you are using that to argue that an upload will eventually be captured by someone sadistic, shouldn't you also hold that they will eventually escape?

This argument also doesn't seem to be unique to mind uploading. Suppose that we achieved biological immortality and never uploaded. You could also make the argument that, now that people can live until the heat-death of the universe (or at least until our sun goes out), their lifetimes are sufficiently long that at some point they are going to be kidnapped and tortured indefinitely by someone sadistic, so we should kill everyone before we get radical life extension.

But for biological people, this argument doesn't feel anywhere near as compelling. In particular, this scenario highlights the fact that even though there might be a non-zero probability of any given person being kidnapped and tortured during their lifetime, that probability can be low enough that it's still unlikely to happen even during a very long lifetime.
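To make that arithmetic concrete, here is a minimal sketch, assuming (purely for illustration) a constant and independent per-year probability of capture; the specific numbers are hypothetical, not estimates of the real risk.

```python
# Minimal illustration: cumulative probability of being captured at least once,
# assuming a constant, independent per-year risk. The numbers are hypothetical.

def cumulative_risk(per_year: float, years: int) -> float:
    """P(captured at least once) = 1 - (1 - p)^n over n independent years."""
    return 1.0 - (1.0 - per_year) ** years

print(cumulative_risk(1e-6, 10_000))  # ~0.01: unlikely even over 10,000 years
print(cumulative_risk(1e-3, 10_000))  # ~0.99995: near-certain over the same span
```

Whether a very long lifetime makes capture near-certain therefore depends entirely on how low the per-year risk can be driven, which is what the point about societal defenses below turns on.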

You could reasonably argue that for uploads it's different, since it's easier to make a copy of an upload undetected and so on, so the probability of being captured during one's lifetime is larger. But note that there have been times in history when there actually was a reasonable chance for a biological human to be captured and enslaved during their lifetime! Back during the era of tribal warfare, for example. But we've come a long way from those times, and in large parts of the world, society has developed in such a way as to almost eliminate that risk.

That, in turn, highlights the point that it's too simple to just look at whether we are biological or uploads. It all depends on how exactly society is set up, and on how strong the defenses and protections are that society provides to the common person. Given that we've developed to the point where biological persons have pretty good defenses against being kidnapped and enslaved, to the point where we don't think that even a very long lifetime would be likely to lead to such a fate, shouldn't we also assume that upload societies could develop similar defenses and reduce the risk to be similarly small?

[Link] Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

1 Kaj_Sotala 03 January 2018 02:39PM
Comment author: curi 29 November 2017 08:03:03PM 0 points

I already did put work into that. Then they refused to read references, for unstated reasons, and asked me to rewrite the same things I already wrote, as well as rewrite things written by Popper and others. I don't want to put in duplicate work.

Comment author: Kaj_Sotala 02 December 2017 12:37:20PM 1 point

Any learning - including learning how to communicate persuasively - requires repeated tries, feedback, and learning from feedback. People are telling you what kind of writing they might find more persuasive, which is an opportunity for you to learn. Don't think of it as duplicate work, think of it as repeatedly iterating a work and gradually getting towards the point where it's persuasive to your intended audience. Because until you can make it persuasive, the work isn't finished, so it's not even duplicating anything. Just finishing what you originally started.

Of course, if you deem that to be too much effort, that's fair. But the world is full of writers who have taken the opportunity to learn and hone their craft until they could clearly communicate to their readers why their work is worth reading. If you don't, then you can't really blame your potential readers for not bothering to read your stuff - there are a lot of things that people could be reading, and it's only rational for them to focus on the stuff that shows the clearest signs of being important or interesting.

Comment author: curi 28 November 2017 05:45:03PM 0 points

You are requesting I write new material for you because you dislike my links to websites with thousands of free essays, because you find them too commercial, and you don't want to read books. Why should I do this for you? Do you think you have any value to offer me, and if so what?

Comment author: Kaj_Sotala 29 November 2017 01:17:51PM 2 points

Why should I do this for you? Do you think you have any value to offer me, and if so what?

You have it the wrong way around. This is something that you do for yourself, in order to convince other people that you have value to offer for them.

You're the one who needs to convince your readers that your work is worth engaging with. If you're not willing to put in the effort needed to convince potential readers of the value of your work, then the potential readers are going to ignore you and instead go read someone who did put in that effort.

[Link] LW2.0 now in public beta (you'll need to reset your password to log in)

2 Kaj_Sotala 23 September 2017 12:00PM
Comment author: Alicorn 15 September 2017 08:23:41PM 11 points

I feel more optimistic about this project after reading this! I like the idea of curation being a separate action and user-created sequence collections that can be voted on. I'm... surprised to learn that we had view tracking that can figure out how much Sequence I have read? I didn't know about that at all. The thing that pushed me from "I hope this works out for them" to "I will bother with this myself" is the Medium-style individual blog page; that strikes a balance between desiderata in a good place for me, and I occasionally idly wish for a place for thoughts of the kind I would tweet and the size I would tumbl but wrongly themed for my tumblr.

I don't like the font. Serifs on a screen are bad. I can probably fix this client side or get used to it but it stood out to me a surprising amount. But I'm excited overall.

Comment author: Kaj_Sotala 21 September 2017 03:42:09PM 0 points

I think the font feels okay (though not great) when it's "normal" writing, but text in italics gets hard to read.

Comment author: Viliam 19 September 2017 11:04:59PM 5 points

I think "Less Wrong" was an appropriate name at the beginning, when the community around the website was very small. Now that we have grown, both in user count and in content size, we could simply start calling ourselves "Wrong". One word, no problems with capitalization or spacing.

Comment author: Kaj_Sotala 21 September 2017 03:39:38PM 1 point

Calling ourselves "Wrong" or "Wrongers" would also fix the problem of "rationalist" sounding like we'd claim to be totally rational!

Comment author: ignoranceprior 18 September 2017 05:38:41PM 1 point

I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?

Comment author: Kaj_Sotala 18 September 2017 10:04:08PM 2 points

I've had periods when I described myself as pretty close to pure-NU, but currently I view myself as a moral parliamentarian: my values are made up of a combination of different moral systems, of which something like NU is just one. My current (subject to change) position is to call myself "NU-leaning prioritarian": I would like us to survive to colonize the universe eventually, just as long as we cure suffering first.

(Also it's not clear to me that this kind of an operation would be a net negative even on pure NU grounds; possibly quite non-effective, sure, but making it negative hinges on various assumptions that may or may not be true.)

Comment author: Kaj_Sotala 18 September 2017 10:53:31AM 2 points

I met Denkenberger at the same ALLFED workshop that Hanson participated in (as a part of the GoCAS research program on existential risk); I also thought his work was quite impressive and important.
