Wei_Dai comments on Three Approaches to "Friendliness" - Less Wrong

14 Post author: Wei_Dai 17 July 2013 07:46AM




Comment author: Wei_Dai 17 July 2013 09:40:15AM 4 points

Doesn't negative utilitarianism present us with the analogous challenge of preventing "astronomical suffering", which requires an FAI to have solutions to the same philosophical problems mentioned in the post? I guess I was using "astronomical waste" as short for "potentially large amounts of negative value compared to what's optimal" but if it's too much associated with total utilitarianism then I'm open to suggestions for a more general term.

Comment author: cousin_it 17 July 2013 11:52:05AM 9 points

I'd be happy with an AI that makes people on Earth better off without eating the rest of the universe, and gives us the option to eat the universe later if we want to...

Comment author: Wei_Dai 17 July 2013 12:19:22PM 8 points

If the AI doesn't take over the universe first, how will it prevent Malthusian uploads, burning of the cosmic commons, private hell simulations, and such?

Comment author: Douglas_Knight 17 July 2013 10:12:54PM 3 points

Those things you want to prevent are all caused by humans, so an AI on Earth can directly prevent them. The rest of the universe is only relevant if you think there are other optimizers out there, or if you want to use it yourself, probably because you are a total utilitarian. But even a small chance of another optimizer suggests that anyone would eat the universe.

Comment author: Wei_Dai 17 July 2013 11:02:24PM 1 point

Cousin_it said "and gives us the option to eat the universe later if we want to..." which I take to mean that the AI would not stop humans from colonizing the universe on their own, which would bring the problems that I mentioned.

Comment author: cousin_it 18 July 2013 12:27:23PM 1 point

On second thought, I agree with Douglas_Knight's answer. It's important for the AI to stop people from doing bad things with the universe, but for that the AI just needs to have power over people, not over the whole universe. And since I know about the risks from alien AIs and still don't want to take over the universe, maybe the CEV of all people won't want that either. It depends on how many people think population growth is good, and how many people think it's better to leave most of the universe untouched, and how strongly people believe in these and other related ideas, and which of them will be marked as "wrong" by the AI.

Comment author: Wei_Dai 18 July 2013 09:39:34PM 4 points

I find your desire to leave the universe "untouched" puzzling. Are you saying that you have a terminal goal to prevent most of the universe from being influenced by human actions, or is it an instrumental value of some sort (for example you want to know what would happen if the universe is allowed to develop naturally)?

Comment author: cousin_it 18 July 2013 10:13:53PM 2 points

Well, it's not a very strong desire, I suspect that many other people have much stronger "naturalistic" urges than me. But since you ask, I'll try to introspect anyway:

Curiosity doesn't seem to be the reason, because I want to leave the universe untouched even after I die. It feels more like altruism. Some time ago Eliezer wrote about the desire not to be optimized too hard by an outside agent. If I can desire that for myself, then I can also desire it for aliens: give them a chance not to be optimized by us... Of course if there are aliens, we might need to defend ourselves. But something in me doesn't like the idea of taking over the universe in preemptive self-defense. I'd prefer to find some other way to stay safe...

Sorry if this sounds confusing, I'm confused about it too.

Comment author: Wei_Dai 19 July 2013 05:05:04AM 2 points

That helps me understand your position, but it seems unlikely that enough people would desire this strongly enough for CEV to conclude we should give up colonizing the universe altogether. Perhaps some sort of compromise would be reached; for example, the FAI would colonize the universe but bypass any solar systems that contain, or may evolve, intelligent life. Would that be sufficient to satisfy (or mostly satisfy) your desire to not optimize aliens?

Comment author: jmmcd 17 July 2013 03:30:45PM 0 points

Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the "take over the universe" step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?

Comment author: Baughn 17 July 2013 05:30:46PM 1 point

Depends. Would this allow someone else to move outside its defined sphere of influence and build an AI that doesn't wait?

If the AI isn't taking over the universe, that might leave the option open that something else will. If it doesn't control humanity, chances are that will be another human-originated AI. If it does control humanity, why are we waiting?

Comment author: Kaj_Sotala 17 July 2013 11:03:41AM 3 points

Yes, you could probably broaden the concept to cover negative utilitarianism as well, though Bostrom's original article specifically defined astronomical waste as being

an opportunity cost: a potential good, lives worth living, is not being realized.

That said, even if you did redefine the concept in the way that you mentioned, the term "astronomical waste" still implies an emphasis on taking over the universe - which is compatible with negative utilitarianism, but not necessarily every ethical theory. I would suspect that most people's "folk morality" would say something like "it's important to fix our current problems, but expanding into space is morally relevant only as far as it affects the primary issues" (with different people differing on what counts as a "primary issue").

I'm not sure whether you intended the emphasis on space expansion to be there, but if it was incidental, maybe what you meant was something more like

I put "Friendliness" in quotes in the title, because I think what we really want, and what MIRI seems to be working towards, is closer to "optimality": create an AI that makes our world into what we'd consider the best possible one

?

(I hope to also post a more substantial comment soon, but I need to think about your post a bit more first.)

Comment author: Wei_Dai 17 July 2013 10:12:11PM 1 point

That said, even if you did redefine the concept in the way that you mentioned, the term "astronomical waste" still implies an emphasis on taking over the universe - which is compatible with negative utilitarianism, but not necessarily every ethical theory. I would suspect that most people's "folk morality" would say something like "it's important to fix our current problems, but expanding into space is morally relevant only as far as it affects the primary issues" (with different people differing on what counts as a "primary issue").

This is getting a bit far from the original topic, but my personal approach to handling moral uncertainty is inspired by Bostrom and Ord, and works by giving each moral faction a share of my resources and letting them trade to make Pareto improvements. So the "unbounded utility" faction in me was responsible for writing the OP (using its share of my time), and the intended audience is the "unbounded utility" factions in others. That's why it seems to be assuming unbounded utility and has an emphasis on space expansion, even though I'm far from certain that it represents "correct morality" or my "actual values".
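As a toy illustration of the resource-share scheme described above (entirely hypothetical: the faction names, resource amounts, and utility weights below are invented for the example and are not taken from Bostrom and Ord), each moral faction controls a share of a common resource, and a reallocation is adopted only if it is a Pareto improvement:

```python
# Toy sketch of moral uncertainty as bargaining: each faction holds a
# share of a resource (say, units of time), and a trade goes through
# only if no faction is worse off and at least one is strictly better.

def is_pareto_improvement(before, after):
    """True if no faction loses utility and at least one strictly gains."""
    return (all(a >= b for b, a in zip(before, after))
            and any(a > b for b, a in zip(before, after)))

def utilities(alloc):
    # Each faction values its own share; both also place some value on a
    # jointly funded project (e.g. a post both factions mildly endorse).
    u_unbounded = 2.0 * alloc["unbounded"] + 3.0 * alloc["joint"]
    u_bounded = 1.0 * alloc["bounded"] + 2.0 * alloc["joint"]
    return (u_unbounded, u_bounded)

status_quo = {"unbounded": 50, "bounded": 50, "joint": 0}

# A pure transfer from one faction to the other is not Pareto-improving...
transfer = {"unbounded": 60, "bounded": 40, "joint": 0}
# ...but pooling resources into a project both factions value is.
pooled = {"unbounded": 40, "bounded": 40, "joint": 20}

transfer_ok = is_pareto_improvement(utilities(status_quo), utilities(transfer))
pooled_ok = is_pareto_improvement(utilities(status_quo), utilities(pooled))
```

Here the transfer is rejected (the "bounded" faction loses) while the pooled project is accepted (both factions gain), which is the kind of trade the scheme is meant to enable.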