CarlShulman comments on Open Thread June 2010, Part 3 - Less Wrong

6 Post author: Kevin 14 June 2010 06:14AM




Comment author: CarlShulman 18 June 2010 04:38:47PM  0 points

Yep, these are key considerations.

So there's the utility difference between business-as-usual (no AI) and getting a small share of resources optimized for your preferences, and the utility difference between getting small and large shares of resources. If the second difference is much larger than the first, then (1) is crucial, and (2) and (3) are not so good. But if the first difference is much bigger than the second, the pattern is the reverse.

And if we're comparing expected utility conditional on no local FAI here with expected utility conditional on FAI here, moderate credences can suffice (depending on the shape of your utility function).
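The comparison of the two utility gaps can be made concrete with a toy calculation. The numbers below are purely hypothetical stand-ins (nothing in the thread fixes them); the sketch just shows how, when the gap between no AI and a small share dominates the gap between a small and a large share, a moderate credence of securing the small share can already carry the expected-utility comparison.

```python
# Toy utilities for the three outcomes discussed above.
# All values are hypothetical, chosen only for illustration.
u_no_ai = 0.0   # business-as-usual: no AI
u_small = 8.0   # a small share of resources optimized for your preferences
u_large = 10.0  # a large share of resources

# The two differences Shulman compares:
gap_first = u_small - u_no_ai    # no AI -> small share
gap_second = u_large - u_small   # small share -> large share

# Moderate, hypothetical credences for two strategies:
p_small = 0.5  # credence that a strategy secures the small share
p_large = 0.1  # credence that a longer-shot strategy secures the large share

eu_small_route = p_small * u_small + (1 - p_small) * u_no_ai
eu_large_route = p_large * u_large + (1 - p_large) * u_no_ai

print(gap_first, gap_second)          # the first gap dominates here
print(eu_small_route, eu_large_route) # so the small-share route wins on EU
```

With these (stipulated) numbers, the first gap is 8.0 against 2.0 for the second, and the small-share route yields expected utility 4.0 against 1.0; reversing which gap dominates reverses the conclusion, which is the pattern the comment describes.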

Comment author: Vladimir_Nesov 18 June 2010 05:14:49PM  2 points

> no local FAI here

Whether FAI is local or not can't matter; whether something is real or counterfactual is morally irrelevant. If we value small control, that means the possible worlds with UFAI are significantly valuable, just as the worlds with FAI are, provided there are enough worlds with FAI to weakly control the UFAIs. And if we value only large control, it means the possible worlds with UFAI are not as valuable, and it's mostly the worlds with FAI that matter.

Comment author: PhilGoetz 15 July 2011 07:07:02PM 0 points

What do "small control" and "large control" mean?

Comment author: Vladimir_Nesov 18 June 2010 05:07:02PM  2 points

> But if the first difference is much bigger than the second, the pattern is the reverse.

It's not literally the reverse, because if you don't create those FAIs, nobody will, and so the UFAIs won't have the incentive to give you your small share. It's never good to increase the probability of UFAI at the expense of the probability of FAI. I'm not sure whether there is any policy guideline suggested by these considerations, conditional on the pattern in utility you discuss. What should we do differently depending on how much we value small vs. large control? In both cases it's still clearly preferable to have UFAI rather than no future AI, and to have FAI rather than UFAI.

Comment author: CarlShulman 19 June 2010 02:23:44AM 0 points

Worrying less about our individual (or national) shares, and being more cooperative with other humans or uploads, seems like an important upshot.