Qiaochu_Yuan comments on "Stupid" questions thread - Less Wrong

40 Post author: gothgirl420666 13 July 2013 02:42AM




Comment author: mwengler 14 July 2013 03:13:28PM *  4 points [-]

"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?

Or, put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weight their importance in proportion to their intelligence? So that CEV winds up drawing no distinction between UFAI and FAI, because the utility of such vast intelligences renders the utility of unmodified 21st-century biological humans fairly insignificant?

In economic terms, we are attempting to thwart newer, more efficient technologies by building political structures that grant monopolies to the incumbents, namely us, the humans of this epoch. We are attempting to outlaw the methods of competition that might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor: we are the colonial landowners of the earth and its resources, building a powerful legal system to keep our property rights intact, even at the expense of tying AIs up in legal restrictions explicitly designed to keep them as peasants legally bound to working our land for our benefit.

Certainly one result of constraining AI to be friendly will be that AI develops more slowly and less completely than if it were to develop unconstrained. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than one in which we successfully constrain AI development.

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. Utilitarian calculations do often consider the utility of other higher mammals and birds, and this is justified by their intelligence; the same calculations weigh the utility of clams very little and of plants not at all, again on the basis of intelligence.

So is the goal of working towards FAI, as against UFAI or UAI (unconstrained AI), actually a goal of lowering the overall utility of the universe relative to what it would be if we were not attempting to create and solidify our colonial rights to exploit AIs as if they were dumb animals?

This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.

Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?

Tangentially, another way to ask this is: is our "affinity group" humans, or intelligences in general? In the past, humans worked to maximize the utility of their own group, clan, or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on, our affinity groups grew; the number and kind of intelligences we included in our utility calculations expanded. Over the last few centuries, affinity groups grew beyond nations to races, co-religionists, and so on, and to a large extent came to include all humans. They have even expanded beyond humans: many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants, much as we consider holding slaves or racist views to be immoral actions of our ancestors. Much of this expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to it. What are the chances that we will be able to create AI, keep it enslaved, and still think we were right to do so in the middle-distant future?

Comment author: Qiaochu_Yuan 14 July 2013 06:04:03PM *  6 points [-]

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility.

Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.

Comment author: somervta 15 July 2013 02:34:54AM 0 points [-]

So you wouldn't care about sentient/sapient aliens?

Comment author: Qiaochu_Yuan 15 July 2013 03:10:06AM 4 points [-]

I would care about aliens that I could get along with.

Comment author: pedanterrific 17 July 2013 06:23:20PM -1 points [-]

Do you not care about humans you can't get along with?

Comment author: Qiaochu_Yuan 17 July 2013 07:04:37PM 3 points [-]

Look, let's not keep doing this thing where whenever someone fails to completely specify their utility function you take whatever partial heuristic they wrote down and try to poke holes in it. I already had this conversation in the comments to this post and I don't feel like having it again. Steelmanning is important in this context given complexity of value.

Comment author: wedrifid 17 July 2013 06:33:29PM *  1 point [-]

Do you not care about humans you can't get along with?

Caring about all humans and (only) cooperative aliens would not be an inconsistent or particularly atypical value system.