diegocaleiro comments on Superintelligence 24: Morality models and "do what I mean" - Less Wrong

Post author: KatjaGrace 24 February 2015 02:00AM


Comment author: diegocaleiro 25 February 2015 05:33:59PM  4 points

I'm mildly shocked by Olle Häggström's comment.

He contends that after we, these babies with a detonator in hand, miraculously managed to get it all right and create an FAI, made it correctly understand morality, and spread moral value through over 99.9% of the universe, generating more good than was thought imaginable in all of human history by many, many orders of magnitude, we would be monsters to claim our tiny galaxy for our own purposes.

What does that amount to? It says we cannot justify keeping one galaxy through the combination of (1) our uncertainty over the whole project, (2) the possibility that the universe is infinite, in which case our actions added only a finite amount of good and so were not morally relevant anyway, and (3) negotiating with the future, in the sense that we believe we are more likely to create FAI correctly if our descendants permit us to take some resources from our galaxy to explore our posthuman potential. If the universe is infinite, or if we still have moral uncertainty after the AI has thought long about this, or if we are babies with an H-bomb in our hands who managed to get it right, it feels to me profoundly uncaring/inhuman not to give us the posthuman opportunity to play and celebrate in our garden.

Comment author: LawrenceC 03 March 2015 08:46:43PM  0 points

I'm not so sure. I feel very uncertain about the question of how to aggregate utilities at such a scale. By your logic, if the universe is infinite, wouldn't ANY finite action fail to matter?

Comment author: diegocaleiro 08 March 2015 09:56:45PM  0 points

Let me spell that out. Either the universe is infinite, or it isn't. Either morality is aggregative, or it isn't. If the universe is infinite and morality is aggregative, then finite actions make no moral difference.
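A minimal sketch of that inference in standard extended-real arithmetic (my illustration, not anything either commenter wrote; it assumes aggregative value is a simple cardinal sum):

```latex
% Assume infinitely many value-bearing locations, each contributing
% utility u_i > 0, with total value given by the aggregate sum:
U \;=\; \sum_{i=1}^{\infty} u_i \;=\; \infty .
% Any finite action changes the total by some finite amount c, and
\infty + c \;=\; \infty \qquad \text{for every finite } c,
% so the aggregate total is unchanged: on this arithmetic,
% no finite action makes a difference to total value.
```

This is the standard "infinite ethics" worry; it dissolves if aggregation is replaced by a comparison rule that is sensitive to finite differences between infinite totals.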