
Lumifer comments on Estimate the Cost of Immortality - Less Wrong Discussion

-4 Post author: Algernoq 13 December 2015 11:38AM


Comments (115)


Comment author: Lumifer 20 December 2015 09:57:55PM 1 point

The flawless AGI under the name of Gosplan was the limit to which the Soviet Union aspired.

Comment author: passive_fist 20 December 2015 10:40:27PM -1 points

aspired ≠ achieved.

Your comment seemed to be equating Xyrik's scenario with the Soviet system, implying that for that reason it's not desirable. I'm pointing out that the two systems cannot be equated.

Comment author: Lumifer 20 December 2015 10:51:15PM 1 point

My point is that the Soviet system wanted to be like Xyrik's scenario and tried to get as close to it as it could.

The assertion that an AI would make everything hunky-dory is not falsifiable. It's just a different term for elven magic.

Comment author: passive_fist 20 December 2015 10:58:25PM 0 points

The assertion that an AI would make everything hunky-dory is not falsifiable.

Huh? Of course it's falsifiable. The entire premise of MIRI and CFAR is that this assertion is going to be falsified unless we take action.

Comment author: RichardKennaway 21 December 2015 10:44:07AM 1 point

The entire premise of MIRI and CFAR is that this assertion is going to be falsified unless we take action.

The entire premise of Xyrik's scenario is that everything will be hunky-dory. Xyrik is just making a wish, and not thinking about how anything will actually work. He might as well call it elven magic as an AGI or "everyone decides to do the right thing". There are no moving parts in his conception. It is like trying to solve a problem by suggesting that one should solve the problem.

I tried to ask him about mechanism here, but the only response so far has been a downvote.

Comment author: Xyrik 23 December 2015 11:34:02AM -1 points

The entire premise of Xyrik's scenario is that everything will be hunky-dory. Xyrik is just making a wish, and not thinking about how anything will actually work.

Well, to be fair, I never claimed to have any ideas for how to actually achieve a scenario with a flawless AGI, and I don't think I even said it would be a good idea; although if we DID have a flawless AGI, I would be open to an argument that it was.

But all I was asking was what potential downsides this could have, and people have risen to the occasion.

Comment author: Lumifer 20 December 2015 11:08:43PM 1 point

Of course it's falsifiable.

Demonstrate, please.

Comment author: Xyrik 23 December 2015 11:44:34AM -1 points

Demonstrate, please.

You know, this seems amusingly analogous to the scene in the seventh Harry Potter novel in which Xenophilius Lovegood asks Hermione to falsify the existence of the Resurrection Stone.