
Lumifer comments on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife - Less Wrong Discussion

7 Post author: crmflynn 02 November 2015 11:03PM




Comment author: jacob_cannell 05 November 2015 07:22:09PM 0 points

AGI running on a billion-dollar supercomputer is not practical

Why not?

Say you have the code/structure for an AGI all figured out, but it runs in real time on a supercomputer costing a billion dollars a year. You now have to wait decades to train/educate it up to the level of an adult.

Furthermore, the probability that you get the seed code/structure right on the first try is essentially zero. So, rather obviously, to even get AGI in the first place you need enough efficiency to run one AGI mind in real time on something far, far less than a supercomputer.
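The argument above can be made concrete with a back-of-envelope sketch. All figures below (training time, machine cost, number of debugging iterations) are illustrative assumptions, not numbers from the thread:

```python
# Back-of-envelope: cost of debugging an AGI that only runs in
# real time on an expensive supercomputer. Every constant here is
# an assumption chosen for illustration.

YEARS_TO_TRAIN = 20      # assume human-like education time per run
COST_PER_YEAR = 1e9      # assume $1B/year to operate the machine
DEBUG_ITERATIONS = 10    # assume even a handful of full restarts

cost_per_run = YEARS_TO_TRAIN * COST_PER_YEAR
total_cost = cost_per_run * DEBUG_ITERATIONS
total_years = YEARS_TO_TRAIN * DEBUG_ITERATIONS  # if runs are serial

print(f"one training run: ${cost_per_run:,.0f} over {YEARS_TO_TRAIN} years")
print(f"{DEBUG_ITERATIONS} serial runs: ${total_cost:,.0f} over {total_years} years")
```

Even with these charitable assumptions, a serial debug loop costs hundreds of billions of dollars and centuries of wall-clock time, which is the sense in which real-time-only AGI on a supercomputer is impractical.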

It isn't a problem of what math to implement - we have that figured out.

Oh, really? I'm afraid I find that hard to believe.

Hard to believe only for those outside ML.

Comment author: Lumifer 05 November 2015 07:34:18PM * 0 points

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

Hard to believe only for those outside ML.

Right... build a NN a mile wide and a mile deep and let 'er rip X-/

Comment author: jacob_cannell 06 November 2015 12:31:00AM * 0 points

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

[Furthermore], the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

No, I never said it is not a software issue, because the distinction between software and hardware issues is murky at best, especially in the era of ML, where most of the 'software' is learned automatically.

You are trolling now, cutting my quotes out of context.