bogdanb comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong

55 Post author: Eliezer_Yudkowsky 06 June 2013 08:24PM


Comment author: bogdanb 07 June 2013 08:33:26PM -1 points [-]

There are many statistical testing methods that output what are essentially proofs; e.g. statements of the form "probability of a failure existing is at most 10^(-100)". Why would this not be sufficient?

(Approximate orders of magnitude:)

Number of atoms in universe : 10^80

Number of atoms in a human being: 10^28

Number of humans that have existed: 10^10

Number of AGI-creating-level inventions expected to be made by humans: 10^0–10^1

Number of AGI-creating-level inventions expected to be made by 1% (10^-2) of the universe turned into computronium, with no more than human-level thought-to-matter efficiency, extrapolating linearly: 10^(80 - 2 - 10 - 28) = 10^40.

Hmm, that doesn’t sound that bad, but we got from 10^(-100) to 10^(-60) really fast. Also, I don’t think Eliezer was talking about that kind of statistical method.
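(The exponent bookkeeping above can be sketched in a few lines; the variable names are just labels for the estimates in the comment, not anything from the original thread.)

```python
# Order-of-magnitude sketch of the comment's arithmetic.
# All quantities are base-10 exponents.
atoms_in_universe = 80
computronium_fraction = -2        # 1% of the universe
atoms_per_human = -28             # divide out atoms per human
humans_ever = -10                 # divide out humans per invention-producing humanity
inventions_per_humanity = 0       # ~10^0 AGI-level inventions so far

# Expected AGI-level inventions from computronium at human-level efficiency:
inventions = (atoms_in_universe + computronium_fraction
              + atoms_per_human + humans_ever + inventions_per_humanity)
print(inventions)                 # 40, i.e. 10^40 inventions

# If each invention carries a 10^-100 failure probability, the total
# failure probability scales (linearly, as an upper bound) to:
total_failure = -100 + inventions
print(total_failure)              # -60, i.e. 10^-60
```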

Comment author: jsteinhardt 07 June 2013 09:46:34PM 1 point [-]

I mean, I could easily make the 100 into a 400, so I don't think this is that relevant.

Comment author: bogdanb 07 June 2013 10:57:44PM *  0 points [-]

Yes, the last sentence is probably my real “objection”. (Well, I don’t object to your statements; I just don’t think that’s what Eliezer meant. Even if you run a non-statistical, deterministic theorem prover on current hardware, the probability of failure is much above 10^-100.)

The silly part of the comment was just a reminder (partly to myself) that AGI problems can span orders of magnitude so ridiculously outside the usual human scale that one can’t quite approximate (the number of atoms in the universe)^-1 as zero without thinking carefully about it.