bogdanb comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(Approximate orders of magnitude:)
Number of atoms in universe : 10^80
Number of atoms in a human being: 10^28
Number of humans that have existed: 10^10
Number of AGI-creating-level inventions expected to be made by humans: 10^0–10^1
Number of AGI-creating-level inventions expected to be made by 1% (10^-2) of the universe turned into computronium, with no more than human-level thought-to-matter efficiency, extrapolating linearly: 10^(80 - 2 - 10 - 28) = 10^40.
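The arithmetic above can be sketched as exponent bookkeeping (all figures are the rough assumptions from the list, not measured values):

```python
# Back-of-envelope estimate in base-10 exponents, following the list above.
atoms_in_universe = 80
computronium_fraction = -2       # only 1% of the universe converted
atoms_per_human = 28
humans_ever = 10
inventions_per_humanity = 0      # ~10^0 to 10^1 AGI-level inventions total

# Human-equivalents of thinking matter, then expected inventions:
human_equivalents = atoms_in_universe + computronium_fraction - atoms_per_human   # 10^50
expected_inventions = human_equivalents - humans_ever + inventions_per_humanity   # 10^40
print(expected_inventions)  # -> 40
```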
Hmm, that doesn’t sound so bad, but we went from 10^(-100) to 10^(-60) really fast. Also, I don’t think Eliezer was talking about that kind of statistical method.
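The jump from 10^(-100) to 10^(-60) is just a union bound over the step count: with a per-step failure probability of 10^-100 and ~10^40 steps, the overall failure probability is at most their product. A minimal sketch, assuming independent failures:

```python
# Union bound: P(any of N steps fails) <= N * eps.
# Exponents assumed in the thread, base 10.
eps_exp = -100      # per-step failure probability
steps_exp = 40      # ~10^40 AGI-level inventions from the estimate above
overall_exp = steps_exp + eps_exp
print(overall_exp)  # -> -60, i.e. overall failure probability <= 10^-60
```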
I mean, I could easily make the 100 into a 400, so I don't think this is that relevant.
Yes, the last sentence is probably my real “objection”. (Well, I don’t object to your statements; I just don’t think that’s what Eliezer meant. Even if you run a non-statistical, deterministic theorem prover, on current hardware the probability of failure is well above 10^-100.)
The silly part of the comment was just a reminder (partly to myself) that AGI problems can span orders of magnitude so far outside the usual human scale that one can’t approximate (the number of atoms in the universe)^-1 as zero without thinking carefully about it.