Vladimir_Nesov comments on Closet survey #1 - Less Wrong

53 [deleted] 14 March 2009 07:51AM


Comment author: Vladimir_Nesov 14 March 2009 04:59:51PM 13 points [-]

I believe that the solution to the Fermi paradox may be (I don't place much weight in this belief; besides, it's a fairly useless thing to think about) that physics has unlimited local depth. That is, each sufficiently intelligent AI, under most of the goal systems likely to arise from its development, finds it more desirable to spend its time configuring the tiny details of its local physical region (or the details of reality that have almost no impact on the non-local physical region) than going to other regions of the universe and doing something with the rest of the matter. This also requires a way for the AI to protect itself without needing to implement preemptive offensive measures, so there should also be no way to seriously hurt a computation once it has dug itself sufficiently deep into the physics.

Comment author: Nick_Tarleton 14 March 2009 07:55:26PM 5 points [-]

Any reason AIs with goal systems referring to the larger universe would be unlikely?

Comment author: billswift 16 March 2009 04:58:28PM 2 points [-]

In Stross's novel "Accelerando", even without the locally deeper physics, the AIs formed Matrioshka Brains and more or less ignored the rest of the universe because of communication difficulties - mainly reduced bandwidth but also time lags.

Comment author: Vladimir_Nesov 14 March 2009 10:06:49PM *  1 point [-]

Something akin to the functionalist position: if you accept living within a simulated world, you may also accept living within a simulated world hosted on a computation running in the depths of local physics, if that's a more efficient option than going outside; extend that to general goal systems. Of course, some minds may genuinely care about the world on the surface, but those may be overwhelmingly unlikely to result from the processes that lead AIs to converge on a stable goal structure. It's a weak argument (as I said, the whole point is weak), but it nonetheless looks like a possibility.

P.S. I realize we are going strongly against the ban on AGI and Singularity topics, but I hope this being a "crazy thread" somewhat mitigates the problem.