A big question that determines a lot about what risks from AGI/ASI may look like has to do with the kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws - involving certain symmetries, as well as the inherent smoothing-out caused by statistics over large ensembles and thus thermodynamics, etc. - allow only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and be youthful until the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard. "Odourless ambient-temperature-and-pressure gas that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" probably is not. Are there known attempts at systematising this issue using algorithmic complexity, placing theoretical and computational bounds, and so on?


2 Answers

Noosphere89


This is very, very dependent on what assumptions you fundamentally make about the nature of physical reality, and what assumptions you make about how much future civilizations can alter physics.

I genuinely think that if you want to focus on the long term, unfortunately we'd need to solve very, very difficult problems in physics to reliably give answers.

For the short-term limitations that are relevant to AI progress, I'd argue the biggest is probably thermodynamics, and in particular the Landauer limit is a good first approximation of why you can't make radically better nanotechnology than life without getting into extremely weird circumstances, like reversible computation.
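As a rough sense of scale for the Landauer limit (a back-of-the-envelope sketch of mine, not from the thread; the ~1e-12 J/bit figure for conventional logic is an illustrative assumption):

```python
import math

# Landauer limit: erasing one bit must dissipate at least k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K (CODATA exact value)
T = 300.0           # roughly room temperature, K

landauer_joules = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {landauer_joules:.2e} J")

# Illustrative (assumed) figure for conventional irreversible logic today:
cmos_joules = 1e-12
print(f"Headroom over the Landauer limit: ~{cmos_joules / landauer_joules:.0e}x")
```

So there are many orders of magnitude of headroom before the limit bites, which is why it mainly constrains "radically better" designs rather than near-term ones, unless you go reversible.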

what assumptions you make about how much future civilizations can alter physics

I don't think the concept of "altering physics" makes sense. Physics is the set of rules that determines reality. By definition, everyone living in this universe is subject to the laws of physics. If someone were to find a way to, say, locally alter what we call Planck's constant, that would just mean that it's not actually a constant, but the emergent product of a deeper system that can be tinkered with, which doesn't mean you're altering the laws of physics - it merely peels...

Muireall
Stochastic thermodynamics may be the general and powerful framework you're looking for regarding molecular machines.
M. Y. Zuo
Most higher-level engineering textbooks cover this topic pretty thoroughly. At least judging from the Thermodynamics II and III, Fluid Mechanics II and III, Solid Mechanics II and III, etc., courses that I took back in school. It's also all derivable from the fundamental symmetries of physics, plus some constants, axioms, and maybe some math tricks when it comes to the Maxwell/Heaviside equations and the not-yet-resolved contradictions between gravity and quantum mechanics.
dr_s
I wouldn't say they do. This is not about known science; it's about science we don't know, and what we can guess about it. Some considerations on gravity and quantum mechanics do indeed put a lower bound on the energy scale at which we expect new physics to manifest, but that doesn't mean new physics at even lower energies isn't theoretically possible - if it weren't, there would be no point doing anything at the LHC past the discovery of the Higgs boson, since the Standard Model doesn't predict anything else. Though, to be fair, the lack of success in finding literally anything predicted by either string theory or supersymmetry isn't terribly encouraging in this respect.
Noosphere89
This is exactly the situation where your question unfortunately doesn't have an answer, at least right now.
M. Y. Zuo
Like you said, the science we don't know is at inaccessibly large or small scales. Yes, maybe in the far future a society spread across multiple galaxies, or one that can make things near Planck lengths, could do something that would totally stump us. But you're never going to find a final answer to this in the present day, for exactly those reasons. In fact, it's unlikely anyone on LW could even grasp the answers, even if by some miracle a helpful time traveller from the future showed up and started answering.
dr_s
Well, as I said, there might be some general insight. For example, biological cells are effectively nanomachines far beyond our ability to build, yet they are not all-powerful; no individual bacterium has single-handedly grey-gooed its way through the entire Earth, despite there being no obvious reason why it couldn't. This likely comes from a mixture of limits of the specific substrate (carbon, DNA for information storage), the result of competition between multiple species (which can be seen as an inevitable result of imprecise copying and the divergence that follows, even though cells mostly have mechanisms that try to prevent those sorts of mistakes), and perhaps intrinsic thermodynamic limits of von Neumann machines as a whole. So understanding which is which would be interesting and useful.
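To see why some limit has to bind, here's a naive back-of-the-envelope calculation (my own illustration; the cell mass and the 20-minute doubling time are rough textbook-style assumptions):

```python
import math

# Unconstrained exponential growth of a single bacterium, deliberately
# ignoring all resource, energy, and substrate limits. The absurd result
# is the point: those limits must bite almost immediately.
cell_mass_kg = 1e-15      # rough mass of one bacterium (assumption)
earth_mass_kg = 5.97e24   # mass of the Earth
doubling_time_min = 20.0  # E. coli under ideal lab conditions (assumption)

doublings_needed = math.log2(earth_mass_kg / cell_mass_kg)
days = doublings_needed * doubling_time_min / (60 * 24)
print(f"Doublings to out-mass the Earth: {doublings_needed:.0f}")
print(f"Time at ideal-condition doubling: about {days:.1f} days")
```

That unconstrained growth would out-mass the planet in about two days is exactly why the interesting question is which constraint (substrate, competition, thermodynamics) actually dominates.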
M. Y. Zuo
This kind of understanding is already available in higher-level textbooks, within known energy and space-time scales, as previously mentioned? If you're asking, for example, whether with infinite time and energy some sort of grey goo 'superorganism' is possible, assuming some sort of far-future technology that goes beyond our current comprehension, then that is obviously not going to have an answer, for the aforementioned reasons... Assuming you already have sufficient knowledge of the fundamental sciences, engineering, and mathematics at the graduate level, then finding the textbooks, reading them, comparatively analyzing them, and drawing your own conclusions wouldn't take more than a few weeks. This sort of exhaustive analysis would presumably satisfy even a very demanding level of certainty (perhaps 99.9% confidence?). If you're asking for literally 100% certainty, then that's impossible. In fact, nothing ever written on LW, nor anything that ever can be written, will meet that bar, especially when the Standard Model is known to be incomplete. If you're asking whether someone has already done this and will offer it in easily digestible chunks in the form of LW comments, then it seems exceedingly unlikely.
dr_s
I'm asking if there is a name and a specific theory for these things. I strongly disagree that just studying thermodynamics or statistical mechanics answers these questions, at least directly - though sure, if there is a theory of it, those are the tools you'd need to derive it. There are obvious thermodynamic limits, of course, but they are usually ridiculously permissive. I'm asking if there's a theory that tries to study things at a lower level of generality, is all, and sets narrower bounds than just "any nanomachine could not exceed Carnot efficiency" or "any nanomachine would be subject to Brownian motion" or such.
M. Y. Zuo
Why do you believe there is one? 
dr_s
I don't? I wondered if there might be one, and asked if anyone else knew any better.
M. Y. Zuo
Then on what basis do you "strongly disagree that just studying thermodynamics or statistical mechanics answers these questions, at least directly"? How did you attain the knowledge for this?
dr_s
By having an MD in Engineering and a Physics PhD, having followed the same exact courses you recommend as potentially containing the answer, and in fact having found no direct answer to these specific questions in them. You could argue "the answer can be derived from that knowledge", and sure, if it exists it probably can, but that's why I'm asking. Lots of theories can be derived from other knowledge. Most of machine learning can be derived from a basic knowledge of Bayes' theorem and multivariate calculus, but that doesn't make any math undergrad an ML expert. I was asking so that I could read any previous work on the topic. I might actually spend some more time thinking about approaches myself later, but I wouldn't do it without first knowing whether I'd just be reinventing the wheel, so I was probing for answers. I don't think this is particularly weird or controversial.

Ilio


[downvoted]

I was just about to say "wait that's just Dust Theory" and then you mentioned Permutation City yourself. But also, in that scenario, the guy moving the stones certainly has the power to make anything happen - but the entities inside the universe don't, as they are bound by the rules of the simulation. Which is as good as saying that if you want to make anything happen, you should pray to God.

Ilio
Actually the point is: if one can place rocks at will, then their computing power is provably as large as that of any physically realistic computer. But yes, if one can't place rocks at will, then it might be better to politely ask the emulator. Actually that requires even less, because in Dust theory we don't even need to place the rocks. 😉
5 comments

I think maybe this gwern essay is the one I was thinking of, but I'm not sure. It doesn't quite answer your question.

But there isn't a complexity-theoretic argument that's more informative than general arguments about humans not being the special beings of maximal possible intelligence. We don't know precisely what problems a future AI will have to solve, or what approximations it will find appropriate to make.

Thanks for the essay! As you say, not quite what I was looking for, but still interesting (though mostly saying things I already know/agree with).

My question is more in line with the recent post about the smallest possible button and my own about the cost of unlocking optimization from observation. The idea is not what problems computation can solve, but how far problem-solving can carry you in actually affecting the world. The limit would be, I guess: "suppose you have an oracle that, given the relevant information, can instantly return the optimal strategy to achieve your goal; how well does that oracle perform?". So, like, is tech so advanced that it truly looks like magic, even to us, possible at all? I assume some things (deadly enough in their own right) like nanotech and artificial life are, but I wonder about even more exotic stuff.

suppose you have an oracle that given the relevant information can instantly return the optimal strategy to achieve your goal, how well does that oracle perform?

I guess CT (complexity theory) experts (e.g. not me) would say it either depends on boring details or belongs to one of three possibilities:

  • if you only care about "probably approximately correct" solutions, then it's probably in BPP
  • if you care about "unrealistically powerful but still mathematically checkable" solutions, then it's as large as PSPACE (see interactive proofs)
  • if you only care about convincing yourself and don't need a formal proof, then it's among the Turing degrees, because one could show it's better than you at compressing most strings without actually proving it's performing hypercomputation.

My point is that there have to be straight-up impossibilities in there. For example, if you had a constraint to only use 3 atoms to build a molecule, there are only so many stable combinations. When one considers, for example, nanomachines, it is reasonable to imagine that there is a minimum physical size that can embed a given program, and that size also puts limitations on effectiveness, lifetime, and sensory abilities. E.g. you lose resolution on movement, because the smaller you are, the stronger the effect of Brownian forces - stuff like that, at the crossroads between complexity theory and thermodynamics.
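A minimal sketch of that size-dependence, using the Stokes-Einstein diffusion coefficient D = k_B·T / (6πηr) (my own illustration; the temperature and water-like viscosity values are assumptions):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # ~body temperature, K (assumption)
eta = 1e-3          # viscosity of water, Pa*s (approximate)

def rms_displacement_m(radius_m: float, t_s: float) -> float:
    """Per-axis RMS Brownian displacement of a sphere after time t_s."""
    d = k_B * T / (6 * math.pi * eta * radius_m)  # Stokes-Einstein
    return math.sqrt(2 * d * t_s)

for r in (1e-6, 1e-8):  # bacterium-scale vs protein-scale radius
    x = rms_displacement_m(r, 1.0)
    print(f"r = {r:.0e} m -> RMS drift in 1 s: {x:.1e} m (~{x / r:.0f} radii)")
```

The smaller the machine, the more of its own radii per second it gets knocked around, which is the "losing resolution on movement" effect in numbers.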

I see, thanks for clarifying.