Specifically, I am asking whether reversible computers let you run algorithms for PSPACE-complete problems, and in particular whether they can do so efficiently, ideally as efficiently as normal computers solve problems in P.

I'm interested in this question because I've seen some sources suggesting that reversible computation can implement PSPACE algorithms, while conventional computers can only feasibly implement algorithms in the complexity class P.

The sources I have are these:

https://www.lesswrong.com/posts/2BJBZh7Rvxr6GaLQs/negentropy-overrated#bHr5gobPhh5KLvxbA

and this Chegg source, which claims that reversible Turing machines with a polynomial space bound decide exactly PSPACE:

https://www.chegg.com/homework-help/reversible-pspace-pspace-shown-problem-quantified-satisfiabi-chapter-3-problem-9p-solution-9781107002173-exc

I'd welcome any answer to this question, but ideally an answer would either show that reversible computation can implement PSPACE-complete algorithms as efficiently as normal computers implement algorithms in P, or show that it cannot, and identify which complexity class reversible computation can efficiently implement.

3 Answers

paulfchristiano

May 09, 2023


No. You should think of reversible computers as quantitatively more efficient than normal computers, but basically the same thing. If you have polynomial time and energy then reversible computers still implement P.

My post observes that if the universe had a long enough lifetime, then a reversible computer would be able to compute PSPACE even if you only started out with polynomially much negentropy (whereas a normal computer would quickly burn up all the negentropy and leave you with a universe too hot to compute). So time and space and durability might be more important than you'd think relative to negentropy or free energy, and potentially the scale of computation done by our civilization will be much vaster than you would guess if you tried to do the entropy accounting.

Also see this underappreciated paper on "catalytic computing," which points out that you can do some useful computations using physical degrees of freedom even without knowing the initial states of those degrees of freedom, i.e. a reversible computer could theoretically use a giant disordered system to do useful work even without free energy to initialize it.

Mitchell_Porter

May 09, 2023


All that is known rigorously is that PSPACE contains P. But almost all complexity theorists believe that PSPACE is strictly bigger than NP, and that NP is strictly bigger than P. (The most interesting exception might be Leslie Valiant, who has suggested that even P^#P = P. But he is a wild outlier.)

The proposition at chegg.com says nothing about P; it is about "reversible PSPACE", i.e. PSPACE when one is restricted to reversible computation. I believe this paper contains the original proof that Reversible PSPACE = PSPACE.

I can't tell what @Douglas_Knight was thinking. 

Right, the point is that Reversible PSPACE appears physically realizable, while currently existing computers could not actually run for the exponential time necessary to solve PSPACE problems, because they would also require exponentially much (free) energy.

Joseph Van Name

May 09, 2023


One should think of fully reversible computation as a restricted version of computation in which one is not allowed to delete any information at any point during the computation. This means that any reversible computation can be emulated by a classical irreversible computer without any computational complexity overhead, so the reversible computer will take at least as many steps to perform a calculation as an irreversible computer. It is unlikely that an irreversible computer can solve a PSPACE-complete problem in polynomial time; it is therefore unlikely that a reversible computer can, since that would imply P = PSPACE.
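As a concrete illustration of computing without deleting information (a minimal Python sketch of my own, not from the answer): the reversible Toffoli gate can compute NAND, a universal irreversible gate, simply by keeping its inputs around next to the result.

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target bit c when both controls are 1."""
    return a, b, c ^ (a & b)

# NAND via Toffoli with the target preset to 1: the inputs survive alongside
# the answer, so no information is erased.
for a in (0, 1):
    for b in (0, 1):
        x, y, out = toffoli(a, b, 1)
        assert (x, y) == (a, b)                 # inputs preserved
        assert out == 1 - (a & b)               # out == NAND(a, b)
        assert toffoli(x, y, out) == (a, b, 1)  # the gate is its own inverse
print("Toffoli computes NAND reversibly")
```

Because every output determines its input, running the gate twice restores the original state, which is exactly the property an irreversible gate like NAND lacks.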

In practice, the purpose of reversible computation will be energy efficiency, since the energy efficiency of irreversible computation is limited by Landauer's limit. Landauer's principle states that it costs k*T*ln(2) energy for every bit of information that you delete, where k = 1.38*10^(-23) Joules/Kelvin is Boltzmann's constant, T is the temperature, and ln(2) = 0.69314... This means that deleting 1 gigabyte of information, whether by overwriting it with something else or replacing it with zeros, costs at least k*T*ln(2)*2^33 energy.

In practice, one should expect to spend about 100*k*T energy for every reliable irreversible operation, regardless of the kind of irreversible hardware, since this energy is needed to overcome thermal noise; and since we do not have perfect hardware, one should expect every irreversible operation to cost at least 1000*k*T. There is therefore not much room left to improve our computing hardware before we will need energy-efficient, partially reversible hardware for high-performance computation.

There are currently no energy-efficient reversible computers, since the energy efficiency of computation is not yet close enough to Landauer's limit for reversible computation to have become necessary; but it is close enough that we should expect to see energy-efficient reversible computers in the foreseeable future (the first potential application of reversible computing will likely be cryptocurrency mining, especially if the mining algorithm is designed for reversibility). Research by Ralph Merkle and Eric Drexler, among others, indicates that (partially) reversible computation based on molecular mechanical machines could be several orders of magnitude more energy efficient than Landauer's limit (though manufacturing such machines seems very difficult), so we should expect reversible computation to eventually replace irreversible computation. There are other proposals for energy-efficient reversible computing hardware as well, so be prepared for reversible computation to be the future of computation.
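These figures are easy to check numerically (a quick sketch; taking room temperature T = 300 K is my assumption, not the answer's):

```python
from math import log

k = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0             # assumed room temperature, K

per_bit = k * T * log(2)       # Landauer limit per erased bit
gigabyte_bits = 8 * 2 ** 30    # 1 gigabyte = 2**33 bits
total = per_bit * gigabyte_bits

print(f"{per_bit:.2e} J per bit erased")   # ~2.87e-21 J
print(f"{total:.2e} J to erase 1 GB")      # ~2.47e-11 J
```

Erasing a gigabyte thus costs only tens of picojoules at the theoretical limit, which shows how far current hardware (thousands of k*T per operation) still is from that floor.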

The sources that you have cited actually say that a reversible computer can solve any problem in PSPACE in a polynomial amount of space without deleting any information. But we actually have much better time/space overheads for reversible computation: it turns out that an irreversible computation that takes time T and space S can be done reversibly in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))). These are very modest overheads, and they can be improved further by trading off resources such as the amount of reversibility, time, space, and parallel computation. The tradeoffs between these resources can easily and effectively be modeled by pebble games such as Charles Bennett's pebble game and its generalizations; Bennett originally proved these complexity bounds for reversible computation in his 1989 paper Time/Space Tradeoffs for Reversible Computation (this paper was already cited by the other answerer Mitchell_Porter). I really like Bennett's pebble game because it is simple enough to explain to an elementary school student, who can easily play it and sometimes form strategies that capture the main idea Bennett used to prove his time/space bounds. I highly recommend that everyone play Bennett's pebble game with their kids or nieces and nephews.

Since reversible computation can simulate any irreversible computation in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))), any computation that takes a polynomial amount of space on an irreversible machine can also be done in a polynomial amount of space on a reversible machine.

The 1995 paper An Analysis of Bennett's Pebble Game by E. Knill establishes an optimal strategy for Bennett's pebble game, and the number of steps that this optimal strategy takes follows a simple recursion. Knill was then able to give an asymptotic result for the time*space cost of performing a computation reversibly using Bennett's pebble game.
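The flavor of these tradeoffs can be seen in a small simulation of Bennett's pebble game (an illustrative sketch of my own, not code from Bennett's or Knill's papers). The rule is that a pebble may be placed on or removed from position i only when position i-1 holds a pebble (position 1 is always available). A recursive checkpoint-and-uncompute strategy then reaches position 2^k - 1 with only k pebbles, at the price of (3^k - 1)/2 moves: exponential reach in the space budget, for a polynomially inflated move count.

```python
def simulate(moves, max_pebbles):
    """Check each move against the pebble-game rules and return the final board.

    Rule: a pebble may be placed on or removed from position i only if i == 1
    or position i - 1 currently holds a pebble."""
    pebbles = set()
    for action, pos in moves:
        assert pos == 1 or (pos - 1) in pebbles, f"illegal move at {pos}"
        if action == "place":
            pebbles.add(pos)
            assert len(pebbles) <= max_pebbles, "pebble budget exceeded"
        else:
            pebbles.remove(pos)
    return pebbles

def lone(k, base=0):
    """Moves leaving one new pebble at base + 2**(k-1), helpers cleaned up."""
    if k == 1:
        return [("place", base + 1)]
    half = lone(k - 1, base)                 # pebble the midpoint checkpoint
    far = lone(k - 1, base + 2 ** (k - 2))   # advance from the midpoint
    undo = [("remove" if a == "place" else "place", p)
            for a, p in reversed(half)]      # replay in reverse: erase midpoint
    return half + far + undo

def reach(k, base=0):
    """Moves placing a pebble at base + 2**k - 1 using at most k pebbles."""
    if k == 1:
        return [("place", base + 1)]
    return lone(k, base) + reach(k - 1, base + 2 ** (k - 1))

for k in range(1, 8):
    moves = reach(k)
    final = simulate(moves, max_pebbles=k)
    assert max(final) == 2 ** k - 1          # reach grows exponentially in k...
    assert len(moves) == (3 ** k - 1) // 2   # ...at the cost of more moves
print("strategy verified for k = 1..7")
```

The uncompute step in `lone` is the heart of reversibility: intermediate results are erased by replaying the moves that created them in reverse, never by outright deletion.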

Reversible computation is the future.

> Reversible computation is the future

But will it ever be relevant to human beings? I mean, will it ever be relevant in however long we have, before superhuman AI emerges?

Quantum computing is a kind of reversible computing that already exists in limited form. But will classical reversible computing ever matter? We already have the phenomenon of "dark silicon", in which parts of the chip must go unused so it can cool down, and we have various proposals for "adiabatic computation" and "adiabatic logic" to push the boundaries of this, but it's unclear to me how much that overlaps with the theory of reversible computing. 

Joseph Van Name
So are you saying that AI will be efficient enough to replace humanity before reversible computing becomes practical? I do not believe this is likely, since it would require superhuman AI to be possible on inefficient hardware without making any use of reversible-friendly algorithms like backtracking. Any superhuman AI will be able to figure out reversible computation, so we should expect superhuman AI to be reversible. And quantum computing is good only for specific algorithms doing very specific tasks, while reversible computing will be good for all forms of computation. There is absolutely no reason for quantum computation and AGI to get a free pass while everyone suddenly becomes skeptical of reversible computation. Yes, we should be skeptical, but we should apply our skepticism equally to reversible computation, quantum computation, robotics, and AI. I do not believe that potentially profitable, energy-efficient reversible computation is that far off. After all, one can design cryptocurrency mining algorithms for reversible computing hardware; these mining algorithms can be made 99.9% reversible without the time/space overhead typically incurred by using reversible instead of irreversible computation. If we wanted to, we could make profitable reversible ASICs for cryptocurrency mining in the foreseeable future, but this will only work if people stop ignoring reversible computation. After all, the energy used for cryptocurrency mining is otherwise wasted, so we can use it to accelerate reversible computing development. If and when we use cryptocurrencies with mining algorithms designed for reversibility, the development of reversible computing hardware will not stop, and it will replace all computing hardware. But if you are afraid of reversible computing because it would make it that much easier for AI to take over, then you should invest in a cryptocurrency with a reversibility-resistant mining algorithm.
Mitchell_Porter
I once knew someone who had planned to create superintelligence by building a dedicated computer network on the moon. Then the Internet came along and he was able to dispense with the lunar step in his plan. Reversible computing seems like that - physically possible, but not at all a necessary step.  From where I stand, we're rushing towards superhuman AI right now, on the hardware we already have. If superhuman AI were somehow 30 years away, then yes, there might be time for your crypto bootstrap scheme to yield a new generation of reversible chips. But in a world that already contains GPT-4, I don't think we have that much time. 
Joseph Van Name
I agree that having a computer network on the moon is impractical. I cannot really see the purpose of putting a computer network on the moon unless it is part of a lunar station, and I certainly cannot see how that would result in a superintelligence. But I can certainly imagine how reversible computation will contribute to a superintelligence, since a superintelligence will eventually incorporate reversible computation; the only real question is whether AGI will come before reversible computation. AI is often hyped, and a lot of this hype happens for good reason. We are currently in an AI spring or AI summer where people are excited and worried about AI, and during an AI spring people may tend to overestimate the abilities of AI and its future progress. One reason some people may overestimate the future progress of AI is that we need to be cautious about AI, since it is a potentially very dangerous technology, and this caution is healthy. But previous AI springs were followed by AI winters. This time I do not believe we will have another AI winter, but we may have an AI fall where AI has a more proper amount of hype. The difference between previous AI winters and my predicted AI fall is that in previous AI winters there was a lot more room for progress with irreversible hardware, whereas we are now at a point where there is not much room left to improve irreversible hardware. Rest in peace, Gordon Moore (1929-2023). This means that to get AGI without reversible computation, there had better be very great algorithmic improvements. Therefore, on the hardware side, my predicted AI fall will see improvements that are not quite as great as they used to be. Are you sure you want to base your AGI mainly on software improvements and minor hardware improvements, along with hardware improvements that are orthogonal to the energy efficiency per logic-gate operation? If not, then we will also need reversible computation. So if we hav
Mitchell_Porter
Probably the scenario involved von Neumann machines too - a whole lunar industrial ecology of self-reproducing robots. This was someone from Russia in the first half of the 1990s, who grew up without Internet and with Earth as a geopolitical battlefield. Given that context, it makes visionary sense to imagine pursuing one's posthuman technolibertarian dreams in space. But he adjusted to the Internet era soon enough.  You may be aware that Robin Hanson and Eliezer Yudkowsky have debated a few times over differing scenarios for the AI future. One of the differences is that Robin envisages a kind of pluralism and gradualism, a society and an economy where humans and human uploads and autonomous AIs are interacting as peers for quite some time. On the other hand, Eliezer predicts that the AGI era yields a superintelligent agent quite rapidly, one which, in the words of Bill Joy, "doesn't need us".  I think an AGI using a crypto bootstrap to develop reversible hardware, really only makes sense in a future like Robin's. In Eliezer's scenario, the AI just directly appropriates whatever resources it needs for its plans. 
Joseph Van Name
It will probably be easier to make self-reproducing robots in a lab than on the moon. After all, in a laboratory you can control variables such as the composition of minerals, energy sources, and hazards much better than you can by sending the robots to the moon. But by the time we are able to make self-reproducing robots, we probably would have made reversible computers already. If your and Eliezer's predictions come true, you will need not only to get superhuman AGI running before we have energy-efficient reversible computation that is profitable for many purposes, but also for this superhuman AGI to be able to reproduce itself and take over the world without anyone noticing before it is too late. Are you sure that your superhuman AGI will be able to figure out how to take over the world without even having efficient reversible hardware in the first place? It is one thing for a superhuman AGI to take over the world without anyone being able to do anything about it, but it is another thing entirely for that superhuman AGI to begin with limited and inefficient hardware resources. P.S. You are also assuming that superhuman AGI will have no use for a currency. For this assumption to be reasonable, there must be only a few (like 3 or 4) instances of superhuman AGI, and they must all know each other. This also seems unlikely. Obtaining currency is one of those instrumentally convergent goals that all these superhuman AGIs would have.
Noosphere89
My guess is probably not, but the "probably" matters here, and it depends on when superhuman AI emerges and when reversible computers become practical. And in any case, you're responding to a different scenario than the one the answer was focusing on, since you added the condition of reversible computers appearing before AI takes all our jobs.
Joseph Van Name
Well, AI has already received a lot of funding, and a lot of progress has been made. But we are near the end of the roadmap for conventional irreversible CMOS technology: since we already have chips with features about 3 nm long, there is not much room left to shrink them. We are running out of improvements to irreversible computation. Maybe we see less progress in reversible computation because of a lack of funding and interest, not because reversible computing is somehow infeasible. If there were as much interest in reversible computing as there is in AI, the promise of energy-efficient reversible computing might seem much closer, and the problem actually solvable. I therefore do not think it is fair to compare reversible computing with AI this way, given the lack of interest in reversible computing.
2 comments

Did you click through from Paul's LW post to his blog? He gives a proof that a reversible computer can implement a PSPACE algorithm with only polynomially many erasures, and thus only polynomially much energy consumption, at the cost of running a little longer, hardly a noticeable difference compared to the exponential time required. But he also provides context which I suspect you need.

You say “reversible computation can implement PSPACE algorithms while conventional computers can only implement algorithms in the complexity class P.” This is not true in any interesting sense. What class a problem is in is a statement about how fast the space and time increase as the size of the problem increases. Whether a problem is feasible is a statement about how much time and space we’re willing to spend solving it. A sufficiently large problem in P is infeasible, while a sufficiently small problem in PSPACE is feasible. For example, playing chess on an n by n board is in PSPACE. But 8 is small enough that computer chess is perfectly feasible.
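To make that point concrete (an illustrative sketch; the step budget and instance sizes are made-up numbers, not from the comment):

```python
# Complexity class vs. feasibility: at a fixed step budget, a polynomial-time
# algorithm on a huge instance loses to an exponential-time algorithm on a
# small one.
BUDGET = 10 ** 18                # very roughly, a year on a fast machine

poly_steps = (10 ** 7) ** 3      # an n**3 algorithm (in P) at n = 10 million
expo_steps = 2 ** 20             # a 2**n algorithm at n = 20

assert poly_steps > BUDGET       # the large problem in P is infeasible
assert expo_steps < BUDGET       # the small exponential instance is easy
print(f"poly: {poly_steps:.0e} steps, expo: {expo_steps:.0e} steps")
```

Membership in P or PSPACE describes how cost scales with input size, while feasibility depends on the actual input size and budget at hand, which is why 8x8 computer chess is easy even though n-by-n chess is PSPACE-hard.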