No. You should think of reversible computers as quantitatively more efficient than normal computers, but basically the same thing. If you have polynomial time and energy then reversible computers still implement P.
My post observes that if the universe had a long enough lifetime, then a reversible computer would be able to compute PSPACE even if you only started out with polynomially much negentropy (whereas a normal computer would quickly burn up all the negentropy and leave you with a universe too hot to compute). So time and space and durability might be more important than you'd think relative to negentropy or free energy, and potentially the scale of computation done by our civilization will be much vaster than you would guess if you tried to do the entropy accounting.
Also see this underappreciated paper on "catalytic computing," which points out that you can do some useful computations using physical degrees of freedom even without knowing the initial states of those degrees of freedom, i.e. a reversible computer could theoretically use a giant disordered system to do useful work even without free energy to initialize it.
All that is known rigorously is that PSPACE contains P. But almost all complexity theorists believe PSPACE is strictly bigger than NP, and that NP is strictly bigger than P. (The most interesting exception might be Leslie Valiant, who has suggested that even P^#P = P. But he is a wild outlier.)
The proposition at chegg.com says nothing about P; it is about "reversible PSPACE", i.e. PSPACE when one is restricted to reversible computation. I believe this paper contains the original proof that reversible PSPACE = PSPACE.
I can't tell what @Douglas_Knight was thinking.
Right, the point is that reversible PSPACE computation appears physically realizable, while currently existing computers could not actually run for the exponential time necessary to solve PSPACE problems, because they would also require exponentially much (free) energy.
One should think of fully reversible computation as a restricted version of computation in which one is not allowed to delete any information at any point during the computation. This means that any reversible computation can be emulated by a classical irreversible computer without any computational complexity overhead, so the reversible computer will take at least as many steps to perform a calculation as an irreversible computer. It is unlikely that an irreversible computer will be able to solve a PSPACE-complete problem in polynomial time, and it is therefore unlikely that a reversible computer will be able to do so either, since that would imply that P = PSPACE.
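To make the "no deletion" restriction concrete, here is a toy sketch (my own illustration, not from the sources discussed in this thread) of Bennett's history trick: an irreversible update like AND destroys information because two different inputs can produce the same output, but it can be emulated reversibly by saving the would-be-erased value instead of deleting it, at the cost of extra space.

```python
def irreversible_and(x, y):
    # Not invertible: (x=0, y=0) and (x=1, y=0) both map to 0.
    return x & y

def reversible_and(x, y, history):
    # Save the value that an irreversible machine would erase.
    history.append(x)
    return x & y, y, history  # the full machine state stays invertible

def uncompute(z, y, history):
    # Recover x exactly; no information was ever deleted.
    x = history.pop()
    return x, y, history
```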
In practice, the purpose of reversible computation will be energy efficiency, since the energy efficiency of irreversible computation is limited by Landauer's limit. Landauer's principle states that it costs k*T*ln(2) energy for every bit of information that you delete; here k = 1.38*10^(-23) joules/kelvin is Boltzmann's constant, T is the temperature, and ln(2) = 0.69314... This means that deleting 1 gigabyte of information (2^33 bits), whether by overwriting it with something else or replacing it with zeros, costs k*T*ln(2)*2^33 joules. In practice, one should expect to spend about 100*k*T of energy for every reliable irreversible operation regardless of what kind of irreversible hardware one uses; this energy is needed to overcome thermal noise. Since we do not have perfect hardware, one should expect every irreversible operation to cost at least 1000*k*T (thousands of times k*T), so there is not much room left to improve our computing hardware before we will need energy-efficient, partially reversible hardware for high-performance computation.

There are currently no energy-efficient reversible computers, since the energy efficiency of computation is not yet close enough to Landauer's limit for reversible computation to have become necessary. But we are close enough to Landauer's limit that we should expect to see energy-efficient reversible computers in the foreseeable future (the first potential application of reversible computing will be cryptocurrency mining, especially if the mining algorithm is designed for reversibility). Research by Ralph Merkle and Eric Drexler, among others, indicates that (partially) reversible computation based on molecular mechanical machines could be several orders of magnitude more energy efficient than Landauer's limit (though manufacturing such machines seems very difficult), so we should expect reversible computation to eventually replace irreversible computation. There are other proposals for energy-efficient reversible computing hardware as well, so be prepared for reversible computation to be the future of computation.
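As a quick sanity check of those numbers, here is the arithmetic (a sketch; the constants are standard, and 300 K is an assumed room temperature):

```python
import math

k = 1.380649e-23      # Boltzmann constant, joules per kelvin
T = 300.0             # assumed room temperature, kelvin
bits = 2 ** 33        # 1 gigabyte = 8 * 2**30 bits

landauer_per_bit = k * T * math.log(2)  # ~2.9e-21 J per erased bit
print(f"Landauer cost of erasing 1 GB: {landauer_per_bit * bits:.2e} J")  # ~2.5e-11 J
print(f"Same erasure at 100*k*T per op: {100 * k * T * bits:.2e} J")      # ~3.6e-9 J
```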
The sources that you have cited actually say that a reversible computer can solve any problem in PSPACE in a polynomial amount of space without deleting any information. But we actually have much better time/space overheads for reversible computation: it turns out that an irreversible computation that takes time T and space S can be done reversibly in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))). These are very modest overheads, and they can be improved by trading off resources such as the amount of reversibility, time, space, and parallelism. The tradeoffs between these resources can easily and effectively be modeled by pebble games such as Charles Bennett's pebble game and its generalizations; Bennett originally proved these complexity bounds for reversible computation in his 1989 paper Time/Space Tradeoffs for Reversible Computation (already cited by the other answerer Mitchell_Porter). I really like Bennett's pebble game because it is simple enough to explain to an elementary school student, and a child playing it can sometimes discover strategies that capture the main idea Bennett used to prove these bounds. I highly recommend playing Bennett's pebble game with your kids or nieces and nephews; a sketch of the game and of Bennett's strategy follows below.
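For readers who want to try it, here is a minimal sketch of the pebble game and Bennett's recursive strategy (my own illustrative implementation; the function names and parameter choices are mine, not from the papers cited here). Positions 1 through k^n stand for checkpoints of a T-step computation; a pebble on position i means checkpoint i is currently held in memory; a pebble may be placed on or removed from position i only if position i-1 is pebbled (position 0, the input, is always pebbled).

```python
def bennett_strategy(start, n, k):
    """Moves that pebble position start + k**n, given a pebble at `start`,
    leaving no intermediate pebbles behind (Bennett's recursive strategy)."""
    if n == 0:
        return [("add", start + 1)]
    seg = k ** (n - 1)
    moves = []
    # Forward sweep: pebble the endpoint of each of the k sub-segments.
    for i in range(k):
        moves += bennett_strategy(start + i * seg, n - 1, k)
    # Cleanup: remove the k-1 intermediate pebbles by replaying each
    # sub-segment's moves in reverse (the reverse of a legal sequence is
    # legal, since the rule is the same for adding and removing).
    for i in range(k - 1, 0, -1):
        sub = bennett_strategy(start + (i - 1) * seg, n - 1, k)
        moves += [("remove" if op == "add" else "add", p)
                  for op, p in reversed(sub)]
    return moves

def check(moves, goal):
    """Verify every move is legal; return (move count, peak pebble count)."""
    pebbled, peak = {0}, 0
    for op, p in moves:
        assert p - 1 in pebbled, "need a pebble one step behind"
        if op == "add":
            assert p not in pebbled
            pebbled.add(p)
        else:
            pebbled.remove(p)
        peak = max(peak, len(pebbled) - 1)  # don't count the free input pebble
    assert goal in pebbled
    return len(moves), peak

k, n = 3, 4
t, s = check(bennett_strategy(0, n, k), k ** n)
print(f"{k**n} steps simulated with {t} moves and {s} pebbles")
# -> 81 steps simulated with 625 moves and 9 pebbles,
#    matching (2k-1)^n moves and n*(k-1)+1 pebbles.
```

Increasing k trades space for time: the move count (2k-1)^n approaches T = k^n as k grows, while the pebble count n*(k-1)+1 grows, which is exactly the time/space tradeoff in the bounds above.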
Since reversible computation can simulate any irreversible computation in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))), any computation that takes a polynomial amount of space on an irreversible machine can also be done in a polynomial amount of space on a reversible machine.
The 1995 paper An Analysis of Bennett's Pebble Game by E. Knill establishes an optimal strategy for Bennett's pebble game, and the number of steps that this optimal strategy takes follows a simple recursion formula. Knill was then able to give an asymptotic result for the time*space cost of performing a computation reversibly using Bennett's pebble game.
Reversible computation is the future.
But will it ever be relevant to human beings? I mean, will it ever be relevant in however long we have, before superhuman AI emerges?
Quantum computing is a kind of reversible computing that already exists in limited form. But will classical reversible computing ever matter? We already have the phenomenon of "dark silicon", in which parts of the chip must go unused so it can cool down, and we have various proposals for "adiabatic computation" and "adiabatic logic" to push the boundaries of this, but it's unclear to me how much that overlaps with the theory of reversible computing.
Did you click through from Paul's LW post to his blog? He gives a proof that a reversible computer can implement a PSPACE algorithm with only polynomially many erasures, and thus only polynomially much energy consumption, at the cost of running a little longer, which is hardly a noticeable difference compared to the exponential time already required. But he also provides context which I suspect you need.
You say “reversible computation can implement PSPACE algorithms while conventional computers can only implement algorithms in the complexity class P.” This is not true in any interesting sense. What class a problem is in is a statement about how fast the space and time requirements grow as the size of the problem increases. Whether a problem is feasible is a statement about how much time and space we’re willing to spend solving it. A sufficiently large problem in P is infeasible, while a sufficiently small instance of a PSPACE problem is feasible. For example, playing chess on an n by n board (with a polynomial bound on the length of the game) is in PSPACE. But 8 is small enough that computer chess is perfectly feasible.
Specifically, I am asking whether reversible computers let you run algorithms that solve PSPACE-complete problems, and in particular whether they can do so efficiently, ideally as efficiently as normal computers solve problems in the complexity class P.
I'm interested in this question because I've seen some sources saying that reversible computation can implement PSPACE algorithms while conventional computers can only implement algorithms in the complexity class P.
The sources I have are these:
https://www.lesswrong.com/posts/2BJBZh7Rvxr6GaLQs/negentropy-overrated#bHr5gobPhh5KLvxbA
and this Chegg source, which claims that the class of problems solvable by reversible Turing machines with a polynomial bound on space is equal to PSPACE.
https://www.chegg.com/homework-help/reversible-pspace-pspace-shown-problem-quantified-satisfiabi-chapter-3-problem-9p-solution-9781107002173-exc
I'd like any answer to this question, but ideally an answer would either show that reversible computation is able to implement PSPACE-complete algorithms as efficiently as normal computers implement algorithms in the complexity class P, or show that reversible computation can't do this and identify which complexity class reversible computation can efficiently implement.