One should think of fully reversible computation as a restricted form of computation in which one is never allowed to delete information at any point during the computation. This means that any reversible computation can be emulated by a classical irreversible computer with no computational complexity overhead, so a reversible computer will take at least as many steps to perform a calculation as an irreversible one. It is unlikely that an irreversible computer can solve a PSPACE-complete problem in polynomial time, and it is therefore just as unlikely that a reversible computer can do so, since that would imply that P=PSPACE.
In practice, the purpose of reversible computation is energy efficiency, since the energy efficiency of irreversible computation is limited by Landauer's limit. Landauer's principle states that it costs at least k*T*ln(2) energy for every bit of information that you delete. Here k=1.38*10^(-23) Joules/Kelvin is Boltzmann's constant, T is the temperature, and ln(2)=0.69314... This means that deleting 1 gigabyte (2^33 bits) of information, whether by overwriting it with something else or by resetting it to zeros, costs at least k*T*ln(2)*2^33 energy.

In practice, one should expect to spend about 100*k*T of energy for every reliable irreversible operation regardless of what kind of irreversible hardware one uses; this energy is needed to overcome thermal noise. And since we do not have perfect hardware, one should expect each irreversible operation to cost at least on the order of 1000*k*T, so there is not much room left for improving our computing hardware before we will need energy efficient, partially reversible hardware for high performance computation. There are currently no energy efficient reversible computers, since the energy efficiency of computation is not yet close enough to Landauer's limit for reversible computation to have become necessary, but it is close enough that we should expect to see energy efficient reversible computers in the foreseeable future (a first potential application of reversible computing would be cryptocurrency mining, especially if the mining algorithm were designed for reversibility). Research by Ralph Merkle and Eric Drexler among others indicates that (partially) reversible computation based on molecular mechanical machines could be several orders of magnitude more energy efficient than Landauer's limit (though manufacturing such machines seems very difficult), so we should expect reversible computation to eventually replace irreversible computation. There are other proposals for energy efficient reversible computing hardware as well, so be prepared for reversible computation to be the future of computation.
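To make these numbers concrete, here is a quick back-of-the-envelope calculation in Python of the Landauer bound for erasing one gigabyte; the room temperature value of 300 K is an assumption I chose for illustration.

```python
import math

k = 1.38e-23    # Boltzmann's constant, in Joules per Kelvin
T = 300.0       # temperature in Kelvin (room temperature, assumed for illustration)
bits = 2**33    # 1 gigabyte = 2^30 bytes = 2^33 bits

per_bit = k * T * math.log(2)   # Landauer bound: minimum energy to erase one bit
total = per_bit * bits          # minimum energy to erase 1 gigabyte

print(f"Landauer bound per bit at {T:.0f} K: {per_bit:.3e} J")
print(f"Landauer bound for erasing 1 GB:   {total:.3e} J")
# Prints roughly 2.9e-21 J per bit and 2.5e-11 J per gigabyte; real hardware
# spends at least ~100*k*T per irreversible operation, i.e. a few hundred
# times the Landauer bound.
```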
The sources that you have cited actually say that any problem in PSPACE can be solved by a reversible computer in a polynomial amount of space without deleting any information. But we actually have much better time/space overheads for reversible computation. It turns out that an irreversible computation that takes time T and space S can be done reversibly in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))) for any epsilon > 0. These are very modest time/space overheads, and they can be improved further by trading off resources such as the amount of reversibility, time, space, and parallel computation. The tradeoffs between these resources can easily and effectively be modeled by pebble games such as Charles Bennett's pebble game and its generalizations; Bennett originally proved these complexity bounds for reversible computation in his 1989 paper Time/Space Trade-offs for Reversible Computation (this paper was already cited by the other answerer Mitchell_Porter). I really like Bennett's pebble game because it is very simple and can easily be explained to an elementary school student; kids can easily play it, and they sometimes come up with strategies that capture the main idea Bennett used to prove the time/space overhead bounds for reversible computation (a sketch of the rules is below). I highly recommend playing Bennett's pebble game with your kids or nieces and nephews.
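Here is a minimal sketch of the game, assuming the usual formulation on a line of nodes where a node can only be pebbled or unpebbled while its predecessor carries a pebble; the class and node numbering are my own illustration, not notation from Bennett's paper.

```python
# Bennett's (reversible) pebble game, sketched on a line of n nodes.
# Rule: a pebble may be placed on OR removed from node i only while node i-1
# carries a pebble; node 1 may always be pebbled or unpebbled, since the
# input of the computation is always available.  The goal is to get a pebble
# onto node n using few pebbles (space) and few moves (time).

class PebbleGame:
    def __init__(self, n):
        self.n = n
        self.pebbled = set()
        self.moves = 0
        self.max_pebbles = 0

    def toggle(self, i):
        """Place a pebble on node i if it is empty, remove it if present."""
        if not (1 <= i <= self.n):
            raise ValueError("node out of range")
        if i > 1 and (i - 1) not in self.pebbled:
            raise ValueError(f"illegal move: node {i - 1} is not pebbled")
        self.pebbled ^= {i}
        self.moves += 1
        self.max_pebbles = max(self.max_pebbles, len(self.pebbled))

# With only 2 pebbles at a time you can already reach node 3:
game = PebbleGame(3)
for i in [1, 2, 1, 3]:      # pebble 1, pebble 2, unpebble 1, pebble 3
    game.toggle(i)
print(sorted(game.pebbled), game.moves, game.max_pebbles)   # [2, 3] 4 2
```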
Since reversible computation can simulate any irreversible computation in time O(T*(T/S)^epsilon) and space O(S*(1+log(T/S))), any computation that takes a polynomial amount of space on an irreversible machine also takes only a polynomial amount of space on a reversible machine.
The 1995 paper An Analysis of Bennett's Pebble Game by E. Knill gives an optimal strategy for Bennett's pebble game, and the number of steps that this optimal strategy takes satisfies a simple recursion formula. Knill was then able to give an asymptotic result for the time*space cost of performing a computation reversibly using Bennett's pebble game.
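For a rough feel for the kind of counting involved, here is a short Python sketch of how the number of segment simulations and checkpoints grow in Bennett's recursive checkpointing strategy; this is my own simplified accounting for illustration, not Knill's exact optimal recursion.

```python
import math

def bennett_costs(levels, k):
    """Cost of simulating k**levels segments of irreversible computation with
    Bennett's recursive checkpointing strategy (simplified accounting): each
    level cuts the computation into k segments and touches each segment
    roughly 2k - 1 times (compute forward, then uncompute the checkpoints)."""
    T = k ** levels                       # segments of irreversible computation
    moves = (2 * k - 1) ** levels         # reversible segment simulations (time)
    checkpoints = levels * (k - 1) + 1    # stored configurations (space)
    return T, moves, checkpoints

for k in (2, 4, 8, 16):
    T, moves, chk = bennett_costs(10, k)
    eps = math.log(2 * k - 1) / math.log(k) - 1   # time overhead exponent
    print(f"k={k:2d}: T={T:.2e}, moves={moves:.2e}, "
          f"checkpoints={chk:3d}, time ~ T^{1 + eps:.2f}")
# Increasing k pushes the time exponent toward 1 while adding more
# checkpoints per level -- the time/space tradeoff behind the
# O(T*(T/S)^epsilon) and O(S*(1+log(T/S))) bounds quoted above.
```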
Reversible computation is the future.
I once knew someone who had planned to create superintelligence by building a dedicated computer network on the moon. Then the Internet came along and he was able to dispense with the lunar step in his plan. Reversible computing seems like that - physically possible, but not at all a necessary step.
From where I stand, we're rushing towards superhuman AI right now, on the hardware we already have. If superhuman AI were somehow 30 years away, then yes, there might be time for your crypto bootstrap scheme to yield a new generation of reversible chips. But in a world that already contains GPT-4, I don't think we have that much time.