I see a bit of a disconnect here from historical algorithmic improvements. In the last five decades humans have created algorithms for solving many problems that had previously been intractable, and given orders of magnitude improvement on others. Many of these have come from math/compsci innovation that was not particularly hardware-limited, i.e. if you had the same (or a larger/smarter-on-average/better-organized) research community but with frozen primitive hardware many of the insights would have been found.
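To make this concrete, here is a minimal Python sketch (a toy example of my own, not from the discussion above): the same machine running two algorithms for the same problem. A purely mathematical insight — avoid recomputing subproblems — yields an orders-of-magnitude reduction in work with no hardware change at all.

```python
def fib_naive(n, counter):
    """Exponential-time recursion; counter[0] tracks the number of calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iterative(n):
    """Linear-time iteration: the same function, computed in n steps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

calls = [0]
slow = fib_naive(25, calls)
fast = fib_iterative(25)
assert slow == fast == 75025
print(calls[0])  # hundreds of thousands of recursive calls vs. 25 loop steps
```

The speedup here comes entirely from the algorithmic idea, which is the sense in which such insights could have been found on frozen primitive hardware.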
At the moment there are some problems for which we have near-optimal algorithms, and where we can show that further large performance improvements are out of reach. There are also problems where we are clearly far from the reachable frontier (whether that frontier is near-optimal performance, or just the best that can be done given resource constraints).
The huge swathe of skills wielded by humans but not by existing AI systems shows that, in terms of behavioral capabilities, there is a lot of room for growth that does not depend on outperforming the algorithms for which we already have near-optimal methods (or methods optimal under resource constraints). The fact that we are the first species on Earth to reach civilization-supporting levels of cognitive capacity suggests there is room to grow well beyond that in useful behavioral capacities (which may be produced by various practical strategies involving different computational problems) before hitting the frontier of feasibility. So long as enough domains have room to grow, they can translate into strategic advantage even if others are stable.
Also, I would note that linear performance gains on one measure can lead to much greater gains on another: linear improvements in predicting movements in financial markets translate to exponential wealth gains; gains in social manipulation or strategic acumen give disproportionate returns when they enable one to reliably outmaneuver one's opposition; linear gains in chess performance translate into an exponential drop-off in the number of potential human challengers; and so on.
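The financial-markets case can be sketched in a few lines (hypothetical numbers of my own choosing): a small per-period edge compounds multiplicatively, so a "linear" doubling of the edge opens an exponential gap in wealth.

```python
# Hypothetical numbers: expected fractional gain per trading period.
edge_small, edge_large = 0.01, 0.02   # doubling the edge is a linear change
periods = 500

# Compounding turns a per-period edge into a multiplicative wealth factor.
wealth_small = (1 + edge_small) ** periods
wealth_large = (1 + edge_large) ** periods

print(round(wealth_small))   # roughly 145x starting wealth
print(round(wealth_large))   # roughly 20,000x: a linear gain in the edge,
                             # an exponential gap in the outcome
```

This is why even modest but reliable improvements in predictive accuracy can translate into a large resource advantage over time.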
In the last five decades humans have created algorithms for solving many problems that had previously been intractable, and given orders of magnitude improvement on others. Many of these have come from math/compsci innovation that was not particularly hardware-limited, i.e. if you had the same (or a larger/smarter-on-average/better-organized) research community but with frozen primitive hardware many of the insights would have been found.
Yes. I agree strongly with this. One major thing we've found in the last few years is just that P turns out to be lar...
Link: johncarlosbaez.wordpress.com/2011/04/24/what-to-do/
His answer, as far as I can tell, seems to be that his Azimuth Project trumps working directly on friendly AI, or supporting it indirectly by earning and contributing money.
It seems that he and other people who understand all the arguments in favor of friendly AI, and yet decide to ignore them or disregard them as infeasible, are rationalizing.
I myself took a different route: rather than coming up with justifications for why it would be better to work on something else, I tried to prove to myself that the whole idea of AI going FOOM is somehow flawed.
I still have some doubts though. Is it really enough to observe that the arguments in favor of AI going FOOM are logically valid? When should one disregard tiny probabilities of vast utilities and wait for empirical evidence? Yet I think that compared to the alternatives the arguments in favor of friendly AI are water-tight.
The reason why I and other people seem reluctant to accept that it is rational to support friendly AI research is that the consequences are unbearable. Robin Hanson recently described this problem.
I believe that people like me feel that to fully accept the importance of friendly AI research would deprive us of the things we value and need.
I feel that I wouldn't be able to justify what I value on the grounds of needing such things. It feels as if I could and should overcome everything that isn't either directly contributing to FAI research or helping me earn more money to contribute.
Some of us value and need things that consume a lot of time... that's the problem.