What I meant to say is that, given what I know, it is unlikely enough to be true.
Well, given what you think you know. It is always the case, for everyone, that he or she estimates from the premises of what he or she thinks he or she knows. It can't be any other way.
Somewhere in the chain of logical conclusions there might be an error, or there might not. The same goes for the premises: they might contain an error, or they might not.
Saying "oh, I know you are wrong based on everything I stand for" is not good enough. You should explain to us why a breakthrough in self-optimization is as unlikely as you claim, just as the next person, who thinks it is quite likely, should explain their view. They do so.
P.S. I don't consider myself a "lesswronger" at all. I disagree too often and have no "site patriotism".
You should explain to us why a breakthrough in self-optimization is as unlikely as you claim, just as the next person, who thinks it is quite likely, should explain their view. They do so.
My comment was specifically aimed at the kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to be displaying. I asked other AI researchers about their work, including some who worked with them, and they disagree.
There are mainly two possibilities here: that it takes a single breakthrough, or that it takes a few breakthroughs, i.e. that it is a somewh...
...has finally been published.
Contents:
The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.
McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.