JamesAndrix comments on What I would like the SIAI to publish - Less Wrong
Thank you for taking the time to write this elaborate comment. I do agree with almost everything above, by the way. I just believe that your portrayal of the anti-FOOM crowd is a bit drastic. I don't think that people like Robin Hanson simply fall for the idea of human supremacy. Nor do I think that they avoid looking directly at the pro-FOOM arguments out of evasiveness; rather, they do not disagree with the arguments per se but with their likelihood, and they also consider the possibility that it would be more dangerous to impede AGI.
Very interesting, and quite compelling the way you put it. Thanks.
I'm a bit suspicious myself about whether the argument for strong self-improvement is as compelling as it sounds, though. Something you have to take into account is whether it is possible to predict that a transcendence will leave your goals intact, e.g. can you be sure you'll still care about bananas after going from chimphood to personhood?

Other arguments can also be weakened: we don't know 1.) that the fuzziness of our brains isn't a feature that allows us to stumble upon unknown unknowns (e.g. as opposed to autistic traits), or 2.) that our processing power is really so low after all, e.g. if you consider the importance of astrocytes, microtubules and possible quantum computational processes.

Further, it is in my opinion questionable to argue that it is easy to create an intelligence able to evolve a vast repertoire of heuristics, acquire vast amounts of knowledge about the universe, and dramatically improve its cognitive flexibility, and yet somehow really hard to limit the scope of what it cares about. I believe that the incentive necessary for a paperclip maximizer will have to be deliberately and carefully hardcoded or evolved, or otherwise it will simply be inactive. How else do you differentiate between something like a grey goo scenario and a paperclip maximizer, if not by its incentive?

I'm also not convinced that intelligence bears unbounded payoff. There are limits to what any kind of intelligence can do; a superhuman AI couldn't come up with faster-than-light propulsion or disprove Gödel's incompleteness theorems.

Another setback for all of the mentioned pathways to unfriendly AI is the need for enabling technologies like advanced nanotechnology. It is not clear how an AI could possibly improve itself without such technologies at hand. It won't be able to build new computational substrates, or even change its own substrate, without access to real-world advanced nanotechnology. That it could simply invent such technology and then acquire it through advanced social engineering is pretty far-fetched in my opinion. And what about taking over the Internet? It is not clear that the Internet would even be a sufficient substrate, or that it could provide the necessary resources.
Isn't that exactly the argument against non-proven AI values in the first place?
If you expect AI-chimp to be worried that AI-superchimp won't love bananas, then you should be very worried about AI-chimp.
I don't get what you're saying about the paperclipper.
It is a reason not to transcend if you are not sure that you'll still be you afterwards, i.e. that you'll keep your goals and values. I just wanted to point out that the argument runs in both directions: it is an argument for the fragility of values, and therefore the dangers of FOOMing, but also an argument for the difficulty that could be associated with radically transforming yourself.