CarlShulman comments on Wanted: "The AIs will need humans" arguments - Less Wrong Discussion
Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but it could conflict with goals like eliminating threats from humans as quickly as possible. There are also other modest pressures in the opposite direction: for example, concerns about the motives of alien trading partners or stage-managers could favor eliminating humanity, depending on the estimated distribution of alien motives.
I would expect human DNA, history, and brain-scans to be stored, but I am less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources available to live long lifespans, sustain relatively large populations, or produce lots of welfare would be far scarcer than in a scenario of human control.
The Butler citation is silly and shouldn't be bothered with. There are far more recent claims that the human brain can do hypercomputation, perhaps due to an immaterial mind or mystery physics that would be hard to duplicate outside of humans for a while, or even forever. Penrose is more recent. Selmer Bringsjord has recently argued that humans can do hypercomputation and that AI will therefore fail (as well as that P=NP; he has a whole cluster of out-of-the-computationalist-mainstream ideas). And there are many others arguing for mystical computational powers in human brains.
Thanks, this is very useful!
Do you remember where?
Moravec would be in Mind Children or Robot. Bostrom would be in one or more of his simulation pieces (I think under "naturalistic theology" in his original simulation argument paper).
There's a whole universe of resources out there. The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions. There will be an engineered future, with high probability. We are just the larval stage.
Star Wars takes place long, long ago...
Seconding Penrose. Depending on how broadly you want to cast your net, you could include a sampling of the anti-AI philosophy of mind literature, including Searle, maybe Ned Block, etc. They may not explicitly argue that AIs would keep humans around because we have some mental properties they lack, but you could use those folks' writings as the basis for such an argument.
In fact, I would be personally opposed to activating an allegedly friendly superintelligence if I thought it might forcibly upload everybody, due to uncertainty about whether consciousness would be preserved. I'm not confident that uploads wouldn't be conscious, but neither am I confident that they would be conscious.
Unfortunately, given the orthogonality thesis (why am I not finding the paper on that right now?), this does nothing for my confidence that an AI would not try to forcibly upload or simply exterminate humanity.