Kaj_Sotala comments on Wanted: "The AIs will need humans" arguments - Less Wrong Discussion
Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but it could be in conflict with things like eliminating threats from humans as quickly as possible, or avoiding other modest pressures in the opposite direction (e.g. concerns about the motives of alien trading partners or stage-managers could also favor eliminating humanity, depending on the estimated distribution of alien motives).
I would expect human DNA, history, and brain-scans to be stored, but would be less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources available to live out long lifespans, sustain relatively large populations, or produce lots of welfare would be far scarcer than in a scenario of human control.
The Butler citation is silly and shouldn't be bothered with. There are far more recent claims that the human brain can do hypercomputation, perhaps due to an immaterial mind or mystery physics, which would be hard to duplicate outside of humans for a while, or even forever. Penrose is more recent. Selmer Bringsjord has recently argued that humans can do hypercomputation, so AI will fail (as well as that P=NP; he has a whole cluster of out-of-the-computationalist-mainstream ideas). And there are many others arguing for mystical computational powers in human brains.
Thanks, this is very useful!
Do you remember where?
Moravec's would be in Mind Children or Robot. Bostrom's would be in one or more of his simulation pieces (I think under "naturalistic theology" in his original simulation argument paper).