CarlShulman comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Focusing on slow-developing uploads doesn't slow the development of other forms of AGI. Nor can uploads themselves be expected to turn into FAIs without developing the same clean theory of de novo FAI (people are crazy, and uploads are no exception; this is why we have existential risk in the first place, even without any uploads). It's very hard to incrementally improve uploads' intelligence without also affecting their preferences, so that won't happen in the first steps up from vanilla humans, and it pretty much can't happen unless we already have a good theory of preference, which we don't. We can't hold constant a concept (preference/values) that we don't understand (and being a magical concept, it's held only in the mind; any heuristics about it easily break when you push into new regions of possibility-space). So it's either (almost) no improvement (keep the humans as they are until there is an FAI theory), or value drift (until the uploads become intelligent/sane enough to stop and work on preserving their preferences, but by then those won't be human preferences); in the end you obtain a not-quite-Friendly AI.
The only way uploads might help on the path to FAI is by being faster (or even smarter/saner) FAI theorists, but in that same capacity they may accelerate the arrival of existential risks as well (especially faster uploads that are not smarter/saner). To apply uploads specifically to FAI, as opposed to the generation of more existential risk, they would have to be closely managed, which may be very hard or impossible once the technology gets out.
Emulations could also enable the creation of a singleton capable of globally balancing AI development speeds and dangers. That singleton could then take billions of subjective years to work on designing safe and beneficial AI. If designing safe AI is much, much harder than building AI at all, or if knowledge of AI and safe AI are tightly coupled, such a singleton might be the most likely route to a good outcome.
I agree, if you construct this upload aggregate and manage to ban other uses of the tech. This was reflected in the next sentence of my comment (maybe not too clearly):
Especially if WBE comes late (so there is a big hardware overhang), you wouldn't need much wall-clock time to spend loads of subjective years designing FAI. A small lead time could be enough. Of course, you'd have to be first and have significant influence on the project.
Edited for spelling.
I don't think this would be impossibly difficult. If an aggressive line of research is pursued, the first groups to create an upload will likely be using hardware that makes immediate application of the technology difficult. Commercialization probably wouldn't follow for years. That could give governments plenty of time to realize the potential of the technology and put a clamp on it.
At that point the most important thing is that the government (or whatever regulatory body has oversight of the upload aggregate) is well informed enough to realize what it is dealing with, and has sense enough to deal with it properly. To that end, one of the most important things we can do now is try to ensure that that regulatory body will be well informed enough when the day comes.