gwern comments on Will the ems save us from the robots? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hasn't Eliezer argued at length against ems being safer than AGIs? You should probably look up what he's already written.
Thinking that high-fidelity WBE, magically dropped in our laps, would be a big gain is quite different from thinking that pushing WBE development will make us safer. Many people who have considered these questions buy the first claim, but not the second, since the neuroscience needed for WBE can enable AGI first ("airplanes before ornithopters," etc).
Eliezer has argued that:
1) High-fidelity emulations of specific people give better odds of avoiding existential risk than a distribution over "other AI, Friendly or not."
2) If you push forward the enabling neuroscience and neuroimaging for brain emulation you're more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe, and a lot worse than high-fidelity emulations or Friendly AI.
3) Pushing forward the enabling technologies of WBE, in accelerating timelines, leaves less time for safety efforts to grow and work before AI, or for better information-gathering on which path to push.
What about pushing on neuroscience and neuroimaging hard enough so that, by the time there is enough computing power for brain-inspired AI or low-fidelity emulations, the technology for high-fidelity emulations will already be available? Then people would have little reason to pursue brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks).
Or what if we push on neuroimaging alone hard enough so that, by the time neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won't be tempted to use low-fidelity scans?
How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question, it's hard to tell from the outside.)
I would think that brain-inspired AI would use less hardware (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).
Different relative weightings of imaging, comp neurosci, and hardware would seem to give different probability distributions over brain-inspired AI, lo-fi WBE, and hi-fi WBE, but I don't see a likely track that goes in the direction of "probably WBE" without a huge (non-competitive) willingness to hold back on the part of future developers.
Of the three, neuroimaging seems most attractive to push (to me, Robin might say it's the worst because of more abrupt/unequal transitions), but that doesn't mean one should push any of them.
A number of person-months, but not person-years.
Good point, but brain-inspired AI may not be feasible (within a relevant time frame), because simulating a bunch of neurons may not get you to human-level general intelligence without either detailed information from a brain scan or an impractically huge amount of trial and error. It seems to me that P(unfriendly de novo AI is feasible | FAI is feasible) is near 1, whereas P(neuromorphic AI is feasible | hi-fi WBE is feasible) is maybe 0.5. Has this been considered?
Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What's the rationale for pushing decision theory but not neuroimaging?
I guess both of us think abrupt/unequal transitions are better than Robin's Malthusian scenario, but I'm not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions. I'm curious what the reasoning is.
Yes. You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc. Those paths would lose the chance at using humans with pre-selected, tested, and trained skills and motivations as WBE templates (who could be allowed relatively free rein in an institutional framework of mutual regulation more easily).
As I understand it the thought is that an AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought to also have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI vs boost to harmful AI. It is also a problem that can be used to signal technical chops, the possibility of progress, and for potential FAI researchers to practice on.
Well, there are conflicting effects for abruptness and different kinds of inequality. If neuroimaging is solid, with many scanned brains, then when the computational neuroscience is solved one can use existing data rather than embarking on a large industrial brain-slicing and analysis project, during which time players could foresee the future and negotiate. So more room for a sudden ramp-up, or for one group or country getting far ahead. On the other hand, a neuroimaging bottleneck could mean fewer available WBE templates, and so fewer getting to participate in the early population explosion.
Here's Robin's post on the subject, which leaves his views more ambiguous:
There seems a reasonable chance that none of these will FOOM into a negative Singularity before we get hi-fi WBE (e.g., if lo-fi WBE are not smart/sane enough to hide their insanity from human overseers and quickly improve themselves or build powerful AGI), especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.
This argument can't be right and complete, since it makes no reference at all to WBE, which has to be an important strategic consideration. You seem to be answering the question "If we had to push for FAI directly, how should we do it?" which is not what I asked.
This seems to me likely to be very hard, without something like a singleton, or a project with a massive lead over its competitors that can take its time and is willing to do so despite the strangeness and difficulty of the problem, competitive pressures, etc.
A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented. The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.
If you were convinced that the growth of the AI risk research community, and a closed FAI research team, were of near-zero value, and that decision theory of the sort people have published is likely to be a major factor for building AGI, the argument would not go through. But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
I don't understand why you say that. Wouldn't safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the likelihood that, by the time cell modeling and computing hardware let us do brain-like simulations, neuroimaging still won't be ready for hi-fi scanning, leaving lo-fi simulations as the only projects that can proceed.
It may well be highly targeted, but still a bad idea. For example, suppose pushing decision theory raises the probability of FAI to 10x (compared to not pushing decision theory), and the probability of UFAI to 1.1x, but the base probability of FAI is too small for pushing decision theory to be a net benefit. Conversely, pushing neuroimaging may help safety-oriented WBE projects only slightly more than non-safety-oriented, but still worth doing.
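To make the tradeoff concrete, here is a minimal sketch with made-up numbers (all probabilities and multipliers are hypothetical, purely for illustration): when the base rate of FAI is tiny, even a large relative boost to P(FAI) can be outweighed in absolute terms by a small relative boost to P(UFAI).

```python
# All figures are hypothetical, chosen only to illustrate the argument.
p_fai_base = 0.001   # assumed base probability of FAI succeeding
p_ufai_base = 0.5    # assumed base probability of UFAI arising

fai_boost = 10.0     # pushing decision theory multiplies P(FAI) by 10x
ufai_boost = 1.1     # ...and P(UFAI) by only 1.1x

delta_fai = p_fai_base * (fai_boost - 1)     # absolute gain: +0.009
delta_ufai = p_ufai_base * (ufai_boost - 1)  # absolute loss: +0.05

# The small relative increase in UFAI risk exceeds the large relative
# gain in FAI probability, so the push is a net loss on these numbers.
net_benefit = round(delta_fai - delta_ufai, 3)
print(net_benefit)
```

The converse case in the comment works the same way: a technology that helps safety-oriented projects only slightly more than others can still be net positive if the base rates favor it.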
I certainly agree with that, but I don't understand why SIAI isn't demanding a similar level of analysis before pushing decision theory.
In the race to first AI/WBE, developing a technology privately gives the developer a speed advantage, ceteris paribus. The demand for hi-fi WBE rather than lo-fi WBE or brain-inspired AI is a disadvantage, which could be somewhat reduced with varying technological ensembles.
As I said earlier, if you think there is ~0 chance of an FAI research program leading to safe AI, and that decision theory of the sort folk have been working on plays a central role in AI (a 10% bonus would be pretty central), you would come to different conclusions re the tradeoffs on decision theory. Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.
Most have seemed to think that decision theory is a very small piece of the AGI picture. I suggest further hashing out your reasons for your estimate with the other decision theory folk in the research group and Eliezer.
Was there something written up on this work? If not, I think it'd be worth spending a couple of days to write up a report or blog post so others who want to think about these problems don't have to start from scratch.
It looks to me as though Robin would prefer computing power to mature last. Neuroimaging research now could help bring that about.
http://www.overcomingbias.com/2009/11/bad-emulation-advance.html
Ems seem quite likely to be safer than AGIs, since they start out sharing values with humans. They also decrease the likelihood of a singleton.
Uploads in particular mean that current humans can run on digital substrate, thereby ameliorating one of the principal causes of power imbalance between AGIs and humans.
One thing that humans commonly value is exerting power/influence over other humans.
Safer for whom? I am not particularly convinced that a whole-brain emulation wouldn't still be a human being, even if under alien circumstances relative to those of us alive today.
Safer for everyone else. Humans aren't Friendly.
Fair enough. But then, I am of the opinion that so long as the cultural/psychological inheritor of humanity can itself be reliably deemed "human", I'm not much concerned about what happens to meatspace humanity -- at least, as compared to other forms of concerns. Would it suck for me to be converted to computronium by our evil WBE overlords? Sure. But at least those overlords would be human.
He may be a murderous despot, but he's your murderous despot, eh?
I'm a sentimental guy.