A third possibility is that AGI becomes the next big scare.
There's always a market for the next big scare, and for people who'll claim that putting them in charge will save us from it.
Having the evil machines take over has always been a scare. When AI gets more embodied and starts working together autonomously, people will be more likely to freak, IMO.
Getting beaten at Jeopardy is one thing; watching a fleet of autonomous quadcopters doing their thing is another. It made me a little nervous, and I'm quite pro-AI. When people see machines that seem alive, that think, communicate among themselves, and cooperate in action, many will freak, and others will be there to channel and make use of that fear.
That's where I disagree with EY. He's right that a smarter talking box will likely just be seen as a nonthreatening curiosity. Watson 2.0, big deal. But embodied intelligent things that communicate and take concerted action will press our base primate "threatening tribe" buttons.
"Her" would have had a very different feel if all those AI operating systems had bodies, and got together in their own parallel and much more quickly advancing society. Kurzweil is right in pointing out that with such advanced AI, Samantha could certainly have a body. We'll be seeing embodied AI well before any human level of AI. That will be enough for a lot of people to get their freak out on.
Self-driving cars are already inspiring discussion of AI ethics in mainstream media.
Driving is something that most people in the developed world feel familiar with — even if they don't themselves drive a car or truck, they interact with people who do. They are aware of the consequences of collisions, traffic jams, road rage, trucker or cabdriver strikes, and other failures of cooperation on the road. The kinds of moral judgments involved in driving are familiar to most people — in a way that (say) operating a factory or manipulating a stock market are not.
I don't mean to imply that most people make good moral judgments about driving — or that they will reach conclusions about self-driving cars that an AI-aware consequentialist would agree with. But they will feel entitled to opinions on the issue, rather than writing it off as something for programmers or lawyers to figure out. And some people who otherwise (i.e. in the absence of self-driving cars) would never have engaged with the issue will actually become more aware of it.
So yeah, people will become more and more aware of AI ethics. It's already happening.
Self-driving cars will also inevitably catalyze discussion of the economic mora...
I don't think the linked PCP thing is a great example. Yes, the first time someone seriously writes an algorithm to do X it typically represents a big speedup on X. The prediction of the "progress is continuous" hypothesis is that the first time someone writes an algorithm to do X, it won't be very economically important---otherwise someone would have done it sooner---and this example conforms to that trend pretty well.
The other issue seems closer to relevant; mathematical problems do go from being "unsolved" to "solved" with ...
As you noted on your blog Elon Musk is concerned about unfriendly AI and from his comments about how escaping to mars won't be a solution because "The A.I. will chase us there pretty quickly" he might well share MIRI's fear that the AI will seek to capture all of the free energy of the universe. Peter Thiel, a major financial supporter of yours, probably also has this fear.
If after event W happens, Elon Musk, Peter Thiel, and a few of their peers see the truth of proposition X and decide that they and everything they care about will perish if po...
I would believe that as soon as AGI becomes near (if it ever does), predictions by experts will start to converge to some fixed date, rather than the usual "15-20 years in the future".
I've posted about this before, but there are many aspects of AI safety that we can research much more effectively once strong AI is nearer to realization. If people today say "AI could be a risk but it would be hard to get a good ROI on research dollars invested in AI safety today", I'm inclined to agree.
Therefore, it won't simply be interest in X-risk, but the feasibility of concrete research plans for reducing it, that helps advance any AI safety agenda.
(Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")
Apologies for asking an off-topic question that has certainly been discussed somewhere before, but if advanced decision theories are logically superior, then they are in some sense universal: a large subspace of mindspace will adopt them once those minds become intelligent enough ("Three Worlds Collide" seems to indicate that this is EY's opinion, at least for minds that evolved). If so, then even a paperclip maxi...
policy-makers and research funders will begin to respond to the AGI safety challenge, just like they began to respond to... synbio developments in the 2010s.
What are we referring to here? As in, what synbio developments and how did they respond to it?
Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!"
We became better at constructing nuclear power plants, and nuclear bombs became cleaner. What critics are saying is that as AI advances, our control over it advances as well. In other words, the better AI becomes, the better we become at making AI work as expected. Because if AI became increasingly unreliable as its power grew, AI would cease to be a commercially viable product.
A related question:
Are any governments working on AI projects? Surely the idea has occurred to a lot of military planners and spy agencies that AI would be an extremely potent weapon. What would the world be like if AI were first developed secretly in a government facility in Maryland?
And have those tricky philosophical nuances been solved? Can there be reliable predictions of AI unfriendliness without such a solution?
AGI is only 1-2 decades away, or 2-5 years if a well-funded project started now. I don't think that is enough time for a meaningful reaction by society, even just its upper echelons.
I would be very concerned about the "out of nowhere" outcome, especially now that the AI winter has thawed. We have the tools, and we have the technology to do AGI now. Why assume that it is decades away?
Why do you think it's so near? I don't see many others taking that position even among those who are already concerned about AGI (like around here).
You rate your ability to predict AI above that of AI researchers? It seems to me that at best, I, as an independent observer, should give your opinion about as much weight as any AI researcher's. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)
If several people follow this procedure, I would expect to get a better estimate from averaging their results than trying it out for myself.
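To illustrate the aggregation point (a toy sketch with made-up numbers, not data from the thread): when forecasters' errors are independent, over- and under-estimates partly cancel in the average, so the average tends to beat the typical individual estimate.

```python
# Toy illustration: five hypothetical forecasts (in years) for some
# milestone, and an assumed true value of 20. All numbers are made up.
forecasts = [12.0, 25.0, 18.0, 31.0, 15.0]
true_value = 20.0

average = sum(forecasts) / len(forecasts)  # 20.2

# Independent errors partly cancel in the average, so its error is
# much smaller than the typical individual error.
error_of_average = abs(average - true_value)                      # ~0.2
mean_individual_error = sum(abs(f - true_value)
                            for f in forecasts) / len(forecasts)  # 6.2
```

Of course this only works to the extent that the individual errors really are independent; if everyone anchors on the same "15-20 years" convention, averaging buys little.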
I'm a computer scientist who has been in a machine learning and natural language processing PhD program quite recently. I have an in-depth knowledge of machine learning, NLP and text mining.
In particular, I know that the broadest existing real-world knowledge bases (e.g. Google's Knowledge Graph) are built on a hodge-podge of text parsing and logical inference techniques. These systems can be huge in scale and very useful. They reveal that a lot of apparently deep knowledge is actually quite shallow, but they also reveal the difficulty of dealing with knowledge that genuinely is deep, by which I mean knowledge that relies on complex models of the world.
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
The pitfall with private research is that nobody sees your work, so there's nobody to criticize it or tell you whether your assessment that "the issues are solvable or solved but not yet integrated" is incorrect. Or, if it is correct and I'm dead wrong in my pessimism, nobody can know that either. Why would publishing it be dangerous? (Yes, I get the general "AGI can be dangerous" argument, but what would be the actual marginal danger of publishing, versus not publishing and being left out of important conversations when they happen, assuming you've got something?)
I have as much credibility as Eliezer Yudkowsky in that regard
That is, not very much.
But at least Eliezer Yudkowsky and pals have made an effort to publish arguments for their position, even if they haven't published in peer-reviewed journals or conferences (except some philosophical "special issue" volumes, IIRC).
Your "Trust me, I'm a computer scientist and I've fiddled with OpenCog in my basement but I can't show you my work because humans not ready for it" gives you even less credibility.
This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too...
In UDT you shouldn't consider yourself to be just one of your clones. There is no probability measure on the set of your clones: you are all of them simultaneously. CDT is difficult to apply to situations with clones, unless you supplement it with some anthropic hypothesis like SIA or SSA. If you use an anthropic hypothesis, Boltzmann brains will still get you in trouble. In fact, some cosmologists are trying to find models without Boltzmann brains precisely to avoid the conclusion that you are likely to be a Boltzmann brain (although UDT shows the effort is misguided). The problem with UDT and Gödel incompleteness is a separate issue which has no relation to Boltzmann brains.
I was meaning in the sense of measure theory. I've seen people discussing maximising the measure of a utility function over all future Everett branches...
I'm not sure what you mean here. Sets have measure, not functions.
I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn't intuitively strike me as a real utility function, as it's adding apples and oranges so to speak, but perhaps it's useful as a tool?
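A minimal sketch of what that combination might look like. Everything here is my own illustrative assumption (the function name, the tanh-shaped satisfaction bonus, and all parameter values), not a construction proposed in the thread: an exponentially discounted sum of rewards plus a bounded satisfaction term, so the total stays finite.

```python
import math

def combined_value(rewards, discount=0.9, target=10.0, weight=1.0):
    """Toy value function: discounted sum of rewards plus a bounded
    'satisficing' bonus that saturates as the raw total passes `target`.
    All names and parameters are illustrative assumptions."""
    discounted = sum(r * discount**t for t, r in enumerate(rewards))
    # tanh keeps the bonus in [0, weight), so it cannot cause divergence.
    bonus = weight * math.tanh(sum(rewards) / target)
    return discounted + bonus
```

Because the discounted sum converges (for discount < 1) and the bonus is capped by `weight`, even an arbitrarily long reward stream yields a finite value; whether the sum of two incommensurable terms deserves to be called a utility function is, as noted, another question.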
Well, you still got all of the abovementioned problems except divergence.
...actually I was talking about alpha-point computation which I think may involve the creation of daughter universes inside black holes.
Hmm, baby universes are a possibility to consider. I thought the case for them is rather weak but a quick search revealed this. Regarding performing an infinite number of computations I'm pretty sure it doesn't work.
CDT is difficult to apply to situations with clones, unless you supplement it by some anthropic hypothesis like SIA or SSA.
While I can see why there is intuitive cause to abandon the "I am person #2, therefore there are probably not 100 people" reasoning, abandoning "There are 100 clones, therefore I'm probably not clone #1" seems to be simply abandoning probability theory altogether, and I'm certainly not willing to bite that bullet.
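For concreteness, here is the arithmetic behind both inferences as a toy calculation. The uniform self-sampling step is exactly the assumption under dispute, and the 50/50 prior and the population sizes are made up for illustration:

```python
from fractions import Fraction

# Second inference: given that 100 clones definitely exist, uniform
# self-sampling (the disputed assumption) says each is equally likely
# to be "you".
p_clone_1 = Fraction(1, 100)   # probability you are clone #1
p_not_clone_1 = 1 - p_clone_1  # 99/100

# First inference ("I am person #2, so there are probably not 100
# people"): a Bayes update with a made-up 50/50 prior over world sizes.
prior_small, prior_large = Fraction(1, 2), Fraction(1, 2)  # 2 vs 100 people
lik_small = Fraction(1, 2)    # chance of being person #2 among 2 people
lik_large = Fraction(1, 100)  # chance of being person #2 among 100 people

posterior_small = (prior_small * lik_small) / (
    prior_small * lik_small + prior_large * lik_large)  # 50/51
```

Both steps use the same sampling assumption, which is why rejecting the first while keeping the second takes some argument.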
Actually, looking back through the conversation, I'm also confused as to how time discounting helps in...
Cross-posted from my blog.
Yudkowsky writes:
My own projection goes more like this:
At least one clear difference between my projection and Yudkowsky's is that I expect AI-expert performance on the problem to improve substantially as a greater fraction of elite AI scientists begin to think about the issue in Near mode rather than Far mode.
As a friend of mine suggested recently, current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 80s. Except, I expect elite acknowledgement of the AGI safety challenge to spread more slowly than it did for global warming or nuclear security, because AGI is tougher to forecast in general, and involves trickier philosophical nuances. (Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")
Still, there is a worryingly non-negligible chance that AGI explodes "out of nowhere." Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is sped up by 20 orders of magnitude with a single breakthrough.