The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'
Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:
(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks
(3) Google: A Neural Image Caption Generator
(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions
[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.
What are you worried he might do?
If he believes what he's said, he should really throw lots of money at FHI and MIRI. Such an action would be helpful at best, harmless at worst.
What are you worried he might do?
Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.
If he believes what he's said, he should really throw lots of money at FHI and MIRI.
Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines is going to happen within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You'll need people like Musk, who can directly contact and convince politicians, or stir up the fears of the general public in order to force politicians to notice and take action.
I wonder what Musk's reaction would have been had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982, or had he witnessed Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.
The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.
If he is seriously convinced that doom might be no more than 5 years away, then I share his worries about what an agent with massive resources at its disposal might do in order to protect itself. Just that in my case this agent is called Elon Musk.
A chiropractor?
Am I delusional, or am I correct in thinking that chiropractors are practitioners of something a little above bloodletting and well below actual modern medicine?
I mean, there's always the argument that you should do whatever it is that makes pain go away, but is there a reason to have a chiropractor do this rather than a medical professional?
I don't want to diss this post which seems quite good, I just wanted to highlight this point.
My first google result led me to this: http://www.sciencebasedmedicine.org/science-and-chiropractic/
However, I haven't done any real research on this subject. The idea that chiropractors are practicing sham medicine is just kind of background knowledge that I'm not really sure where I picked up.
A chiropractor?
Am I delusional, or am I correct in thinking that chiropractors are practitioners of something a little above bloodletting and well below actual modern medicine?
...
However, I haven't done any real research on this subject. The idea that chiropractors are practicing sham medicine is just kind of background knowledge that I'm not really sure where I picked up.
Same for me. I was a little shocked to read that someone on LessWrong goes to a chiropractor. But for me this attitude is also based on something I considered common knowledge, like astrology being pseudoscience. And the Wikipedia article on chiropractic did not change this attitude much.
Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, [...]
Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)
Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)
Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. With the caveat that these people also seem much less extreme in their views on AI risks.
I certainly do not want to discourage researchers from being cautious about AI. But what currently happens seems to be the formation of a loose movement of people who reinforce their extreme beliefs about AI by mutual reassurance.
There are whole books now about this topic. What's missing are the empirical or mathematical foundations: the literature consists of non-rigorous arguments that are at best internally consistent.
So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings, then the whole endeavour is suspect. And currently I don't see them making any predictions that are less vague or more useful than the second coming of Jesus Christ: there will be an intelligence explosion by a singleton, with a handful of known characteristics revealed to us by Omohundro and repeated by Bostrom. That's not enough!
Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars
You’re confusing people’s goals with their expectations.
The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the Sequences has a hard time taking seriously.
Have you read Basic AI Drives? I remember reading it when it was posted on boingboing.net, way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn’t take this to be evidence against the claims in that paper. Do you have some convincing counterarguments?
Have you read Basic AI Drives? I remember reading it when it was posted on boingboing.net, way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true.
I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitles someone to demonize a whole field?
The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don't just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power...you don't do that if all you have are arguments that you personally find convincing. What you do is hard empirical science, in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted.
I am aware of many of the tactics that the Sequences employ to dismiss the above paragraph, tactics such as reversing the burden of proof, conjecturing arbitrary amounts of expected utility, etc. All of these tactics are suspect.
Do you have some convincing counterarguments?
Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon.
Mostly, the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all the implicit assumptions you make turn out to be true, but which might actually be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than they are in the context of a gradual development of tools that become more autonomous due to their increased ability to understand and do what humans mean.
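For concreteness, the kind of agent the AI-drives argument assumes can be sketched as a toy expected-utility maximizer. This is my own illustrative sketch, not code from any paper under discussion; the `actions`, `outcome_model`, and `utility` names are hypothetical:

```python
def expected_utility(action, outcome_model, utility):
    """Average the utility of an action over its possible outcomes,
    weighted by probability."""
    return sum(p * utility(o) for o, p in outcome_model(action))

def choose_action(actions, outcome_model, utility):
    """A toy expected-utility maximizer: always pick the action
    with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

# Hypothetical example: an agent whose utility function rewards
# having more resources (the classic AI-drives scenario).
actions = ["acquire_resources", "idle"]

def outcome_model(action):
    # (outcome, probability) pairs; numbers are made up for illustration.
    if action == "acquire_resources":
        return [("more_resources", 0.9), ("no_change", 0.1)]
    return [("no_change", 1.0)]

def utility(outcome):
    return {"more_resources": 10.0, "no_change": 0.0}[outcome]

print(choose_action(actions, outcome_model, utility))  # acquire_resources
```

The point of the sketch is that resource acquisition falls out of bare argmax-over-expected-utility; whether real AI systems will ever be structured this way is exactly the assumption in dispute above.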
Dale Carrico mocks Musk:
http://amormundi.blogspot.com/2014/10/summoning-demon-robot-cultist-elon-musk.html
Of course, Elon Musk has built real companies which make real stuff. Even The Atlantic magazine admits that:
Musk's accomplishments don't necessarily make him an expert on the demonology of AIs. But his track record suggests that he has a better-informed and better-organized way of thinking about the potential of technology than Carrico does.
Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars. Or something along these lines...
The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the Sequences has a hard time taking seriously.
You believe he's calling for the execution, imprisonment or other punishment of AI researchers? I doubt it.
So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?
What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.
He could have used his influence and reputation to directly contact AI researchers, or, e.g., to hold a quarterly conference about risks from AI. He could have talked to policymakers about how to ensure safety while promoting the positive aspects. There is a lot you can do. But making crazy public statements about summoning demons and comparing AI to nukes is completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.
I doubt that he is that stupid. But I do believe that certain people, if they were to seriously believe in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that, e.g., some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.
The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily or decide to use force based on insufficient evidence (evidence such as claims made by Musk).
ETA: Just take those people who destroy GMO test fields. Musk won't do something like that. But other people, who would commit such acts, might be inspired by his remarks.