Peter Thiel: So you’re both thinking it will all fundamentally work out.
Scott Brown: Yes, but not in a wishful thinking way. We need to treat our work with the reverence you’d give to building bombs or super-viruses. At the same time, I don’t think hard takeoff scenarios like Skynet are likely. We’ll start with big gains in a few areas, society will adjust, and the process will repeat.
I don't believe anyone working on AI is actually treating it that way. I do hope, however, that whenever there are signs of a possible breakthrough, researchers will stop, assess what they have very carefully, and build a lot of safety features before doing any more development. Most important of all, I hope that whoever makes the key discoveries does not publish their results in a way that would enable more reckless groups to copy them.
I expect the military and megacorps will be the biggest advocates of closed source machine intelligence software.
That's what happened when the government tried to keep cryptography out of citizens' hands, anyway.
Such efforts are ultimately futile, but they do sometimes act to slow progress down - thereby helping those with early access attain their own goals.
I didn't say that researchers should publish binaries without source code, I said they should hold off on publishing at all. This isn't about open vs. closed source.
Open source is about publishing the code (and allowing it to be reused). You're talking about not publishing the code. Plenty of software companies don't publish binaries (e.g. Google, Facebook). Binaries or no, it's not open source if you don't even publish the code.
Nevertheless, when you have the binary, you stand a chance of reverse-engineering it. If you broadcast such a binary, you have a guaranteed leak. At least when you don't publish at all, you stand a chance at actual secrecy. (Pretty unlikely, though, if too many people are involved.)
In software development, secrecy is often undesirable: nobody trusts you; nobody will work with you; nobody can help you - you are pretty screwed. Hence all the OSS in the world's infrastructure these days.
So, {Possible AI} > {Evolved Intelligence} > {Human Intelligence}.
What about {AI practically discoverable/inventable by humans}? This could be an even smaller set than {Human Intelligence}. If an AI in that set is of a much higher order of intelligence than {Human Intelligence}, the argument goes, it would build smarter and smarter AI. But how do we know what's in that set is likely to be of a higher order in the first place?
I guess I'd like to know more about {AI practically discoverable/inventable by humans + non-sentient computers in the current generation}. Is there a compelling reason to believe this set is quite large, or quite small?
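One way to lay out the sets being compared (this is just my restatement of the notation above plus the open question, not a claim about the sets' actual sizes; the symbol D is introduced here for convenience):

```latex
\{\text{Human Intelligence}\} \subsetneq \{\text{Evolved Intelligence}\} \subsetneq \{\text{Possible AI}\}

D := \{\text{AI practically discoverable/inventable by humans + current non-sentient computers}\}

\text{Open question:}\quad D \cap \{\, x \in \text{Possible AI} : \mathrm{intelligence}(x) \gg \text{human level} \,\} \neq \emptyset \;?
```

The recursive self-improvement argument seems to depend on that last intersection being non-empty: the first AIs we can actually build must already be smart enough to build smarter successors.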
In particular, there is this quote:
The community and class of algorithms we’re using is fairly well defined, so we think we have a good sense of the competitive and technological landscape. There are probably something like 200—so, to be conservative, let’s say 2000—people out there with the skills and enthusiasm to be able to execute what we’re going after. But are they all tackling the exact same problems we are, and in the same way? That seems really unlikely.
Somehow the diversity that could be generated by 2000, 20000, or even 200000 researchers, presumably working in project teams of a few or a dozen, seems to be much smaller than the evolutionary diversity generated by a population of 10 billion Homo sapiens. (Though their designs may well span a much larger "volume" of design space, only relatively few points within that volume would actually be represented.)
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon. Think mold colonies growing rapidly from a small handful of spores. Also, remember bootstrapping. It's not just the AIs that can be built by a few thousand humans (plus all the disparate fields they draw on). It's all the AIs that can be built by those AIs, and on and on, ad infinitum.
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon.
You are essentially defining "successful designs" as such. And what we know about evolution is strong supporting evidence for this.
What makes you think the first generation of AI will have all of those qualities? What makes you think the first gen AIs will be useful for building more AIs?
The first biological replicators on earth would be considered non-viable clunky disasters by today's standards.
The only way we can have "Friendly AI" beyond the first generation is if such entities are part of a larger "ecosystem" and face economic, group-dynamic, and evolutionary pressures that motivate (and "motivate") them to stay that way.
Perhaps the key to "Friendly AI" is going to be competitive augmentation of Human Intelligence.
In light of the discussion, he seems to be hedging his bets in this area. I'm not sure that's the right strategy from the x-risk perspective. At the very least it seems inconsistent.
Only if he thinks he can only weakly affect outcomes, or that he can exert a large amount of control as the evidence starts coming in.
Remember, he's playing an iterated game. So, if we assume that right now he has very little information about which area is the most important to invest in or which areas are most likely to produce the best return, playing a wider distribution in order to gain information and maximize the utility of later rounds of donations/investments seems rational.
I remember reading, on the topic of optimal charity, that it's only rational to select a single cause to donate to... until the point of giving enough money to noticeably change the marginal utility of each additional dollar. (Thiel has that much money, of course.) This information-gathering strategy could be an additional reason for spreading donations at large scale, if it hasn't been discussed before.
I remember reading and enjoying that article (this one, I think).
I would think that the same argument would apply regardless of the scale of the donations (assuming there are no fixed transaction costs, which might not be a valid assumption). My read is that it comes down to the question of risk versus uncertainty. If there is actual uncertainty, investing widely might make sense if you believe that those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.
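As a purely illustrative sketch of that last point (the two-round structure and every number here are my own assumptions, not anything from the thread or the class notes), here is a toy model comparing "concentrate on the prior best cause" against "spread round one to learn, then concentrate round two":

```python
# Toy "explore vs. concentrate" donation model. All numbers are made up
# for illustration; this says nothing about any real charity.
import random

random.seed(0)

N_TRIALS = 20_000
ROUND_1, ROUND_2 = 1.0, 10.0          # budget per round (later giving is much larger)
PRIOR_MEANS = [1.0, 0.9, 0.8]         # prior expected impact per dollar, per cause
SPREAD = 0.6                          # true effectiveness varies +/- SPREAD around the prior
NOISE = 0.2                           # measurement noise on round-1 observations

def run_trial(explore: bool) -> float:
    # Draw the (unknown) true impact-per-dollar of each cause.
    true = [m + random.uniform(-SPREAD, SPREAD) for m in PRIOR_MEANS]

    if explore:
        # Round 1: split evenly to learn; round 2: fund the best-observed cause.
        share = ROUND_1 / len(true)
        impact1 = sum(t * share for t in true)
        observed = [t + random.gauss(0, NOISE) for t in true]
        best = max(range(len(true)), key=lambda i: observed[i])
    else:
        # Both rounds: fund the cause with the highest prior mean.
        best = max(range(len(true)), key=lambda i: PRIOR_MEANS[i])
        impact1 = true[best] * ROUND_1

    impact2 = true[best] * ROUND_2
    return impact1 + impact2

for explore in (False, True):
    avg = sum(run_trial(explore) for _ in range(N_TRIALS)) / N_TRIALS
    label = "spread round 1 (explore)" if explore else "concentrate on prior best"
    print(f"{label:28s} average total impact: {avg:.3f}")
```

Under these made-up numbers the spread strategy typically comes out ahead, precisely because round-one giving is small relative to round two, so the information it buys is worth more than the impact forgone. Shrink the uncertainty (SPREAD) or flip the budgets and concentrating wins again, which is the risk-versus-uncertainty distinction above.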
http://blakemasters.tumblr.com/post/24464587112/peter-thiels-cs183-startup-class-17-deep-thought
Some perspectives on AI risk that might be interesting. Peter is (the primary?) donor for SI, and an investor in AGI startup Vicarious Systems.