wedrifid comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Gahhh! The horde of arguments against that idea that instantly sprang to my mind (with warning bells screeching) perhaps hints at why a good argument hasn't been given to the contrary (if, in fact, it hasn't). It just seems so obvious. And I don't mean that as a criticism of you or Shane at all. Most things that we already understand well seem like they should be obvious to others. I agree that there should be a post making the arguments on that topic either here on LessWrong or on the SIAI website somewhere. (Are you sure there isn't?)
Edit: And you demonstrate here just why Eliezer (or others) should bother to answer XiXiDu's questions even if there are some weaknesses in his reasoning.
I understand your point, and agree that your conclusion is one that many smart, rational people with good general knowledge would share. Once again I concur that engaging with those X's is important, including that 'X' we're discussing here.
If I were SIAI my reasoning would be the following. First, stop with the believes/believes-not dichotomy and move to probabilities.
So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue infinitesimal. This is based on fast take-off, winner-take-all type scenarios (I have a problem with this stage, but I would like it to be properly argued, and that is hard).
So looking at the decision tree (under these assumptions) the only chance of a good outcome is to try to formalise FAI before AGI becomes well known. All the other options lead to extinction.
So to attack the "formalise Friendliness before AGI" position you would need to argue that the first AGIs are very unlikely to kill us all. That is the major battleground as far as I am concerned.
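The shape of the decision-tree argument above can be sketched numerically. All probabilities below are invented placeholders for illustration, not SIAI's actual estimates:

```python
# Toy expected-value sketch of the "only chance of a good outcome" argument.
# Every number here is a made-up placeholder, chosen only to show the
# structure of the reasoning, not to represent anyone's real estimate.
p_good_without_friendliness = 1e-6   # "infinitesimal" chance AGI goes well unaided
p_formalize_in_time = 0.05           # chance Friendliness gets formalized before AGI
p_good_given_formalized = 0.9        # chance a formalized theory actually works

# Option A: build AGI without attempting to formalize Friendliness first.
p_survive_A = p_good_without_friendliness

# Option B: try to formalize Friendliness before AGI; if the attempt fails,
# we fall back to the same tiny unaided survival probability.
p_survive_B = (p_formalize_in_time * p_good_given_formalized
               + (1 - p_formalize_in_time) * p_good_without_friendliness)

print(p_survive_A, p_survive_B)
# Under these assumptions B dominates A even though success is unlikely,
# which is why, on this view, all other options "lead to extinction".
```

Note that the conclusion is driven almost entirely by how small p_good_without_friendliness is assumed to be, which is exactly why the "first AGIs are very unlikely to kill us all" claim is the battleground.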
Agreed about what the "battleground" is, modulo one important nit: not the first AGI, but the first AGI that recursively self-improves at a high speed. (I'm pretty sure that's what you meant, but it's important to keep in mind that, e.g., a roughly human-level AGI as such is not what we need to worry about -- the point is not that intelligent computers are magically superpowerful, but that it seems dangerously likely that quickly self-improving intelligences, if they arrive, will be non-magically superpowerful.)
I don't think formalize vs. don't-formalize should be a simple dichotomy either; Friendliness can be formalized at various levels of detail, and the more details are formalized, the fewer unconstrained details there are that could be wrong in a way that kills us all.
I'd look at it the other way: I'd take it as practically certain that any superintelligence built without explicit regard to Friendliness will be unFriendly, and ask what the probability is that through sufficiently slow growth in intelligence and other mere safeguards, we manage to survive building it.
My best hope currently rests on the AGI problem being hard enough that we get uploads first.
(This is essentially the Open Thread about everything Eliezer or SIAI have ever said now, right?)
Uploading would have quite a few benefits, but I get the impression it would make us more vulnerable to whatever tools a hostile AI may possess, not less.
"So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue infinitesimal."
One problem here is the use of a circular definition of "friendliness" - one that defines the concept in terms of whether it leads to a favourable outcome. If you think "friendly" is defined in terms of whether or not the machine destroys humanity, then clearly you will think that an "unfriendly" machine would destroy the world. However, this is just a word game - which doesn't tell us anything about the actual chances of such destruction happening.
I'd start here to get an overview.
My summary would be: there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds then it is likely to be contrary to our values because it will have a different sense of what is good or worthwhile. This moderately relies on the speed/singleton issue, because evolutionary pressure between AIs might force them in the same direction as us. We would likely be out-competed before this happens, though, if we rely on competition between AIs.
I think various people associated with SIAI mean different things by formalizing friendliness. As I recall, Vladimir Nesov means getting better than a 50% probability of a good outcome.
Edited to add my own overview.
Consider my "at random" short hand for "at random from the space of possible minds built by humans".
The Eliezer-approved example of humans not getting a simple system to do what they want is the classic machine-learning story in which a neural net was trained on two different sorts of tanks. It happened that the photographs of the different types of tanks had been taken at different times of day, so the classifier just worked on that rather than actually looking at the types of tank. We didn't build a tank classifier but a day/night classifier. More here.
While I may not agree with Eliezer on everything, I do agree with him that it is damn hard to get a computer to do what you want once you stop programming it explicitly.
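The tank story can be reproduced in miniature. Here is a hypothetical sketch (invented data, toy one-feature learner) showing a "tank classifier" that accidentally learns the day/night brightness cue instead of tank type, because brightness and label are perfectly confounded in the training set:

```python
# Hypothetical training data: (image_brightness, tank_type). Friendly tanks
# were all photographed by day (bright), enemy tanks by night (dark), so the
# label is perfectly predictable from brightness alone.
train = [(0.9, "friendly"), (0.85, "friendly"), (0.2, "enemy"), (0.15, "enemy")]

def fit_threshold(data):
    """Toy one-feature learner: pick the brightness cutoff that best
    separates the training labels."""
    best_t, best_acc = None, -1.0
    for t in [s[0] for s in data]:
        acc = sum((s[0] >= t) == (s[1] == "friendly") for s in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(train)
classify = lambda brightness: "friendly" if brightness >= t else "enemy"

# Perfect accuracy on the training set...
print([classify(b) for b, _ in train])  # ['friendly', 'friendly', 'enemy', 'enemy']

# ...but a night-time photo of a *friendly* tank (low brightness) is
# misclassified: the model learned time-of-day, not tank type.
print(classify(0.1))  # 'enemy'
```

The optimizer did exactly what it was told - minimize training error - and that is the point: the system satisfied the stated objective without doing what its builders wanted.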
Better than chance? What chance?
Sorry, "better than chance" is an English phrase that tends to mean more than 50%.
It assumes an even chance of each outcome. I.e. do better than selecting randomly.
Not appropriate in this context, my brain didn't think of the wider implications as it wrote it.
It's easy to do better than random. *Pours himself a cup of tea.*
Programmers do not operate by "picking programs at random", though.
The idea that "picking programs at random" has anything to do with the issue seems just confused to me.
The first AI will be determined by the first programmer, sure. But I wasn't talking about that level; the biases and concern for the ethics of the AI of that programmer will be random from the space of humans. Or at least I can't see any reason why I should expect people who care about ethics to be more likely to make AI than those who think economics will constrain AI to be nice.
That is now a completely different argument to the original "there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds".
Re: "the biases and concern for the ethics of the AI of that programmer will be random from the space of humans"
Those concerned probably have to be expert programmers, able to build a company or research group, and attract talented assistance, as well as probably customers. They will probably be far from what you would get if you chose at "random".
You have correctly identified the area in which we do not agree.
The most relevant knowledge needed in this case is knowledge of game theory and human behaviour. They also need to know 'friendliness is a very hard problem'. They then need to ask themselves the following question:
What is likely to happen if people have the ability to create an AGI but do not have a proven mechanism for implementing friendliness? Is it:
I don't (with that phrasing). I actually suspect that the problem is too difficult to get right and far too easy to get wrong. We're probably all going to die. However, I think we're even more likely to die if some fool goes and invents an AGI before they have a proven theory of friendliness.
Those are the people, indeed. But where do the donations come from? EY seems to be using this argument against me as well. I'm just not educated, well-read or intelligent enough for any criticism. Maybe so; I acknowledged that in my post. But have I seen any pointers to how people arrive at their estimates yet? No, just the demand to read all of LW, which according to EY doesn't even deal with what I'm trying to figure out, but rather the dissolving of biases. A contradiction?
I'm inquiring about the strong claims made by the SIAI, which includes EY and LW. Why? Because they ask for my money and resources. Because they gather fanatic followers who believe in the possibility of literally going to hell. If you follow the discussion surrounding Roko's posts you'll see what I mean. And because I'm simply curious and like to discuss, besides becoming less wrong.
But if EY or someone else is going to tell me that I'm just too dumb and it doesn't matter what I do, think or donate, I can accept that. I don't expect Richard Dawkins to enlighten me about evolution either. But don't expect me to stay quiet about my insignificant personal opinion and epistemic state (as you like to call it) either! Although since I'm conveniently not neurotypical (I guess), you won't have to worry about me turning into an antagonist either, simply because EY is being impolite.
The SIAI position does not require "obviously X" from a decision perspective; the opposite one does. To be so sure of something as complicated as the timeline of FAI math vs AGI development seems seriously foolish to me.
It is not a matter of being sure of it, but of weighing it against what is asked for in return, other possible events of equal probability and the utility payoff from spending the resources on something else entirely.
I'm not asking the SIAI to prove "obviously X" but rather to prove the very probability of X that they are claiming it has within the larger context of possibilities.
No such proof is possible with our machinery.
=======================================================
Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.
Kaneda: You have to come down on one side or the other. I need a decision.
Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.
Kaneda: And?
Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.
=====================================================
(Sunshine 2007)
Not being able to calculate chances does not excuse one from using their best de-biased neural machinery to make a guess at a range. IMO 50 years is reasonable (I happen to know something about the state of AI research outside of the FAI framework). I would not roll over in surprise if it's 5 years, given the state of certain technologies.
I'm curious, because I like to collect this sort of data: what is your median estimate?
(If you don't want to say because you don't want to defend a specific number or list off a thousand disclaimers I completely understand.)
Median 15-20 years. I'm not really an expert, but certain technologies are coming really close to modeling cognition as I understand it.
Thanks!
Well it's clear to me now that formalizing Friendliness with pen and paper is as naively impossible as it would have been for the people of ancient Babylon to actually build a tower that reached the heavens; so if resources are to be spent attempting it, then it's something that does need to be explicitly argued for.