Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM

Comment author: Eliezer_Yudkowsky 20 August 2010 07:00:52PM 15 points [-]

I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on.

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Comment author: ata 20 August 2010 07:11:45PM 13 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he's offering you a bet at 1000000000:1 odds in your favour. That's a good deal; you should take it.

Comment author: rabidchicken 21 August 2010 04:42:27PM *  4 points [-]

Indeed. I do not know how many people are actively involved in FAI research, but I would guess that it is only in the dozens to hundreds. Given the small pool of competition, it seems likely that at some point Eliezer will make, or already has made, a unique contribution to the field. Get Multi to put some money on it: offer him 1 cent if you do not make a useful contribution in the next 50 years, and 10 million dollars from him if you do.

Comment author: Unknowns 20 August 2010 07:08:40PM 13 points [-]

I agree it's kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.

Comment author: multifoliaterose 20 August 2010 07:52:48PM 2 points [-]

Why does my probability estimate look overconfident?

Comment author: steven0461 20 August 2010 09:02:03PM *  15 points [-]

One could offer many crude back-of-envelope probability calculations. Here's one: let's say there's

  • a 10% chance AGI is easy enough for the world to do in the next few decades
  • a 1% chance that if the world can do it, a team of supergeniuses can do the Friendly kind first
  • an independent 10% chance Eliezer succeeds at putting together such a team of supergeniuses

That seems conservative to me and implies at least a 1 in 10^4 chance. Obviously there's lots of room for quibbling here, but it's hard for me to see how such quibbling could account for five orders of magnitude. And even if post-quibbling you think you have a better model that does imply 1 in 10^9, you only need to put little probability mass on my model or models like it for them to dominate the calculation. (E.g., a 9 in 10 chance of a 1 in 10^9 chance plus a 1 in 10 chance of a 1 in 10^4 chance is close to a 1 in 10^5 chance.)
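As a sanity check, the mixture arithmetic above can be reproduced in a few lines of Python. The three inputs are the comment's illustrative guesses, not measured quantities:

```python
# Back-of-envelope model from the comment above; every input is a guess.
p_agi_feasible = 0.10    # AGI is easy enough for the world to do in a few decades
p_friendly_first = 0.01  # if so, a team of supergeniuses does the Friendly kind first
p_team_exists = 0.10     # such a team actually gets put together (independent)

p_model = p_agi_feasible * p_friendly_first * p_team_exists  # about 1 in 10^4

# Even 10% credence in this model swamps 90% credence in a 1-in-10^9 model:
p_mixture = 0.9 * 1e-9 + 0.1 * p_model  # about 1 in 10^5
print(p_model, p_mixture)
```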

Comment author: multifoliaterose 20 August 2010 09:58:40PM *  1 point [-]

I don't find these remarks compelling. I feel similar remarks could be used to justify nearly anything. Of course, I owe you an explanation. One will follow later on.

Comment author: Unknowns 21 August 2010 05:26:44AM *  2 points [-]

Unless you've actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident. Even Eliezer said that he couldn't assign a probability as low as one in a billion to the claim "God exists" (although Michael Vassar criticized him for this, showing himself to be even more overconfident than Eliezer).

Comment author: komponisto 23 August 2010 11:25:52AM 5 points [-]

Unless you've actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident.

I'm afraid I have to take severe exception to this statement.

You give the human species far too much credit if you think that our mere ability to dream up a hypothesis automatically raises its probability above some uniform lower bound.

Comment author: Unknowns 23 August 2010 11:51:52AM 0 points [-]

I am aware of your disagreement, for example as expressed by the absurd claims here. Yes, my basic idea is, unlike you, to give some credit to the human species. I think there's a limit on how much you can disagree with other human beings-- unless you're claiming to be something superhuman.

Did you see the link to this comment thread? I would like to see your response to the discussion there.

Comment author: komponisto 23 August 2010 07:58:47PM 5 points [-]

I think there's a limit on how much you can disagree with other human beings-- unless you're claiming to be something superhuman.

At least for epistemic meanings of "superhuman", that's pretty much the whole purpose of LW, isn't it?

Did you see the link to this comment thread? I would like to see your response to the discussion there.

My immediate response is as follows: yes, dependency relations might concentrate most of the improbability of a religion to a relatively small subset of its claims. But the point is that those claims themselves possess enormous complexity (which may not necessarily be apparent on the surface; cf. the simple-sounding "the woman across the street is a witch; she did it").

Comment author: Unknowns 24 August 2010 03:50:26AM *  7 points [-]

Let's pick an example. How probable do you think it is that Islam is a true religion? (There are several ways to take care of logical contradictions here, so saying 0% is not an option.)

Suppose there were a machine--for the sake of tradition, we can call it Omega--that prints out a series of zeros and ones according to the following rule. If Islam is true, it prints out a 1 on each round, with 100% probability. If Islam is false, it prints out a 0 or a 1, each with 50% probability.

Let's run the machine... suppose on the first round, it prints out a 1. Then another. Then another. Then another... and so on... it's printed out 10 1's now. Of course, this isn't so improbable. After all, there was a 1/1024 chance of it doing this anyway, even if Islam is false. And presumably we think Islam is more likely than this to be false, so there's a good chance we'll see a 0 in the next round or two...

But it prints out another 1. Then another. Then another... and so on... It's printed out 20 of them. Incredible! But we're still holding out. After all, million to one chances happen every day...

Then it prints out another, and another... it just keeps going... It's printed out 30 1's now. Of course, it did have a chance of one in a billion of doing this, if Islam were false...

But for me, this is my lower bound. At this point, if not before, I become a Muslim. What about you?

You've been rather vague about the probabilities involved, but you speak of "double digit negative exponents" and so on, even saying that this is "conservative," which implies possibly three digit exponents. Let's suppose you think that the probability that Islam is true is 10^-20; this would seem to be very conservative, by your standards. According to this, to get an equivalent chance, the machine would have to print out 66 1's.

If the machine prints out 50 1's, and then someone runs in and smashes it beyond repair, before it has a chance to continue, will you walk away, saying, "There is a chance at most of 1 in 60,000 that Islam is true?"

If so, are you serious?
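The update being described is a direct application of Bayes' theorem. A minimal Python sketch, assuming a prior of 2^-66 (roughly the 10^-20 figure discussed above):

```python
def posterior(prior, n_ones):
    """P(hypothesis | n ones): a true hypothesis forces a 1 every round,
    a false one prints a 1 with probability 1/2 per round."""
    like_true = 1.0             # P(n ones | true)
    like_false = 0.5 ** n_ones  # P(n ones | false)
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

prior = 2.0 ** -66  # roughly the "conservative" 10^-20 prior

print(posterior(prior, 66))  # ~0.5: even odds after 66 ones
print(posterior(prior, 50))  # ~1/65537: the "1 in 60,000" figure after 50 ones
```

On this model the smashed-machine answer follows mechanically: 50 ones shift the odds by a factor of 2^50, leaving a factor of 2^16 ≈ 65,536 still to go.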

Comment author: cousin_it 24 August 2010 11:33:54PM *  9 points [-]

Thank you a lot for posting this scenario. It's instructive from the "heuristics and biases" point of view.

Imagine there are a trillion variants of Islam, differing by one paragraph in the holy book or something. At most one of them can be true. You pick one variant at random, test it with your machine and get 30 1's in a row. Now you should be damn convinced that you picked the true one, right? Wrong. Getting this result by a fluke is 1000x more likely than having picked the true variant in the first place. Probability is unintuitive and our brains are mush, that's all I'm sayin'.
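This can be checked numerically; a quick sketch, assuming a uniform one-in-a-trillion prior over the variants:

```python
# Pick one of a trillion mutually exclusive variants, then see 30 ones.
prior = 1e-12      # chance the randomly chosen variant is the true one
fluke = 0.5 ** 30  # chance a false variant prints 30 ones anyway (~9.3e-10)

posterior = prior / (prior + (1 - prior) * fluke)
print(posterior)      # ~0.001: almost certainly still a fluke
print(fluke / prior)  # ~931: a fluke is about 1000x likelier than a true pick
```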

Comment author: Unknowns 25 August 2010 05:41:52AM 1 point [-]

I agree with this. But if the scenario happened in real life, you would not be picking a certain variant. You would be asking the vague question, "Is Islam true," to which the answer would be yes if any one of those trillion variants, or many others, were true.

Yes, there are trillions of possible religions that differ from one another as much as Islam differs from Judaism, or whatever. But only a few of these are believed by human beings. So I still think I would convert after 30 1's, and I think this would be reasonable.

Comment author: komponisto 24 August 2010 10:53:27PM *  3 points [-]

If the machine prints out 50 1's, and then someone runs in and smashes it beyond repair, before it has a chance to continue, will you walk away, saying, "There is a chance at most of 1 in 60,000 that Islam is true?"

If so, are you serious?

Of course I'm serious (and I hardly need to point out the inadequacy of the argument from the incredulous stare). If I'm not going to take my model of the world seriously, then it wasn't actually my model to begin with.

Sewing-Machine's comment below basically reflects my view, except for the doubts about numbers as a representation of beliefs. What this ultimately comes down to is that you are using a model of the universe according to which the beliefs of Muslims are entangled with reality to a vastly greater degree than on my model. Modulo the obvious issues about setting up an experiment like the one you describe in a universe that works the way I think it does, I really don't have a problem waiting for 66 or more 1's before converting to Islam. Honest. If I did, it would mean I had a different understanding of the causal structure of the universe than I do.

Further below you say this, which I find revealing:

If this actually happened to you, and you walked away and did not convert, would you have some fear of being condemned to hell for seeing this and not converting? Even a little bit of fear? If you would, then your probability that Islam is true must be much higher than 10^-20, since we're not afraid of things that have a one in a hundred billion chance of happening.

As it happens, given my own particular personality, I'd probably be terrified. The voice in my head would be screaming. In fact, at that point I might even be tempted to conclude that expected utilities favor conversion, given the particular nature of Islam.

But from an epistemic point of view, this doesn't actually change anything. As I argued in Advancing Certainty, there is such a thing as epistemically shutting up and multiplying. Bayes' Theorem says the updated probability is one in a hundred billion, my emotions notwithstanding. This is precisely the kind of thing we have to learn to do in order to escape the low-Earth orbit of our primitive evolved epistemology -- our entire project here, mind you -- which, unlike you (it appears), I actually believe is possible.

Comment author: Wei_Dai 25 August 2010 12:59:57AM 4 points [-]

Has anyone done a "shut up and multiply" for Islam (or Christianity)? I would be interested in seeing such a calculation. (I did a Google search and couldn't find anything directly relevant.) Here's my own attempt, which doesn't get very far.

Let H = "Islam is true" and E = everything we've observed about the universe so far. According to Bayes:

P(H | E) = P(E | H) P(H) / P(E)

Unfortunately I have no idea how to compute the terms above. Nor do I know how to argue that P(H|E) is as small as 10^-20 without explicitly calculating the terms. One argument might be that P(H) is very small because of the high complexity of Islam, but since E includes "23% of humanity believe in some form of Islam", the term for the complexity of Islam seems to be present in both the numerator and the denominator, and therefore to cancel out.

If someone has done such a calculation/argument before, please post a link?

Comment author: Unknowns 25 August 2010 07:13:27AM *  -2 points [-]

I agree that your position is analogous to "shutting up and multiplying." But in fact, Eliezer may have been wrong about that in general -- see the Lifespan Dilemma -- because people's utility functions are likely not unbounded.

In your case, I agree with shutting up and multiplying when we have a way to calculate the probabilities. In this case, we don't, so we can't do it. If you had a known probability (see cousin_it's comment on the possible trillions of variants of Islam) of one in a trillion, then I would agree with walking away after seeing 30 1's, regardless of the emotional effect of this.

But in reality, we have no such known probability. The result is that you are going to have to use some base rate: "things that people believe" or more accurately, "strange things that people believe" or whatever. In any case, whatever base rate you use, it will not have a probability anywhere near 10^-20 (i.e. more than 1 in 10^20 strange beliefs is true etc.)

My real point about the fear is that your brain doesn't work the way your probabilities do-- even if you say you are that certain, your brain isn't. And if we had calculated the probabilities, you would be justified in ignoring your brain. But in fact, since we haven't, your brain is more right than you are in this case. It is less certain precisely because you are simply not justified in being that certain.

Comment author: RichardKennaway 25 August 2010 03:27:58PM 2 points [-]

But for me, this is my lower bound. At this point, if not before, I become a Muslim. What about you?

At this point, if not before, I doubt Omega's reliability, not mine.

Comment author: Pavitra 26 August 2010 06:42:29AM 2 points [-]

It is a traditional feature of Omega that you have confidence 1 in its reliability and trustworthiness.

Comment author: Unknowns 26 August 2010 06:39:29AM 0 points [-]

This is a copout.

Comment author: [deleted] 24 August 2010 05:10:16AM *  1 point [-]

You've asked us to take our very small number, and imagine it doubling 66 times. I agree that there is a punch to what you say -- no number, no matter how small, could remain small after being doubled 66 times! But in fact long ago Archimedes made a compelling case that there are such numbers.

Now, it's possible that Archimedes was wrong and something like ultrafinitism is true. I take ultrafinitist ideas quite seriously, and if they are correct then there are a lot of things that we will have to rethink. But Islam is not close to the top of the list of things we should rethink first.

Maybe there's a kind of meta claim here: conditional on probability theory being a coherent way to discuss claims like "Islam is true," the probability that Islam is true really is that small.

Comment author: Unknowns 24 August 2010 05:25:14AM *  0 points [-]

I just want to know what you would actually do, in that situation, if it happened to you tomorrow. How many 1's would you wait for, before you became a Muslim?

Also, "there are such numbers" is very far from "we should use such numbers as probabilities when talking about claims that many people think are true." The latter is an extremely strong claim and would therefore need extremely strong evidence before being acceptable.

Comment author: [deleted] 21 August 2010 07:54:24AM 1 point [-]

The product of two probabilities above your threshold-for-overconfidence can be below your threshold-for-overconfidence. Have you at least thought this through before?

For instance, the claim "there is a God" is not that much less spectacular than the claim "there is a God, and he's going to make the next 1000 times you flip a coin turn up heads." If one-in-a-billion is a lower bound for the probability that God exists, then one-in-a-billion-squared is a generous lower bound for the probability that the next 1000 times you flip a coin will turn up heads. (One-in-a-billion-squared is about one in 2-to-the-sixty.) You're OK with that?
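The parenthetical arithmetic is easy to verify (a quick sketch):

```python
import math

p_god = 1e-9         # the proposed lower bound for "there is a God"
p_both = p_god ** 2  # generous lower bound for God plus 1000 predicted heads

# 10^-18 is about 2^-60 -- vastly larger than the naive 2^-1000:
print(math.log2(1 / p_both))  # ~59.79
```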

Comment author: Unknowns 21 August 2010 03:12:20PM *  1 point [-]

Yes. As long as you think of some not-too-complicated scenario where the one would lead to the other, that's perfectly reasonable. For example, God might exist and decide to prove it to you by effecting that prediction. I certainly agree this has a probability of at least one in a billion squared. In fact, suppose you actually get heads the next 60 times you flip a coin, even though you are choosing different coins, it is on different days, and so on. By that point you will be quite convinced that the heads are not independent, and that there is quite a good chance that you will get 1000 heads in a row.

It would be different of course if you picked a random series of heads and tails: in that case you still might say that there is at least that probability that someone else will do it (because God might make that happen), but you surely cannot say that it had that probability before you picked the random series.

This is related to what I said in the torture discussion, namely that explicitly describing a scenario automatically makes it far more probable to actually happen than it was before you described it. So it isn't a problem if the probability of 1000 heads in a row is higher than 1 in 2-to-the-1000. Any series you can mention would be more likely than that, once you have mentioned it.

Also, note that there isn't a problem if the probability of 1000 heads in a row is lower than one in a billion, because when I made the general claim, I said "a claim that a significant number of people accept as likely true," and no one expects to get the 1000 heads.

Comment author: [deleted] 21 August 2010 05:13:54PM *  0 points [-]

Probabilities should sum to 1. You're saying moreover that probabilities should not be lower than some threshold. Can I get you to admit that there's a math issue here that you can't wave away, without trying to fine-tune my examples? If you claim you can solve this math issue, great, but say so.

Edit: -1 because I'm being rude? Sorry if so, the tone does seem inappropriately punchy to me now. -1 because I'm being stupid? Tell me how!

Comment author: Unknowns 21 August 2010 06:57:29PM 1 point [-]

I set a lower bound of one in a billion on the probability of "a natural language claim that a significant number of people accept as likely true". The number of such mutually exclusive claims is surely far less than a billion, so the math issue will resolve easily.

Yes, it is easy to find more than a billion claims, even ones that some people consider true, but they are not mutually exclusive claims. Likewise, it is easy to find more than a billion mutually exclusive claims, but they are not ones that people believe to be true, e.g. no one expects 1000 heads in a row, no one expects a sequence of five hundred successive heads-tails pairs, and so on.

I didn't downvote you.

Comment author: [deleted] 21 August 2010 09:03:22PM 0 points [-]

Maybe I see. You are updating on the fact that many people believe something, and are saying that P(A|many people believe A) should not be too small. Do you agree with that characterization of your argument?

In that case, we will profitably distinguish between P(A|no information about how many people believe A) and P(A|many people believe A). Is there a compact way that I can communicate something like "Excepting/not updating on other people's beliefs, P(God exists) is very small"? If I said something like that would you still think I was being overconfident?

Comment author: Unknowns 22 August 2010 06:17:18AM 0 points [-]

This is basically right, although in fact it is not very profitable to speak of what the probability would be if we didn't have some of the information that we actually have. For example, the probability of this sequence of ones and zeros -- 0101011011101110 0010110111101010 0100010001010110 1010110111001100 1110010101010000 -- being chosen randomly, before anyone has mentioned this particular sequence, is one out of 2 to the 80. Yet I chose it randomly, using a random number generator (not a pseudo-random number generator, either). But I doubt that you will conclude that I am certainly lying, or that you are hallucinating.

Rather, as Robin Hanson points out, extraordinary claims are extraordinary evidence. The very fact that I wrote down this improbable sequence is extremely extraordinary evidence that I chose it randomly, despite the huge improbability of that random choice. In a similar way, religious claims are extremely strong evidence in favor of what they claim: just as, had I not written the number, you would never have believed that I might choose it randomly, so too, if people didn't make religious claims, you would rightly think those claims extremely improbable.

Comment author: multifoliaterose 21 August 2010 05:33:43AM 0 points [-]

My estimate does come from some effort at calibration, although there's certainly more that I could do. Maybe I should have qualified my statement by saying "this estimate may be a gross overestimate or a gross underestimate."

In any case, I was not being disingenuous or flippant. I have carefully considered the question of how likely it is that Eliezer will be able to play a crucial role in an FAI project if he continues to exhibit a strategy qualitatively similar to his current one. My main objection to SIAI's strategy is that I think it extremely unlikely that Eliezer will be able to have an impact if he proceeds as he has up until this point.

I will detail why I don't think that Eliezer's present strategy of working toward an FAI is a fruitful one in a later top level post.

Comment author: steven0461 21 August 2010 05:47:06AM 4 points [-]

Maybe I should have qualified my statement by saying "this estimate may be a gross overestimate or a gross underestimate."

It sounds, then, like you're averaging probabilities geometrically rather than arithmetically. This is bad!

Comment author: multifoliaterose 21 August 2010 05:51:37AM *  1 point [-]

I understand your position and believe that it's fundamentally unsound. I will have more to say about this later.

For now I'll just say that the arithmetical average of the probabilities that I imagine I might ascribe to Eliezer's current strategy resulting in an FAI is 10^(-9).

Comment author: multifoliaterose 20 August 2010 07:09:42PM 0 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

I don't understand this remark.

What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you're working on? I can engage with a specific number. I don't know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

I should clarify that my comment applies equally to AGI.

I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don't know because you didn't give a number) then there would be people in the scientific community who would be working on AGI.

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Yes, this possibility has certainly occurred to me. I just don't know what your different non-crazy beliefs might be.

Why do you think that AGI research is so uncommon within academia if it's so easy to create an AGI?

Comment author: khafra 20 August 2010 07:44:57PM *  4 points [-]

This question sounds disingenuous to me. There is a large gap between "10^-9 chance of Eliezer accomplishing it" and "so easy for the average machine learning PhD." Whatever else you think about him, he's proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and in some dimension of rationality/intelligence/smartness.

Comment author: multifoliaterose 20 August 2010 07:56:56PM *  0 points [-]

My remark was genuine. Two points:

  1. I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.

  2. Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Comment author: XiXiDu 20 August 2010 08:16:25PM 3 points [-]

Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done.

He actually stated that himself several times.

So I do understand that, and I did set out to develop such a theory, but my writing speed on big papers is so slow that I can't publish it. Believe it or not, it's true.

Yes, OK, this does not mean his intellectual power isn't on par; it bears rather on his ability to function in an academic environment.

As far as I know he has no experience with narrow AI research.

Well...

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. [...] And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?"

Comment author: Vladimir_Nesov 20 August 2010 08:51:12PM 1 point [-]

As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).

Comment author: multifoliaterose 20 August 2010 10:04:58PM *  1 point [-]

The reason that I see familiarity with narrow AI as a prerequisite to AGI research is to get a sense of the difficulties present in designing machines to complete certain mundane tasks. My thinking is the same as that of Scott Aaronson in his The Singularity Is Far posting: "there are vastly easier prerequisite questions that we already don’t know how to answer."

Comment author: Vladimir_Nesov 20 August 2010 10:08:43PM 2 points [-]

FAI research is not AGI research, at least not at present, when we still don't know what it is exactly that our AGI will need to work towards, or how to formally define human preference.

Comment author: multifoliaterose 20 August 2010 10:13:18PM 1 point [-]

So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally. That's where my low probability was coming from.

It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

As I've said, I find your position sophisticated and respect it. I have to think more about your present point - reflecting on it may indeed alter my thinking about this matter.

Comment author: Vladimir_Nesov 20 August 2010 10:27:03PM *  6 points [-]

So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally.

Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs"; Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.

See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Comment author: Wei_Dai 11 September 2010 08:13:32PM *  2 points [-]

It seems obviously infeasible to me that governments will chance upon this level of rationality.

I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven't done that badly. From an article about RAND:

Futurology was the magic word in the years after the Second World War, and because the Army and later the Air Force didn’t want to lose the civilian scientists to the private sector, Project Research and Development, RAND for short, was founded in 1945 together with the aircraft manufacturer Douglas and in 1948 was converted into a Corporation. RAND established forecasts for the coming, cold future and developed, towards this end, the ‘delphi’ method.

RAND worshipped rationality as a god and attempted to quantify the unpredictable, to calculate it mathematically, to bring the fear within its grasp and under control - something that seemed to many Americans spooky and made the Soviet Pravda call RAND the “American academy of death and destruction.”

(Huh, this is the first time I've heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.

Comment author: multifoliaterose 20 August 2010 10:43:08PM *  0 points [-]

Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

Yes, this is the point that I had not considered and which is worthy of further consideration.

It seems obviously infeasible to me that governments will chance upon this level of rationality.

Possibly what I mention could be accomplished with lobbying.

Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs"; Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.

See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you're mentioning were not in my mind before), but I'm glad that there are some people (like you) who are.

The main point that I'm trying to make is that I think that SIAI should be transparent, accountable, and place high emphasis on credibility. I think that these things would give SIAI much more impact than it presently has.

Comment author: Emile 20 August 2010 09:04:58PM 3 points [-]

I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don't know because you didn't give a number) then there would be people in the scientific community who would be working on AGI.

Um, and there aren't?

Comment author: multifoliaterose 20 August 2010 09:53:55PM 1 point [-]

Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.

Comment author: Vladimir_Nesov 20 August 2010 11:24:04PM *  5 points [-]

What is currently called the AGI field will probably bear no fruit, perhaps except for the end-game when it borrows then-sufficiently powerful tools from more productive areas of research (and destroys the world). "Narrow AI" develops the tools that could eventually allow the construction of random-preference AGI.

Comment author: Nick_Tarleton 20 August 2010 09:57:49PM *  4 points [-]

The folks here, for a start.