Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

16 Post author: MichaelGR 11 November 2009 03:00AM

As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

Suggestions

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

It's okay to attempt humor (but good luck, it's a tough crowd).

If a discussion breaks out about a question (e.g., to ask for clarifications) and the original poster decides to modify the question, the top-level comment should be updated with the modified question (make it easy to find your question; don't let the latest version be buried in a long thread).

Update: Eliezer's video answers to 30 questions from this thread can be found here.

Comments (682)

Comment author: righteousreason 23 December 2009 01:37:42AM *  -2 points [-]

As a question for everyone (and as a counter argument to CEV),

Is it okay to take an individual human's rights to life and property by force, as opposed to volitionally through a signed contract?

The use of force here does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, but could maybe(?) not include their individual extrapolated volition.

A) Yes B) No

I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could lead to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.

Comment author: Will_Euler 19 November 2009 02:50:09AM 0 points [-]

Let's say someone (today, given present technology) has the goal of achieving rational self-insight into his own thinking processes and the goal of being happy. You have suggested (in conversation) that such a person might find himself in an "unhappy valley" insofar as he is not perfectly rational. If someone today -- using current hedonic/positive psychology -- undertakes a program to be as happy as possible, what role would rational self-insight play in that program?

Comment author: Will_Euler 19 November 2009 02:11:41AM 3 points [-]

How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?

If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff, such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find oneself in such a valley (and how one might get out)? To what extent is the term meant as an analogy to the "uncanny valley"?

Less important, but related: What self-insights from hedonic/positive psychology have you found most revealing about people's ability to make choices aimed at maximizing happiness (e.g., limitations of affective forecasting, the paradox of choice, the impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?

(I feel these are sufficiently intertwined to constitute one general question about the relationship between self-knowledge and happiness.)

Comment author: ABranco 18 November 2009 07:28:27PM 13 points [-]

You've achieved a high level of success as a self-learner, without the aid of formal education.

Would this extrapolate into a general recommendation, a path every fast-learning autodidact should follow? In other words: is it the better choice?

If not, in which scenarios would forgoing formal education be more advisable for someone? (Feel free to add as many caveats and 'ifs' as necessary.)

Comment author: mormon2 23 November 2009 07:37:34PM 4 points [-]

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define high level of success?

Comment author: komponisto 24 November 2009 05:00:31AM 3 points [-]

How do you define high level of success?

He has a job where he is respected, gets to pursue his own interests, and doesn't have anybody looking over his shoulder on a daily basis (or any short-timescale mandatory duties at all that I can detect). That's pretty much the trifecta, IMHO.

Comment author: ABranco 24 November 2009 12:58:09AM *  1 point [-]

Well, ok, success might be a personal measure, so by all means only Eliezer could properly say whether Eliezer is successful. (Or at least, that is what should matter.)

Having said that, my calling him successful was driven (biased?) by my personal standards. A positive Wikipedia article (positive not in the sense of a biased article, but in the sense that the impact described is positive), and founding something like SIAI and Less Wrong, deserve my respect, and quite some awe given his 'formal education'. (How many people are in Wikipedia with a picture and 10 footnotes? But never mind, that's a contentious measure, so let's not split hairs here.)

Comment author: mormon2 25 November 2009 03:04:13AM 3 points [-]

I am going to take a shortcut and respond to both posts:

komponisto: Interesting, because I would define success in terms of the goals you set for yourself, or that others have set for you, and how well you have met those goals.

In terms of respect, I would question the claim, not necessarily within SIAI or within this community, but within the larger community of experts in the AI field. How many people really know who he is? How many of the people who need to know him, because even if he won't admit it EY will need help from academia and industry to make FAI, actually know him and, more importantly, respect his opinion?

ABranco: I would not say success is a personal measure; I would say in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.

I think your last point is on the right track: EY started SIAI and Less Wrong despite his lack of formal education. Though one could argue about the relative significance, or the level of success, that those two things indicate.

Comment author: MarkHHerman 18 November 2009 02:56:58AM 1 point [-]

Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?

[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of "moral error" that requires (a) the perpetrating agent's acceptance of the assessment of moral erroneousness (i.e., individual relativism to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating vs. moral indifference, laziness, and/or akrasia).]

Comment author: Nick_Tarleton 18 November 2009 07:48:28PM 2 points [-]

More generally, what kind of psychology research would you most like to see done?

Comment author: MarkHHerman 18 November 2009 01:04:20AM 2 points [-]

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

Comment author: MichaelGR 17 November 2009 01:54:16AM *  15 points [-]

Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?

Updating top level with expanded question:

I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?

So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched, if that ends up taking a while (usual caveats: existential risks, etc.).

It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc), while more dollars for the SIAI doesn't seem like it would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).

If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.

Comment author: AnnaSalamon 19 November 2009 06:57:55AM *  28 points [-]

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.

Regarding what SIAI could do with a marginal $1000, the one sentence version is: "more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks". My impression is that we are strongly money-limited at the moment: more donations allow us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:

Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):

  • “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ‘09 (ECAP))
  • “Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
  • “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ‘09 (APCAP))
  • “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
  • “Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting;
  • And several more at various stages of the writing process, including some journal papers.

The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don't count Michael Vassar's time; if you do count his time the Summit roughly broke even, but it created significantly increased interest among academics, among a number of potential donors, and among others who may take useful action in various ways; some good ideas were generated at the workshop, also.)

The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)

Miscellaneous additional examples:

  • The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
  • A decision theory research paper discussing the idea of "acausal trade" in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger 'Utilitarian')
  • Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
  • A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
  • Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.

(This activity is a change from past time-periods: SIAI added a bunch of new people and project-types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)

(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)

How efficiently can we turn a marginal $1000 into more rapid project-completion?

As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.

As to SIAI vs. SENS:

SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders of magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.

The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)

There’s a lot more to say on all of these points, but I’m trying to be brief -- if you want more info on a specific point, let me know which.

It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)

Comment author: Rain 23 March 2010 02:27:07AM 8 points [-]

You can donate to FHI too? Dang, now I'm conflicted.

Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.

Crisis averted by tiny obstacles.

Comment author: Pablo_Stafforini 23 March 2010 01:19:32AM 1 point [-]

Those interested in the cost-effectiveness of donations to the SIAI may also want to check Alan Dawrst's donation recommendation. (Dawrst is "Utilitarian", the donor that Anna mentions above.)

Comment author: Kutta 03 December 2009 11:27:26PM *  7 points [-]

at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total-failure mode and a total-win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
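The arithmetic behind this comparison can be sanity-checked in a few lines. This is only a sketch: the 8-lives-per-dollar estimate comes from Anna's comment above, but the ~$200-per-life figure for top GiveWell charities and the ~1,200 lives credited to Schindler are outside assumptions of mine, not numbers from this thread.

```python
# Sanity check of the cost-effectiveness comparison. Only the first
# figure comes from the thread (Anna's very crude ballpark); the other
# two are assumed for illustration.
siai_lives_per_dollar = 8
givewell_dollars_per_life = 200   # assumed top-charity figure, circa 2009
schindler_lives = 1200            # commonly cited estimate

siai_dollars_per_life = 1 / siai_lives_per_dollar
ratio = givewell_dollars_per_life / siai_dollars_per_life
schindler_cost = schindler_lives * siai_dollars_per_life

print(siai_dollars_per_life)  # 0.125 dollars per expected life
print(ratio)                  # 1600.0, consistent with "at least 1500 times"
print(schindler_cost)         # 150.0, the "paltry 150 dollars" above
```

Under these assumed inputs the ratio lands near the "at least 1500 times" figure, and the Schindler comparison follows directly.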

Comment author: StefanPernar 24 November 2009 11:44:10AM 1 point [-]

Thanks for that Anna. I could only find two of the five Academic talks and journal articles you mentioned online. Would you mind posting all of them online and point me to where I will be able to access them?

Comment author: MichaelGR 20 November 2009 06:53:25PM *  0 points [-]

Thank you very much, Anna.

This will help me decide, and I'm sure that it will help others too.

I second Katja's idea; a version of this should be posted on the SIAI blog.

Comment author: Kaj_Sotala 20 November 2009 07:30:29PM 2 points [-]

I second Katja's idea;

Kaj's. :P

Comment author: MichaelGR 20 November 2009 08:35:57PM 0 points [-]

I'm sorry, for some reason I thought you were Katja Grace. My mistake.

Comment author: Kaj_Sotala 20 November 2009 10:05:06AM 14 points [-]

Please post a copy of this comment as a top-level post on the SIAI blog.

Comment author: Wei_Dai 20 November 2009 09:15:26AM 5 points [-]

Someone should update SIAI's recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:

Comment author: Eliezer_Yudkowsky 17 November 2009 02:17:48AM 6 points [-]

I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other's positive publicity. For this reason I've usually tended to avoid this kind of elevator pitch!

Pass to Michael Vassar: Should I answer this?

Comment author: MichaelGR 17 November 2009 03:50:51AM *  3 points [-]

[I've moved what was here to the top level comment]

Comment author: Eliezer_Yudkowsky 18 November 2009 12:47:34AM 2 points [-]

I'll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.

Comment author: Furcas 16 November 2009 05:33:36PM *  4 points [-]

Eliezer, in Excluding the Supernatural, you wrote:

Ultimately, reductionism is just disbelief in fundamentally complicated things. If "fundamentally complicated" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.

"Fundamentally complicated" does sound like an oxymoron to me, but I can't explain why. Could you? What is the contradiction?

Comment author: anonym 17 November 2009 07:31:33AM 6 points [-]

Isn't the contradiction that "complicated" means having more parts/causes/aspects than are readily comprehensible, and "fundamental" things never are complicated, because if they were, they could be broken down into more fundamental things that were less complicated? The fact that things invariably get simpler and more basic as we move closer to the foundational level is in tension with things getting more complicated as we move down.

Comment author: roland 16 November 2009 05:21:42PM *  4 points [-]

Boiling down rationality

Eliezer, if you only had 5 minutes to teach a human how to be rational, how would you do it? The answer has to be more or less self-contained so "read my posts on lw" is not valid. If you think that 5 minutes is not enough you may extend the time to a reasonable amount, but it should be doable in one day at maximum. Of course it would be nice if you actually performed the answer in the video. By perform I mean "Listen human, I will teach you to be rational now..."

EDIT: When I said perform I meant it as opposed to telling how to, so I would prefer Eliezer to actually teach rationality in 5 minutes instead of talking about how he would teach it.

Comment author: mikerpiker 16 November 2009 03:39:16AM 2 points [-]

It seems like, if I'm trying to make up my mind about philosophical questions (like whether moral realism is true, or whether free will is an illusion) I should try to find out what professional philosophers think the answers to these questions are.

If I found out that 80% of professional philosophers who think about metaethical questions think that moral realism is true, and I happen to be an anti-realist, then I should be far less certain of my belief that anti-realism is true.

But surveys like this aren't done in philosophy (I don't think). Do you think the results of such surveys (if there were any) should be important to someone trying to decide whether or not to believe in free will, or to be a moral realist, or whatever?

Comment author: Jack 16 November 2009 10:18:16PM *  4 points [-]

My answer to this depends on what you mean by "professional philosophers who think about". You have to be aware that subfields have selection biases. For example, the percentage of philosophers of religion who think God exists is much, much larger than the percentage of professional philosophers generally who think God exists. This is because if God does not exist, philosophy of religion ceases to be a productive area of research. Conversely, if you have an irrational attachment to the idea that God exists, then you are likely to spend an inordinate amount of time trying to prove that one exists. This issue is particularly bad with regard to religion, but it is in some sense generalizable to all or most other subfields. Philosophy is also a competitive enterprise, and there are various incentives to publish novel arguments. This means that in any given subfield, views that are unpopular among philosophers generally will be overrepresented.

So the circle you draw around "professional philosophers who think about [subfield x] questions" needs to be small enough to target experts but large enough that you don't limit your survey to those philosophers who are very likely to hold a view you are surveying in virtue of the area they work in. I think the right circle is something like 'professional philosophers who are equipped to teach an advanced undergraduate course in the subject'.

Edit: The free will question will depend on what you want out of a conception of free will. But the understanding of free will that most lay people have is totally impossible.

Comment author: mikerpiker 17 November 2009 04:30:52AM 0 points [-]

Jack:

I think I agree with everything you say in response to my original post.

It seems like you basically agree with me that facts about the opinions of philosophers who work in some area (where this group is suitably defined to avoid the difficulties you point out) should be important to us if we are trying to figure out what to believe in that area.

Why aren't studies being carried out to find out what these facts are? Do you think most philosophers would not agree that they are important?

Comment author: Jack 23 November 2009 10:02:22PM 1 point [-]

Yeah, I've felt for a while now that philosophers should do a better job explaining and popularizing the conclusions they come to. I've never been able to find literature reviews or meta-analyses, either. Part of the problem is definitely that a lot of philosophers are skeptical that they have anything true or interesting to say to non-philosophers. Also, despite some basic agreement about what is definitely wrong, philosophers have, at least on a lot of issues, so many different views that it wouldn't be very educational to poll them. Also, a lot of philosophy involves conceptual analysis, and since it is really hard to poll a philosophical issue without resorting to such concepts, you might have a lot of respondents refusing to accept the premises of the question.

But none of these arguments are very good. If I ever make it in the field I'll put one together.

Comment author: Alicorn 16 November 2009 10:30:24PM 2 points [-]

Seconded. There are a lot of libertarians-about-free-will who study free will, but nobody I've talked to has ever heard of anyone changing their mind on the subject of free will (except inasmuch as learning new words to describe one's beliefs counts), so this has to be almost entirely due to more libertarians finding free will an interesting thing to study.

Comment author: Blueberry 16 November 2009 10:49:18PM *  2 points [-]

I've definitely changed my mind on free will. I used to be an incompatibilist with libertarian leanings. After reading Daniel Dennett's books, I changed my mind and became a compatibilist soft determinist.

Comment author: Jack 16 November 2009 10:54:44PM *  0 points [-]

Are you a professional philosopher, or were you one when you were an incompatibilist with libertarian leanings? I'd say the vast majority of those untrained in philosophy hold the view you held, and the most rational/intelligent of them would change their minds once confronted with a decent compatibilist argument.

Edit: I'm being a little unfair. There are plenty of smart people who disagree with us.

Comment author: Blueberry 16 November 2009 11:05:22PM 1 point [-]

No, I wasn't, and I agree with you. Defending philosophical positions as a career creates a bias where you're less likely to change your mind (see Cialdini's work on congruence: e.g., POWs in communist brainwashing camps who wrote essays on why communism was good were more likely to support communism after release). But even so, professional philosophers do change their minds once in a while.

Comment author: Jack 16 November 2009 11:10:08PM 1 point [-]

But even so, professional philosophers do change their mind once in a while.

Absolutely! I tentatively hold the thesis that professional philosophers even make progress on understanding some issues. But there seem to be a couple positions that professional philosophers rarely sway from once they hold those positions and I think Alicorn is right that metaphysical libertarianism is one of these views.

Comment author: Jack 16 November 2009 10:48:31PM 2 points [-]

Free will libertarianism is also infected with religious philosophy. There are certainly some libertarians with secular reasons for their positions, but a lot of the support for this position comes from those whose religious world view requires radical free will; if they didn't believe in God, they wouldn't be libertarians. The same goes for a lot of substance dualists, frankly.

Comment author: MarkHHerman 15 November 2009 11:31:27PM 4 points [-]

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (a la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “<Eliezer> well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

Comment author: zero_call 15 November 2009 08:32:37AM *  0 points [-]

There seem to be two problems, or components, of the singularity program which are interchanged or conflated. First, there is the goal of producing a GAI, say on the order of human intelligence (e.g., similar to the Data character from Star Trek). Second, there is the goal, or belief, that a GAI will be strongly self-improving, to the extent that it reaches superhuman intelligence.

It is unclear to me that achieving the first goal means the second goal is also achievable, or of a similar difficulty level. For example, I am inclined to think that we humans constitute a sort of natural GAI, and yet, even if we fully understood the brain, it would not necessarily be clear how to optimize ourselves to superhuman intelligence levels. As a crude analogy: even if an expert car mechanic completely understands how a car works, it doesn't follow that he can build another car which is fundamentally superior.

Succinctly: Why should we expect a computerized GAI to have a higher-order self-improvement function than we humans do? (I trust you will not trivialize the issue by saying, for example, better memory & better speed = better intelligence.)

Comment author: [deleted] 15 November 2009 09:19:10AM 3 points [-]

Eliezer's belief, as I recall, is that human intelligence is a relatively small and arbitrary point in the "intelligence hierarchy", i.e. relative to minds at large, the smartest human is not much smarter than the dumbest. If an AI's intelligence stops increasing somewhere, why would it just happen to stop within the human range?

Comment author: whpearson 15 November 2009 11:33:20AM 0 points [-]

I'd expect it, iff we are copying human design without understanding it fully (this approach seems to have the biggest traction at the moment in work on full intelligence).

On the other hand, if we can say things like "The energy-usage density of the local universe is X, thus by Blah's law we should set the exploration/exploitation parameter to Y", then all bets are off. I don't have much hope for this style of reasoning at the moment, though.

There might be a law saying something like "a system can't develop a system more powerful than itself by anything other than chance". However, I've noticed that we don't really like working with formalisations of power, and tend to stick to folk-psychology notions of intelligence, with which you can do anything you want since they are not well defined. So no progress is being made.

Comment author: timtyler 15 November 2009 11:49:31AM 0 points [-]

Humans built Google. They did it by clubbing together. This seems like a powerful approach.

Comment author: whpearson 15 November 2009 12:02:48PM 0 points [-]

I meant powerful in the Eliezer sense of "ability to achieve its goals". All Google is, is a manifestation of the power of the humans that built it (and maintain it), and of the links that webmasters have put up and crafted to be Google-friendly, as it has no goals of its own.

Until we have built a common vocabulary (that cuts the world at its joints), most conversations will unfortunately be pointless.

Comment author: JamesAndrix 18 November 2009 08:03:22AM 3 points [-]

All google is, is a manifestation of the power of the humans that built it

No. If you take that approach then you'll just be saying that about every GAI, no matter how powerful. Google's engineers cannot solve the problems that Google solves. They can't even hold the problem (which includes links between millions of websites) in their heads. They CAN hold in their heads the problem of creating something that can solve the problem. Within Google's domain, humans aren't even players.

Even allowing a human the time and notepaper and procedural knowledge to do what google does, that's not a human solving the same problem, that's a human implementing the abstract computation that is google.

Humans can and do create optimization processes that are more powerful than themselves.

This may seem more harsh than I intend: I see your proposed law as just a privileged hypothesis, without any evidence, defending the notion that humans must somehow be special.

Comment author: timtyler 15 November 2009 12:27:05PM 0 points [-]

To spell things out - a problem with the idea of a law saying that "a system can't develop a system more powerful than itself by anything other than chance" is that it is pretty easy to do that.

Two humans can (fairly simply) make more humans, and then large groups of humans can have considerably more power than the original pair of humans did.

For example, no human can remember the whole internet and answer questions about its content - but a bunch of humans and their artefacts can do just that.

This is an example of synergy - the power of collective intelligence.

Comment author: whpearson 15 November 2009 02:30:42PM 1 point [-]

I can solve more problems when I have a hammer than when I don't; I can be synergistic with a hammer. You don't need other people for synergy. This just means that power depends upon the environment.

Let's talk about the power P of a system S being defined as a function P(S, E), with E being the environment. So when I talk about one system being more powerful than another, I mean that P(S1, E) > P(S2, E) for all E, or at least for most E, or on average. It is not sufficient to show a single case.

I don't think that organizations of humans have a coherent goal structure, so they don't have a coherent power.
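A toy sketch of the comparison I have in mind (the systems, environments, and scoring rule below are invented purely for illustration; a "system" is just the set of tasks it can solve):

```python
def power(system, environment):
    """Score how well `system` achieves goals in `environment`:
    here, the fraction of the environment's tasks it can solve."""
    return sum(1 for task in environment if task in system) / len(environment)

def strictly_more_powerful(s1, s2, environments):
    """S1 counts as more powerful than S2 only if it does at least as
    well in every environment, and better in at least one; a single
    favourable case is not sufficient."""
    scores = [(power(s1, e), power(s2, e)) for e in environments]
    return all(a >= b for a, b in scores) and any(a > b for a, b in scores)

# Toy data: a "system" is represented by the set of tasks it can solve.
human = {"plan", "talk", "hammer_nail"}
human_with_hammer = {"plan", "talk", "hammer_nail", "drive_nail_fast"}
envs = [{"plan", "talk"}, {"hammer_nail", "drive_nail_fast"}]

print(strictly_more_powerful(human_with_hammer, human, envs))  # → True
```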

Comment author: timtyler 15 November 2009 02:57:11PM *  1 point [-]

Why don't you think organizations have "coherent goals"? They certainly claim to have them. For instance, Google claims it wants "to organize the world's information and make it universally accessible and useful". Its actions seem to be roughly consistent with that. What is the problem?

Comment author: whpearson 15 November 2009 03:44:09PM *  1 point [-]

They really don't maximise that value... you'd get closer to the mark if you added in words like profit and executive pay.

But the main reason I don't think they have a coherent goal is that they may evaporate tomorrow. If the goal-seeking agents that make them up decide there is somewhere better to fulfill their goals, they can just up and leave, and the goal does not get fulfilled. Organisations have to balance the varied goals of the agents inside them (which constantly change as they get new people) with the business goals, if they are to survive. Sometimes no one making up an organisation wants it to survive.

Comment author: timtyler 15 November 2009 04:04:43PM 1 point [-]

Organisms die as well as organisations. That doesn't mean they are not goal-directed.

Nor do organisms act entirely harmoniously. There are millions of bacterial symbionts inside every animal, who have their own reproductive ends. Their bodies are infected with pathogens, which make them sneeze, cough and scratch. Also, animals are uneasy coalitions of genes - some of which (e.g. segregation distorters) want to do other things besides helping the organism reproduce. So, if you rule out companies on those grounds, organisms seem unlikely to qualify either.

In practice, both organisms and companies are harmonious enough for goal-seeking models to work as reasonably good predictors of their behaviour.

Comment author: anonym 14 November 2009 09:49:49PM 3 points [-]

If you conceptualized the high-level tasks you must attend to in order to achieve (1) FAI-understanding and (2) FAI-realization in terms of a priority queue, what would be the current top few items in each queue (with numeric priorities on some arbitrary scale)?

Comment author: anonym 14 November 2009 09:44:56PM 18 points [-]

What progress have you made on FAI in the last five years and in the last year?

Comment author: anon 14 November 2009 03:45:50PM 19 points [-]

For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?

Humans have built-in adaptions for lie detection, but betting a decision like this on the chance of my sense motive roll beating the bluff roll of a person with both higher INT and CHA than myself seems quite risky.

Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AIBox problem: You're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.

I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.

If the answer is "No. You'll have to do with the base probability of any random human being a sociopath.", that might be good enough. Still, I'd like to know if I'm missing specific evidence that would push the probability for "SIAI is capital-E Evil" lower than that.

Posted pseudo-anonymously because I'm a coward.

Comment author: anonymousss 18 April 2012 08:20:13AM *  1 point [-]

I looked into the issue from a statistical point of view. I would have to go with a much higher than baseline probability of them being sociopaths, on the basis of Bayesian reasoning: starting with the baseline probability (about 1%) as a prior and then updating on criteria that sociopaths cannot easily fake (e.g. previously inventing something that works).

Ultimately, the easy way to spot a sociopath is to look for a massive imbalance of the observable signals towards those that sociopaths can easily fake. You don't need to be smarter than a sociopath to identify the sociopath. The spam filter is pretty good at filtering out advance-fee fraud and letting business correspondence through.

You just need to act like a statistical prediction rule on a set of criteria, without allowing for verbal excuses of any kind, no matter how logical they sound. For instance, the leaders of genuine research institutions are not HS dropouts; the leaders of cults are. You can find the ratio and build an evidential Bayesian rule, with which you can use 'is an HS dropout' as evidence to adjust your probabilities.

The beauty of this method is that it is too expensive for sociopaths to fake honest signals - such as, for example, having spent years making and perfecting some invention that has improved people's lives; you can't send this signal without doing an immense amount of work. So even if they are aware of this method, there is literally nothing they can do about it. Nor do they want to do anything about it, as there are enough people who pay no attention to the ratio of hard-to-fake signals to fakeable signals (gullible people), whom sociopaths can target instead, for a better reward-to-work ratio.

Ultimately, it boils down to the fact that a genuine world-saving leader is rather unlikely to have never before invented anything that demonstrably benefited mankind, while a sociopath is pretty likely (close to 1) to have never done so. You update on this, ignore verbal excuses, and you have yourself a (nearly) non-exploitable decision mechanism.
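A minimal sketch of the update described above, taking the ~1% baseline as the prior; the likelihood numbers are made up purely for illustration:

```python
def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) by Bayes' rule, where H is the hypothesis
    "this person is a sociopath" and E is the observed evidence."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

p = 0.01  # baseline prior that a given person is a sociopath

# Hard-to-fake evidence: "has previously built something that demonstrably
# works". Assumed rare among sociopaths, common among genuine researchers.
p = update(p, p_evidence_given_h=0.05, p_evidence_given_not_h=0.60)

print(round(p, 4))  # → 0.0008, well below the baseline
```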

Comment author: timtyler 15 November 2009 10:57:16PM 0 points [-]

What would be the best way of producing such evidence? Presumably organisational transparency - though that could, in principle, be faked.

I'm not sure they will go for that - citing the same reasons previously given for not planning to open-source everything.

Comment author: Eliezer_Yudkowsky 15 November 2009 10:22:58PM 11 points [-]

I guess my main answers would be, in order:

1) You'll have to do with the base probability of a highly intelligent human being a sociopath.

2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.

3) All sorts of nasty things we could be doing and could probably get away with doing if we had exclusively sociopath core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.

4) Why are you asking me that? Shouldn't you be asking, like, anyone else?

Comment author: anon 16 November 2009 09:09:24PM *  0 points [-]

Re. 4, not for the way I asked the question. Obviously asking for a probability, or for any empirical evidence I would have to take your word on, would have been silly. But there might have been excellent public evidence against the Evil hypothesis I just wasn't aware of (I couldn't think of any likely candidates, but that might have been a failure of my imagination); in that case, you would likely be aware of such evidence, and would have a significant incentive to present it. It was a long shot.

Comment author: ABranco 14 November 2009 01:55:41PM 14 points [-]

Do you feel lonely often? How bad (or important) is it?

(Above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?

Comment author: imaxwell 14 November 2009 01:31:25AM 3 points [-]

Previously, in Ethical Injunctions and related posts, you said that, for example,

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.

It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?

Comment author: wedrifid 14 November 2009 02:05:49AM 3 points [-]

how smart would a mind have to be in order to safely break ethical injunctions?

Any given mind could create ethical injunctions of a suitable complexity that are useful to it given its own technical limitations.

Comment author: Larks 13 November 2009 10:38:17PM 3 points [-]

What do you estimate the utility of Less Wrong to be?

Comment author: Eliezer_Yudkowsky 13 November 2009 10:51:10PM *  11 points [-]

Roughly 4,250 expected utilons.

Comment author: Unnamed 14 November 2009 02:24:05AM 8 points [-]

Could you please convert to dust specks?

Comment author: timtyler 13 November 2009 11:16:32PM *  4 points [-]

Well yes: the question was a bit ambiguous.

Maybe one should adopt a universal standard yardstick for this kind of thing, though - so such questions can be answered meaningfully. For that we need something that everyone (or practically everyone) values. I figure maybe the love of a cute kitten could be used as a benchmark. Better yardstick proposals would be welcome, though.

Comment author: DanArmak 14 November 2009 12:23:05AM 2 points [-]

Way to Other-ize dog people.

Comment author: Larks 13 November 2009 11:56:49PM 5 points [-]

If only there existed some medium of easy comparison, such that we could easily compare the values placed on common goods and services...

Comment author: timtyler 14 November 2009 12:01:04AM 1 point [-]

Exactly: the elephant in my post ;-)

Comment author: Larks 14 November 2009 12:17:32AM 2 points [-]

I don't think elephants are a very practical yardstick. For a start, they're of varying size. I mean, apparently they can fit in posts now!

Comment author: Alicorn 13 November 2009 11:24:56PM 2 points [-]

It'd have to be a funny yardstick. Almost nothing we value scales linearly. I would start getting tired of kittens after about 4,250 of them had gone by.

Comment author: timtyler 13 November 2009 11:59:19PM 1 point [-]

Velocity runs into diminishing returns too near the speed of light - but it is still useful to try and measure it - and a yardstick can help with that.

Comment author: Furcas 13 November 2009 10:54:34PM *  0 points [-]

That's all?

:-(

Comment author: MichaelHoward 13 November 2009 11:18:50PM 0 points [-]

All? All? That buys you a few hundred tech shares, or The Ultimate answer to Life, the Universe and Everything Universe Takeovers and a Half! :-)

Comment author: Tyrrell_McAllister 13 November 2009 11:00:00PM 2 points [-]

Keep in mind that that's only up to an affine transformation ;).

Comment author: MichaelGR 13 November 2009 09:42:50PM *  7 points [-]

What recent* developments in narrow AI do you find most important/interesting and why?

*Let's say post-Stanley

Comment author: JulianMorrison 13 November 2009 04:24:48PM 17 points [-]

How do you characterize the success of your attempt to create rationalists?

Comment author: Jach 13 November 2009 08:14:46AM *  4 points [-]

Within the next 20 years or so, would you consider having a child and raising him/her to be your successor? Would you adopt? Have you donated sperm?

Edit: the first two questions are conditional on your not being satisfied with the progress on FAI.

Comment author: retired_phlebotomist 13 November 2009 07:11:24AM 3 points [-]

What does the fact that when you were celibate you espoused celibacy say about your rationality?

Comment author: retired_phlebotomist 13 November 2009 07:10:04AM 16 points [-]

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

Comment author: Peter_de_Blanc 16 November 2009 04:30:50AM 0 points [-]

About what? Everything?

Comment author: gwern 16 November 2009 04:59:06AM 3 points [-]

Given the context of Eliezer's life-mission and the general agreement of Robin & Eliezer: FAI, AI's timing, and its general character.

Comment author: retired_phlebotomist 17 November 2009 07:22:17AM 1 point [-]

Right. Robin doesn't buy the "AI go foom" model or that formulating and instilling a foolproof morality/utility function will be necessary to save humanity.

I do miss the interplay between the two at OB.

Comment author: anonym 13 November 2009 06:56:43AM *  5 points [-]

Please estimate your probability of dying in the next year (5 years). Assume your estimate is perfectly accurate. What additional probability of dying in the next year (5 years) would you willingly accept for a guaranteed and safe increase of one (two, three) standard deviation(s) in terms of intelligence?

Comment author: wedrifid 14 November 2009 12:07:42AM 1 point [-]

Assume your estimate is perfectly accurate.

Does this matter?

Comment author: anonym 14 November 2009 06:49:15PM 1 point [-]

I'm not sure if it matters. I was imagining potentially different answers to the 2nd part of the question based on whether one includes additional adjustments and compensating factors to allow for the original estimate being inaccurate, and was trying to rule out those adjustments in order to get at the core issue.

Comment author: anonym 13 November 2009 06:38:00AM 9 points [-]

In terms of your intellectual growth, what were your biggest mistakes or most harmful habits, and what, if anything, would you do differently if you had the chance?

Comment deleted 13 November 2009 03:10:33AM [-]
Comment author: taa21 14 November 2009 07:27:25PM 0 points [-]

Is this a joke?

Comment author: roland 12 November 2009 09:30:50PM *  5 points [-]

Akrasia

Eliezer, you mentioned suffering from writer's molasses and your solution was to write daily on ob/lw. I consider this a clever and successful overcoming of akrasia. What other success stories from your life in relation to akrasia could you share?

Comment author: roland 12 November 2009 09:24:45PM *  22 points [-]

Autodidacticism

Eliezer, first congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and kept going for more, one field of knowledge opening pathways to the next one, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)

Comment author: Blueberry 12 November 2009 07:48:31PM 24 points [-]

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

Comment author: anonym 15 November 2009 10:15:46PM 0 points [-]

If this is addressed, I'd like to know why the change of mind from the stated position. Was there a flaw in the original argument, did you get new evidence that caused updating of probabilities that changed the original conclusion, etc.?

Comment author: patrissimo 12 November 2009 06:58:14PM 5 points [-]

Do you think that just explaining biases to people helps them substantially overcome those biases, or does it take practice, testing, and calibration to genuinely improve one's rationality?

Comment author: roland 19 November 2009 01:09:22AM *  2 points [-]

I can partially answer this. In the book "The Logic of Failure", Dietrich Dorner tested humans on complex systems they had to manage. It turned out that a group given specific instructions on how to deal with complex systems did not perform better than the control group.

EDIT: Dorner's explanation was that just knowing was not enough; individuals had to actually practice dealing with the system to improve. It's a skillset.

Comment author: patrissimo 12 November 2009 06:57:22PM 8 points [-]

What single source of material (book, website, training course) do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Comment author: patrissimo 12 November 2009 06:57:03PM 10 points [-]

What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Comment author: roland 19 November 2009 01:04:41AM 0 points [-]

I had a similar question, on boiling down rationality:

http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/19hc

Comment author: Morendil 12 November 2009 04:59:44PM 2 points [-]

Well, Eliezer's reply to this comment prompts a follow-up question:

In "Free to optimize", you alluded to "the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together". Can you say more about what you imagine such rules might be?

Comment author: Kutta 13 November 2009 01:34:16AM *  2 points [-]

I think that there isn't any point in attempting to come up with anything more exact than the general musings of Fun Theory. It really takes a superintelligence and knowledge of CEV to conceive such rules (and it's not even guaranteed that there'd be anything that resembles "rules" per se).

Comment author: MichaelHoward 12 November 2009 02:00:21PM *  8 points [-]

Of the questions you decide not to answer, which is most likely to turn out to be a vital question you should have publicly confronted?

Not the question you don't want to answer but would probably have bitten the bullet anyway. The question you would have avoided completely if it weren't for my question.

[Edit - "If I thought they were vital, I wouldn't avoid" would miss the point, as not wanting to consider something suppresses counterarguments to dismissing it. Take a step back - which question is most likely to be giving you this reaction?]

Comment author: wedrifid 14 November 2009 12:11:08AM *  1 point [-]

If this question has an obvious answer then I expect 'this one' would come in as a close second!

Comment author: MichaelHoward 14 November 2009 09:29:24AM 0 points [-]

My question thanks you for complimenting its probable vitality, but...

The question you would have avoided completely if it weren't for my question.

...I'm hoping a good metaphiliac counterfactualist wouldn't have actually avoided a non-existent self-referential question.

Comment author: UnholySmoke 12 November 2009 10:54:34AM 0 points [-]

Favourite album post-1960?

Comment author: anonym 15 November 2009 10:18:25PM 2 points [-]

More generally, do you listen to music much, and if so, what sorts of music, under what circumstances, and who/what are your favorites?

Comment author: [deleted] 12 November 2009 06:45:53AM 0 points [-]

What if the friendly AI finds that our extrapolated volition is coherent and contains the value of 'self-determination', and concludes that it cannot meddle too much in our affairs? "Well, humankind, it looks like you don't want to have your destiny decided by a machine. My hands are tied. You need to save yourselves."

Comment author: Eliezer_Yudkowsky 12 November 2009 07:37:41AM 3 points [-]
Comment deleted 12 November 2009 05:34:36AM [-]
Comment author: timtyler 14 November 2009 12:40:17PM *  -1 points [-]

It says: "I argue that [the author's material] leads to the collapse of the Gödelian argument advanced by J.R.Lucas, Roger Penrose and others."

Well, duh! Fail - for taking Penrose's nonsense seriously.

Comment author: wuwei 12 November 2009 04:24:48AM *  3 points [-]

Do you think that morality or rationality recommends placing no intrinsic weight or relevance on either a) backwards-looking considerations (e.g. having made a promise) as opposed to future consequences, or b) essentially indexical considerations (e.g. that I would be doing something wrong)?

Comment author: cabalamat 12 November 2009 03:59:43AM *  13 points [-]

What practical policies could politicians enact that would increase overall utility? When I say "practical", I'm specifically ruling out policies that would increase utility but which would be unpopular, since no democratic polity would implement them.

(The background to this question is that I stand a reasonable chance of being elected to the Scottish Parliament in 19 months time).

Comment author: Morendil 12 November 2009 10:14:22AM 4 points [-]

Ruling out unpopular measures is tantamount to giving up on your job as a politician; the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

Much as rationality in an individual consists of "shutting up and multiplying", i.e. computing which course of action, including those we have no taste for, yields the highest expected utility, politics - the useful part of it - consists of making necessary policies palatable to the public. The rest is demagoguery.

Comment author: cabalamat 13 November 2009 03:39:14AM *  3 points [-]

Ruling out unpopular measures is tantamount to giving up on your job as a politician

On the contrary, NOT ruling out unpopular measures is tantamount to giving up your job as a politician, because if the measure is unpopular enough, (1) you won't get the measure passed in the first place, and (2) you won't get re-elected.

the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

You're saying it's lazy to require that policies be practical. I say that on the contrary it's lazy not to require them to be practical. It's easy to come up with ideas that're a good thing but which can't be practically realised, but it takes more effort to come up with ideas that're a good thing and which can be practically realised. I co-founded Pirate Party UK precisely because I think it's a practical way of getting the state to apply sensible laws to the internet, instead of just going ahead with whatever freedom-destroying nonsense the entertainment industry is coming up with this week to prevent "piracy".

computing which course of action including those we have no taste for yields the highest expected utility

Courses of action that can't be implemented yield zero or negative utility.

The rest is demagoguery.

There's an element of truth in that, but I'd put it differently: it's the difference between leadership and followership. Politicians in democracies frequently engage in the latter.

Comment author: Thomas 12 November 2009 09:25:18AM 4 points [-]

Free trade. As a politician, you can't do more than that.

Comment author: Matt_Simpson 12 November 2009 04:50:10PM 2 points [-]

And open immigration policies

Comment author: cabalamat 13 November 2009 03:12:35AM 1 point [-]

Unlimited immigration clearly fails the practicality test, regardless of whether it's a good thing or not.

Comment author: Matt_Simpson 13 November 2009 04:47:51AM 0 points [-]

open != unlimited. But that's a margin that I would push pretty hard, relative to others.

Comment author: cabalamat 13 November 2009 05:42:28AM 0 points [-]

OK I misinterpreted you. What do you mean when you say "open"?

Comment author: Matt_Simpson 13 November 2009 06:12:46AM 1 point [-]

I should have said more open.

Comment author: CronoDAS 12 November 2009 07:22:45AM 0 points [-]

I'd guess that legalizing gay marriage would be pretty low-hanging fruit, but I don't know how politically possible it is.

Comment author: cabalamat 13 November 2009 03:10:55AM 2 points [-]

Gay marriage is already legal in Scotland, albeit under the name "civil partnership".

Comment author: ciphergoth 13 November 2009 09:03:23AM 0 points [-]

The whole of the UK has civil partnership, not just Scotland. It's also illegal to discriminate on gender attraction in employment and in the provision of goods and services.

Comment author: Jess_Riedel 13 November 2009 12:56:36AM *  4 points [-]

It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.

Comment author: CronoDAS 13 November 2009 09:02:46PM *  2 points [-]

Well, I mean "low-hanging fruit" in that it doesn't really cost any money to implement. Symbolism is cheap; providing material benefits is more expensive, especially in developed countries.

I don't know much about the political situation in Scotland; I know about a few miscellaneous stupidities in the U.S. federal government that I'd like to get rid of (abstinence-only sex education, "alternative" medicine research) but I suspect that Scotland and the rest of the U.K. is stupid in different ways than the U.S. is.

Comment author: ajayjetti 12 November 2009 03:23:32AM 3 points [-]

Are you a meat-eater?

Comment author: Alicorn 12 November 2009 03:32:50AM 2 points [-]
Comment author: jimrandomh 12 November 2009 02:44:20AM 16 points [-]

What is the probability that this is the ultimate base layer of reality?

Comment author: MichaelHoward 12 November 2009 11:24:19PM 0 points [-]

And then... Really? What would be a fair estimate if you were someone not especially likely to be simulated, living in a not particularly critical time, and there was only, say, a trillionth as much potential computronium lying around?

Comment author: MichaelBishop 12 November 2009 04:13:59PM 0 points [-]

Could you explain more about what this means?

Comment author: jimmy 12 November 2009 07:13:05PM 2 points [-]

I think he means "as opposed to living in a simulation (possibly in another simulation, and so on)"

This seems to be one of those questions that seem like they should have answers, but actually don't.

If there's at least one copy of you in "a simulation" and at least one in "base level reality", then you're going to run into the same problems as sleeping beauty/absent minded driver/etc when you deal with 'indexical probabilities'.

There are decision theory answers, but the ones that work don't mention indexical probabilities. This does make the situation a bit harder than, say, the sleeping beauty problem, since you have to figure out how to weight your utility function over multiple universes.

Comment author: MichaelGR 12 November 2009 01:28:03AM *  15 points [-]

Are the book(s) based on your series of posts on OB/LW still happening? Any details on their progress (title? release date? e-book or real book? approached publishers yet? only technical books, or a popular book too?), or on why they've been put on hold?

http://lesswrong.com/lw/jf/why_im_blooking/

Comment author: Eliezer_Yudkowsky 12 November 2009 05:04:20AM 8 points [-]

Yes, that is my current project.

Comment author: SilasBarta 12 November 2009 12:06:41AM 7 points [-]

Okay: Goedel, Escher, Bach. You like it. Big-time.

But why? Specifically, what insights should I have assimilated from reading it that are vital for AI and rationalist arts? I personally feel I learned more from Truly Part of You than all of GEB, though the latter might have offered a little (unproductive) entertainment.

Comment author: Kutta 13 November 2009 01:18:37AM *  4 points [-]

Why? I think maybe because GEB integrates form, style and thematic content into a seamless whole in a unique and pretty much artistic way, while still being essentially non-fiction. And GEB is probably second to nothing at conveying the notion of an intertwined reality. It also provides a very intelligent and intuitive introduction to a whole lot of different areas. Sometimes you can't do all the work of conveying extremely complex ideas in a succinct essay; just look at the epic amount of writing Eliezer had to do merely to establish a bare framework for FAI discussion. (Besides, from the fact that Eliezer likes GEB it does not follow that GEB should be recommended reading for AI or the rationalist arts. It just means that Eliezer thinks it's a good book.)

Comment author: SilasBarta 13 November 2009 03:18:30AM 0 points [-]

That doesn't answer my question. Again, what rationalist/AI mistake would I not make as a result of reading GEB that could not be achieved with something shorter?

Comment author: Kutta 13 November 2009 11:39:58AM *  0 points [-]

As I said, there is not necessarily any kind of rationalist/AI content in GEB directly relevant to us. It could well just be a good book.

Comment author: SilasBarta 16 November 2009 07:23:20PM 0 points [-]

But would Eliezer view it as that durn good (i.e. it being a tragedy that people die without reading it) if it were just entertaining fluff with no insights to AI and rationality?

Comment author: Yorick_Newsome 29 November 2009 11:48:46AM *  2 points [-]

I'm not Eliezer, and perhaps not being an AGI researcher means that my answer is irrelevant, but I think that things can have a deep aesthetic value or meaning from which one could gain insights into things more important than AI or rationality. One of these things may be the 'something to protect' that Eliezer wrote about. Others may be intrinsic values to discover, to give your rationality purpose. If I could keep only one of the two, a copy of the Gospels of Buddha or a copy of MITECS, I would keep the Gospels of Buddha, because it reminds me of the importance of terminal values like compassion. When I read GEB, the ideas of interconnectedness, of patterns, and of meaning all left me with a clearer thought process than did reading Eliezer's short paper on Coherent Extrapolated Volition, which was enjoyable but just didn't seem to resonate in the same way. Calling these things 'entertaining fluff' may be losing sight of Eliezer's 11th virtue: "The Art must have a purpose other than itself, or it collapses into infinite recursion."
That is all, of course, my humble opinion. Maybe having everyone read about and understand the dangers of black swans and unfriendly AI would be more productive than having them read about and understand the values of compassion and altruism; for if people do not understand the former, there may be no world left for the latter.

Comment author: Steve_Rayhawk 11 November 2009 11:46:02PM *  4 points [-]

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be.

This disadvantages questions which are posted late (to a greater extent than would give people an optimal incentive to post questions early). (It also disadvantages questions which start with a low number of upvotes by historical accident and then are displayed low on the page, and are not viewed as much by users who might upvote them.)

It's not your fault; I just wish the LW software had a statistical model which explained observed votes and replies in terms of a latent "comment quality level", because of situations like this, where it could matter if a worse comment got a high rating while a better comment got a low one. (I also wish forums with comment ratings used ideas related to value of information, optimal sequential preference elicitation, and/or n-armed bandit problems to decide when to show users comments whose subjective latent quality has a low marginal mean but a high marginal variance, in case the "true" quality of a comment is high, because of the possibility that a user will rate the comment highly and let the forum software know that it should show the comment to other users.)
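(A minimal sketch of the bandit-style idea above, assuming each comment's latent quality is modeled as a Beta posterior over its upvote probability and ranked by Thompson sampling; the function name and vote counts are illustrative, not anything LW actually implements:)

```python
import random

def thompson_rank(comments):
    """Rank comments by sampling from each one's Beta posterior.

    `comments` is a list of (upvotes, downvotes) pairs. Sampling a score
    from the posterior, rather than sorting by the posterior mean,
    occasionally surfaces comments with few votes (high variance), so
    the forum gathers information about them instead of burying them.
    """
    scores = [random.betavariate(up + 1, down + 1)  # Beta(1,1) prior
              for up, down in comments]
    # Return comment indices sorted by sampled score, best first.
    return sorted(range(len(comments)), key=lambda i: scores[i], reverse=True)

# A brand-new comment (0, 0) can outrank an established mediocre one
# (5, 5) on some page loads, which is exactly the desired exploration.
order = thompson_rank([(40, 2), (5, 5), (0, 0)])
```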

Comment author: JamesAndrix 12 November 2009 06:05:21PM 5 points [-]

Reddit has implemented a 'best' view which tries to compensate for this kind of thing: http://blog.reddit.com/2009/10/reddits-new-comment-sorting-system.html

LW is based on reddit's source code, so it should be relatively easy to integrate.
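(For reference, the "best" sort described in that blog post ranks comments by the lower bound of a Wilson score confidence interval on the upvote fraction, which penalizes comments with few total votes; a sketch, with z=1.96 chosen for illustration rather than matching Reddit's exact constant:)

```python
from math import sqrt

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the true upvote
    fraction. z=1.96 corresponds to ~95% confidence; Reddit's code uses
    a smaller z, but the ordering principle is the same.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return ((phat + z * z / (2 * n)
             - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))

# 60/100 positive ranks above 2/2 positive: lots of evidence of being
# "pretty good" beats a perfect score on a tiny sample.
```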

Comment author: Douglas_Knight 12 November 2009 12:19:07AM 1 point [-]

There's probably significant value in the low-hanging fruit of just tweaking the parameters in the current algorithm (which are currently set for the much larger reddit!). Don't let the perfect be the enemy of the good.

Comment author: RobinHanson 11 November 2009 11:45:10PM 22 points [-]

Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?

Comment author: MichaelVassar 13 November 2009 05:08:13AM 9 points [-]

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who agree with Robin's part (new future growth mode, get frozen...) agree with ours as well.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Comment author: Zack_M_Davis 24 November 2009 12:05:18AM 1 point [-]

an ideological attraction to libertarianism is BY FAR the main reason for disagreement [with singleton strategies/hypotheses]. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Any practical advice on how to overcome this failure mode, if and only if it is in fact a failure mode?

Comment author: timtyler 14 November 2009 12:35:45AM 4 points [-]

Which claims? The SIAI collectively seems to think some pretty strange things to me. Many are to do with the scale of the risk facing the world.

Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding.

We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.

Comment author: MichaelVassar 15 November 2009 04:39:09PM 7 points [-]

You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding.

Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?

Comment author: timtyler 15 November 2009 04:55:09PM *  0 points [-]

Some DOOM peddlers manage to get by. Probably most of them are currently in Hollywood, the finance world, or ecology. Machine intelligence is only barely on the radar at the moment - but that doesn't mean it will stay that way.

I don't necessarily mean to suggest that these people are all motivated by money. Some of them may really want to SAVE THE WORLD. However, that usually means spreading the word - and convincing others that the DOOM is real and imminent - since the world must first be at risk in order for there to be SALVATION.

Look at Wayne Bent (aka Michael Travesser), for example:

"The End of The World Cult Pt.1"

The END OF THE WORLD - but it seems to have more to do with sex than money.

Comment author: Eliezer_Yudkowsky 12 November 2009 12:28:51AM 8 points [-]

Who are we talking about besides you?

Comment author: RobinHanson 12 November 2009 02:30:07AM 2 points [-]

I'd consider important overlapping academic fields to be AI and long term economic growth; I base my claim about academic expert opinion on my informal sampling of such folks. I would of course welcome a more formal sampling.

Comment author: Eliezer_Yudkowsky 12 November 2009 04:59:44AM 9 points [-]

Who's considered my main arguments besides you?

Comment author: RobinHanson 12 November 2009 01:27:50PM 2 points [-]

I'm not comfortable publicly naming names based on informal conversations. These folks vary of course in how much of the details of your arguments they understand, and of course you could always set your bar high enough to get any particular number of folks who have understood "enough."

Comment author: Eliezer_Yudkowsky 12 November 2009 02:46:53PM 4 points [-]

Okay. I don't know any academic besides you who's even tried to consider the arguments. And Nick Bostrom et al., of course, but AFAIK Bostrom doesn't particularly disagree with me. I cannot refute what I have not encountered, I do set my bar high, and I have no particular reason to believe that any other academics are in the game. I could try to explain why you disagree with me and Bostrom doesn't.

Comment author: Eliezer_Yudkowsky 16 November 2009 01:38:57AM 4 points [-]

Actually, on further recollection, Steve Omohundro and Peter Cheeseman would probably count as academics who know the arguments. Mostly I've talked to them about FAI stuff, so I'm actually having trouble recalling whether they have any particular disagreement with me about hard takeoff.

I think that w/r/t Cheeseman, I had to talk to Cheeseman for a while before he started to appreciate the potential speed of a FOOM, as opposed to just the FOOM itself which he considered obvious. I think I tried to describe your position to Cheeseman and Cheeseman thought it was pretty implausible, but of course that could just be the fact that I was describing it from outside - that counts for nothing in my view until you talk to Cheeseman, otherwise he's not familiar enough with your arguments. (See, the part about setting the bar high works both ways - I can be just as fast to write off the fact of someone else's disagreement with you, if they're insufficiently familiar with your arguments.)

I'm not sure I can recall what Omohundro thinks - he might be intermediate between yourself and myself...? I'm not sure how much I've talked hard takeoff per se with Omohundro, but he's certainly in the game.

Comment author: MichaelVassar 16 November 2009 02:57:22AM 2 points [-]

I think Steve Omohundro disagrees about the degree to which takeoff is likely to be centralized, due to what I think are the libertarian impulses I mentioned earlier.

Comment author: RobinHanson 12 November 2009 06:36:25PM 4 points [-]

Surely some on the recent AAAI Presidential Panel on Long-Term AI Futures considered your arguments to at least some degree. You could discuss why these folks disagree with you.

Comment author: timtyler 14 November 2009 10:32:02PM *  2 points [-]

I have a theory about why there is disagreement with the AAAI panel:

The DOOM peddlers gather funding from hapless innocents - who hope to SAVE THE WORLD - while the academics see them as bringing their field into disrepute, by unjustifiably linking their field to existential risk, with their irresponsible scaremongering about THE END OF THE WORLD AS WE KNOW IT.

Naturally, the academics sense a threat to their funding - and so write papers to reassure the public that spending money on this stuff is Really Not As Bad As All That.

Comment author: Eliezer_Yudkowsky 12 November 2009 08:23:03PM 3 points [-]

Haven't particularly looked at that - I think some other SIAI people have. I expect they'd have told me if there was any analysis that counts as serious by our standards, or anything new by our standards.

If someone hasn't read my arguments specifically, then I feel very little need to explain why they might disagree with me. I find myself hardly inclined to suspect that they have reinvented the same arguments. I could talk about that, I suppose - "Why don't other people in your field invent the same arguments you do?"

Comment author: RobinHanson 12 November 2009 09:23:06PM *  16 points [-]

You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they are world class AI experts and disagree with your conclusions?

Comment author: Eliezer_Yudkowsky 12 November 2009 09:35:34PM *  5 points [-]

I'm sorry, but I don't really have a proper lesson plan laid out - although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.

If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn't matter if they'd done it on their own or by reading my stuff.

E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff... with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn't clear from the presentation.

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

Comment author: StefanPernar 12 November 2009 11:02:57AM *  1 point [-]

Me - if I qualify as an academic expert is another matter entirely of course.

Comment author: ChrisHibbert 14 November 2009 07:39:13PM 2 points [-]

Do you disagree with Eliezer substantively? If so, can you summarize how much of his arguments you've analyzed, and where you reach different conclusions?

Comment author: StefanPernar 15 November 2009 01:46:06AM 0 points [-]

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings although the style in which it is presented and collected here is not exactly conducive to that effort. Feel free to search for my blog for a detailed analysis and a summary of core similarities and differences in our premises and conclusions.

Comment author: AdeleneDawner 15 November 2009 02:00:21AM *  6 points [-]

Assuming I have the correct blog, these two are the only entries that mention Eliezer by name.

Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.

Comment author: timtyler 15 November 2009 10:36:34AM *  5 points [-]

Re: "Assumption A: Human (meta)morals are not universal/rational. Assumption B: Human (meta)morals are universal/rational.

Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were." (source: http://rationalmorality.info/?p=112)

I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals.

I had a look at some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly-articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position.

For example, consider this:

"A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents." http://rationalmorality.info/?p=8

Comment author: StefanPernar 16 November 2009 12:09:48PM *  1 point [-]

"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."

Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.

I realize that one will have to drill deep into my arguments to understand and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are being defended.

If you have a particular problem with any of the core assumptions and conclusions I prefer you voice them not as a blatant rejection of an out of context comment here or there but based on the fundamentals. Reading my blogs in sequence will certainly help although I understand that some may consider that an unreasonable amount of time investment for what seems like superficial nonsense on the surface.

Where is your argument against my points Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I only got an ad hominem and an argument from personal incredulity.

Comment author: Furcas 15 November 2009 02:16:27AM *  6 points [-]

From the second blog entry linked above:

Two fundamental assumptions:

A) Compassion is a universal value

B) It is a basic AI drive to avoid counterfeit utility

If A = true (as we have every reason to believe) and B = true (see Omohundro’s paper for details) then a transhuman AI would dismiss any utility function that contradicts A on the ground that it is recognized as counterfeit utility.

Heh.