
Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

Post author: MichaelGR 11 November 2009 03:00AM 16 points

As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

Suggestions

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

It's okay to attempt humor (but good luck, it's a tough crowd).

If a discussion breaks out about a question (f.ex. to ask for clarifications) and the original poster decides to modify the question, the top level comment should be updated with the modified question (make it easy to find your question, don't have the latest version buried in a long thread).

Update: Eliezer's video answers to 30 questions from this thread can be found here.

Comments (682)

Comment author: retired_phlebotomist 13 November 2009 07:10:04AM 16 points [-]

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

Comment author: MichaelGR 17 November 2009 01:54:16AM *  15 points [-]

Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?

Updating top level with expanded question:

I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?

So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched if it ends up taking a while (usual caveats; existential risks, etc).

It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc), while more dollars for the SIAI doesn't seem like it would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).

If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.

Comment author: AnnaSalamon 19 November 2009 06:57:55AM *  28 points [-]

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.

Regarding what SIAI could do with a marginal $1000, the one-sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allow us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:

Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):

  • “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ‘09 (ECAP))
  • “Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
  • “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ‘09 (APCAP))
  • “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
  • “Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting;
  • And several more at various stages of the writing process, including some journal papers.

The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don’t count Michael Vassar’s time; if you do count his time the Summit roughly broke even, but created significant increased interest among academics, among a number of potential donors, and among others who may take useful action in various ways; some good ideas were generated at the workshop, also.)

The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)

Miscellaneous additional examples:

  • The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
  • A decision theory research paper discussing the idea of “acausal trade” in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger ‘Utilitarian’)
  • Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
  • A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
  • Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.

(This activity is a change from past time-periods: SIAI added a bunch of new people and project-types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)

(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)

How efficiently can we turn a marginal $1000 into more rapid project-completion?

As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.

As to SIAI vs. SENS:

SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders of magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.
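
For readers who want to attempt the comparison Anna describes, here is a minimal sketch of the back-of-the-envelope arithmetic in Python. Every numeric input below is an illustrative assumption (not a figure from SIAI or SENS), so treat it as a template for your own estimate rather than a result:

```python
# Crude "expected current lives saved per dollar" comparison.
# All numeric inputs are illustrative assumptions, not endorsed figures.

def sens_lives_per_dollar(
    dollars_needed=100e6,              # assumed funding to advance the SENS timeline
    years_advanced=1.0,                # assumed years of acceleration that funding buys
    aging_deaths_per_year=30e6,        # rough global deaths per year from age-related causes
    p_no_existential_catastrophe=0.7,  # assumed chance civilization survives long enough to benefit
):
    """Expected current lives saved per marginal dollar donated to SENS."""
    lives_saved = years_advanced * aging_deaths_per_year * p_no_existential_catastrophe
    return lives_saved / dollars_needed

SIAI_LIVES_PER_DOLLAR = 8  # Anna's "very crude ballpark" from the comment above

print(f"SENS (illustrative inputs): {sens_lives_per_dollar():.2f} lives/dollar")
print(f"SIAI (quoted ballpark):     {SIAI_LIVES_PER_DOLLAR} lives/dollar")
```

Changing any one of the assumed inputs by an order of magnitude changes the conclusion, which is the main caveat to any ballpark of this kind.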

The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)

There’s a lot more to say on all of these points, but I’m trying to be brief -- if you want more info on a specific point, let me know which.

It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)

Comment author: Rain 23 March 2010 02:27:07AM 8 points [-]

You can donate to FHI too? Dang, now I'm conflicted.

Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.

Crisis averted by tiny obstacles.

Comment author: Kutta 03 December 2009 11:27:26PM *  7 points [-]

at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total failure mode and a total win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
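
A minimal sketch of the arithmetic behind this comparison, taking the 8-lives-per-dollar ballpark at face value and using an assumed cost per life saved for conventional charities (roughly $190 here, chosen so the numbers line up with the 1500x claim; GiveWell's actual estimates vary):

```python
# Reconstruction of the rough arithmetic in the comment above.
# Both inputs are assumptions: 8 lives/dollar is Anna's crude ballpark,
# and ~$190/life is an assumed figure for top death-averting charities.

siai_lives_per_dollar = 8
conventional_cost_per_life = 190          # assumed dollars per life saved

relative_effectiveness = siai_lives_per_dollar * conventional_cost_per_life
schindler_lives = 1200                    # roughly how many people Schindler saved
dollars_to_match_schindler = schindler_lives / siai_lives_per_dollar

print(f"Relative effectiveness: ~{relative_effectiveness:,.0f}x")                        # ~1,520x
print(f"Dollars to save as many lives as Schindler: ${dollars_to_match_schindler:.0f}")  # $150
```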

Comment author: Kaj_Sotala 20 November 2009 10:05:06AM 14 points [-]

Please post a copy of this comment as a top-level post on the SIAI blog.

Comment author: Wei_Dai 20 November 2009 09:15:26AM 5 points [-]

Someone should update SIAI's recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:

Comment author: Eliezer_Yudkowsky 17 November 2009 02:17:48AM 6 points [-]

I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other's positive publicity. For this reason I've usually tended to avoid this kind of elevator pitch!

Pass to Michael Vassar: Should I answer this?

Comment author: MichaelGR 17 November 2009 03:50:51AM *  3 points [-]

[I've moved what was here to the top level comment]

Comment author: Eliezer_Yudkowsky 18 November 2009 12:47:34AM 2 points [-]

I'll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.

Comment author: MichaelGR 12 November 2009 01:28:03AM *  15 points [-]

Are the book(s) based on your series of posts on OB/LW still happening? Any details on their progress (title? release date? e-book or real book? approached publishers yet? only technical books, or popular book too?), or on why they've been put on hold?

http://lesswrong.com/lw/jf/why_im_blooking/

Comment author: Eliezer_Yudkowsky 12 November 2009 05:04:20AM 8 points [-]

Yes, that is my current project.

Comment author: ABranco 18 November 2009 07:28:27PM 13 points [-]

You've achieved a high level of success as a self-learner, without the aid of formal education.

Would this extrapolate as a recommendation of a path every fast-learner autodidact should follow — meaning: is it a better choice?

If not, in which scenarios would not going after formal education be more advisable for someone? (Feel free to add as many caveats and 'ifs' as necessary.)

Comment author: mormon2 23 November 2009 07:37:34PM 4 points [-]

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define high level of success?

Comment author: komponisto 24 November 2009 05:00:31AM 3 points [-]

How do you define high level of success?

He has a job where he is respected, gets to pursue his own interests, and doesn't have anybody looking over his shoulder on a daily basis (or any short-timescale mandatory duties at all that I can detect). That's pretty much the trifecta, IMHO.

Comment author: patrissimo 12 November 2009 06:57:03PM 10 points [-]

What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Comment author: anonym 13 November 2009 06:38:00AM 9 points [-]

In terms of your intellectual growth, what were your biggest mistakes or most harmful habits, and what, if anything, would you do differently if you had the chance?

Comment author: Blueberry 12 November 2009 07:48:31PM 24 points [-]

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

Comment author: MichaelGR 11 November 2009 08:55:54PM *  37 points [-]

What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex reading certain magazines, books, watching films, etc) to focus on what is more important? etc.

Comment author: Liron 13 November 2009 04:18:43AM 1 point [-]

Ditto regarding your food diet?

Comment author: patrissimo 12 November 2009 06:57:22PM 8 points [-]

What single source of material (book, website, training course) do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Comment author: MichaelHoward 12 November 2009 02:00:21PM *  8 points [-]

Of the questions you decide not to answer, which is most likely to turn out to be a vital question you should have publicly confronted?

Not the question you don't want to answer but would probably have bitten the bullet and answered anyway. The question you would have avoided completely if it weren't for my question.

[Edit - "If I thought they were vital, I wouldn't avoid" would miss the point, as not wanting to consider something suppresses counterarguments to dismissing it. Take a step back - which question is most likely to be giving you this reaction?]

Comment author: roland 12 November 2009 09:24:45PM *  22 points [-]

Autodidacticism

Eliezer, first congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and kept going for more, one field of knowledge opening pathways to the next one, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)

Comment author: MichaelGR 11 November 2009 08:49:01PM 32 points [-]

Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):

http://yudkowsky.net/obsolete/bookshelf.html

Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).

Comment author: alyssavance 13 November 2009 12:23:49AM 3 points [-]

See the Singularity Institute Reading List for some ideas.

Comment author: MichaelGR 13 November 2009 09:42:50PM *  7 points [-]

What recent* developments in narrow AI do you find most important/interesting and why?

*Let's say post-Stanley

Comment author: JamesAndrix 11 November 2009 03:31:39PM *  7 points [-]

He will simply ignore questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

I am 99.99% certain that he will not ignore such questions.

Comment author: arundelo 12 November 2009 05:25:52AM 3 points [-]

I am 99.995% certain that no question will receive that many votes.

Comment author: jimrandomh 12 November 2009 05:42:27AM *  5 points [-]

I am 99.995% certain that no question will receive that many votes.

There is a greater than 0.01% chance that Eliezer or another administrator will edit the site to display a score of "3^^^3" for some post. (Especially now that it's been suggested.)

Comment author: arundelo 12 November 2009 10:42:26AM 2 points [-]

I guess I need to recalibrate!

Comment author: anon 14 November 2009 03:45:50PM 19 points [-]

For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?

Humans have built-in adaptions for lie detection, but betting a decision like this on the chance of my sense motive roll beating the bluff roll of a person with both higher INT and CHA than myself seems quite risky.

Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AIBox problem: You're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.

I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.

If the answer is "No. You'll have to make do with the base probability of any random human being a sociopath.", that might be good enough. Still, I'd like to know if I'm missing specific evidence that would push the probability for "SIAI is capital-E Evil" lower than that.

Posted pseudo-anonymously because I'm a coward.

Comment author: Eliezer_Yudkowsky 15 November 2009 10:22:58PM 11 points [-]

I guess my main answers would be, in order:

1) You'll have to make do with the base probability of a highly intelligent human being a sociopath.

2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.

3) All sorts of nasty things we could be doing and could probably get away with doing if we had exclusively sociopath core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.

4) Why are you asking me that? Shouldn't you be asking, like, anyone else?

Comment author: anonym 14 November 2009 09:44:56PM 18 points [-]

What progress have you made on FAI in the last five years and in the last year?

Comment author: Johnicholas 11 November 2009 11:43:39AM 18 points [-]

What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.

Comment author: JulianMorrison 13 November 2009 04:24:48PM 17 points [-]

How do you characterize the success of your attempt to create rationalists?

Comment author: haig 11 November 2009 10:19:48PM *  23 points [-]

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

Comment author: Bindbreaker 11 November 2009 07:53:15AM *  15 points [-]

In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.

Comment author: kpreid 11 November 2009 01:00:14PM *  10 points [-]

This comes to mind:

But why not become an expert liar, if that's what maximizes expected utility? Why take the constrained path of truth, when things so much more important are at stake?

Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren't inconveniences. They were safety rails on cliffs I didn't see.

I made fundamental mistakes, and my ethics didn't halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn't expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.

You can't duplicate this protective effect by trying to be clever and calculate the course of "highest utility". The expected utility just takes into account the things you know to expect. It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.

Ethics aren't just there to make your life difficult; they can protect you from Black Swans. A startling assertion, I know, but not one entirely irrelevant to current affairs.

Protected From Myself

Comment author: MichaelGR 11 November 2009 09:06:59PM 22 points [-]

If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?

Comment author: taa21 11 November 2009 09:06:17PM 14 points [-]

What do you view as your role here at Less Wrong (e.g. leader, preacher, monk, moderator, plain-old contributor, etc.)?

Comment author: anonym 13 November 2009 06:56:43AM *  5 points [-]

Please estimate your probability of dying in the next year (5 years). Assume your estimate is perfectly accurate. What additional probability of dying in the next year (5 years) would you willingly accept for a guaranteed and safe increase of one (two, three) standard deviation(s) in terms of intelligence?

Comment author: cabalamat 12 November 2009 03:59:43AM *  13 points [-]

What practical policies could politicians enact that would increase overall utility? When I say "practical", I'm specifically ruling out policies that would increase utility but which would be unpopular, since no democratic polity would implement them.

(The background to this question is that I stand a reasonable chance of being elected to the Scottish Parliament in 19 months' time).

Comment author: Morendil 12 November 2009 10:14:22AM 4 points [-]

Ruling out unpopular measures is tantamount to giving up on your job as a politician; the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

Much as rationality in an individual consists of "shutting up and multiplying", i.e. computing which course of action, including those we have no taste for, yields the highest expected utility, politics - the useful part of it - consists of making necessary policies palatable to the public. The rest is demagoguery.

Comment author: cabalamat 13 November 2009 03:39:14AM *  3 points [-]

Ruling out unpopular measures is tantamount to giving up on your job as a politician

On the contrary, NOT ruling out unpopular measures is tantamount to giving up your job as a politician because, if the measure is unpopular enough, (1) you won't get the measure passed in the first place, and (2) you won't get re-elected.

the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

You're saying it's lazy to require that policies be practical. I say that on the contrary it's lazy not to require them to be practical. It's easy to come up with ideas that're a good thing but which can't be practically realised, but it takes more effort to come up with ideas that're a good thing and which can be practically realised. I co-founded Pirate Party UK precisely because I think it's a practical way of getting the state to apply sensible laws to the internet, instead of just going ahead with whatever freedom-destroying nonsense the entertainment industry is coming up with this week to prevent "piracy".

computing which course of action including those we have no taste for yields the highest expected utility

Courses of action that can't be implemented yield zero or negative utility.

The rest is demagoguery.

There's an element of truth in that, but I'd put it differently: it's the difference between leadership and followership. Politicians in democracies frequently engage in the latter.

Comment author: Thomas 12 November 2009 09:25:18AM 4 points [-]

Free trade. As a politician, you can't do more than that.

Comment author: Matt_Simpson 12 November 2009 04:50:10PM 2 points [-]

And open immigration policies

Comment author: SilasBarta 11 November 2009 09:44:54PM 12 points [-]

Previously, you endorsed this position:

Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.

One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.

What do you think about this kind of self-deception?

Comment author: Psy-Kosh 11 November 2009 07:00:00PM 12 points [-]

In the spirit of considering semi abyssal plans, what happens if, say, next week you discover a genuine reduction of consciousness and it turns out that... There's simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?

I.e., what if The Law turned out to have the consequence of "to create a general mind is to create a conscious mind. No way around that"? Obviously that shifts the ethics a bit, but my question is basically: if so, well... "now what?" What would have to be done differently, in what ways, etc?

Comment author: timtyler 11 November 2009 08:53:49AM 12 points [-]

What was the significance of the wirehead problem in the development of your thinking?

Comment author: DanArmak 11 November 2009 10:47:53AM 18 points [-]

Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...)

Comment author: MichaelVassar 13 November 2009 05:28:45AM 2 points [-]

Mine would be slightly less than 10% by 2030, and slightly greater than 85% by 2080, conditional on a general continuity of our civilization between now and 2080. The most likely method of origination depends on how far out we look; more brain-inspired methods tend to come later and to be more likely absolutely.

Comment author: alyssavance 13 November 2009 12:06:27AM 2 points [-]

We at SIAI have been working on building a mathematical model of this since summer 2008. See Michael Anissimov's blog post at http://www.acceleratingfuture.com/michael/blog/2009/02/the-uncertain-future-simple-ai-self-improvement-models/. You (or anyone else reading this) can contact us at uncertainfuture@intelligence.org if you're interested in helping us test the model.

Comment author: komponisto 11 November 2009 05:39:28AM 33 points [-]

During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

Comment author: [deleted] 11 November 2009 04:35:27PM *  15 points [-]

Somewhat related, AGI is such an enormously difficult topic, requiring intimate familiarity with so many different fields, that the vast majority of people (and I count myself among them) simply aren't able to contribute effectively to it.

I'd be interested to know if he thinks there are any singularity-related issues that are important to be worked on, but somewhat more accessible, that are more in need of contributions of man-hours rather than genius-level intellect. Is the only way a person of more modest talents can contribute through donations?

Comment author: MichaelVassar 13 November 2009 05:25:10AM 7 points [-]

Depends on what you mean by 'modest'. Probably 60% of Americans could contribute donations without serious lifestyle consequences and 20% of Americans could contribute over a quarter of their incomes without serious consequences. By contrast, only 10% have the reasoning abilities to identify the best large category of causes and only 1% have the reasoning abilities to identify the very best cause without a large luck element being involved. By working hard, most of that 1% could also become part of the affluent 20% of Americans who could make large donations. A similar fraction might be able to help via fund-raising efforts and by aggressively networking and sharing the contacts that they are able to build with us. A smaller but only modestly smaller fraction might be able to meaningfully contribute to SIAI's effort via seriously committed offers of volunteer effort, but definitely not via volunteer efforts uncoupled to serious commitment. Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

Comment author: Vladimir_Nesov 13 November 2009 05:11:42PM 12 points [-]

Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

SIAI keeps supporting this attitude, yet I don't believe it, at least in the way it's presented. A good mathematician who gets to understand the problem statement and succeeds in weeding out the standard misunderstandings can contribute as well as anyone, at this stage where we have no field. Creating a programme that would allow people to reliably get to work on the problem requires material to build upon, and there is still nothing, of whatever quality. Systematizing the connections with existing science, trying to locate the place of the FAI project in it, is something that only requires expertise in that science and understanding of the FAI problem statement. At the very least, a dozen steps in, we'll have a useful curriculum to get folks up to speed in the right direction.

Comment author: MichaelVassar 15 November 2009 04:36:42PM 4 points [-]

We have some experience with this, but if you want to discuss the details more with myself or some other SIAI people we will be happy to do so, and probably to have you come visit some time and get some experience with what we do. You may have ways of contributing substantially, theoretically or managerially. We'll have to see.

Comment author: MichaelVassar 13 November 2009 05:18:29AM 7 points [-]

It's a free country. You are allowed to do a lot, but it can only be optimal to do one thing.

Comment author: komponisto 14 November 2009 07:58:31AM *  4 points [-]

Not necessarily; the maximum value of a function may be attained at more than one point of its domain.

(Also, my use of the word "allowed" is clearly rhetorical/figurative. Obviously it's not illegal to work on things other than AI, and I don't interpret you folks as saying it should be.)

Comment author: MichaelVassar 15 November 2009 04:41:33PM 3 points [-]

Point taken. Also, of course, given a variety of human personalities and situations, the optimal activity for a given person can vary quite a bit. I never advocate asceticism.

Comment author: John_Maxwell_IV 11 November 2009 06:08:58AM *  9 points [-]

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI?

I don't know about Eliezer, but I would be able to sacrifice quite a lot; perhaps all of art. If humanity spreads through the galaxy there will be way more than enough time for all that.

If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists?

It might. But their expected contribution would be much greater if they looked at the problem to see how they could contribute most effectively.

And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

No one's saying that you're not allowed to do something. Just that it's suboptimal under their utility function, and perhaps yours.

My guess is that you overestimate how much of an altruist you are. Consider that lives can be saved using traditional methods for well under $1000. That means every time you spend $1000 on other things, your revealed preference is that having that stuff is more important to you than saving the life of another human being. If you're upset upon hearing this fact, then you're suffering from cognitive dissonance. If you're a true altruist, you'll be happy after hearing this fact, because you'll realize that you can be scoring much better on your utility function than you are currently. (Assuming for the moment that happiness corresponds with opportunities to better satisfy your utility function, which seems to be fairly common in humans.)

Regardless of whether you're a true altruist, it makes sense to spend a chunk of your time on entertainment and relaxation to spend the rest more effectively.

By the way, I would be interested to hear Eliezer address this topic in his video.

Comment author: [deleted] 11 November 2009 08:21:00PM 31 points [-]

What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?

Comment author: SilasBarta 11 November 2009 03:26:35PM 11 points [-]

Previously, you said that a lot of work in Artificial Intelligence is "5% intelligence and 95% rigged demo". What would you consider an example of something that has a higher "intelligence ratio", if there is one, and what efforts do you consider most likely to increase this ratio?

Comment author: CannibalSmith 12 November 2009 12:23:52PM 2 points [-]

DARPA's Grand Challenge produced several intelligent cars and was definitely not a rigged demo.

Comment author: Stuart_Armstrong 11 November 2009 11:42:21AM 21 points [-]

Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?

Comment author: Furcas 16 November 2009 05:33:36PM *  4 points [-]

Eliezer, in Excluding the Supernatural, you wrote:

Ultimately, reductionism is just disbelief in fundamentally complicated things. If "fundamentally complicated" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.

"Fundamentally complicated" does sound like an oxymoron to me, but I can't explain why. Could you? What is the contradiction?

Comment author: anonym 17 November 2009 07:31:33AM 6 points [-]

Isn't the contradiction that "complicated" means having more parts/causes/aspects than are readily comprehensible, and "fundamental" things never are complicated, because if they were, they could be broken down into more fundamental things that were less complicated? The fact that things invariably get simpler and more basic as we move closer to the foundational level is in tension with things getting more complicated as we move down.

Comment author: roland 16 November 2009 05:21:42PM *  4 points [-]

Boiling down rationality

Eliezer, if you only had 5 minutes to teach a human how to be rational, how would you do it? The answer has to be more or less self-contained so "read my posts on lw" is not valid. If you think that 5 minutes is not enough you may extend the time to a reasonable amount, but it should be doable in one day at maximum. Of course it would be nice if you actually performed the answer in the video. By perform I mean "Listen human, I will teach you to be rational now..."

EDIT: When I said perform I meant it as opposed to telling how to, so I would prefer Eliezer to actually teach rationality in 5 minutes instead of talking about how he would teach it.

Comment author: MarkHHerman 15 November 2009 11:31:27PM 4 points [-]

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (a la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “<Eliezer> well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

Comment author: John_Maxwell_IV 11 November 2009 06:49:45AM 10 points [-]

What is the background that you most frequently wish would-be FAI solvers had when they struck up conversations with you? You mentioned the Dreams of Friendliness series; is there anything else? You can answer this question in comment form if you like.

Comment author: ABranco 14 November 2009 01:55:41PM 14 points [-]

Do you feel lonely often? How bad (or important) is it?

(Above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?

Comment author: John_Maxwell_IV 11 November 2009 06:55:09AM 25 points [-]

What's your advice for Less Wrong readers who want to help save the human race?

Comment author: RichardKennaway 11 November 2009 08:57:12AM *  13 points [-]

Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.

Comment author: FeministX 11 November 2009 05:04:19AM 13 points [-]

How does one affect the process of increasing the rationality of people who are not ostensibly interested in objective reasoning, and of people who claim to be interested but are in fact attached to their biases?

I find that question interesting because it is plain that the general capacity for rationality in a society can be improved over time. Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.

It seems to me that we really are faced with the challenge of explaining the value of empirical analysis and objective reasoning to much of the world. Today the Middle East is hostile towards reason though they presumably don't have to be this way.

So again, my question is: how do more rational people affect the reasoning capacity of less rational people, including those hostile towards rationality?

Comment author: cabalamat 12 November 2009 03:46:51AM 5 points [-]

Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.

I suspect that, on the contrary, >50% of the population have very little idea what either term means.

Comment author: MichaelVassar 13 November 2009 05:35:38AM 4 points [-]

I think that the average person has NO IDEA how the concept of the standard deviation is properly used. Neither does the average IQ 140 non-scientist.

Less Wrong is an attempt to increase the rationality of very unusual people. Most other SIAI efforts are other such attempts, or are direct attempts at FAI.

Comment author: John_Maxwell_IV 11 November 2009 06:51:41AM 16 points [-]

Who was the most interesting would-be FAI solver you encountered?

Comment author: alyssavance 13 November 2009 12:02:25AM 5 points [-]

As far as I can tell (this is not Eliezer's or SIAI's opinion), the people who have contributed the most to FAI theory are Eliezer, Marcello Herreshoff, Michael Vassar, Wei Dai, Nick Tarleton, and Peter de Blanc in that order.

Comment author: evtujo 11 November 2009 05:09:36AM 21 points [-]

How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?

Comment author: taa21 11 November 2009 09:01:31PM 6 points [-]

Just out of curiosity, why are you asking this? And why is Yudkowsky's opinion on this matter relevant?

Comment author: spriteless 15 November 2009 11:00:04PM 2 points [-]

This sort of thing should have its own thread; it deserves some brainstorming.

You can start with choice of fairytales.

You can make the games available to play reward understanding probabilities and logic over luck and quick reflexes. My dad got us puzzle games and reading tutors for the NES and C64 when I was a kid. (Lode Runner, LoLo, Reader Rabbit)

Comment author: sixes_and_sevens 11 November 2009 02:56:43PM 11 points [-]

What five written works would you recommend to an intelligent lay-audience as a rapid introduction to rationality and its orbital disciplines?

Comment author: alyssavance 13 November 2009 12:22:59AM 2 points [-]

See the Singularity Institute Reading List for some ideas.

Comment author: John_Maxwell_IV 11 November 2009 06:52:29AM 11 points [-]

What was the most useful suggestion you got from a would-be FAI solver? (I'm putting separate questions in separate comments per MichaelGR's request.)

Comment author: MichaelGR 11 November 2009 09:20:33PM *  14 points [-]

In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.

Short summary: After a few more major breakthroughs, when AGI is almost ready, AI will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.

If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?

Comment author: MichaelVassar 13 November 2009 05:13:31AM 8 points [-]

I strongly disagree with the claim that it is likely that AGI will appear on the radar of powerful organizations just because it is almost ready. That doesn't match the history of scientific (as opposed to largely technological) breakthroughs in the past, in my reading of scientific history. Uploading, maybe, as there is likely to be a huge engineering project even after the science is done, though the science might be done in secret. With AGI, the science IS the project.

Comment author: Nick_Tarleton 11 November 2009 11:47:54PM *  4 points [-]

They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.

If they're going to have that exact wrong level of cluefulness, why wouldn't they already have a (much better-funded, much less careful) AGI project of their own?

As Vladimir says, it's too early to start solving this problem, and if "things start moving rapidly" anytime soon, then AFAICS we're just screwed, government involvement or no.

Comment author: Vladimir_Nesov 11 November 2009 10:59:33PM *  3 points [-]

Isn't it too early to start solving this problem? There is a good chance SIAI won't even have a direct hand in programming the FAI.

Comment author: Johnicholas 12 November 2009 04:03:23AM 4 points [-]

Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode. It's far more likely that SIAI is slower at developing (both Friendly and unFriendly) AI than the rest of the world. It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.

Comment author: MichaelGR 12 November 2009 06:22:14AM 4 points [-]

Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode.

I think it might be correct in the entrepreneur/startup world, but it probably isn't when it comes to technologies that are this powerful. Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software. If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse: kill you, derail the project, steal the almost-finished code, etc).

I'm not saying it only applies to the SIAI (though my original post was directed only at them, my question here is about the AGI research world in general, which includes the SIAI), or that it isn't just one of many many things that can go wrong. But I still think that when you're playing with stuff this powerful, you should be concerned with security and not just expect to forever fly under the radar.

Comment author: alyssavance 13 November 2009 12:14:49AM *  6 points [-]

"Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software."

The reason the idea of the nuclear chain reaction was kept secret, was because one man named Leo Szilard realized the damage it could do, and had his patent for the idea classified as a military secret. It wasn't kept secret by default; if it weren't for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.

"If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc)."

Only if they believe you, which they almost certainly won't. Even in the (unlikely) case that someone thought that an AI taking over the world was realistic, there's still an additional burden of proof on top of that, because they'd also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.

Comment author: Johnicholas 12 November 2009 01:16:03PM 6 points [-]

Let's be realistic here - the AGI research world is a small fringe element of AI research in general. The AGI research world generally has a high opinion of its own importance - an opinion not generally shared by the AI research world in general, or the world as a whole.

We are in a self-selected group of people who share our beliefs. This will bias our thinking, leading us to be too confident of our shared beliefs. We need to strive to counter that effect and keep a sense of perspective, particularly when we're trying to anticipate what other people are likely to do.

Comment author: RobQ 13 November 2009 04:21:43AM 5 points [-]

Fear of theft is a crank trope? As someone who makes a living providing cyber security, I have to say you have no idea of the daily intrusions US companies experience from foreign governments and just simple criminals.

Comment author: MichaelVassar 15 November 2009 04:54:07PM *  3 points [-]

Theft of higher level more abstract ideas is much rarer. It happens both in Hollywood films and in the real Hollywood, but not so frequently, as far as I can tell, in most industries. More frequently, people can't get others to follow up on high generality ideas. Apple and Microsoft, for instance, stole ideas from Xerox that Xerox had been sitting on for years, they didn't steal ideas that Xerox was working on and compete with Xerox.

Comment author: alyssavance 13 November 2009 12:10:07AM 3 points [-]

"It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI."

I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use.

Comment author: Vladimir_Nesov 13 November 2009 04:15:56PM *  3 points [-]

There is a strong fundamental streak in the subproblem of clear conceptual understanding of FAI (what the whole real world looks like to an algorithm, which is important both for the decision-making algorithm and for communication of values), which I find closely related to a lot of fundamental stuff that both physicists and mathematicians have been trying to crack for a long time, but haven't yet. This suggests that the problem is not a low-hanging fruit. My current hope is merely to articulate a connection between FAI and this stuff.

Comment author: mormon2 13 November 2009 05:53:45PM 0 points [-]

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this. Having recently been edged out by a large corporation in a narrow field of innovation as a small startup, and having been in business for many years, I can tell you that the sort of thing you're describing happens often.

As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile.

I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...

Comment author: alyssavance 13 November 2009 07:23:49PM 2 points [-]

Of course startups sometimes lose; they certainly aren't invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998.

"If you ever get out into the world you will find plenty of people who will make you feel like your dumb and that make EYs intellect look infantile."

(citation needed)

Comment author: mormon2 14 November 2009 01:56:06AM 0 points [-]

Ok, here are some people:

  • Nick Bostrom (http://www.nickbostrom.com/cv.pdf)
  • Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of, if not the, most successful math programs ever, and in my opinion the best ever)
  • A couple of people from Johns Hopkins Applied Physics Lab, where I did some work, whose names I won't mention since I doubt you'd know them
  • Etc.

I say this because these people have numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terms and vague to-be-solved problems.

My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does while having as few papers and breakthroughs as he has. If you want to claim you're smart you have to have accomplishments that back it up, right? Where are EY's publications? Where is the math for his TDT? The world's hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)

Sorry this is harsh but there it is.

Comment author: Alicorn 14 November 2009 02:19:36AM 3 points [-]

If you want to claim you're smart you have to have accomplishments that back it up right?

I think you have confused "smart" with "accomplished", or perhaps "possessed of a suitably impressive résumé".

Comment author: mormon2 14 November 2009 02:24:39AM *  2 points [-]

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test) and I think accomplishments are a better measure (quality over quantity obviously). If you have a better measure then fine.

Comment author: Alicorn 14 November 2009 02:37:11AM *  3 points [-]

What do you think "intelligence" is?

Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?

Comment author: mormon2 14 November 2009 06:06:42PM 1 point [-]

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure although it can be useful when combined with accomplishment.

Comment author: RobinHanson 11 November 2009 11:45:10PM 22 points [-]

Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?

Comment author: MichaelVassar 13 November 2009 05:08:13AM 9 points [-]

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them - and a supermajority of those who agree with Robin's part (new future growth mode, get frozen...) agree with them as well.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Comment author: timtyler 14 November 2009 12:35:45AM 4 points [-]

Which claims? The SIAI collectively seems to think some pretty strange things to me. Many are to do with the scale of the risk facing the world.

Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding.

We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.

Comment author: MichaelVassar 15 November 2009 04:39:09PM 7 points [-]

You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding.

Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?

Comment author: Eliezer_Yudkowsky 12 November 2009 12:28:51AM 8 points [-]

Who are we talking about besides you?

Comment author: RobinHanson 12 November 2009 02:30:07AM 2 points [-]

I'd consider important overlapping academic fields to be AI and long term economic growth; I base my claim about academic expert opinion on my informal sampling of such folks. I would of course welcome a more formal sampling.

Comment author: Eliezer_Yudkowsky 12 November 2009 04:59:44AM 9 points [-]

Who's considered my main arguments besides you?

Comment author: RobinHanson 12 November 2009 01:27:50PM 2 points [-]

I'm not comfortable publicly naming names based on informal conversations. These folks vary of course in how much of the details of your arguments they understand, and of course you could always set your bar high enough to get any particular number of folks who have understood "enough."

Comment author: Eliezer_Yudkowsky 12 November 2009 02:46:53PM 4 points [-]

Okay. I don't know any academic besides you who's even tried to consider the arguments. And Nick Bostrom et al., of course, but AFAIK Bostrom doesn't particularly disagree with me. I cannot refute what I have not encountered, I do set my bar high, and I have no particular reason to believe that any other academics are in the game. I could try to explain why you disagree with me and Bostrom doesn't.

Comment author: Eliezer_Yudkowsky 16 November 2009 01:38:57AM 4 points [-]

Actually, on further recollection, Steve Omohundro and Peter Cheeseman would probably count as academics who know the arguments. Mostly I've talked to them about FAI stuff, so I'm actually having trouble recalling whether they have any particular disagreement with me about hard takeoff.

I think that w/r/t Cheeseman, I had to talk to Cheeseman for a while before he started to appreciate the potential speed of a FOOM, as opposed to just the FOOM itself which he considered obvious. I think I tried to describe your position to Cheeseman and Cheeseman thought it was pretty implausible, but of course that could just be the fact that I was describing it from outside - that counts for nothing in my view until you talk to Cheeseman, otherwise he's not familiar enough with your arguments. (See, the part about setting the bar high works both ways - I can be just as fast to write off the fact of someone else's disagreement with you, if they're insufficiently familiar with your arguments.)

I'm not sure I can recall what Omohundro thinks - he might be intermediate between yourself and myself...? I'm not sure how much I've talked hard takeoff per se with Omohundro, but he's certainly in the game.

Comment author: MichaelVassar 16 November 2009 02:57:22AM 2 points [-]

I think Steve Omohundro disagrees about the degree to which takeoff is likely to be centralized, due to what I think are the libertarian impulses I mentioned earlier.

Comment author: RobinHanson 12 November 2009 06:36:25PM 4 points [-]

Surely some on the recent AAAI Presidential Panel on Long-Term AI Futures considered your arguments to at least some degree. You could discuss why these folks disagree with you.

Comment author: Eliezer_Yudkowsky 12 November 2009 08:23:03PM 3 points [-]

Haven't particularly looked at that - I think some other SIAI people have. I expect they'd have told me if there was any analysis that counts as serious by our standards, or anything new by our standards.

If someone hasn't read my arguments specifically, then I feel very little need to explain why they might disagree with me. I find myself hardly inclined to suspect that they have reinvented the same arguments. I could talk about that, I suppose - "Why don't other people in your field invent the same arguments you do?"

Comment author: RobinHanson 12 November 2009 09:23:06PM *  16 points [-]

You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact they are world class AI experts and disagree with your conclusions?

Comment author: Eliezer_Yudkowsky 12 November 2009 09:35:34PM *  5 points [-]

I'm sorry, but I don't really have a proper lesson plan laid out - although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.

If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn't matter if they'd done it on their own or by reading my stuff.

E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff... with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn't clear from the presentation.

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

Comment author: timtyler 14 November 2009 10:32:02PM *  2 points [-]

I have a theory about why there is disagreement with the AAAI panel:

The DOOM peddlers gather funding from hapless innocents - who hope to SAVE THE WORLD - while the academics see them as bringing their field into disrepute, by unjustifiably linking their field to existential risk, with their irresponsible scaremongering about THE END OF THE WORLD AS WE KNOW IT.

Naturally, the academics sense a threat to their funding - and so write papers to reassure the public that spending money on this stuff is Really Not As Bad As All That.

Comment author: StefanPernar 12 November 2009 11:02:57AM *  1 point [-]

Me - whether I qualify as an academic expert is another matter entirely, of course.

Comment author: ChrisHibbert 14 November 2009 07:39:13PM 2 points [-]

Do you disagree with Eliezer substantively? If so, can you summarize how much of his arguments you've analyzed, and where you reach different conclusions?

Comment author: StefanPernar 15 November 2009 01:46:06AM 0 points [-]

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to look up my blog for a detailed analysis and a summary of the core similarities and differences in our premises and conclusions.

Comment author: Alicorn 11 November 2009 06:23:51PM 22 points [-]

What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?

Comment author: dclayh 11 November 2009 07:58:54PM *  2 points [-]

Excellent; I was going to ask that myself. Clearly Eliezer wanted an example to support his oft-repeated contention that the future like the past will be filled with people whose values seem abhorrent to us. But why he chose that particular example I'd like to know. Was it the most horrific(-sounding) thing he could come up with some kind of reasonable(-sounding) justification for?

Comment author: RobinZ 11 November 2009 05:33:10PM 7 points [-]

I am sure you're familiar with the University of Chicago "Doomsday Clock", so: if you were in charge of a Funsday Clock, showing the time until positive singularity, what time would it be on? Any recent significant changes?

(Idea of Funsday Clock blatantly stolen from some guy on Twitter.)

Comment author: Will_Euler 19 November 2009 02:11:41AM 3 points [-]

How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?

If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find oneself in such a valley (and how one might get out)? How much is this term meant to indicate an analogy with an "uncanny valley"?

Less important, but related: What self-insights from hedonic/positive psychology have you found most revealing about people's ability to make choices aimed at maximizing happiness (e.g. limitations of affective forecasting, the paradox of choice, the impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?

(I feel these are sufficiently intertwined to constitute one general question about the relationship between self-knowledge and happiness.)

Comment author: imaxwell 14 November 2009 01:31:25AM 3 points [-]

Previously, in Ethical Injunctions and related posts, you said that, for example,

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.

It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?

Comment author: wedrifid 14 November 2009 02:05:49AM 3 points [-]

how smart would a mind have to be in order to safely break ethical injunctions?

Any given mind could create ethical injunctions of a suitable complexity that are useful to it given its own technical limitations.

Comment author: wuwei 12 November 2009 04:24:48AM *  3 points [-]

Do you think that morality or rationality recommends placing no intrinsic weight or relevance on either a) backwards-looking considerations (e.g. having made a promise) as opposed to future consequences, or b) essentially indexical considerations (e.g. that I would be doing something wrong)?

Comment author: Utilitarian 11 November 2009 06:58:36AM 14 points [-]

What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or ones complement vs. twos complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
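
(To make the signed-vs-unsigned point concrete, since it does real work in the question - here is the same byte read under two conventions. A trivial Python sketch, nothing more:)

```python
import struct

raw = bytes([0xFF])                 # one byte, bit pattern 11111111
print(struct.unpack("B", raw)[0])   # 255 -- read as an unsigned integer
print(struct.unpack("b", raw)[0])   # -1  -- read as two's-complement signed
# Same physical state; which number it "is" depends entirely on the
# interpretation we bring to it.
```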

Comment author: timtyler 11 November 2009 09:00:49AM 4 points [-]

That's 14 questions! ;-)

Comment author: SilasBarta 12 November 2009 09:05:35PM *  2 points [-]

Just in case people are taking timtyler's point too seriously: it's really one question, then a list of issues it should touch on to be a complete answer. You wouldn't need to directly answer all of them if the implication for a given one is obvious from a previous answer.

ETA: I'm not the one who asked the question, but I did vote it up.

Comment author: botogol 11 November 2009 05:14:27PM 9 points [-]

Can you make a living out of this rationality / SI / FAI stuff . . . or do you have to be independently wealthy?

Comment author: alyssavance 13 November 2009 12:27:25AM 5 points [-]

I strongly think that's the wrong way to phrase the question.

"Don't expect fame or fortune. The Singularity Institute is not your employer, and we are not paying you to accomplish our work. The so-called "Singularity Institute" is a group of humans who got together to accomplish work they deemed important to the human species, and some of them went off to do fundraising so the other ones could get paid enough to live on. Don't even dream of being paid what you're worth, if you're worth enough to solve this class of problem. As for fame, we are trying to do something that is daring far beyond the level of daring that is just exactly daring enough to be seen academically as sexy and transgressive and courageous, so working here may even count against you on your resume. But that's not important, because this is a lifetime commitment. Let me repeat that again: Once you're in, really in, you stay. I can't afford to start over training a new Research Fellow. We can't afford to have you leave in the middle of The Project. It's Singularity or bust. If you look like a good candidate, we'll probably bring you in for a trial month, or something like that, to see if we can work well together. But please do consider that, once you've been in for long enough, I'll be damned hurt – and far more importantly, The Project will be hurt – if you leave. This is a very difficult thing that we of the Singularity Institute are attempting – some of us have been working on it since long before there was enough money to pay us, and some of us still aren't getting paid. The motivation to do this thing, to accomplish this impossible feat, has to come from within you; and be glad that someone is paying you enough to live on while you do it. It can't be the job that you took to make the rent. That's not how the research branch of the Singularity Institute works. It's not who we are." - http://singinst.org/aboutus/opportunities/research-fellow

Comment author: James_Miller 11 November 2009 05:26:46AM 31 points [-]

Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive enhancing drugs or brain fitness programs, are you Neurotypical and why didn't you attend school?

Comment author: Wei_Dai 11 November 2009 09:23:46PM 21 points [-]

Why do you have a strong interest in anime, and how has it affected your thinking?

Comment author: Vladimir_Nesov 11 November 2009 02:35:21PM *  6 points [-]

Which areas of science or angles of analysis currently seem relevant to the FAI problem, and which of those you've studied seem irrelevant? What about those that fall on the "AI" side of things? Fundamental math? Physics?

Comment author: mormon2 11 November 2009 05:22:28PM 3 points [-]

I think we can take a good guess, on the last part of this question, at what he will say: Bayes' Theorem, Statistics, basic Probability Theory, Mathematical Logic, and Decision Theory.

But why ask the question with this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question - actually, it is more of a request: to see EY demonstrate TDT with some worked-out math, on a whiteboard or some such, in the video.

Comment author: bogdanb 11 November 2009 11:07:15PM 20 points [-]

How did you win any of the AI-in-the-box challenges?

Comment author: righteousreason 12 November 2009 02:47:29AM 9 points [-]

http://news.ycombinator.com/item?id=195959

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."

Comment author: Unnamed 17 November 2009 02:22:58AM 7 points [-]

Here's an alternative question if you don't want to answer bogdanb's: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?

Comment author: Yorick_Newsome 12 November 2009 01:26:30AM 4 points [-]

Something tells me he won't answer this one. But I support the question! I'm awfully curious as well.

Comment author: CronoDAS 16 November 2009 09:50:09AM 2 points [-]

Perhaps this would be a more appropriate version of the above:

What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?

Comment author: SilasBarta 12 November 2009 08:59:57PM 2 points [-]

Voted down. Eliezer Yudkowsky has made clear he's not answering that, and it seems like an important issue for him.

Comment author: wedrifid 15 November 2009 10:24:23AM *  3 points [-]

Voted back up. He will not answer, but there's no harm in asking. In fact, asking serves to raise awareness both of the surprising (to me at least) result and of the importance Eliezer places on the topic.

Comment author: jimrandomh 12 November 2009 02:44:20AM 16 points [-]

What is the probability that this is the ultimate base layer of reality?

Comment author: Psy-Kosh 11 November 2009 03:14:37AM 21 points [-]

Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?

Comment author: [deleted] 11 November 2009 04:51:53AM 3 points [-]

Earlier today, I pondered whether this infinite set atheism thing is something Eliezer merely claims to believe as some sort of test of basic rationality. It's a belief that, as far as I can tell, makes no prediction.

But here's what I predict I would say if I had Eliezer's opinions and my mathematical knowledge: I'm a fan of thinking of ZFC in terms of its countable model, in which the class of all sets is enumerable and every set has a finite representation. Of course, things like the axiom of infinity and Cantor's diagonal argument still apply; it's just that "uncountably infinite set" simply means "set whose bijection with the natural numbers is not contained in the model".

(Yes, ZFC has a countable model, assuming it's consistent. I would call this weird, but I hate admitting that any aspect of math is counterintuitive.)
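
(For anyone who wants the theorem behind this spelled out - it's the downward Löwenheim-Skolem theorem, stated loosely and in my own words:)

```latex
% Downward Löwenheim–Skolem, specialized to ZFC (loose statement):
% if ZFC is consistent, it has a model M with a countable domain.
\[
\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\;
  \exists M \,\bigl(\, |M| = \aleph_0 \;\wedge\; M \models \mathrm{ZFC} \,\bigr)
\]
% Cantor's diagonal argument still goes through inside M, so M satisfies
% "the power set of N is uncountable"; the bijection that would witness
% countability from the outside simply isn't an element of M.
% (This is Skolem's paradox.)
```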

Comment author: Johnicholas 11 November 2009 11:26:29AM *  8 points [-]

ZFC's countable model isn't that weird.

Imagine a computer programmer, watching a mathematician working at a blackboard. Imagine asking the computer programmer how many bytes it would take to represent the entities that the mathematician is manipulating, in a form that can support those manipulations.

The computer programmer will do a back of the envelope calculation, something like: "The set of all natural numbers" is 30 characters, and essentially all of the special symbols are already in Unicode and/or TeX, so probably hundreds, maybe thousands of bytes per blackboard, depending. That is, the computer programmer will answer "syntactically".

Of course, the mathematician might claim that the "entities" that they're manipulating are more than just the syntax, and are actually much bigger. That is, they might answer "semantically". Mathematicians are trained to see past the syntax to various mental images. They are trained to answer questions like "how big is it?" in terms of those mental images. A math professor asking "How big is it?" might accept answers like "it's a subset of the integers" or "It's a superset of the power set of reals". The programmer's answer of "maybe 30 bytes" seems, from the mathematical perspective, about as ridiculous as "It's about three feet long right now, but I can write it longer if you want".

The weirdly small models are only weirdly small if what you thought you were manipulating was something other than finite (and therefore Gödel-numberable) syntax.
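
(A trivial check of the programmer's "syntactic" answer, for anyone who cares - plain Python, nothing deep:)

```python
# The "syntactic" size of the mathematician's object: the length of the
# phrase on the blackboard, not the size of the set it denotes.
phrase = "The set of all natural numbers"
print(len(phrase))                  # 30 characters
print(len(phrase.encode("utf-8")))  # 30 bytes (it's all ASCII)
```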

Comment author: Eliezer_Yudkowsky 12 November 2009 03:14:33PM 6 points [-]

Earlier today, I pondered whether this infinite set atheism thing is something Eliezer merely claims to believe as some sort of test of basic rationality.

I've said this before in many places, but I simply don't do that sort of thing. If I want to say something flawed just to see how my readers react to it, I put it into the mouth of a character in a fictional story; I don't say it in my own voice.

Comment author: komponisto 11 November 2009 06:00:47AM 15 points [-]

I admit to being curious about various biographical matters. So for example I might ask:

What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?

Comment author: patrissimo 12 November 2009 06:58:14PM 5 points [-]

Do you think that just explaining biases to people helps them substantially overcome those biases, or does it take practice, testing, and calibration to genuinely improve one's rationality?

Comment author: roland 19 November 2009 01:09:22AM *  2 points [-]

I can partially answer this. In the book "The Logic of Failure", Dietrich Dörner tested humans with complex systems they had to manage. It turned out that when one group got specific instructions on how to deal with complex systems, they did not perform better than the control group.

EDIT: Dörner's explanation was that just knowing was not enough; individuals had to actually practice dealing with the system to improve. It's a skillset.

Comment author: Daniel_Burfoot 11 November 2009 04:14:15PM 9 points [-]

Let E(t) be the set of historical information available up until some time t, where t is some date (e.g. 1934). Let p(A|E) be your estimate of the probability an optimally rational Bayesian agent would assign to the event "Self-improving artificial general intelligence is discovered before 2100" given a certain set of historical information.

Consider the function p(t)=p(A|E(t)). Presumably as t approaches 2009, p(t) approaches your own current estimate of p(A).

Describe the function p(t) since about 1900. What events - research discoveries, economic trends, technological developments, sci-fi novel publications, etc. - caused the largest changes in p(t)? Is it strictly increasing, or does it fluctuate substantially? Did the publication of any impossibility proofs (e.g. No Free Lunch) cause strong decreases in p(t)? Can you point to any specific research results that increased p(t)? What about the "AI winter" and related setbacks?
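
(One aside on the "strictly increasing" part, which is just standard probability rather than a claim about AI: an ideal Bayesian's successive estimates form a martingale, so there is no reason to expect p(t) to drift upward merely because time passes.)

```latex
% Conservation of expected evidence (the law of total expectation):
\[
\mathbb{E}\bigl[\, p\bigl(A \mid E(t+1)\bigr) \;\big|\; E(t) \,\bigr]
  \;=\; p\bigl(A \mid E(t)\bigr)
\]
% p(t) should move only when the evidence actually surprises the agent,
% and its ups and downs must balance out in expectation.
```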

Comment author: Peter_de_Blanc 12 November 2009 02:21:47AM 3 points [-]

I don't think this question behaves the way you want it to. Why not ask what a smart human would predict?

Comment author: MichaelVassar 13 November 2009 05:16:17AM 2 points [-]

I'd guess that WWII and particularly the Holocaust set it back rather a lot. How likely were they in 1934, though? Possibly quite likely.

Comment author: John_Maxwell_IV 11 November 2009 06:21:52AM 10 points [-]

What are the hazards associated with making random smart people who haven't heard about existential dangers more intelligent, mathematically inclined, and productive?

Comment author: SilasBarta 12 November 2009 12:06:41AM 7 points [-]

Okay: Goedel, Escher, Bach. You like it. Big-time.

But why? Specifically, what insights should I have assimilated from reading it that are vital for AI and rationalist arts? I personally feel I learned more from Truly Part of You than all of GEB, though the latter might have offered a little (unproductive) entertainment.

Comment author: Kutta 13 November 2009 01:18:37AM *  4 points [-]

Why? I think maybe because GEB integrates form, style, and thematic content into a seamless whole in a unique and pretty much artistic way, while still being essentially non-fiction. And GEB is probably second to nothing at conveying the notion of an intertwined reality. It also provides a very intelligent and intuitive introduction to a whole lot of different areas. Sometimes you can't do the whole job of conveying extremely complex ideas in a succinct essay; just look at the epic amount of writing Eliezer had to do merely to establish a bare framework for FAI discussion. (Besides, from the fact that Eliezer likes GEB it does not follow that GEB should be recommended reading for AI or the rationalist arts. It just means that Eliezer thinks it's a good book.)

Comment author: Steve_Rayhawk 11 November 2009 11:46:02PM *  4 points [-]

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be.

This disadvantages questions which are posted late (to a greater extent than would give people an optimal incentive to post questions early). (It also disadvantages questions which start with a low number of upvotes by historical accident and then are displayed low on the page, and are not viewed as much by users who might upvote them.)

It's not your fault; I just wish the LW software had a statistical model which explained observed votes and replies in terms of a latent "comment quality level", because of situations like this, where it could matter if a worse comment got a high rating while a better comment got a low one. (I also wish forums with comment ratings used ideas related to value of information, optimal sequential preference elicitation, and/or n-armed bandit problems to decide when to show users comments whose subjective latent quality has a low marginal mean but a high marginal variance, in case the (")true(") quality of a comment is high, because of the possibility that a user will rate the comment highly and let the forum software know that it should show the comment to other users.)
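
(A sketch of the simplest version of what I have in mind, under stated assumptions - model each comment's chance of earning an upvote with a Beta-Bernoulli posterior and rank by Thompson sampling, so that comments with few votes, and therefore high posterior variance, still occasionally get shown near the top. The function and parameter names are mine; LW's software does nothing of the sort.)

```python
import random

def thompson_rank(comments):
    """Rank comments by a sample from each one's posterior upvote rate.

    comments: list of (comment_id, upvotes, downvotes) tuples.
    With a Beta(1, 1) prior, the posterior over a comment's latent
    quality (the probability that a reader upvotes it) is
    Beta(1 + ups, 1 + downs). Sampling from that posterior, instead of
    sorting by its mean, is what lets a barely-voted-on comment be shown
    high up once in a while -- the exploration step of an n-armed bandit.
    """
    samples = [
        (random.betavariate(1 + ups, 1 + downs), cid)
        for cid, ups, downs in comments
    ]
    return [cid for _, cid in sorted(samples, reverse=True)]

# A new 1-up/0-down comment will sometimes outrank an established
# 40-up/10-down comment, but usually will not.
print(thompson_rank([("new", 1, 0), ("established", 40, 10), ("weak", 3, 9)]))
```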

Comment author: JamesAndrix 12 November 2009 06:05:21PM 5 points [-]

Reddit has implemented a 'best' view which tries to compensate for this kind of thing: http://blog.reddit.com/2009/10/reddits-new-comment-sorting-system.html

LW is based on reddit's source code, so it should be relatively easy to integrate.
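
(If I recall the linked post correctly, the score behind reddit's 'best' sort is the lower bound of the Wilson score confidence interval on the upvote proportion - roughly, "how high an upvote rate can we be confident of, given this many votes". A sketch in Python; z = 1.96 is the usual 95% value.)

```python
from math import sqrt

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the Wilson score interval on the upvote proportion.

    A comment with 1 up / 0 down ranks below one with 40 up / 10 down:
    with a single vote we can't yet be confident its true rate is high.
    """
    n = ups + downs
    if n == 0:
        return 0.0
    phat = ups / n
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

print(wilson_lower_bound(1, 0))    # ~0.21
print(wilson_lower_bound(40, 10))  # ~0.67
```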

Comment author: MarkHHerman 18 November 2009 01:04:20AM 2 points [-]

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

Comment author: mikerpiker 16 November 2009 03:39:16AM 2 points [-]

It seems like, if I'm trying to make up my mind about philosophical questions (like whether moral realism is true, or whether free will is an illusion) I should try to find out what professional philosophers think the answers to these questions are.

If I found out that 80% of professional philosophers who think about metaethical questions think that moral realism is true, and I happen to be an anti-realist, then I should be far less certain of my belief that anti-realism is true.

But surveys like this aren't done in philosophy (I don't think). Do you think that the results of surveys like this (if there were any) should be important to the person trying to make a decision about whether or not to believe in free will, or be a moral realist, or whatever?

Comment author: Jack 16 November 2009 10:18:16PM *  4 points [-]

My answer to this depends on what you mean by "professional philosophers who think about". You have to be aware that subfields have selection biases. For example, the percentage of philosophers of religion who think God exists is much, much larger than the percentage of professional philosophers generally who think God exists. This is because if God does not exist, philosophy of religion ceases to be a productive area of research. Conversely, if you have an irrational attachment to the idea that God exists, then you are likely to spend an inordinate amount of time trying to prove one exists. This issue is particularly bad with regard to religion, but it is in some sense generalizable to all or most other subfields. Philosophy is also a competitive enterprise, and there are various incentives to publishing novel arguments. This means that in any given subfield, views that are unpopular among philosophers generally will be overrepresented.

So the circle you draw around "professional philosophers who think about [subfield x] questions" needs to be small enough to target experts but large enough that you don't limit your survey to those philosophers who are very likely to hold a view you are surveying in virtue of the area they work in. I think the right circle is something like 'professional philosophers who are equipped to teach an advanced undergraduate course in the subject'.

Edit: The free will question will depend on what you want out of a conception of free will. But the understanding of free will that most lay people have is totally impossible.

Comment author: Alicorn 16 November 2009 10:30:24PM 2 points [-]

Seconded. There are a lot of libertarians-about-free-will who study free will, but nobody I've talked to has ever heard of anyone changing their mind on the subject of free will (except inasmuch as learning new words to describe one's beliefs counts) - so this has to be almost entirely due to more libertarians finding free will an interesting thing to study.

Comment author: Blueberry 16 November 2009 10:49:18PM *  2 points [-]

I've definitely changed my mind on free will. I used to be an incompatibilist with libertarian leanings. After reading Daniel Dennett's books, I changed my mind and became a compatibilist soft determinist.

Comment author: Jack 16 November 2009 10:48:31PM 2 points [-]

Free will libertarianism is also infected with religious philosophy. There are certainly some libertarians with secular reasons for their positions, but a lot of the support for this position comes from those whose religious world view requires radical free will and who wouldn't be libertarians if they didn't believe in God. Same goes for a lot of substance dualists, frankly.

Comment author: whpearson 11 November 2009 06:21:52PM *  2 points [-]

In reference to this comment, can you give us more information about the interface between the modules? Also, what leads you to believe that a human-level intelligence can be decomposed nicely in such a fashion?

Comment author: roland 12 November 2009 09:30:50PM *  5 points [-]

Akrasia

Eliezer, you mentioned suffering from writer's molasses, and your solution was to write daily on OB/LW. I consider this a clever and successful overcoming of akrasia. What other success stories from your life in relation to akrasia could you share?

Comment author: Jack 11 November 2009 04:27:11AM *  7 points [-]

If you thought an AGI couldn't be built, what would you dedicate your life to doing? Perhaps another formulation, or a related question: what is the most important problem/issue not directly related to AI?

Comment author: Johnicholas 11 November 2009 11:34:00AM 2 points [-]

At the Singularity Summit, this question (or one similar) was asked, and (if I remember correctly) EY's answer was something like: If the world didn't need saving? Possibly writing science fiction.

Comment author: Jach 13 November 2009 08:14:46AM *  4 points [-]

Within the next 20 years or so, would you consider having a child and raising him/her to be your successor? Would you adopt? Have you donated sperm?

Edit: the first two questions are conditional on your not being satisfied with the progress on FAI.

Comment author: pwno 11 November 2009 05:47:11PM *  4 points [-]

How would a utopia deal with humans' seemingly contradictory desires - the desire to go up in status and the desire to help lower-status people go up in status? Because helping lower-status people go up in status will hurt our own status positions. I remember you mentioning how in your utopia you would prefer not to reconfigure the human mind. So how would you deal with such a problem?

(If someone finds the premise of my question wrong, please point it out)

Comment author: anonym 14 November 2009 09:49:49PM 3 points [-]

If you conceptualized the high-level tasks you must attend to in order to achieve (1) FAI-understanding and (2) FAI-realization in terms of a priority queue, what would be the current top few items in each queue (with numeric priorities on some arbitrary scale)?

Comment author: ajayjetti 12 November 2009 03:23:32AM 3 points [-]

Are you a meat-eater?

Comment author: Alicorn 12 November 2009 03:32:50AM 2 points [-]
Comment author: Larks 13 November 2009 10:38:17PM 3 points [-]

What do you estimate the utility of Less Wrong to be?

Comment author: Eliezer_Yudkowsky 13 November 2009 10:51:10PM *  11 points [-]

Roughly 4,250 expected utilons.

Comment author: Unnamed 14 November 2009 02:24:05AM 8 points [-]

Could you please convert to dust specks?

Comment author: timtyler 13 November 2009 11:16:32PM *  4 points [-]

Well yes: the question was a bit ambiguous.

Maybe one should adopt a universal standard yardstick for this kind of thing, though - so such questions can be answered meaningfully. For that we need something that everyone (or practically everyone) values. I figure maybe the love of a cute kitten could be used as a benchmark. Better yardstick proposals would be welcome, though.

Comment author: Larks 13 November 2009 11:56:49PM 5 points [-]

If only there existed some medium of easy comparison, such that we could easily compare the values placed on common goods and services...

Comment author: Alicorn 13 November 2009 11:24:56PM 2 points [-]

It'd have to be a funny yardstick. Almost nothing we value scales linearly. I would start getting tired of kittens after about 4,250 of them had gone by.

Comment author: DanArmak 14 November 2009 12:23:05AM 2 points [-]

Way to Other-ize dog people.

Comment author: FeministX 11 November 2009 04:51:12AM *  3 points [-]

I have questions. You say we must have one question per comment. So, I will have to make multiple posts.

1) Is there a domain where rational analysis does not apply?

Comment author: CannibalSmith 11 November 2009 10:47:14AM 3 points [-]

Improvisational theater. (I'm not Eliezer, I know.)

Comment author: nazgulnarsil 12 November 2009 04:31:12PM *  4 points [-]

actually... http://greenlightwiki.com/improv/Status http://craigtovey.blogspot.com/2008/02/popular-comedy-formulas.html

learning this stuff allowed me (an introvert) to successfully fake extroversion for my own benefit when I need to.

Comment author: MichaelVassar 13 November 2009 05:37:07AM 2 points [-]

Analysis takes time, so anywhere timed. Rational analysis, crudely speaking, is the proper use of 'system 2'. Most domains work better via 'system 1' with 'system 2' watching and noticing what's going wrong in order to analyze problems or nudge habits.

Comment author: retired_phlebotomist 13 November 2009 07:11:24AM 3 points [-]

What does the fact that when you were celibate you espoused celibacy say about your rationality?

Comment author: Morendil 12 November 2009 04:59:44PM 2 points [-]

Well, Eliezer's reply to this comment prompts a follow-up question:

In "Free to optimize", you alluded to "the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together". Can you say more about what you imagine such rules might be ?

Comment author: Kutta 13 November 2009 01:34:16AM *  2 points [-]

I think that there isn't any point in attempting to come up with anything more exact than the general musings of Fun Theory. It really takes a superintelligence and knowledge of CEV to conceive of such rules (and it's not even guaranteed that there'd be anything that resembles "rules" per se).

Comment author: komponisto 11 November 2009 06:12:39AM 2 points [-]

Sticking with biography/family background:

Anyone who has read this poignant essay knows that Eliezer had a younger brother who died tragically young. If it is not too insensitive of me, may I ask what the cause of death was?

Comment author: Kutta 11 November 2009 07:07:47AM *  4 points [-]

It's been discussed somewhere in the second half of this podcast:

http://www.speakupstudios.com/Listen.aspx?ShowUID=333035

Comment deleted 13 November 2009 03:10:33AM [-]
Comment author: botogol 11 November 2009 05:19:25PM 0 points [-]

Do you act all rational at home . . or do you switch out of work mode and stuff pizza and beer in front of the TV like any normal akrasic person? (and if you do act all rational, what do your partner/family/housemates make of it? do any of them ever give you a slap upside the head?)

:-)

Comment author: Vladimir_Nesov 11 November 2009 06:31:54PM 3 points [-]

What does it look like when a person "acts rationally"? Do I hear connotations of the dreaded Mr. Spock?

Comment author: RobinZ 11 November 2009 05:20:48PM 2 points [-]

*coughs*

A popular belief about "rationality" is that rationality opposes all emotion - that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can't find any theorem of probability theory which proves that I should appear ice-cold and expressionless.

Comment author: AndrewKemendo 11 November 2009 12:34:23PM 1 point [-]

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the optimal goal, as opposed to, say, virtue seeking?

Comment author: Eliezer_Yudkowsky 11 November 2009 06:55:52PM 4 points [-]

how much thought have you put into developing your personal epistemological philosophy?

...very little, you know me, I usually just wing that epistemology stuff...

(seriously, could you expand on what this question means?)

Comment author: Psy-Kosh 11 November 2009 01:47:31PM 2 points [-]

*blinks* I'm curious as to what it is you are asking. A utility function is just a way of encoding and organizing one's preferences/values. Okay, there're a couple additional requirements like internal consistency (if you prefer A to B and B to C, you'd better prefer A to C) and such, but other than that, it's just a convenient way of talking about one's preferences.

The goal isn't "maximize utility", but rather "maximizing utility" is a way of stating what it is you're doing when you're working to achieve your goals. Or did I completely misunderstand?