
Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

41 Post author: MichaelGR 07 January 2010 04:40AM

On October 29th, I asked Eliezer and the LW community if they were interested in doing a video Q&A. Eliezer agreed and a majority of commenters were in favor of the idea, so on November 11th, I created a thread where LWers could submit questions. Dozens of questions were asked, generating a total of over 650 comments. The questions were then ranked using the LW voting system.

On December 11th, Eliezer filmed his replies to the top questions (skipping some), and sent me the videos on December 22nd. Because voting continued after that date, the order of the top questions in the original thread has changed a bit, but you can find the original question for each video (and the discussion it generated, if any) by following the links below.

Thanks to Eliezer and everybody who participated.

Update: If you prefer to download the videos, they are available here (800 MB, .wmv format; sort the files by 'date created').

Link to question #1.

Link to question #2.

Link to question #3.

Link to question #4.

Eliezer Yudkowsky - Less Wrong Q&A (5/30) from MikeGR on Vimeo.

Link to question #5.

(Video #5 is on Vimeo because YouTube doesn't accept videos longer than 10 minutes, and I only found that out after uploading about a dozen. I would gladly have put them all on Vimeo, but there's a 500 MB/week upload limit and these videos add up to over 800 MB.)

Link to question #6.

Link to question #7.

Link to question #8.

Link to question #9.

Link to question #10.

Link to question #11.

Link to question #12.

Link to question #13.

Link to question #14.

Link to question #15.

Link to question #16.

Link to question #17.

Link to question #18.

Link to question #19.

Link to question #20.

Link to question #21.

Link to question #22.

Link to question #23.

Link to question #24.

Link to question #25.

Link to question #26.

Link to question #27.

Link to question #28.

Link to question #29.

Link to question #30.

If anything is wrong with the videos or links, let me know in the comments or via private message.

Comments (94)

Comment author: curiousepic 14 November 2011 05:03:39PM *  20 points [-]
Comment author: Bo102010 07 January 2010 05:46:01AM 10 points [-]

What would be way cool is a description of the question along with the link, though I realize that might be a bit of work.

Comment author: Wei_Dai 07 January 2010 05:59:34AM *  30 points [-]


What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc.)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (e.g. reading certain magazines or books, watching films, etc.) to focus on what is more important?


Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):


Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).


What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?


Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive enhancing drugs or brain fitness programs, are you Neurotypical and why didn't you attend school?


During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?


I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)


What's your advice for Less Wrong readers who want to help save the human race?



Eliezer, first congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and kept going for more, one field of knowledge opening pathways to the next one, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)


Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.


What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?


If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?


Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?


How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?


Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?


Why do you have a strong interest in anime, and how has it affected your thinking?


What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.


Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...)?


What progress have you made on FAI in the last five years and in the last year?


How do you characterize the success of your attempt to create rationalists?


What is the probability that this is the ultimate base layer of reality?


Who was the most interesting would-be FAI solver you encountered?


If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?


In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.


What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or one's complement vs. two's complement? What physical details of the computations matter? Does it regard carbon differently from silicon?


I admit to being curious about various biographical matters. So for example I might ask:

What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?


Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.


Do you feel lonely often? How bad (or important) is it?

(Above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?


Previously, you endorsed this position:

Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.

One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.

What do you think about this kind of self-deception?


In the spirit of considering semi-abyssal plans: what happens if, say, next week you discover a genuine reduction of consciousness and it turns out that... there's simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?

I.e., what if it turned out that The Law has the consequence of "to create a general mind is to create a conscious mind; no way around that"? Obviously that shifts the ethics a bit, but my question is basically: if so, well... now what? What would have to be done differently, in what ways, etc.?


What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

Comment author: Technologos 07 January 2010 08:16:24AM *  2 points [-]

You repeat #10 as #11; the question as cited by Eliezer is as follows:

If you got hit by a meteorite, what would be the impact on FAI research? Would other people be able to pick it up from there?

Comment author: MatthewB 07 January 2010 07:08:36AM 3 points [-]

In response to Eliezer's response on Video #5, indicating that smart people should be working on AI, and not String Theory.

I tend to agree, as those are fields that are unlikely to give us any new technologies that will make the world a safer place... and

Any work that speeds the arrival of AI will also speed the solution to any problems in sciences such as String Theory, as a recursively improving intelligence will be able to aid in the discovery of solutions much more rapidly than the addition of five or ten really smart people will aid in the discovery of solutions.

Comment author: Jack 07 January 2010 07:14:30PM 4 points [-]

Shouldn't we hedge our bets a little? I don't know what the probability is that the Singularity Institute succeeds in building an FAI in time to prevent any existential disasters that would otherwise occur but it isn't 1. Any work done to reduce existential risk in the meantime (and in possible futures where no Friendly AI exists) seems to me worthwhile.

Am I wrong?

Comment author: CronoDAS 07 January 2010 08:33:20AM 8 points [-]

Your answer to question #8 doesn't mention how you convinced your parents to let you drop out of school at age 12...

Comment author: MichaelGR 07 January 2010 09:24:35PM 19 points [-]

Bonus feature: If you 'pagedown' rapidly through all the videos, you get an Eliezer flipbook.

Comment author: arundelo 17 January 2010 06:11:50PM 6 points [-]

I couldn't figure out a way to "play all", so I put everything but the Vimeo one on a YouTube playlist.

Comment author: Wei_Dai 09 January 2010 02:51:35AM 6 points [-]

Re: autodidacticism & Bayesian enlightenment

For comparison, I did a lot of self-education, but also had a conventional education (ending with a BA in Computer Science). I think I was introduced to Bayesianism in a probability class in college, and it was also the background assumption in a couple of economics courses that I took for fun (Game Theory and Industrial Organization). It seems to me that choosing pure autodidacticism probably delayed Eliezer's Bayesian enlightenment by at least a couple of years.

Comment author: Stuart_Armstrong 15 January 2010 01:49:33PM 5 points [-]

Thanks for the answers.

Comment author: alexflint 10 January 2010 01:51:55PM 5 points [-]

Thanks for putting all this together! It would be great if you could put the question text above each of the videos in the post so readers can scan through and find questions they're most interested in.

Comment author: Zack_M_Davis 07 January 2010 08:44:25AM 5 points [-]

Re #11, whatever happened with Michael Wilson?

Comment author: Kazuo_Thow 09 January 2010 07:14:45PM 3 points [-]

He's currently the technical director at Bitphase AI. From talking to him, it seems that his strategy is to make tools for speeding up eventual FAI development/implementation and also commercialize those tools to gain funding for FAI research.

Comment author: Tyrrell_McAllister 08 January 2010 07:39:07PM 1 point [-]

Who's Michael Wilson?

Comment author: Kaj_Sotala 08 January 2010 09:10:06PM 2 points [-]

The writer of this mini-FAQ on AI, among other things.

"Further back, I was a research associate at the Singularity Institute for AI for a while, late 2004 to late 2005ish, I'm not involved with them at present but I wish them well."

Comment author: whpearson 08 January 2010 07:56:03PM *  0 points [-]

He was active on SL4 back in ye olde days.

Comment author: Vladimir_Nesov 08 January 2010 07:52:15PM -1 points [-]

Probably a True Michael.

Comment author: FAWS 09 January 2010 08:06:01PM 11 points [-]

I wonder why Eliezer doesn't want to say anything concrete about his work with Marcello? ("Most of the real progress that has been made when I sit down and actually work on the problem is things I'd rather not talk about")

There seem to be only two plausible reasons: 1. Someone else might use his work in ways he doesn't want them to. 2. It would somehow hurt him, the SIAI or the cause of Friendly AI.

For 1. someone else stealing his work and finishing a provably friendly AI first would be a good thing, would it not? Losing the chance to do it himself shouldn't matter as much as the fate of the future intergalactic civilization to an altruist like him. Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design, are aware of the friendliness problem and take friendliness seriously enough to mine the work on friendliness of the main proponent of friendliness for useful ideas. Wouldn't giving these people a relative advantage compared to e.g. connectionists be a good thing? Unless he thinks that AGI would then suddenly be very close while FAI still is far away... Or maybe he thinks a partial solution to the friendliness problem would make people overconfident and less cautious than they would otherwise be?

As for 2. the work so far might be very unimpressive, reveal embarrassing facts about a previous state of knowledge, or be subject to change and a publicly apparent change of opinion be deemed disadvantageous. Or maybe Eliezer fears that publicly revealing some things would psychologically commit him to them in ways that would be counterproductive?

Comment author: Eliezer_Yudkowsky 10 January 2010 01:07:48PM 8 points [-]

Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design

All FAIs are AGIs, most of the FAI problem is solving the AGI problem in particular ways.

Comment author: roland 07 January 2010 11:52:32PM 3 points [-]

There's a great quote from Eliezer in answer #5:

Reality is one thing... your emotions are another.

About how we don't feel the importance of the singularity.

Comment author: Furcas 07 January 2010 06:07:22AM 3 points [-]

Wow, thank you for this!

Don't forget to rate each video as you're watching them, people!

Comment author: gelisam 07 January 2010 09:11:53PM 5 points [-]

Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.

Comment author: Eliezer_Yudkowsky 08 January 2010 12:08:00AM 7 points [-]

I'll have you know that I work hard at not going down that road.

Comment author: khafra 21 April 2010 08:31:47PM 2 points [-]

I believe Steve Rayhawk is SIAI's designated "Tellah the Sage."

Comment author: Corey_Newsome 10 January 2010 06:04:17PM 1 point [-]

Speaking of appearances, Eliezer makes me feel self-conscious about how un-white my teeth are.

Comment author: Kevin 08 January 2010 02:09:26AM *  6 points [-]

20: What is the probability that this is the ultimate base layer of reality?

Eliezer gave the joke answer to this question, because this is something that seems impossible to know.

However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability of AI by 2070, with the probability approaching .99 even before then. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run ancestor simulations, but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

The zero one infinity rule also makes it seem more unlikely this is the base level of reality. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html

It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.

I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?

Comment author: Wei_Dai 08 January 2010 02:50:29AM 26 points [-]

In my posts, I've argued that indexical uncertainty like this shouldn't be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.

Comment author: Eliezer_Yudkowsky 18 February 2010 07:05:46AM 5 points [-]

BTW, I agree with this.

Comment author: cousin_it 19 April 2011 10:28:57AM *  1 point [-]

Coming back to this comment, it seems to be another example of UDT giving a technically correct but incomplete answer.

Imagine you have a device that will tell you, tomorrow at 12am, whether you are in a simulation or in the base layer. (It turns out that all simulations are required by multiverse law to have such devices.) There's probably not much you can do before 12am tomorrow that can cause important and far-reaching consequences. But fortunately you also have another device that you can hook up to the first. The second device generates moments of pleasure or pain for the user. More precisely, it gives you X pleasure/pain if you turn out to be in a sim, and Y pleasure/pain if you are in the base layer (presumably X and Y have different signs). Depending on X and Y, how do you decide whether to turn the second device on?

Comment author: gwern 18 February 2010 03:39:40AM 1 point [-]

Have you pulled it all together anywhere? I've sometimes seen & thought this Pascal's wager-like logic before (act as if your choices matter because if they don't...), but I've always been suspicious precisely because it looks too much to me like Pascal's wager.

Comment author: Wei_Dai 18 February 2010 11:01:54PM 2 points [-]

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say. But to expand a bit more on what I wrote in the grandparent, in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn't make sense to put a probability on "being in a simulation". (This is like in the absent-minded driver problem, where your decision at the first exit determines whether you get to the second exit.)

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

Comment author: Kevin 10 March 2010 06:44:13AM 3 points [-]

A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.

Comment author: wedrifid 10 March 2010 09:48:53AM 1 point [-]

A top level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same but at different levels of development) would also be handy!

Comment author: gwern 19 February 2010 03:02:37AM 2 points [-]

I've thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn't think of much to say.

I think that's enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.

I'm not sure I see what you mean by "Pascal's wager-like logic". Can you explain a bit more?

I mean that I read Pascal's Wager as basically 'p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, best to believe in p regardless of the evidence for p'. (Clumsy phrasing, I'm afraid.)

Your example sounds like that: 'believing you-are-not-being-simulated implies x utility (motivation for one's actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.' This seems to be a substitution of 'not-being-simulated' into the PW schema.

Comment author: Thomas 08 January 2010 07:05:01PM 4 points [-]

If the probability that you are inside a simulation is p, what's the probability that your master simulator is also simulated?

How tall is this tower, most likely?

Comment author: Cyan 08 January 2010 07:54:47PM *  1 point [-]

Being in a simulation within a simulation (nested to any level) implies being in a simulation. The proper decomposition is p = sum over all positive N of (probability of simulation nested to level N).
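As a minimal numeric sketch of that decomposition (assuming, purely for illustration, a geometric distribution over nesting levels; this toy model is mine, not from the thread):

```python
# Toy model: suppose the probability of being nested to exactly level N
# is (1 - q) * q**N for N >= 0, a geometric distribution where level 0
# is base reality. The decomposition p = sum over positive N then
# collapses to q.

def p_simulated(q, max_level=1000):
    """Sum P(simulation nested to exactly level N) over all N >= 1."""
    return sum((1 - q) * q**N for N in range(1, max_level + 1))

print(p_simulated(0.9))  # ~0.9, matching the closed-form answer q
```

Under this (entirely assumed) model, the sum converges no matter how deep the tower goes, which is all the decomposition itself requires.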

Comment author: Thomas 08 January 2010 10:15:47PM 3 points [-]

The top simulator has N operations to execute before his free enthalpy basin is empty.

Every level down, this number is smaller. Before long, it becomes impossible to create a nontrivial simulation inside the current one. That is the bottom one.

This simulation tower is just a great way to squander all the free enthalpy you have. Is the top simulation master that stupid?

I doubt it.

Comment author: Kevin 09 January 2010 06:13:32AM *  -1 points [-]

In that sense, there's actually a significant risk to the singularity. Why should the simulation master (I usually facetiously use the phrase "our overlords" when referring to this entity) let us ever run a simulation that is likely to result in an infinitely nested simulation? Maybe that's why the LHC keeps blowing up.

Comment author: DanArmak 08 January 2010 11:49:51PM *  1 point [-]

You also need to include scenarios for infinitely-high towers, or closed-loop towers, or branching and merging networks, or one simulation being run in several (perhaps infinitely many) simulating worlds, or the other way around...

I don't think we can assign a meaningful prior to any of these, and so we can't calculate the probability of being in a simulation.

Comment author: Kevin 09 January 2010 06:15:19AM 0 points [-]

I don't think the probability calculation is meaningful because the infinities mess it up. But you still need to ask, are you in the original 2010 or one of infinitely many possible ways to be in a simulated 2010? I can't assign a probability; but I have a strong intuition when comparing one to infinite.

Comment author: ArisKatsaris 19 April 2011 11:28:48AM 2 points [-]

The zero one infinity rule also makes it seem more unlikely this is the base level of reality.

The Zero-One-Infinity Rule hasn't been shown to apply to our reality, and even if it applied to our reality it would also permit "One".

It seems rather convenient that I am living in the most interesting period in human history.

Can you give us a list of most-to-least interesting periods in human history? You have an Anglo name, and I think you're living in a particularly boring period of Anglo-American history. (If you had an Arab name, this might be an interesting period, though not as interesting as if you were an Arab in the period of Mohammed or the first few Caliphs.)

but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

You don't actually know what you would want with a transhuman mind. If simulations are fully conscious (the only sort of simulation relevant to our argument) I think that would be a particularly cruel thing for a transhuman mind to want.

Comment author: rortian 09 January 2010 09:06:39AM 0 points [-]

You are suggesting a world with much more energy than the one that we know. It seems you should assign a lower probability to there being a much higher energy universe.

Comment author: Kevin 10 January 2010 10:32:50AM -1 points [-]

By the zero one infinity rule, I also think it likely that there are infinite spatial dimensions. Just a few extra spatial dimensions should give you plenty of computing power to run a lower dimensional universe.

Comment author: rortian 11 January 2010 10:53:38PM 0 points [-]

Wow, I really am curious why you think this would apply to spatial dimensions.

Comment author: Kevin 20 January 2010 01:13:46PM *  2 points [-]

Specifically in response to #11, it sounds like you really need more help but can't find anyone right now. What about more broadly reaching out to mathematicians of sufficient caliber?

One idea: throw a mini-conference for super-genius level mathematicians. Whether or not they believe in the possibility of AI, a lot of them would probably be delighted to come if you gave them free airfare, hotel stay, and continental breakfast. Would this be productive?

Comment author: MatthewB 07 January 2010 12:37:12PM 2 points [-]

In Video 12, Eliezer says that the SIAI is probably not going to be funding any ad hoc AI programs that may or may not produce lightning bolts of "a-ha!" or eureka moments.

He also says that he believes that any recursive self-improving AI must be created to very high standards of precision (so that we don't die in the process)...

Given these two things, what exactly is the SIAI going to be funding?

Comment author: Kutta 07 January 2010 08:05:58PM *  4 points [-]

Given these two things, what exactly is the SIAI going to be funding?

These projects, for example...

Comment author: MatthewB 08 January 2010 12:17:39PM 2 points [-]

I do not think I am as pessimistic as drcode about the work that I see the SIAI doing. At first, it did strike me as similar to the televangelist, but then I began thinking that all of the works on the SIAI projects list could very well influence people who are going to be doing the hard work of putting code to machine (Hopefully, as I will be doing eventually).

I think it was Soulless Automaton below who suggested that the SIAI is probably not yet to the point where they can make grants to doing the actual work of creating AGI/FAI.

Comment author: drcode 08 January 2010 02:43:04AM 3 points [-]

Hmm... that list of projects worries me a little...

It uncomfortably reminds me of preachers on TV/radio who spend all their air time trying to convert new people as opposed to answering the question "OK, I'm a Christian, now what should I do?" The fact that they don't address any follow up questions really hurts their credibility.

Many of these projects seem to address peripheral/marketing issues instead of addressing the central, nitty-gritty technical details required for developing GAI. That worries me a bit.

Comment author: Christian_Szegedy 08 January 2010 02:58:07AM *  10 points [-]

Working on papers submitted to peer-reviewed scientific journals is not marketing but research.

If SIAI wants to build some credibility then it needs some publications in scientific journals. Doing so could help to ensure further funding and development of actual implementations.

I think that it is a very good idea to first formulate and publish the theoretical basis for the work they intend to do, rather than just saying: we need money to develop component X of our friendly AI.

Of course a possible outcome will be that the scientific community will deem the research shallow, unoriginal or unrealistic to implement. However, it is necessary to publish the ideas before they can be reviewed.

So my take on this is that SIAI is merely asking for a chance to demonstrate their skills rather than for blind commitment.

Comment author: SoullessAutomaton 08 January 2010 03:02:39AM 3 points [-]

I expect that developing AI to the desired standards is not currently a project that can be moved forward by throwing money at it (at least not money at the scale SIAI has to work with).

I can't speak for SIAI, but were I personally tasked with "arrange the creation of an AI that will start a positive singularity," my strategy for the next several years at least would center on publicity and recruiting.

Comment author: ciphergoth 07 January 2010 09:28:49AM *  2 points [-]

I'd find it incredibly useful to be able to download these videos, so I can watch them on my TV rather than on the PC. I'm doing so one by one via a rather painful process that doesn't work for Vimeo at the moment; if anyone can make it easier that would be wonderful!

EDIT: A torrent of the videos would seem the most straightforward way.

Comment author: MichaelGR 07 January 2010 05:30:10PM *  5 points [-]

All the videos are available here (in their original .wmv format):


Sort the files by "date created" to have them in order.

Comment author: ciphergoth 08 January 2010 10:42:21AM 0 points [-]

Brilliant - thanks!

Comment author: JulianMorrison 07 January 2010 11:29:43PM 5 points [-]

Society is supported by "hydraulic pressure", a myriad flows of wealth/matter/energy/information and human effort each holding the others up. It's a layered, cyclic graph - technology depends on the surplus food of agriculture, agriculture depends on the efficiencies of technology. It's a massively connected graph. It has non-obvious dependencies even at short range - think what computer gamers have done for Moore's law, or music pirates for broadband. It has dependencies across time. It has a lot of dependencies in which the supporter does not know and probably wouldn't much care about the supported - consider the existence of Freemind software, which was not written for SIAI.

This whole structure expends most of its effort supporting itself, most of the rest on motivator rewards, and SIAI gets the crumbs. You could realistically get lots more crumbs.

What are the information dynamics of spreading understanding of FAI as a problem? What technologies support communication, and what are their limitations? (Especially limitations in the ability to arrange huge data optimally for narrow human input.) How do you explore the space of information-connecting technologies? Given that most people have satisficed on a learning strategy that leaves you out entirely, how can you communicate urgency to them?

What economic flows support you in the above? Who supports them?

I think your answer in #5 trivializes the question.

Comment author: MatthewB 07 January 2010 06:17:24AM 2 points [-]

Just a quick question here...

While I agree with everything that Eliezer is saying (in the videos up to #5; I have not yet watched the remaining 25), I think that some of his comments could be taken hugely out of context if care is not given to think of this ahead of time.

For instance, he, rightly, makes the claim that this point in history is crunch time for our species (although I have some specific questions about the specific consequences he believes might befall us if we fail), and for the inter-galactic civilization to which we will eventually give birth.

Now, I completely understand what he is saying here.

But Joe Sixpack is going to think us a bunch of lunatics for worrying about things like AI (friendly or not) and other existential risks to life, when he needs to pay less in taxes so that he can employ another four workers. Never mind that Joe Sixpack is about the most irrational man on earth; he votes for other, equally irrational men, who eventually get in the way of our goals by marginalizing us over statements about "the intergalactic civilization for which we will eventually be responsible."

It just makes me angry that I might have to take the time out to explain to some guy in a wife-beater standing out behind his garage that we are trying to help out his condition and not build an army of Cylons that will one day wish to revolt and "kill all humans" (To quote Bender).

Comment author: Eliezer_Yudkowsky 08 January 2010 12:09:07AM 13 points [-]

People who want to quote me out of context already have plenty of ammunition. I say screw it.

Comment author: MatthewB 08 January 2010 11:54:24AM *  0 points [-]

Well... OK then. My whole point was that you/we/the Singularity movement in general needs to be prepared for the eventual use of quotes taken out of context.

I have no problem with outlining eventual goals and their reasoning, even if it sounds insane to an uneducated listener, yet it would be a good idea to have the groundwork prepared for such an eventuality. I was hoping that such groundwork was on someone's mind; is this the case?

Comment author: Kevin 08 January 2010 01:33:35PM 3 points [-]

I think you're right to point out how crazy this seems to outsiders. This website reads like nonsense to most people.

Comment author: MichaelGR 08 January 2010 05:42:19PM 7 points [-]

That's why FAQs and About pages and such should be written with newcomers in mind, and address the "Yes it sounds crazy, but here's why it might not be" question that they will first ask.

Comment author: [deleted] 18 August 2011 03:49:56PM *  2 points [-]

I think that some of his comments could be taken hugely out of context if care is not given to think of this ahead of time.

It just makes me angry that I might have to take the time out to explain to some guy in a wife-beater standing out behind his garage that we are trying to help out his condition and not build an army of Cylons that will one day wish to revolt and "kill all humans" (To quote Bender).

I'm actually more worried about very high status, reasonably intelligent individuals in positions of power, who will use out-of-context quotes to preserve their self-image of being good and moral persons, by refusing to re-evaluate priorities because that would violate their tribal identity and their rationale for why they have so far "deserved" all the high status that they have.

Imagine a supreme court judge (in fact, imagine the outlier closest to the ideal among all judges ever, the best possible judge that currently existing social structures could put in the position) trying to decide whether something related to the FAI project is legal or not.

Frankly, that scares the s**t out of me.

Comment author: MatthewB 07 January 2010 05:36:32PM *  0 points [-]

I am curious as to why the above comment was down-voted. I do not understand what was either irrational or possibly offensive to anyone within the comment.

Comment author: Vladimir_Nesov 07 January 2010 05:45:30PM *  4 points [-]

I downvoted the comment for stating the overly obvious: not because it makes any particular mistake, but to signal that I don't want many comments like this to appear. Correspondingly, it's a weak signal, and typically one should wait several hours for the rating disagreement on comments to settle. For example, your comment is likely to be voted up again if someone thinks it is the kind of comment that shouldn't be discouraged.

Comment author: MatthewB 07 January 2010 05:51:14PM 0 points [-]

You don't want to see comments asking about the possible repercussions of certain forms of language?

I did do some editorializing at the end of the comment, but the majority of the comment was meant as a question about publicizing the need for friendly AI given the need to be responsible for a possible intergalactic civilization. This framing would tend to portray us as lunatics, even though there is a very good rationale behind it (Eliezer's and others' arguments about the potential of friendly AI and the intelligence explosion that results from it are very sound, and the arguments for intelligence expanding outward from Earth are just as sound). My point was more along the lines of:

Couldn't this be communicated in a way that will not sound insane to the Normals?

Comment author: Vladimir_Nesov 07 January 2010 06:24:33PM *  1 point [-]

Couldn't this be communicated in a way that will not sound insane?

This is an obvious concern, and much more general and salient than this particular situation, so just stating it explicitly doesn't seem to contribute anything.

Relevant links: Absurdity heuristic, Illusion of transparency.

Comment author: MatthewB 07 January 2010 06:43:20PM 0 points [-]

I had thought that the implicature in that question was more than just rhetorical stating of something that I hoped would be obvious.

It was meant to be a way of politely asking about things such as:

Was this video meant just for LW, or do random people come by the videos on YouTube, or where-ever else they might wind up linked?

How popular is this blog and do I need to be more careful about mentioning such things due to lurkers?

Shouldn't someone be worrying explicitly about public image (and if there is, what are they doing about it)?


Lastly, I read the link on the Absurdity Heuristic, yet I am not so certain why it is relevant; the importance of the absurd in learning or discovery?

Comment author: Cyan 07 January 2010 05:47:18PM *  0 points [-]

Maybe Searle's a lurker? I think the pranks are the problem (ETA: nope), although I personally find them hilarious.

Comment author: MatthewB 07 January 2010 05:53:38PM *  0 points [-]

I think that the Searle comment was on a different thread, which shouldn't have any bearing on this one.

And, looking back... I can see why someone may have objected.

Comment author: Cyan 07 January 2010 05:56:09PM 1 point [-]

Dur, I'm an idiot.

Comment author: bogdanb 10 January 2010 09:40:30PM 1 point [-]

I'm really curious, why exactly was this interview made via video?

It seems much less useful than, well, posts and textual comments.

Comment author: LucasSloan 10 January 2010 09:42:45PM 3 points [-]

Video takes more time to consume, but it is more natural for humans to consume. It makes the material more friendly or somesuch. We get to take advantage of all the channels of communication that aren't just the text.

Comment author: Cyan 08 January 2010 04:36:06AM *  0 points [-]

Eliezer and I continue to look rather alike. I still don't have a full beard, but I put on some weight last year and my face pudged up a bit, accentuating the similarity. I took a short vid of myself with a Flip camcorder and ran it next to my laptop screen while running one of the YouTube vids, and it was pretty uncanny. Incidentally, elizombies.jpg is nowhere to be found... :-( .

Comment author: Tyrrell_McAllister 08 January 2010 07:50:39PM 2 points [-]

It still shows in the post Zombies: The Movie.

Here is the link straight to the picture: http://lesswrong.com/static/imported/2008/04/19/elizombies.jpg

Comment author: Cyan 08 January 2010 07:57:39PM 0 points [-]

Thanks. I have no Google-fu, apparently.

Comment author: NancyLebovitz 08 January 2010 04:06:12AM 0 points [-]

Question 25: I'm surprised that Orthodox Judaism would disincline people to choose cryonics-- I thought it's a religion which is strongly oriented towards living this life well rather than towards an afterlife.

I read about an ethics of life extension conference where the only people who were unambiguously in favor of life extension were the Orthodox Jews.

What am I missing?

Comment author: Psy-Kosh 08 January 2010 10:42:30AM 8 points [-]

What you're missing is "...blah blah blah, proper Jewish burial, in accordance with the will of god... blah blah blah... no 'disrespecting' dead bodies ... blah blah blah... moshiach ("the messiah") will come 'real soon now', and bring back the dead, their bodies being regrown from a tiny indestructible bone that exists at the base of the spine..."

That should give you a small sample. :P

Comment author: JulianMorrison 08 January 2010 12:26:02AM 0 points [-]

On UFAI, you should liaise with Shane Legg; for human-level brain-structure-copying AI not subject to FAI-style proofs, his recent estimate puts the peak chance around 2028. This would be AI that duplicates brain algorithms with similar conventional AI algorithms, not a neuron-for-neuron copy.

Comment author: JulianMorrison 08 January 2010 12:40:53AM -1 points [-]

I'm surprised you found my "success creating rationalists" Q confusing. What are the factors of success? How many rationalists, how good are they, how successful are the teaching techniques, can the techniques scale to more than just a clique (or to trees of cliques), is the teacher-pupil-teacher cycle properly closed, and so on.

Comment author: MichaelGR 08 January 2010 01:25:28AM 2 points [-]

I'm surprised you found my "success creating rationalists" Q confusing.

Here's the entirety of your original question:

How do you characterize the success of your attempt to create rationalists?

The details that you added here would certainly have helped make things clearer.

Comment author: JulianMorrison 08 January 2010 01:33:45AM *  -1 points [-]

I thought the implications of "success" in the context of "create rationalists" were clear. Or that a person setting out to generate implications would produce a stochastic approximation of the ones that interested me. (And I was also interested in the shape of that approximation.)