Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

16 Post author: MichaelGR 11 November 2009 03:00AM

As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

Suggestions

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

It's okay to attempt humor (but good luck, it's a tough crowd).

If a discussion breaks out about a question (e.g. to ask for clarifications) and the original poster decides to modify the question, the top-level comment should be updated with the modified question (make it easy to find your question; don't leave the latest version buried in a long thread).

Update: Eliezer's video answers to 30 questions from this thread can be found here.

Comments (682)

Sort By: Controversial
Comment author: botogol 11 November 2009 05:19:25PM 0 points [-]

Do you act all rational at home . . or do you switch out of work mode and stuff pizza and beer in front of the TV like any normal akrasic person? (and if you do act all rational, what do your partner/family/housemates make of it? do any of them ever give you a slap upside the head?)

:-)

Comment author: RobinZ 11 November 2009 05:20:48PM 2 points [-]

*coughs*

A popular belief about "rationality" is that rationality opposes all emotion - that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can't find any theorem of probability theory which proves that I should appear ice-cold and expressionless.

Comment author: PlaidX 11 November 2009 03:41:54AM *  -1 points [-]

In 2000, you said this:

When voting in the United States, follow this algorithm: Vote Libertarian when available; otherwise, vote for the strongest third party available (usually Reform, unless they have a really evil candidate); then vote for any candidate who isn't a lawyer; then vote Republican (at present, they're slightly better).

Would you stick by your assertion that in 2000 the Republicans were "slightly better", and who or what did you mean they were slightly better for? From where I'm standing, albeit with the benefit of hindsight, it seems like eight years of Al Gore would've been "slightly better" than eight years of Dick Cheney for just about everyone and everything.

Comment author: Zack_M_Davis 11 November 2009 03:51:45AM 5 points [-]

Cf. "Stop Voting for Nincompoops" (2008):

Besides, picking the better lizard is harder than it looks. In 2000, the comic Melonpool showed a character pondering, "Bush or Gore... Bush or Gore... it's like flipping a two-headed coin." Well, how were they supposed to know? In 2000, based on history, it seemed to me that the Republicans were generally less interventionist and therefore less harmful than the Democrats, so I pondered whether to vote for Bush to prevent Gore from getting in. Yet it seemed to me that the barriers to keep out third parties were a raw power grab, and that I was therefore obliged to vote for third parties wherever possible, to penalize the Republicrats for getting grabby. And so I voted Libertarian, though I don't consider myself one (at least not with a big "L"). I'm glad I didn't do the "sensible" thing. Less blood on my hands.

Comment author: Eliezer_Yudkowsky 11 November 2009 03:53:44AM 1 point [-]

...and that's still my reply, but you could vote it up if you want me to repeat it on video. :)

Comment deleted 13 November 2009 03:10:33AM [-]
Comment author: taa21 14 November 2009 07:27:25PM 0 points [-]

Is this a joke?

Comment author: UnholySmoke 12 November 2009 10:54:34AM 0 points [-]

Favourite album post-1960?

Comment author: anonym 15 November 2009 10:18:25PM 2 points [-]

More generally, do you listen to music much, and if so, what sorts of music, under what circumstances, and who/what are your favorites?

Comment author: AndrewKemendo 11 November 2009 12:34:23PM 1 point [-]

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the goal to optimize, as opposed to, say, virtue seeking?

Comment author: zero_call 15 November 2009 08:32:37AM *  0 points [-]

There seem to be two problems, or components, of the singularity program which are often interchanged or conflated. Firstly, there is the goal of producing a GAI, say on the order of human intelligence (e.g., similar to the Data character from Star Trek). Secondly, there is the goal, or belief, that a GAI will be strongly self-improving, to the extent that it reaches a super-human intelligence.

It is unclear to me that achieving the first goal means that the second goal is also achievable, or of a similar difficulty level. For example, I am inclined to think that we as humans constitute a sort of natural GAI, and yet, even if we fully understood the brain, it would not necessarily be clear how to optimize ourselves to super-human intelligence levels. As a crude analogy: just because an expert car mechanic completely understands how a car works, it doesn't follow that he can build another car which is fundamentally superior.

Succinctly: Why should we expect a computerized GAI to have a higher-order self-improvement function than we humans do? (I trust you will not trivialize the issue by saying, for example, better memory & better speed = better intelligence.)

Comment author: [deleted] 15 November 2009 09:19:10AM 3 points [-]

Eliezer's belief, as I recall, is that human intelligence is a relatively small and arbitrary point in the "intelligence hierarchy", i.e. relative to minds at large, the smartest human is not much smarter than the dumbest. If an AI's intelligence stops increasing somewhere, why would it just happen to stop within the human range?

Comment author: komponisto 11 November 2009 06:12:39AM 2 points [-]

Sticking with biography/family background:

Anyone who has read this poignant essay knows that Eliezer had a younger brother who died tragically young. If it is not too insensitive of me, may I ask what the cause of death was?

Comment author: smoofra 11 November 2009 05:58:59AM -2 points [-]

Do you vote?

Comment author: retired_phlebotomist 13 November 2009 07:11:24AM 3 points [-]

What does the fact that when you were celibate you espoused celibacy say about your rationality?

Comment author: Larks 13 November 2009 10:38:17PM 3 points [-]

What do you estimate the utility of Less Wrong to be?

Comment author: Eliezer_Yudkowsky 13 November 2009 10:51:10PM *  11 points [-]

Roughly 4,250 expected utilons.

Comment author: Unnamed 14 November 2009 02:24:05AM 8 points [-]

Could you please convert to dust specks?

Comment author: timtyler 13 November 2009 11:16:32PM *  4 points [-]

Well yes: the question was a bit ambiguous.

Maybe one should adopt a universal standard yardstick for this kind of thing, though - so such questions can be answered meaningfully. For that we need something that everyone (or practically everyone) values. I figure maybe the love of a cute kitten could be used as a benchmark. Better yardstick proposals would be welcome, though.

Comment author: DanArmak 14 November 2009 12:23:05AM 2 points [-]

Way to Other-ize dog people.

Comment author: Larks 13 November 2009 11:56:49PM 5 points [-]

If only there existed some medium of easy comparison, such that we could easily compare the values placed on common goods and services...

Comment author: timtyler 14 November 2009 12:01:04AM 1 point [-]

Exactly: the elephant in my post ;-)

Comment author: Larks 14 November 2009 12:17:32AM 2 points [-]

I don't think elephants are a very practical yardstick. For a start, they're of varying size. I mean, apparently they can fit in posts now!

Comment author: Alicorn 13 November 2009 11:24:56PM 2 points [-]

It'd have to be a funny yardstick. Almost nothing we value scales linearly. I would start getting tired of kittens after about 4,250 of them had gone by.

Comment author: timtyler 13 November 2009 11:59:19PM 1 point [-]

Velocity runs into diminishing returns too near the speed of light - but it is still useful to try and measure it - and a yardstick can help with that.

Comment author: FeministX 11 November 2009 04:51:12AM *  3 points [-]

I have questions. You say we must have one question per comment. So, I will have to make multiple posts.

1) Is there a domain where rational analysis does not apply?

Comment author: Jack 11 November 2009 04:27:11AM *  7 points [-]

If you thought an AGI couldn't be built, what would you dedicate your life to doing? Perhaps another formulation, or a related question: what is the most important problem/issue not directly related to AI?

Comment author: righteousreason 23 December 2009 01:37:42AM *  -2 points [-]

As a question for everyone (and as a counter argument to CEV),

Is it okay to take an individual human's rights of life and property by force as opposed to volitionally through a signed contract?

The use of force here includes imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, but could maybe(?) not include their individual extrapolated volition.

A) Yes B) No

I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could lead to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.

Comment author: Will_Euler 19 November 2009 02:50:09AM 0 points [-]

Let's say someone (today, given present technology) has the goal of achieving rational self-insight into one's thinking processes and the goal of being happy. You have suggested (in conversation) such a person might find himself in an "unhappy valley" insofar as he is not perfectly rational. If someone today -- using current hedonic/positive psychology -- undertakes a program to be as happy as possible, what role would rational self-insight play in that program?

Comment author: Jach 13 November 2009 08:14:46AM *  4 points [-]

Within the next 20 years or so, would you consider having a child and raising him/her to be your successor? Would you adopt? Have you donated sperm?

Edit: the first two questions are dependent on your not being satisfied with the progress on FAI.

Comment author: Morendil 12 November 2009 04:59:44PM 2 points [-]

Well, Eliezer's reply to this comment prompts a follow-up question:

In "Free to optimize", you alluded to "the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together". Can you say more about what you imagine such rules might be?

Comment author: [deleted] 12 November 2009 06:45:53AM 0 points [-]

What if the friendly AI finds that our extrapolated volition is coherent and contains the value of 'self-determination', and concludes that it cannot meddle too much in our affairs? "Well, humankind, it looks like you don't want to have your destiny decided by a machine. My hands are tied. You need to save yourselves."

Comment author: Thomas 11 November 2009 07:04:09PM 0 points [-]

What is your current position on the FOOM effect, in which the exploding intelligence quickly acquires the surrounding matter for its own use, solving its computing needs by transforming everything nearby into something more computationally optimal, and doing so through physical operations that are not so obvious, yet entirely permitted and achievable from the "pure calculating" ability already granted to the ("seed") AI?

Comment author: pwno 11 November 2009 05:47:11PM *  4 points [-]

How would a utopia deal with humans' seemingly contradictory desires: the desire to rise in status, and the desire to help lower-status people rise in status? Helping lower-status people rise in status will hurt our own status positions. I remember you mentioning that in your utopia you would prefer not to reconfigure the human mind. So how would you deal with such a problem?

(If someone finds the premise of my question wrong, please point it out)

Comment author: [deleted] 11 November 2009 07:17:31PM 1 point [-]

I don't think most people want to actually raise people who are lower status than themselves up to higher than themselves. I actually don't think that most people want to raise others' status very much. They seem to typically be more concerned with raising the material welfare of people who are significantly worse off, which doesn't necessarily change status. The main status effect of altruistic behavior is to raise the status of the altruist. For instance, consider the quote "It is more blessed to give than to receive." (Acts 20:35). If we think of "blessedness" as similar to status (status in the eyes of God, maybe?) then a "status altruist" would read that and decide to always receive and never give, in order to raise the status of others. The traditional altruistic interpretation, though, is to give, and therefore become more blessed than the poor suckers you are giving to.

Comment author: RichardKennaway 11 November 2009 10:46:21AM *  0 points [-]

You (EY) have mentioned moral beliefs from time to time, but I don't recall you addressing morality directly at length. A commonly expressed view in rationalist circles is that there is no such thing, but I don't think that is your view. What is a moral judgement, and how do you arrive at one?

ETA: As Psy-Kosh points out, he has, so scratch that unless EY has something more to say on the matter.

Comment author: John_Maxwell_IV 11 November 2009 06:21:52AM 10 points [-]

What are the hazards associated with making random smart people who haven't heard about existential dangers more intelligent, mathematically inclined, and productive?

Comment author: Daniel_Burfoot 11 November 2009 04:14:15PM 9 points [-]

Let E(t) be the set of historical information available up until some time t, where t is some date (e.g. 1934). Let p(A|E) be your estimate of the probability an optimally rational Bayesian agent would assign to the event "Self-improving artificial general intelligence is discovered before 2100" given a certain set of historical information.

Consider the function p(t)=p(A|E(t)). Presumably as t approaches 2009, p(t) approaches your own current estimate of p(A).

Describe the function p(t) since about 1900. What events - research discoveries, economic trends, technological developments, sci-fi novel publications, etc. - caused the largest changes in p(t)? Is it strictly increasing, or does it fluctuate substantially? Did the publication of any impossibility proofs (e.g. No Free Lunch) cause strong decreases in p(t)? Can you point to any specific research results that increased p(t)? What about the "AI winter" and related setbacks?

Comment author: SilasBarta 12 November 2009 12:06:41AM 7 points [-]

Okay: Goedel, Escher, Bach. You like it. Big-time.

But why? Specifically, what insights should I have assimilated from reading it that are vital for AI and rationalist arts? I personally feel I learned more from Truly Part of You than all of GEB, though the latter might have offered a little (unproductive) entertainment.

Comment author: Psy-Kosh 11 November 2009 03:14:37AM 21 points [-]

Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?

Comment author: James_Miller 11 November 2009 05:26:46AM 31 points [-]

Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive enhancing drugs or brain fitness programs, are you Neurotypical and why didn't you attend school?

Comment author: roland 13 November 2009 12:53:49AM 0 points [-]

Great question and I think it ties in well with my one about autodidacticism:

http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/1942

Comment author: roland 12 November 2009 09:30:50PM *  5 points [-]

Akrasia

Eliezer, you mentioned suffering from writer's molasses, and your solution was to write daily on OB/LW. I consider this a clever and successful overcoming of akrasia. What other success stories from your life in relation to akrasia could you share?

Comment author: bogdanb 11 November 2009 11:07:15PM 20 points [-]

How did you win any of the AI-in-the-box challenges?

Comment author: SilasBarta 12 November 2009 08:59:57PM 2 points [-]

Voted down. Eliezer Yudkowsky has made clear he's not answering that, and it seems like an important issue for him.

Comment author: wedrifid 15 November 2009 10:24:23AM *  3 points [-]

Voted back up. He will not answer but there's no harm in asking. In fact, asking serves to raise awareness both on the surprising (to me at least) result and also on the importance Eliezer places on the topic.

Comment author: SilasBarta 16 November 2009 01:05:36AM -1 points [-]

Yes, there is harm in asking. Provoking people to break contractual agreements they've made with others and have made clear they regard as vital, generally counts as Not. Cool.

Comment author: Jordan 16 November 2009 01:50:00AM *  3 points [-]

In this case though, it's clear that Eliezer wants people to get something out of knowing about the AI box experiments. That's my extrapolated Eliezer volition at least. Since for me and many others we can't get anything out of the experiments without knowing what happened, I feel it is justified to question Eliezer where we see a contradiction in his stated wishes and our extrapolation of his volition.

In most situations I would agree that it's not cool to push.

Comment author: wedrifid 16 November 2009 08:38:19AM 1 point [-]

As the OP said, Eliezer hasn't been subpoenaed. The questions here are merely stimulus to which he can respond with whichever insights or signals he desires to convey. For what little it is worth my 1.58 bits is 'up'.

(At least, if it is granted that a given person has read a post and that his voting decision is made actively then I think I would count it as 1.58 bits. It's a little blurry.)

Comment author: [deleted] 17 November 2009 02:11:00AM 1 point [-]

It depends on the probability distribution of comments.
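The 1.58-bit figure and this reply about the distribution can both be made concrete with Shannon entropy: a vote modeled as a uniform choice among three outcomes (up, down, abstain) carries log2(3) ≈ 1.58 bits, while a skewed distribution carries less. A minimal sketch in Python; the three-outcome model and the example skewed distribution are illustrative assumptions, not anything stated in the thread:

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform choice among up / down / abstain: wedrifid's 1.58 bits.
print(round(entropy_bits([1/3, 1/3, 1/3]), 2))  # 1.58

# A skewed distribution (say most readers abstain) conveys less
# information per vote, which is the point of the reply above.
print(round(entropy_bits([0.1, 0.1, 0.8]), 2))  # 0.92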

Comment author: Unnamed 17 November 2009 02:22:58AM 7 points [-]

Here's an alternative question if you don't want to answer bogdanb's: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?

Comment author: CronoDAS 16 November 2009 09:50:09AM 2 points [-]

Perhaps this would be a more appropriate version of the above:

What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?

Comment author: komponisto 11 November 2009 06:00:47AM 15 points [-]

I admit to being curious about various biographical matters. So for example I might ask:

What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?

Comment author: Wei_Dai 11 November 2009 09:23:46PM 21 points [-]

Why do you have a strong interest in anime, and how has it affected your thinking?

Comment author: jimrandomh 12 November 2009 02:44:20AM 16 points [-]

What is the probability that this is the ultimate base layer of reality?

Comment author: anonym 14 November 2009 09:49:49PM 3 points [-]

If you conceptualized the high-level tasks you must attend to in order to achieve (1) FAI-understanding and (2) FAI-realization in terms of a priority queue, what would be the current top few items in each queue (with numeric priorities on some arbitrary scale)?

Comment author: ajayjetti 12 November 2009 03:23:32AM 3 points [-]

Are you a meat-eater?

Comment author: Utilitarian 11 November 2009 06:58:36AM 14 points [-]

What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or one's complement vs. two's complement? What physical details of the computations matter? Does it regard carbon differently from silicon?

Comment author: timtyler 11 November 2009 09:00:49AM 4 points [-]

That's 14 questions! ;-)

Comment author: RobinHanson 11 November 2009 11:45:10PM 22 points [-]

Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?

Comment author: Eliezer_Yudkowsky 12 November 2009 12:28:51AM 8 points [-]

Who are we talking about besides you?

Comment author: RobinHanson 12 November 2009 02:30:07AM 2 points [-]

I'd consider important overlapping academic fields to be AI and long term economic growth; I base my claim about academic expert opinion on my informal sampling of such folks. I would of course welcome a more formal sampling.

Comment author: MichaelVassar 13 November 2009 05:08:13AM 9 points [-]

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who agree with Robin's part (new future growth mode, get frozen...) agree.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Comment author: timtyler 14 November 2009 12:35:45AM 4 points [-]

Which claims? The SIAI collectively seems to think some pretty strange things to me. Many are to do with the scale of the risk facing the world.

Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding.

We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.

Comment author: MichaelVassar 15 November 2009 04:39:09PM 7 points [-]

You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding.

Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?

Comment author: timtyler 15 November 2009 04:55:09PM *  0 points [-]

Some DOOM peddlers manage to get by. Probably most of them are currently in Hollywood, the finance world, or ecology. Machine intelligence is only barely on the radar at the moment - but that doesn't mean it will stay that way.

I don't necessarily mean to suggest that these people are all motivated by money. Some of them may really want to SAVE THE WORLD. However, that usually means spreading the word - and convincing others that the DOOM is real and imminent - since the world must first be at risk in order for there to be SALVATION.

Look at Wayne Bent (aka Michael Travesser), for example:

"The End of The World Cult Pt.1"

The END OF THE WORLD - but it seems to have more to do with sex than money.

Comment author: Zack_M_Davis 24 November 2009 12:05:18AM 1 point [-]

an ideological attraction to libertarianism is BY FAR the main reason for disagreement [with singleton strategies/hypotheses]. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Any practical advice on how to overcome this failure mode, if and only if it is in fact a failure mode?

Comment author: Alicorn 11 November 2009 06:23:51PM 22 points [-]

What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?

Comment author: Steve_Rayhawk 11 November 2009 11:46:02PM *  4 points [-]

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be.

This disadvantages questions which are posted late (to a greater extent than would give people an optimal incentive to post questions early). (It also disadvantages questions which start with a low number of upvotes by historical accident and then are displayed low on the page, and are not viewed as much by users who might upvote them.)

It's not your fault; I just wish the LW software had a statistical model which explained observed votes and replies in terms of a latent "comment quality level", because of situations like this, where it could matter if a worse comment got a high rating while a better comment got a low one. (I also wish forums with comment ratings used ideas related to value of information, optimal sequential preference elicitation, and/or n-armed bandit problems to decide when to show users comments whose subjective latent quality has a low marginal mean but a high marginal variance, in case the "true" quality of a comment is high, because of the possibility that a user will rate the comment highly and let the forum software know that it should show the comment to other users.)
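The n-armed-bandit idea in this comment - occasionally showing a comment whose estimated quality has low mean but high variance, in case it turns out to be good - can be sketched as Thompson sampling: rank comments by a draw from each one's posterior over its latent upvote probability. This is only an illustrative sketch assuming binary up/down votes; the function and field names are hypothetical, not any actual LW mechanism:

```python
import random

def thompson_rank(comments):
    """Rank comments by a sample from each one's Beta posterior over its
    latent upvote probability (uniform Beta(1, 1) prior). A comment with
    few votes has a wide posterior, so it sometimes samples high and gets
    shown: exactly the 'low marginal mean, high marginal variance' case."""
    def sample(c):
        return random.betavariate(1 + c["ups"], 1 + c["downs"])
    return sorted(comments, key=sample, reverse=True)

comments = [
    {"id": "well_explored", "ups": 40, "downs": 10},  # narrow posterior
    {"id": "late_arrival", "ups": 1, "downs": 0},     # wide posterior
]
print([c["id"] for c in thompson_rank(comments)])
```

Over repeated rankings, the late arrival is shown near the top some fraction of the time, so it accumulates votes and its posterior narrows, addressing the late-posting disadvantage described above.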

Comment author: evtujo 11 November 2009 05:09:36AM 21 points [-]

How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?

Comment author: taa21 11 November 2009 09:01:31PM 6 points [-]

Just out of curiosity, why are you asking this? And why is Yudkowsky's opinion on this matter relevant?

Comment author: spriteless 15 November 2009 11:00:04PM 2 points [-]

This sort of thing should have its own thread; it deserves some brainstorming.

You can start with choice of fairytales.

You can make the games available to play reward understanding probabilities and logic over luck and quick reflexes. My dad got us puzzle games and reading tutors for the NES and C64 when I was a kid. (Lode Runner, LoLo, Reader Rabbit)

Comment author: botogol 11 November 2009 05:14:27PM 9 points [-]

Can you make a living out of this rationality / SI / FAI stuff . . . or do you have to be independently wealthy?

Comment author: MichaelGR 11 November 2009 09:20:33PM *  14 points [-]

In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.

Short summary: After a few more major breakthroughs, when AGI is almost ready, AI will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.

If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?

Comment author: timtyler 14 November 2009 12:40:30AM *  0 points [-]

You apparently assume that SIAI will get somewhere in their attempts to create machine intelligence before other organisations do. That seems relatively unlikely - given the current situation. What is the justification for that premise?

Comment author: MichaelGR 14 November 2009 01:20:26AM 1 point [-]

I would ask the same question to other AGI organizations if I could, but this is a Q&A with only Eliezer (though I'm also curious to know if he knows anything about what other groups are doing with regards to this).

Regardless of who is the first to get to AGI, that group could potentially run into the kind of problems I mentioned. I never said it was the most probable thing that can go wrong. But it should probably be looked into seriously since, if it does happen, it could be pretty catastrophic.

The way I see it, either AGI is developed in secret, and Eliezer could be putting the finishing touches on the code right now without telling anyone, or it'll be developed in a fairly open way, with mathematical and algorithmic breakthroughs discussed at conferences, on the net, in papers, whatever. If the latter is the case, some big breakthroughs could attract the attention of powerful organizations, or even of AGI researchers who have enough of a clue to understand these breakthroughs but know they're too far behind to catch up, so that the best way for them to get there first is to somehow convince an intelligence agency to steal the code or whatever. (Again, specifics are not important here, just the general principle of what to do about security as we get closer to full AGI.)

Comment author: Johnicholas 12 November 2009 04:03:23AM 4 points [-]

Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode. It's far more likely that SIAI is slower at developing (both Friendly and unFriendly) AI than the rest of the world. It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.

Comment author: alyssavance 13 November 2009 12:10:07AM 3 points [-]

"It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI."

I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use.

Comment author: mormon2 13 November 2009 05:53:45PM 0 points [-]

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this. Having recently been edged out by a large corporation in a narrow field of innovation, as a small startup, and having been in business for many years, I can say that this sort of thing you're describing happens often.

As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile.

I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...

Comment author: alyssavance 13 November 2009 07:23:49PM 2 points [-]

Of course startups sometimes lose; they certainly aren't invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. E.g., Google in 1998.

"If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile."

(citation needed)

Comment author: mormon2 14 November 2009 01:56:06AM 0 points [-]

Ok, here are some people:

Nick Bostrom (http://www.nickbostrom.com/cv.pdf); Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of the most successful math programs ever, if not the most successful, and in my opinion the best); a couple of people from Johns Hopkins Applied Physics Lab, where I did some work, whose names I won't mention since I doubt you'd know them; etc.

I say this because these people have made numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terminology and vague to-be-solved problems.

My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does while having as few papers and breakthroughs as EY has. If you want to claim you're smart, you have to have accomplishments that back it up, right? Where are EY's publications? Where is the math for his TDT? The world's hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)

Sorry this is harsh but there it is.

Comment author: alyssavance 14 November 2009 02:29:49PM 0 points [-]

I agree that both Bostrom and Wolfram are very smart, but this does not a convincing case make. Even someone at 99.9999th percentile intelligence will have 6,800 people who are as smart or smarter than they are.
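As a quick sanity check of that figure, here is a sketch of the arithmetic, assuming a world population of roughly 6.8 billion (the approximate 2009 figure):

```python
# Rough sketch of the percentile arithmetic behind the "6,800 people" figure.
# Assumes a world population of ~6.8 billion (approximate 2009 value).
world_population = 6_800_000_000

# Being at the 99.9999th percentile leaves a fraction of one in a million
# at or above your level.
fraction_at_or_above = 1 - 0.999999

people_as_smart_or_smarter = round(world_population * fraction_at_or_above)
print(people_as_smart_or_smarter)  # 6800
```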

Comment author: Alicorn 14 November 2009 02:19:36AM 3 points [-]

If you want to claim you're smart you have to have accomplishments that back it up right?

I think you have confused "smart" with "accomplished", or perhaps "possessed of a suitably impressive resumé".

Comment author: mormon2 14 November 2009 02:24:39AM *  2 points [-]

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test) and I think accomplishments are a better measure (quality over quantity obviously). If you have a better measure then fine.

Comment author: wedrifid 14 November 2009 03:44:38AM *  -1 points [-]

I think accomplishments are a better measure (quality over quantity obviously)

I once came third in a marathon. How smart am I? If I increased my mileage to the level required for me to come first, would that make me smarter? Does the same apply when I'm trying to walk in 40 years?

ETA: I thought I cancelled this one. Never mind, I stand by my point. Achievement is the best predictor of future achievement. It isn't a particularly good measure of intelligence. Achievement says far more about what kinds of things someone is inclined to achieve (and signal), and about how well they are able to motivate themselves, than it does about intelligence (see, for example, every second page here). Accomplishments are better measures than IQ, but they are not a measure of intelligence at all.

Comment author: Alicorn 14 November 2009 02:37:11AM *  3 points [-]

What do you think "intelligence" is?

Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?

Comment author: mormon2 14 November 2009 06:06:42PM 1 point [-]

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure although it can be useful when combined with accomplishment.

Comment author: Kaj_Sotala 14 November 2009 06:45:20PM 1 point [-]

What do you think "intelligence" is?

Previously, Eliezer has said that intelligence is efficient optimization.

Comment author: Vladimir_Nesov 13 November 2009 04:15:56PM *  3 points [-]

There is a strong fundamental streak in the subproblem of clear conceptual understanding of FAI (how the whole real world looks to an algorithm, which is important both for the decision-making algorithm and for communication of values) that I find closely related to a lot of fundamental stuff that both physicists and mathematicians have been trying to crack for a long time, but haven't yet. This suggests that the problem is not low-hanging fruit. My current hope is merely to articulate a connection between FAI and this stuff.

Comment author: RobQ 13 November 2009 04:21:43AM 5 points [-]

Fear of theft is a crank trope? As someone who makes a living providing cyber security I have to say you have no idea of the daily intrusions US companies experience from foreign governments and just simple criminals.

Comment author: MichaelVassar 15 November 2009 04:54:07PM *  3 points [-]

Theft of higher-level, more abstract ideas is much rarer. It happens both in Hollywood films and in the real Hollywood, but not so frequently, as far as I can tell, in most industries. More often, people can't get others to follow up on highly general ideas. Apple and Microsoft, for instance, stole ideas from Xerox that Xerox had been sitting on for years; they didn't steal ideas that Xerox was actively working on and then compete with Xerox.

Comment author: patrissimo 12 November 2009 06:58:14PM 5 points [-]

Do you think that just explaining biases to people helps them substantially overcome those biases, or does it take practice, testing, and calibration to genuinely improve one's rationality?

Comment author: roland 19 November 2009 01:09:22AM *  2 points [-]

I can partially answer this. In the book "The Logic of Failure", Dietrich Dörner tested humans with complex systems they had to manage. It turned out that the group that got specific instructions on how to deal with complex systems did not perform better than the control group.

EDIT: Dörner's explanation was that just knowing was not enough; individuals had to actually practice dealing with the system to improve. It's a skill set.

Comment author: John_Maxwell_IV 11 November 2009 06:55:09AM 25 points [-]

What's your advice for Less Wrong readers who want to help save the human race?

Comment author: John_Maxwell_IV 11 November 2009 06:51:41AM 16 points [-]

Who was the most interesting would-be FAI solver you encountered?

Comment author: sixes_and_sevens 11 November 2009 02:56:43PM 11 points [-]

What five written works would you recommend to an intelligent lay-audience as a rapid introduction to rationality and its orbital disciplines?

Comment author: John_Maxwell_IV 11 November 2009 06:52:29AM 11 points [-]

What was the most useful suggestion you got from a would-be FAI solver? (I'm putting separate questions in separate comments per MichaelGR's request.)

Comment author: Vladimir_Nesov 11 November 2009 02:35:21PM *  6 points [-]

Which areas of science or angles of analysis currently seem relevant to the FAI problem, and which of those you've studied seem irrelevant? What about those that fall on the "AI" side of things? Fundamental math? Physics?

Comment author: mormon2 11 November 2009 05:22:28PM 3 points [-]

I think we can take a good guess on the last part of this question on what he will say: Bayes Theorem, Statistics, basic Probability Theory Mathematical Logic, and Decision Theory.

But why ask the question with this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question; actually, it is more of a request: to see EY demonstrate TDT with some worked-out math, on a whiteboard or some such, in the video.

Comment author: [deleted] 11 November 2009 08:21:00PM 31 points [-]

What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?

Comment author: RichardKennaway 11 November 2009 08:57:12AM *  13 points [-]

Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.

Comment author: [deleted] 11 November 2009 04:15:18PM 0 points [-]

Well, it appears that no published work in AI has yet resulted in successful strong artificial intelligence.

Comment author: FeministX 11 November 2009 05:04:19AM 13 points [-]

2) How does one affect the process of increasing the rationality of people who are not ostensibly interested in objective reasoning and people who claim to be interested but are in fact attached to their biases?

I find that question interesting because it is plain that the general capacity for rationality in a society can be improved over time. Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.

It seems to me that we really are faced with the challenge of explaining the value of empirical analysis and objective reasoning to much of the world. Today the Middle East is hostile towards reason though they presumably don't have to be this way.

So again, my question is how do more rational people affect the reasoning capacity in less rational people, including those hostile towards rationality?

Comment author: komponisto 11 November 2009 05:39:28AM 33 points [-]

During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

Comment author: gwern 06 November 2011 12:37:54AM 1 point [-]

If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction?

String theorists are at least somewhat plausible, but Michelangelo and Beethoven? Do you have any evidence that they actually helped the sciences progress? I've asked the same question in the past, and have not been able to adduce any evidence worth a damn. (Science fiction, at least, can try to justify itself as good propaganda.)

Comment author: John_Maxwell_IV 11 November 2009 06:08:58AM *  9 points [-]

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI?

I don't know about Eliezer, but I would be able to sacrifice quite a lot; perhaps all of art. If humanity spreads through the galaxy there will be way more than enough time for all that.

If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists?

It might. But their expected contribution would be much greater if they looked at the problem to see how they could contribute most effectively.

And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

No one's saying that you're not allowed to do something. Just that it's suboptimal under their utility function, and perhaps yours.

My guess is that you overestimate how much of an altruist you are. Consider that lives can be saved using traditional methods for well under $1000. That means every time you spend $1000 on other things, your revealed preference is that having that stuff is more important to you than saving the life of another human being. If you're upset upon hearing this fact, then you're suffering from cognitive dissonance. If you're a true altruist, you'll be happy after hearing this fact, because you'll realize that you can be scoring much better on your utility function than you are currently. (Assuming for the moment that happiness corresponds with opportunities to better satisfy your utility function, which seems to be fairly common in humans.)

Regardless of whether you're a true altruist, it makes sense to spend a chunk of your time on entertainment and relaxation to spend the rest more effectively.

By the way, I would be interested to hear Eliezer address this topic in his video.

Comment author: ABranco 14 November 2009 01:55:41PM 14 points [-]

Do you feel lonely often? How bad (or important) is it?

(The above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?

Comment author: RobinZ 11 November 2009 05:33:10PM 7 points [-]

I am sure you're familiar with the University of Chicago "Doomsday Clock", so: if you were in charge of a Funsday Clock, showing the time until positive singularity, what time would it be on? Any recent significant changes?

(Idea of Funsday Clock blatantly stolen from some guy on Twitter.)

Comment author: Stuart_Armstrong 11 November 2009 11:42:21AM 21 points [-]

Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?

Comment author: DanArmak 11 November 2009 10:47:53AM 18 points [-]

Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...).

Comment author: John_Maxwell_IV 11 November 2009 06:49:45AM 10 points [-]

What is the background that you most frequently wish would-be FAI solvers had when they struck up conversations with you? You mentioned the Dreams of Friendliness series; is there anything else? You can answer this question in comment form if you like.

Comment author: MichaelGR 11 November 2009 09:06:59PM 22 points [-]

If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?

Comment author: SilasBarta 11 November 2009 03:26:35PM 11 points [-]

Previously, you said that a lot of work in Artificial Intelligence is "5% intelligence and 95% rigged demo". What would you consider an example of something that has a higher "intelligence ratio", if there is one, and what efforts do you consider most likely to increase this ratio?

Comment author: haig 11 November 2009 10:19:48PM *  23 points [-]

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

Comment author: timtyler 15 November 2009 10:16:37AM 1 point [-]

See Eli on video, 50 seconds in:

http://www.youtube.com/watch?v=0A9pGhwQbS0

Comment author: SilasBarta 11 November 2009 09:44:54PM 12 points [-]

Previously, you endorsed this position:

Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.

One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.

What do you think about this kind of self-deception?

Comment author: pwno 12 November 2009 02:45:23AM 0 points [-]

Costs outweigh the benefits.

Comment author: Dufaer 13 November 2009 12:09:50PM 0 points [-]

Oh, how convenient, isn't it? Well, then what about self-deception in order to increase a placebo effect, in a case where the disease concerned may or may not be life-threatening?

Comment author: pwno 13 November 2009 04:21:30PM 0 points [-]

I didn't say the costs always outweigh the benefits.

Comment author: Psy-Kosh 11 November 2009 07:00:00PM 12 points [-]

In the spirit of considering semi-abyssal plans: what happens if, say, next week you discover a genuine reduction of consciousness, and it turns out that... there's simply no way to construct the type of optimization process you want without it being conscious, even if it is very different from us?

I.e., what if it turned out that The Law had the consequence that "to create a general mind is to create a conscious mind; no way around that"? Obviously that shifts the ethics a bit, but my question is basically: if so, well... now what? What would have to be done differently, in what ways, etc.?

Comment author: timtyler 11 November 2009 08:53:49AM 12 points [-]

What was the significance of the wirehead problem in the development of your thinking?

Comment author: cabalamat 12 November 2009 03:59:43AM *  13 points [-]

What practical policies could politicians enact that would increase overall utility? When I say "practical", I'm specifically ruling out policies that would increase utility but which would be unpopular, since no democratic polity would implement them.

(The background to this question is that I stand a reasonable chance of being elected to the Scottish Parliament in 19 months' time.)

Comment author: CronoDAS 12 November 2009 07:22:45AM 0 points [-]

I'd guess that legalizing gay marriage would be pretty low-hanging fruit, but I don't know how politically possible it is.

Comment author: Jess_Riedel 13 November 2009 12:56:36AM *  4 points [-]

It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.

Comment author: CronoDAS 13 November 2009 09:02:46PM *  2 points [-]

Well, I mean "low-hanging fruit" in that it doesn't really cost any money to implement. Symbolism is cheap; providing material benefits is more expensive, especially in developed countries.

I don't know much about the political situation in Scotland; I know about a few miscellaneous stupidities in the U.S. federal government that I'd like to get rid of (abstinence-only sex education, "alternative" medicine research) but I suspect that Scotland and the rest of the U.K. is stupid in different ways than the U.S. is.

Comment author: Thomas 12 November 2009 09:25:18AM 4 points [-]

Free trade. As a politician, you can't do more than that.

Comment author: Matt_Simpson 12 November 2009 04:50:10PM 2 points [-]

And open immigration policies

Comment author: taa21 11 November 2009 09:06:17PM 14 points [-]

What do you view as your role here at Less Wrong (e.g. leader, preacher, monk, moderator, plain-old contributor, etc.)?

Comment author: Bindbreaker 11 November 2009 07:53:15AM *  15 points [-]

In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.

Comment author: kpreid 11 November 2009 01:00:14PM *  10 points [-]

This comes to mind:

But why not become an expert liar, if that's what maximizes expected utility? Why take the constrained path of truth, when things so much more important are at stake?

Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren't inconveniences. They were safety rails on cliffs I didn't see.

I made fundamental mistakes, and my ethics didn't halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn't expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.

You can't duplicate this protective effect by trying to be clever and calculate the course of "highest utility". The expected utility just takes into account the things you know to expect. It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.

Ethics aren't just there to make your life difficult; they can protect you from Black Swans. A startling assertion, I know, but not one entirely irrelevant to current affairs.

Protected From Myself

Comment author: Bindbreaker 11 January 2010 03:57:24AM 2 points [-]
Comment author: MichaelGR 11 November 2009 08:49:01PM 32 points [-]

Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):

http://yudkowsky.net/obsolete/bookshelf.html

Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).

Comment author: JulianMorrison 13 November 2009 04:24:48PM 17 points [-]

How do you characterize the success of your attempt to create rationalists?

Comment author: anonym 14 November 2009 09:44:56PM 18 points [-]

What progress have you made on FAI in the last five years and in the last year?

Comment author: Johnicholas 11 November 2009 11:43:39AM 18 points [-]

What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.

Comment author: MichaelGR 11 November 2009 08:55:54PM *  37 points [-]

What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex reading certain magazines, books, watching films, etc) to focus on what is more important? etc.

Comment author: Liron 13 November 2009 04:18:43AM 1 point [-]

Ditto regarding your food diet?

Comment author: anon 14 November 2009 03:45:50PM 19 points [-]

For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?

Humans have built-in adaptations for lie detection, but betting a decision like this on the chance of my sense motive roll beating the bluff roll of a person with both higher INT and CHA than myself seems quite risky.

Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AI-Box problem: you're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.

I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.

If the answer is "No. You'll have to make do with the base probability of any random human being a sociopath.", that might be good enough. Still, I'd like to know if I'm missing specific evidence that would push the probability of "SIAI is capital-E Evil" lower than that.

Posted pseudo-anonymously because I'm a coward.

Comment author: anonymousss 18 April 2012 08:20:13AM *  1 point [-]

I looked into the issue from a statistical point of view. I would have to go with a much higher than baseline probability of them being sociopaths, on the basis of Bayesian reasoning starting with the baseline probability (about 1%) as a prior and then updating on criteria that sociopaths cannot easily fake (such as, e.g., having previously invented something that works).

Ultimately, the easy way to spot a sociopath is to look for a massive imbalance of the observable signals towards those that sociopaths can easily fake. You don't need to be smarter than the sociopath to identify the sociopath. The spam filter is pretty good at filtering out advance-fee fraud and letting business correspondence through.

You just need to act like a statistical prediction rule on a set of criteria, without allowing for verbal excuses of any kind, no matter how logical they sound. For instance, the leaders of genuine research institutions are not HS dropouts; the leaders of cults are. You can find the ratio and build an evidential Bayesian rule, with which you can use the 'is a HS dropout' evidence to adjust your probabilities.

The beauty of this method is that it is too expensive for sociopaths to fake honest signals (such as, for example, having spent years making and perfecting some invention that has improved people's lives; you can't send this signal without doing an immense amount of work). So even when they are aware of this method, there is literally nothing they can do about it. Nor do they want to do anything about it, as there are enough people who do not pay attention to the ratio of certainly-honest signals to fakeable signals (gullible people), whom sociopaths can target instead, for a better reward-to-work ratio.

Ultimately, it boils down to the fact that a genuine world-saving leader is rather unlikely to have never before invented anything that demonstrably benefited mankind, while a sociopath is pretty likely (close to 1) to have never before invented anything that demonstrably benefited mankind. You update on this, ignore verbal excuses, and you have yourself a (nearly) non-exploitable decision mechanism.
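The updating procedure described above can be sketched as a simple odds-form Bayes update. The ~1% prior comes from the comment itself; the specific signals and likelihood ratios below are made-up illustrative numbers, not real statistics:

```python
def bayes_update(prior, likelihood_ratios):
    """Odds-form Bayes update: multiply the prior odds by each piece of
    evidence's likelihood ratio P(evidence | sociopath) / P(evidence | not),
    then convert back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# ~1% base rate of sociopathy, the prior suggested in the comment.
prior = 0.01

# Hypothetical likelihood ratios for hard-to-fake signals (illustrative only):
evidence = [
    5.0,   # a signal assumed more common among frauds than genuine researchers
    0.1,   # a costly, hard-to-fake signal, e.g. a working prior invention
]

posterior = bayes_update(prior, evidence)
print(f"{posterior:.4f}")
```

With these made-up numbers the two signals roughly cancel and halve, leaving the posterior near half a percent; the point of the rule is that only the likelihood ratios matter, not verbal excuses.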

Comment author: Eliezer_Yudkowsky 15 November 2009 10:22:58PM 11 points [-]

I guess my main answers would be, in order:

1) You'll have to make do with the base probability of a highly intelligent human being a sociopath.

2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.

3) There are all sorts of nasty things we could be doing, and could probably get away with doing if we had exclusively sociopath core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.

4) Why are you asking me that? Shouldn't you be asking, like, anyone else?

Comment author: roland 12 November 2009 09:24:45PM *  22 points [-]

Autodidacticism

Eliezer, first, congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you; did you have any tutor or mentor? Or did you just read and learn what was interesting and keep going for more, one field of knowledge opening pathways to the next, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)

Comment author: Blueberry 12 November 2009 07:48:31PM 24 points [-]

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

Comment author: Will_Euler 19 November 2009 02:11:41AM 3 points [-]

How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?

If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find themselves in such a valley (and how they might get out)? How much is this term meant to indicate an analogy with an "uncanny valley"?

Less important, but related: What self-insights from hedonic/positive psychology have you found most revealing about people's ability to make choices aimed at maximizing happiness (e.g., limitations of affective forecasting, the paradox of choice, the impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?

(I feel these are sufficiently intertwined to constitute one general question about the relationship between self-knowledge and happiness.)

Comment author: ABranco 18 November 2009 07:28:27PM 13 points [-]

You've achieved a high level of success as a self-learner, without the aid of formal education.

Would this extrapolate as a recommendation of a path every fast-learner autodidact should follow — meaning: is it a better choice?

If not, in which scenarios would not going after formal education be more advisable to someone? (Feel free to add as many caveats and 'ifs' as necessary.)

Comment author: mormon2 23 November 2009 07:37:34PM 4 points [-]

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define high level of success?

Comment author: komponisto 24 November 2009 05:00:31AM 3 points [-]

How do you define high level of success?

He has a job where he is respected, gets to pursue his own interests, and doesn't have anybody looking over his shoulder on a daily basis (or any short-timescale mandatory duties at all that I can detect). That's pretty much the trifecta, IMHO.

Comment author: ABranco 24 November 2009 12:58:09AM *  1 point [-]

Well, ok, success might be a personal measure, so by all means only Eliezer could properly say if Eliezer is successful. (Or at least, this is what should matter.)

Having said that, my saying he's successful was driven (biased?) by my personal standards. A positive Wikipedia article (positive not in the sense of a biased article, but in the sense that the impact described is positive; and how many people are on Wikipedia with a picture and 10 footnotes? But never mind, this is a contentious variable, so let's not split hairs here) and founding something like SIAI and LessWrong deserve my respect, and quite some awe given his 'formal education'.

Comment author: mormon2 25 November 2009 03:04:13AM 3 points [-]

I am going to take a shortcut and respond to both posts:

komponisto: Interesting, because I would define success in terms of the goals you set for yourself or others have set for you, and how well you have met those goals.

In terms of respect, I would question the claim, not necessarily within SIAI or within this community, but within the larger community of experts in the AI field. How many people really know who he is? How many people who need to know (because, even if he won't admit it, EY will need help from academia and industry to make FAI) know him and, more importantly, respect his opinion?

ABranco: I would not say success is a personal measure; I would say in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.

I think your last point is on the right track with EY starting SIAI and LessWrong with his lack of formal education. Though one could argue the relative level of significance or the level of success those two things dictate.

Comment author: MarkHHerman 18 November 2009 02:56:58AM 1 point [-]

Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?

[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of “moral error” that requires (a) the perpetrating agent’s acceptance of the assessment of moral erroneousness (i.e., individual relativism to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating v. moral indifference, laziness, and/or akrasia).]

Comment author: Nick_Tarleton 18 November 2009 07:48:28PM 2 points [-]

More generally, what kind of psychology research would you most like to see done?

Comment author: MarkHHerman 18 November 2009 01:04:20AM 2 points [-]

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

Comment author: MichaelGR 17 November 2009 01:54:16AM *  15 points [-]

Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?

Updating top level with expanded question:

I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?

So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched if it ends up taking a while (usual caveats; existential risks, etc).

It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc), while more dollars for the SIAI doesn't seem like it would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).

If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.

Comment author: AnnaSalamon 19 November 2009 06:57:55AM *  28 points [-]

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.

Regarding what SIAI could do with a marginal $1000, the one sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allows us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:

Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):

  • “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference on Computing and Philosophy in July ‘09 (ECAP))
  • “Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
  • “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference on Computing and Philosophy in October ‘09 (APCAP))
  • “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
  • “Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting;
  • And several more at various stages of the writing process, including some journal papers.

The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don’t count Michael Vassar’s time; if you do count his time the Summit roughly broke even, but created significant increased interest among academics, among a number of potential donors, and among others who may take useful action in various ways; some good ideas were generated at the workshop, also.)

The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)

Miscellaneous additional examples:

  • The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
  • A decision theory research paper discussing the idea of “acausal trade” in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger ‘Utilitarian’)
  • Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
  • A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
  • Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.

(This activity is a change from past time-periods: SIAI added a bunch of new people and project-types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)

(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)

How efficiently can we turn a marginal $1000 into more rapid project-completion?

As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.

As to SIAI vs. SENS:

SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders of magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.

The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)

There’s a lot more to say on all of these points, but I’m trying to be brief -- if you want more info on a specific point, let me know which.

It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)

Comment author: Kaj_Sotala 20 November 2009 10:05:06AM 14 points [-]

Please post a copy of this comment as a top-level post on the SIAI blog.

Comment author: Rain 23 March 2010 02:27:07AM 8 points [-]

You can donate to FHI too? Dang, now I'm conflicted.

Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.

Crisis averted by tiny obstacles.

Comment author: Pablo_Stafforini 23 March 2010 01:19:32AM 1 point [-]

Those interested in the cost-effectiveness of donations to the SIAI may also want to check Alan Dawrst's donation recommendation. (Dawrst is "Utilitarian", the donor that Anna mentions above.)

Comment author: Kutta 03 December 2009 11:27:26PM *  7 points [-]

at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total failure mode and a total win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
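The arithmetic behind that comparison can be sketched in a few lines. Note that the $190-per-life figure for GiveWell's top charities and the ~1,200 lives attributed to Schindler are illustrative assumptions, not numbers stated anywhere in this thread:

```python
# Back-of-the-envelope check of the comparison above. The $190-per-life
# figure for GiveWell's best death-averting charities and Schindler's
# ~1,200 lives are illustrative assumptions, not figures from the thread.
siai_lives_per_dollar = 8              # Anna Salamon's crude ballpark above
givewell_cost_per_life = 190.0         # assumed cost per life saved, USD

givewell_lives_per_dollar = 1.0 / givewell_cost_per_life
effectiveness_ratio = siai_lives_per_dollar / givewell_lives_per_dollar
print(round(effectiveness_ratio))      # 1520, i.e. "at least 1500 times"

schindler_lives = 1200                 # commonly cited estimate
print(schindler_lives / siai_lives_per_dollar)   # 150.0 (dollars)
```

Under those assumptions the ratio works out to roughly 1500x, and 1,200 lives divided by 8 lives per dollar gives the $150 figure in the comment.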

Comment author: StefanPernar 24 November 2009 11:44:10AM 1 point [-]

Thanks for that, Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them online and pointing me to where I will be able to access them?

Comment author: Wei_Dai 20 November 2009 09:15:26AM 5 points [-]

Someone should update SIAI's recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:

Comment author: Eliezer_Yudkowsky 17 November 2009 02:17:48AM 6 points [-]

I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other's positive publicity. For this reason I've usually tended to avoid this kind of elevator pitch!

Pass to Michael Vassar: Should I answer this?

Comment author: MichaelGR 17 November 2009 03:50:51AM *  3 points [-]

[I've moved what was here to the top level comment]

Comment author: Eliezer_Yudkowsky 18 November 2009 12:47:34AM 2 points [-]

I'll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.

Comment author: Furcas 16 November 2009 05:33:36PM *  4 points [-]

Eliezer, in Excluding the Supernatural, you wrote:

Ultimately, reductionism is just disbelief in fundamentally complicated things. If "fundamentally complicated" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.

"Fundamentally complicated" does sound like an oxymoron to me, but I can't explain why. Could you? What is the contradiction?

Comment author: anonym 17 November 2009 07:31:33AM 6 points [-]

Isn't the contradiction that "complicated" means having more parts/causes/aspects than are readily comprehensible, and "fundamental" things never are complicated, because if they were, they could be broken down into more fundamental things that were less complicated? The fact that things invariably get simpler and more basic as we move closer to the foundational level is in tension with things getting more complicated as we move down.

Comment author: roland 16 November 2009 05:21:42PM *  4 points [-]

Boiling down rationality

Eliezer, if you only had 5 minutes to teach a human how to be rational, how would you do it? The answer has to be more or less self-contained so "read my posts on lw" is not valid. If you think that 5 minutes is not enough you may extend the time to a reasonable amount, but it should be doable in one day at maximum. Of course it would be nice if you actually performed the answer in the video. By perform I mean "Listen human, I will teach you to be rational now..."

EDIT: When I said perform I meant it as opposed to telling how to, so I would prefer Eliezer to actually teach rationality in 5 minutes instead of talking about how he would teach it.

Comment author: mikerpiker 16 November 2009 03:39:16AM 2 points [-]

It seems like, if I'm trying to make up my mind about philosophical questions (like whether moral realism is true, or whether free will is an illusion) I should try to find out what professional philosophers think the answers to these questions are.

If I found out that 80% of professional philosophers who think about metaethical questions think that moral realism is true, and I happen to be an anti-realist, then I should be far less certain of my belief that anti-realism is true.

But surveys like this aren't done in philosophy (I don't think). Do you think that the results of surveys like this (if there were any) should be important to the person trying to make a decision about whether or not to believe in free will, or be a moral realist, or whatever?

Comment author: Jack 16 November 2009 10:18:16PM *  4 points [-]

My answer to this depends on what you mean by "professional philosophers who think about". You have to be aware that subfields have selection biases. For example, the percent of philosophers of religion who think God exists is much, much larger than the percent of professional philosophers generally who think God exists. This is because if God does not exist, philosophy of religion ceases to be a productive area of research. Conversely, if you have an irrational attachment to the idea that God exists, then you are likely to spend an inordinate amount of time trying to prove one exists. This issue is particularly bad with regard to religion, but it is in some sense generalizable to all or most other subfields. Philosophy is also a competitive enterprise, and there are various incentives to publishing novel arguments. This means that in any given subfield, views that are unpopular among philosophers generally will be overrepresented.

So the circle you draw around "professional philosophers who think about [subfield x] questions" needs to be small enough to target experts but large enough that you don't limit your survey to those philosophers who are very likely to hold a view you are surveying in virtue of the area they work in. I think the right circle is something like 'professional philosophers who are equipped to teach an advanced undergraduate course in the subject'.

Edit: The free will question will depend on what you want out of a conception of free will. But the understanding of free will that most lay people have is totally impossible.

Comment author: Alicorn 16 November 2009 10:30:24PM 2 points [-]

Seconded. There are a lot of libertarians-about-free-will who study free will, but nobody I've talked to has ever heard of anyone changing their mind on the subject of free will (except inasmuch as learning new words to describe one's beliefs counts) - so this has to be almost entirely due to more libertarians finding free will an interesting thing to study.

Comment author: Blueberry 16 November 2009 10:49:18PM *  2 points [-]

I've definitely changed my mind on free will. I used to be an incompatibilist with libertarian leanings. After reading Daniel Dennett's books, I changed my mind and became a compatibilist soft determinist.

Comment author: Jack 16 November 2009 10:48:31PM 2 points [-]

Free will libertarianism is also infected with religious philosophy. There are certainly some libertarians with secular reasons for their positions, but a lot of the support for this position comes from those whose religious world view requires radical free will; if they didn't believe in God, they wouldn't be libertarians. Same goes for a lot of substance dualists, frankly.

Comment author: MarkHHerman 15 November 2009 11:31:27PM 4 points [-]

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (a la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “<Eliezer> well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

Comment author: imaxwell 14 November 2009 01:31:25AM 3 points [-]

Previously, in Ethical Injunctions and related posts, you said that, for example,

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.

It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?

Comment author: wedrifid 14 November 2009 02:05:49AM 3 points [-]

how smart would a mind have to be in order to safely break ethical injunctions?

Any given mind could create ethical injunctions of a suitable complexity that are useful to it given its own technical limitations.

Comment author: MichaelGR 13 November 2009 09:42:50PM *  7 points [-]

What recent* developments in narrow AI do you find most important/interesting and why?

*Let's say post-Stanley

Comment author: retired_phlebotomist 13 November 2009 07:10:04AM 16 points [-]

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

Comment author: Peter_de_Blanc 16 November 2009 04:30:50AM 0 points [-]

About what? Everything?

Comment author: gwern 16 November 2009 04:59:06AM 3 points [-]

Given the context of Eliezer's life-mission and the general disagreement between Robin & Eliezer: FAI, AI's timing, and its general character.

Comment author: retired_phlebotomist 17 November 2009 07:22:17AM 1 point [-]

Right. Robin doesn't buy the "AI go foom" model or that formulating and instilling a foolproof morality/utility function will be necessary to save humanity.

I do miss the interplay between the two at OB.

Comment author: anonym 13 November 2009 06:56:43AM *  5 points [-]

Please estimate your probability of dying in the next year (5 years). Assume your estimate is perfectly accurate. What additional probability of dying in the next year (5 years) would you willingly accept for a guaranteed and safe increase of one (two, three) standard deviation(s) in terms of intelligence?

Comment author: wedrifid 14 November 2009 12:07:42AM 1 point [-]

Assume your estimate is perfectly accurate.

Does this matter?

Comment author: anonym 14 November 2009 06:49:15PM 1 point [-]

I'm not sure if it matters. I was imagining potentially different answers to the 2nd part of the question based on whether one includes additional adjustments and compensating factors to allow for the original estimate being inaccurate -- and trying to prevent those adjustments to get at the core issue.

Comment author: anonym 13 November 2009 06:38:00AM 9 points [-]

In terms of your intellectual growth, what were your biggest mistakes or most harmful habits, and what, if anything, would you do differently if you had the chance?