Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: ata 20 August 2010 06:09:35AM *  14 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world, but I hope he does take away from this a greater sense of the importance of a "the customer is always right" attitude in managing his image as a public-ish figure. Obviously the customer is not always right, but sometimes you have to act like they are if you want to get/keep them as your customer... justified or not, there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!, and even if it is really they who are the crazy ones, they are nevertheless the people who populate this crazy world we're trying to fix, and the solution can't always just be "read the sequences until you're rational enough to see why this makes sense".

I realize it's a balance; maybe this tone is good for attracting people who are already rational enough to see why this isn't crazy and why this tone has no bearing on the validity of the underlying arguments, like Eliezer's example of lecturing on rationality in a clown suit. Maybe the people who have a problem with it or are scared off by it are not the sort of people who would be willing or able to help much anyway. Maybe if someone is overly wary of associating with a low-status yet extremely important project, they do not really intuitively grasp its importance or have a strong enough inclination toward real altruism anyway. But reputation will still probably count for a lot toward what SIAI will eventually be able to accomplish. Maybe at the point of hearing and evaluating the arguments, seeming weird or high-self-regard-taboo-violating on the surface level will only screen off people who would not have made important contributions anyway, but it does affect who will get far enough to hear the arguments in the first place. In a world full of physics and math and AI cranks promising imminent world-changing discoveries, reasonably smart people do tend to build up intuitive nonsense-detectors, build up an automatic sense of who's not even worth listening to or engaging with; if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.

(Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously. But unfortunately, it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world". That's probably something that can't be changed without downplaying the importance of the project, and downplaying the importance of FAI probably increases existential risk enough that the PR hit of sounding overly self-important to probable non-contributors may be well worth it in the end.)

Comment author: Eliezer_Yudkowsky 20 August 2010 07:01:08AM 10 points [-]

there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!,

Yes, and it's called "pattern completion", the same effect that makes people think "Singularitarians believe that only people who believe in the Singularity will be saved".

Comment author: Emile 20 August 2010 09:59:05AM 2 points [-]

This is discussed in Imaginary Positions.

Comment author: timtyler 20 August 2010 05:09:18PM *  7 points [-]

The outside view of the pitch:

  • DOOM! - and SOON!
  • GIVE US ALL YOUR MONEY;
  • We'll SAVE THE WORLD; you'll LIVE FOREVER in HEAVEN;
  • Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION!

Maybe there are some bits missing - but they don't appear to be critical components of the pattern.

Indeed, this time there are some extra features not invented by those who went before - e.g.:

  • We can even send you to HEAVEN if you DIE a sinner - IF you PAY MORE MONEY to our partner organisation.

Comment author: CarlShulman 20 August 2010 05:16:31PM *  9 points [-]

Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION.

This one isn't right, and is a big difference between religion and threats like extinction-level asteroids or AI disasters: one can free-ride if that's one's practice in collective action problems.

Also: Rapture of the Nerds, Not

Comment author: cousin_it 20 August 2010 06:45:43PM *  3 points [-]

I don't understand why this was downvoted. It does sound like an accurate representation of the outside view.

Comment author: Unknowns 20 August 2010 07:30:14PM 4 points [-]

It may have been downvoted for the caps.

Comment author: [deleted] 14 May 2011 10:10:03PM 3 points [-]

Given that a certain fraction of comments are foolish, you can expect that an even larger fraction of votes are foolish, because there are fewer controls on votes (e.g. a voter doesn't risk his reputation while a commenter does).

Comment author: rhollerith_dot_com 15 May 2011 02:54:33AM *  2 points [-]

Which is why Slashdot (which was a lot more worthwhile in the past than it is now) introduced voting on how other people vote (which Slashdot called metamoderation). Worked pretty well: the decline of Slashdot was mild and gradual compared to the decline of almost every other social site that ever reached Slashdot's level of quality.

Comment author: timtyler 30 May 2011 08:23:31AM *  0 points [-]

Yes: votes should probably not be anonymous - and on "various other" social networking sites, they are not.

Comment author: rhollerith_dot_com 30 May 2011 05:01:42PM *  0 points [-]

Metafilter, for one. It is hard for an online community to avoid becoming worthless, but Metafilter has avoided that for 10 years.

Comment author: Perplexed 20 August 2010 07:12:44PM 3 points [-]

Perhaps downvoted for suggesting that the salvation-for-cash meme is a modern one. I upvoted, though.

Comment author: timtyler 20 August 2010 07:20:07PM 0 points [-]

Hmm - I didn't think of that. Maybe deathbed repentance is similar as well - in that it offers sinners a shot at eternal bliss in return for public endorsement - and maybe a slice of the will.

Comment author: Vladimir_Nesov 20 August 2010 09:23:43PM *  12 points [-]

This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).

In many cases outside view, and in particular reference class tennis, is a form of filtering the evidence, and thus "not technically" lying, a tool of anti-epistemology and dark arts, fit for deceiving yourself and others.

Comment author: Nick_Tarleton 20 August 2010 09:41:21PM 7 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

Comment author: timtyler 14 May 2011 04:09:50PM *  2 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

If this particular critique has been made more clearly elsewhere, perhaps let me know, and I will happily link to there in the future.

Update 2011-05-30: There's now this recent article: The “Rapture” and the “Singularity” Have Much in Common - which makes a rather similar point.

Comment author: ata 20 August 2010 06:23:07PM *  0 points [-]

I must know, have you actually encountered people who literally think that? I'm really hoping that's a comical exaggeration, but I guess I should not overestimate human brains.

Comment author: Eliezer_Yudkowsky 20 August 2010 06:43:32PM 5 points [-]

I've encountered people who think Singularitarians think that, never any actual Singularitarians who think that.

Comment author: ata 20 August 2010 07:14:44PM *  8 points [-]

Yeah, "people who think Singularitarians think that" is what I meant.

I've actually met exactly one something-like-a-Singularitarian who did think something-like-that — it was at one of the Bay Area meetups, so you may or may not have talked to him, but anyway, he was saying that only people who invent or otherwise contribute to the development of Singularity technology would "deserve" to actually benefit from a positive Singularity. He wasn't exactly saying he believed that the nonbelievers would be left to languish when cometh the Singularity, but he seemed to be saying that they should.

Also, I think he tried to convert me to Objectivism.

Comment author: timtyler 20 August 2010 08:13:06PM *  -1 points [-]

Technological progress has increased wealth inequality a great deal so far.

Machine intelligence probably has the potential to result in enormous wealth inequality.

Comment author: WrongBot 20 August 2010 09:19:49PM 1 point [-]

How, in a post-AGI world, would you define wealth? Computational resources? Matter?

I don't think there's any foundation for speculation on this topic at this time.

Comment author: khafra 20 August 2010 09:34:50PM *  2 points [-]

Unless we get a hard-takeoff singleton, which is admittedly the SIAI expectation, there will be massive inequality, with a few very wealthy beings and average income barely above subsistence. Thus saith Robin Hanson, and I've never seen any significant holes poked in that thesis.

Comment author: WrongBot 20 August 2010 09:45:48PM 0 points [-]

Robin Hanson seems to be assuming that human preferences will, in general, remain in their current ranges. This strikes me as unlikely in the face of technological self-modification.

Comment author: khafra 20 August 2010 11:20:07PM 2 points [-]

I've never gotten that impression. What I've gotten is that evolutionary pressures will, in the long term, still exist--even if technological self-modification leads to a population that's 99.99% satisfied to live within strict resource consumption limits, unless they harshly punish defectors the .01% with a drive for replication or expansion will overwhelm the rest within a few millennia, until the average income is back to subsistence. This doesn't depend on human preferences, just the laws of physics and natural selection.

Comment author: Vladimir_Nesov 20 August 2010 09:35:47PM 1 point [-]

Control, owned by preferences.

Comment author: timtyler 20 August 2010 09:28:01PM *  0 points [-]

I wasn't trying to make an especially long-term prediction:

"We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade - probably before 2016."

Comment author: WrongBot 20 August 2010 09:41:32PM *  5 points [-]
  1. Inflation.

  2. The richest person on earth currently has a net worth of $53.5 billion.

  3. The greatest peak net worth in recorded history, adjusted for inflation, was Bill Gates' $101 billion, which was ten years ago. No one since then has come close. A 10-fold increase in <6 years strikes me as unlikely.

  4. In any case, your extrapolated curve points to 2116, not 2016.
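Point 4 can be verified with a quick log-linear extrapolation through the two historical data points in the quote (a sketch; the milestone years and the assumption of constant exponential growth come from the quoted claim itself):

```python
import math

# Wealth milestones from the quoted claim: first millionaire in 1716,
# first billionaire in 1916.
milestones = {1716: 1e6, 1916: 1e9}
(y1, w1), (y2, w2) = sorted(milestones.items())

# Orders of magnitude gained per year: 3 orders over 200 years.
orders_per_year = (math.log10(w2) - math.log10(w1)) / (y2 - y1)

# Year at which the trend line reaches 10^12 (a trillionaire).
year_trillionaire = y2 + (12 - math.log10(w2)) / orders_per_year
print(year_trillionaire)  # 2116.0 - a century later than the quoted 2016
```

At 3 orders of magnitude per 200 years, gaining the next 3 orders takes another 200 years, landing on 2116 rather than 2016.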

I am increasingly convinced that your comments on this topic are made in less than good faith.

Comment author: timtyler 20 August 2010 10:59:57PM *  0 points [-]

Yes, the last figure looks wrong to me too - hopefully I will revisit the issue.

Update 2011-05-30: yes: 2016 was a simple math mistake! I have updated the text I was quoting from to read "later this century".

Anyway, the huge modern wealth inequalities are well established - and projecting them into the future doesn't seem especially controversial. Today's winners in IT are hugely rich - and tomorrow's winners may well be even richer. People thinking something like they will "be on the inside track when the singularity happens" would not be very surprising.

Comment author: timtyler 20 August 2010 07:12:35PM 0 points [-]

What about the recent "forbidden topic"? Surely that is a prime example of this kind of thing.

Comment author: timtyler 20 August 2010 07:07:51PM *  5 points [-]

"It's basically a modern version of a religious belief system and there's no purpose to it, like why, why must we have another one of these things ... you get an afterlife out of it because you'll be on the inside track when the singularity happens - it's got all the trappings of a religion, it's the same thing." - Jaron here.

Comment author: TheAncientGeek 17 June 2015 12:51:03PM -1 points [-]

Pattern completion isn't always wrong.