Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong
Comments (726)
I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world, but I hope he does take away from this a greater sense of the importance of a "the customer is always right" attitude in managing his image as a public-ish figure. Obviously the customer is not always right, but sometimes you have to act like they are if you want to get/keep them as your customer... justified or not, there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!, and even if it is really they who are the crazy ones, they are nevertheless the people who populate this crazy world we're trying to fix, and the solution can't always just be "read the sequences until you're rational enough to see why this makes sense".
I realize it's a balance; maybe this tone is good for attracting people who are already rational enough to see why this isn't crazy and why this tone has no bearing on the validity of the underlying arguments, like Eliezer's example of lecturing on rationality in a clown suit. Maybe the people who have a problem with it or are scared off by it are not the sort of people who would be willing or able to help much anyway. Maybe if someone is overly wary of associating with a low-status yet extremely important project, they do not really intuitively grasp its importance or have a strong enough inclination toward real altruism anyway. But reputation will still probably count for a lot toward what SIAI will eventually be able to accomplish. Maybe at the point of hearing and evaluating the arguments, seeming weird or high-self-regard-taboo-violating on the surface level will only screen off people who would not have made important contributions anyway, but it does affect who will get far enough to hear the arguments in the first place. In a world full of physics and math and AI cranks promising imminent world-changing discoveries, reasonably smart people do tend to build up intuitive nonsense-detectors, build up an automatic sense of who's not even worth listening to or engaging with; if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.
(Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously. But unfortunately, it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world". That's probably something that can't be changed without downplaying the importance of the project, and downplaying the importance of FAI probably increases existential risk enough that the PR hit of sounding overly self-important to probable non-contributors may be well worth it in the end.)
Yes, and it's called "pattern completion", the same effect that makes people think "Singularitarians believe that only people who believe in the Singularity will be saved".
This is discussed in Imaginary Positions.
The outside view of the pitch:
Maybe there are some bits missing - but they don't appear to be critical components of the pattern.
Indeed, this time there are some extra features not invented by those who went before - e.g.:
This one isn't right, and it marks a big difference between religion and threats like extinction-level asteroids or AI disasters: one can free-ride, if that's one's practice in collective action problems.
Also: Rapture of the Nerds, Not
I don't understand why this was downvoted. It does sound like an accurate representation of the outside view.
It may have been downvoted for the caps.
Given that a certain fraction of comments are foolish, you can expect that an even larger fraction of votes are foolish, because there are fewer controls on votes (e.g. a voter doesn't risk his reputation while a commenter does).
Which is why Slashdot (which was a lot more worthwhile in the past than it is now) introduced voting on how other people vote (which Slashdot called metamoderation). Worked pretty well: the decline of Slashdot was mild and gradual compared to the decline of almost every other social site that ever reached Slashdot's level of quality.
Yes: votes should probably not be anonymous - and on "various other" social networking sites, they are not.
Metafilter, for one. It is hard for an online community to avoid becoming worthless, but Metafilter has avoided that for 10 years.
Perhaps downvoted for suggesting that the salvation-for-cash meme is a modern one. I upvoted, though.
Hmm - I didn't think of that. Maybe deathbed repentance is similar as well - in that it offers sinners a shot at eternal bliss in return for public endorsement - and maybe a slice of the will.
This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).
In many cases outside view, and in particular reference class tennis, is a form of filtering the evidence, and thus "not technically" lying, a tool of anti-epistemology and dark arts, fit for deceiving yourself and others.
We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.
If this particular critique has been made more clearly elsewhere, perhaps let me know, and I will happily link to there in the future.
Update 2011-05-30: There's now this recent article: The “Rapture” and the “Singularity” Have Much in Common - which makes a rather similar point.
I must know, have you actually encountered people who literally think that? I'm really hoping that's a comical exaggeration, but I guess I should not overestimate human brains.
I've encountered people who think Singularitarians think that, never any actual Singularitarians who think that.
Yeah, "people who think Singularitarians think that" is what I meant.
I've actually met exactly one something-like-a-Singularitarian who did think something-like-that — it was at one of the Bay Area meetups, so you may or may not have talked to him, but anyway, he was saying that only people who invent or otherwise contribute to the development of Singularity technology would "deserve" to actually benefit from a positive Singularity. He wasn't exactly saying he believed that the nonbelievers would be left to languish when cometh the Singularity, but he seemed to be saying that they should.
Also, I think he tried to convert me to Objectivism.
Technological progress has increased wealth inequality a great deal so far.
Machine intelligence probably has the potential to result in enormous wealth inequality.
How, in a post-AGI world, would you define wealth? Computational resources? Matter?
I don't think there's any foundation for speculation on this topic at this time.
Unless we get a hard-takeoff singleton, which is admittedly the SIAI expectation, there will be massive inequality, with a few very wealthy beings and average income barely above subsistence. Thus saith Robin Hanson, and I've never seen any significant holes poked in that thesis.
Robin Hanson seems to be assuming that human preferences will, in general, remain in their current ranges. This strikes me as unlikely in the face of technological self-modification.
I've never gotten that impression. What I've gotten is that evolutionary pressures will, in the long term, still exist--even if technological self-modification leads to a population that's 99.99% satisfied to live within strict resource consumption limits, unless they harshly punish defectors, the .01% with a drive for replication or expansion will overwhelm the rest within a few millennia, until the average income is back to subsistence. This doesn't depend on human preferences, just the laws of physics and natural selection.
Control, owned by preferences.
I wasn't trying to make an especially long-term prediction:
"We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade - probably before 2016."
Inflation.
The richest person on earth currently has a net worth of $53.5 billion.
The greatest peak net worth in recorded history, adjusted for inflation, was Bill Gates' $101 billion, which was ten years ago. No one since then has come close. A 10-fold increase in <6 years strikes me as unlikely.
In any case, your extrapolated curve points to 2116, not 2016.
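(For reference, the arithmetic behind that correction, assuming the quoted pattern continues at a constant rate of one 1000-fold step per 200 years: millionaire in 1716, billionaire in 1716 + 200 = 1916, so the first trillionaire would be expected around 1916 + 200 = 2116, not 2016.)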
I am increasingly convinced that your comments on this topic are made in less than good faith.
Yes, the last figure looks wrong to me too - hopefully I will revisit the issue.
Update 2011-05-30: yes: 2016 was a simple math mistake! I have updated the text I was quoting from to read "later this century".
Anyway, the huge modern wealth inequalities are well established - and projecting them into the future doesn't seem especially controversial. Today's winners in IT are hugely rich - and tomorrow's winners may well be even richer. People thinking something like they will "be on the inside track when the singularity happens" would not be very surprising.
What about the recent "forbidden topic"? Surely that is a prime example of this kind of thing.
"It's basically a modern version of a religious belief system and there's no purpose to it, like why, why must we have another one of these things ... you get an afterlife out of it because you'll be on the inside track when the singularity happens - it's got all the trappings of a religion, it's the same thing." - Jaron here.
Pattern completion isn't always wrong.