Dr_Manhattan comments on Open thread, October 2011 - Less Wrong

Post author: MarkusRamikin 02 October 2011 09:05AM


Comment author: Dr_Manhattan 03 October 2011 12:47:40AM 3 points

http://becominggaia.wordpress.com/2011/03/15/why-do-you-hate-the-siailesswrong/#entry I'll reserve my opinion about this clown, but honestly I do not get how he gets invited to AGI conferences, having neither work nor even serious educational credentials.

Comment author: Solvent 03 October 2011 06:22:43AM 11 points

He didn't actually make any arguments in that essay. That frustrates me.

Comment author: lessdazed 03 October 2011 07:34:00AM 10 points

They...build a high wall around themselves rather than building roads to their neighbors. I can understand self-protection and short-sighted conservatism but extremes aren’t healthy for anyone...repetitively screaming their fear rather than listening to rational advice. Worse, they’re kicking rocks down on us.

If it weren’t for their fear-mongering...AND their arguing for unwise, dangerous actions (because they can’t see the even larger dangers that they are causing), I would ignore them like harmless individuals...rather than [like] junkies who need to do anti-societal/immoral things to support their habits...fear-mongering and manipulating others...

...very good at rhetorical rationalization and who are selfishly, unwilling to honestly interact and cooperate with others. Their fearful, conservative selfishness extends far beyond their “necessary” enslavement of the non-human and dangerous...raising strawmen, reducing to sound bites and other misdirections. They dismiss anyone and anything they don’t like with pejoratives like clueless and confused. Rather than honest engagement they attempt to shut down anyone who doesn’t see the world as they do. And they are very active in trying to proselytize their bad ideas...

In a sense, they are very like out-of-control children. They are bright, well-meaning and without a clue of the likely results of their actions. You certainly can’t hate individuals like that — but you also don’t let them run rampant...

What do you mean no arguments? Just read the above excerpts...what do you think those are, ad hominems and applause lights?

Comment author: Solvent 03 October 2011 07:56:48AM 9 points

...I think that that was one of those occasional comments you make which are sarcastic, and which no-one gets, and which always get downvoted.

But I could be wrong. Please clarify if you were kidding or not, for this slow uncertain person.

Comment author: lessdazed 03 October 2011 07:58:31AM 4 points

Don't worry, if my sarcasm is downvoted, that will probably be good for me. I get more karma than I deserve on silly stuff anyway.

Comment author: Solvent 03 October 2011 11:01:39AM 4 points

The silly comments you make are far more insightful and useful than most seriously intended comments on most other websites. Keep up the good work.

Comment author: endoself 04 October 2011 09:53:01PM 2 points

I like the third passage. It makes it very clear what he is mistaken about.

Comment author: vi21maobk9vp 03 October 2011 06:55:23AM 6 points

Maybe he submits papers and the conference program committee finds them relevant and interesting enough?

After all, Yudkowsky has no credentials to speak of, either - what is SIAI? A weird charity?

I read his paper. Well, the points he raises against the FAI concept and for rational cooperation are quite convincing-looking. So are the pro-FAI points. It is hard to tell which are more convincing, with both sides being relatively vague.

Comment author: Solvent 03 October 2011 07:15:42AM 1 point

Which paper of his did you read? He has quite a few.

Comment author: vi21maobk9vp 03 October 2011 07:34:46AM 1 point

The AGI-2011 one.

Comment author: lessdazed 03 October 2011 07:48:15AM 15 points

Based on the abstract, it's not worth my time to read it.

Abstract. Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets these criteria on four separate counts by expecting a good result after: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and finally 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?

Points 2), 3), and 4) are simply inane.

Comment author: [deleted] 03 October 2011 04:50:34PM 6 points

Upvoted, agreed, and addendum: Similarly inane is the cliche "insanity is doing the same thing over and over and expecting a different result."

Comment author: wedrifid 03 October 2011 08:22:37AM 5 points

Maybe he submits papers and the conference program committee finds them relevant and interesting enough?

Which invites the question of why clearly incompetent people make up the program committee. His papers look like utter drivel mixed with superstition.

Comment author: vi21maobk9vp 03 October 2011 08:31:03AM 1 point

If you are right, it is good that the public AGI field is composed of stupid people (LessWrong is prominent enough to attract - at least once - the attention of anyone whom LW could possibly convince). If you are wrong, it is good that his viewpoint is published too, so that people can try to find a balanced solution. Now, in what situation should we not promote that status quo?

Comment author: lessdazed 03 October 2011 08:51:17AM 1 point

That's a fully general counterargument composed of the middle ground fallacy and the fallacy of false choice.

We should not promote that status quo if his ideas - such as they are amid clumsily delivered, wince-inducing rhetorical bombast - are plainly stupid and a waste of everyone's time.

Comment author: vi21maobk9vp 03 October 2011 09:21:17AM 2 points

It is not a fully general counterargument, because it is a good idea to suppress open dissemination of some AGI information only if the FAI approach is right.

Comment author: lessdazed 03 October 2011 09:42:17AM 3 points

information

It's a general argument to avoid considering whether or not something even is information in a relevant sense.

I'm willing to accept "If you are wrong, it is good that papers showing how you are wrong are published," but not "If you are right, there is no harm done by any arguments against your position," nor "If you are wrong, there is benefit to any argument about AI so long as it differs from yours."

Comment author: wedrifid 03 October 2011 10:13:47AM 1 point

Another way to put it is that it is a fully general counterargument against having standards. ;)

Comment author: vi21maobk9vp 03 October 2011 01:38:40PM 1 point

Well, I mean a more specific case. The FAI approach, among other things, presupposes that building FAI is very hard and that in the meantime it is better to divert random people from AGI into specialized problem-solving CS fields, or into game theory / decision theory.

Superficially, he references some things that are reasonable; he also implies some other things that are considered too hard to estimate (and so unreliable) on LessWrong.

If someone tries to make sense of it, she either builds a sensible decision theory out of these references (not entirely excluded), follows the references to find both FAI and game-theoretic results that may be useful, or fails to make any sense of it (the suppression case I mentioned) and decides that AGI is a freak field.

Comment author: Vladimir_Nesov 03 October 2011 02:03:34PM 2 points

FAI approach

Talk of "approaches" in AI has a similar insidious effect to that of the "-ism"s of philosophy, compartmentalizing (motivation for) projects from the rest of the field.

Comment author: jsalvatier 03 October 2011 02:45:27PM 3 points

That's an interesting idea. Would you share some evidence for that (anecdotes or whatever)? I sometimes think in terms of a 'Bayesian approach to statistics'.

Comment author: wedrifid 03 October 2011 10:01:01AM 3 points

It is not a fully general counterargument, because it is a good idea to suppress open dissemination of some AGI information only if the FAI approach is right.

That isn't true. It would be a good idea to suppress some AGI information if the FAI approach is futile and any creation of AGI would turn out to be terrible.

Comment author: wedrifid 03 October 2011 09:57:12AM 6 points

Now, in what situation should we not promote that status quo?

Bad thinking happens without me helping to promote it. If there ever came a time when human thinking in general prematurely converged due to a limitation of reasonably sound (by human standards) thought, then I would perhaps advocate adding random noise to the thoughts of some of the population, in the hope that one of the stupid people got lucky and arrived at a new insight. But as of right now there is no need to pay more respect to silly substandard drivel than the work itself merits.

Comment author: lessdazed 03 October 2011 10:03:26AM 2 points

If there ever came a time when human thinking in general prematurely converged...I would perhaps advocate adding random noise to the thoughts of some of the population

Keen, I hadn't thought of that, upvoted.

Comment author: Vladimir_Nesov 03 October 2011 10:34:06AM 12 points

Interestingly, back in 2007, when I was naive and stupid, I thought Mark Waser one of the most competent participants of the AGI and SL4 mailing lists. There must be something appealing to an unprepared mind in the way he talks. I can't simulate that impression now, so it's not clear what that is, but it's probably mostly a general contrarian attitude without too many spelling errors.

Comment author: wedrifid 03 October 2011 07:23:55AM 12 points

Wow, I loved the essay. I hadn't realized I was part of such a united, powerful organisation and that I was so impressively intelligent, rhetorically powerful and ruthlessly self-interested. I seriously felt flattered.

Comment author: vi21maobk9vp 03 October 2011 08:03:45AM 4 points

You are in a Chinese room, according to his argument. No one of us is as cruel as all of us.

Comment author: [deleted] 03 October 2011 04:47:46PM 11 points

Not to call attention to the elephant in the room, but what exactly are Eliezer Yudkowsky's work and educational credentials re: AGI? I see a lot of philosophy relevant to AI as a discipline, but nothing that suggests any kind of hands-on experience...

Comment author: Dr_Manhattan 03 October 2011 06:32:27PM 1 point

This, for one: http://singinst.org/upload/LOGI//LOGI.pdf is in the ballpark of AGI work. Plus, FAI work, while not being on AGI per se, is relevant and interesting to a rare conference in the area. Waser is pure drivel.

Comment author: ArisKatsaris 04 October 2011 09:59:04PM 18 points

I'll reserve my opinion about this clown

Downvoted. Unless "clown" is his actual profession, you didn't reserve your opinion.