PhilGoetz comments on Yes, a blog. - Less Wrong

88 Post author: Academian 19 November 2010 01:53AM


Comment author: PhilGoetz 19 November 2010 07:58:04PM 8 points

LessWrong has a dual nature. On one hand, it's a place where anyone can post, and where almost any idea can get a hearing.

On the other hand, LessWrong promotes the ideas of Eliezer Yudkowsky. This is inevitable, and fair, since it was originally based on Eliezer's posts. This is also intentional; no post makes it onto the home page unless Eliezer endorses it; and he has to my knowledge never endorsed a post that disagreed with or questioned things he has said in the past.

I'm not complaining. I applaud Eliezer for opening up top-level posting to everyone; he could have just kept it as his blog. But LessWrong shouldn't simultaneously be Eliezer's place, and a base to use to build an entire discipline, if you want that discipline to be well-built. That's like trying to build a school of journalism at Fox News.

Could LessWrong become such a place, if Eliezer relinquished control of the coveted green button? I don't know. There's more memetic homogeneity here than I would prefer for such a venture. But I don't see any more likely candidates at present.

The other dual nature of LessWrong is that it's about rationality, and it's about Friendly AI. The groupthink exists mainly within the FAI aspect of LessWrong. Perhaps someday these two parts should split into separate websites?

(Or perhaps, before that happens, we will develop a web service interface enabling two websites to interact so seamlessly that the notion of "separate websites" will dissolve.)

Comment author: ata 19 November 2010 09:30:59PM 14 points

no post makes it onto the home page unless Eliezer endorses it; and he has to my knowledge never endorsed a post that disagreed with or questioned things he has said in the past.

Here's one example of a post that criticized Eliezer and others associated with SIAI but nevertheless got promoted to the home page: http://lesswrong.com/lw/2l8/existential_risk_and_public_relations/

I think there have been others, though I don't remember any specific ones off the top of my head.

Comment author: Nick_Tarleton 19 November 2010 10:04:46PM 11 points

Off the top of my head, Abnormal Cryonics.

Comment author: JGWeissman 19 November 2010 08:10:13PM 8 points

There's more memetic homogeneity here than I would prefer for such a venture.

Sometimes there are right answers, and smart people will mostly agree. I suspect your perception of "memetic homogeneity" results from your insistence on disagreeing with some obviously (at least obviously after the discussions we've had) right answers, e.g. persistence of values as an instrumental value.

Comment author: wedrifid 19 November 2010 08:18:31PM 4 points

e.g. persistence of values as an instrumental value.

What? Someone disagrees with that? But, but... how?

Comment author: JGWeissman 19 November 2010 08:34:49PM 1 point

What? Someone disagrees with that? But, but... how?

Ask Phil.

Comment author: Perplexed 20 November 2010 06:27:25AM 2 points

If I understand what you are talking about, I have expressed disagreement with it a couple of times. My disagreement has to do with the values expressed by a coalition (which will be some kind of bargained composite of the values of the individual members of that coalition).

But then when the membership in that coalition changes, the 'deal' must be renegotiated, and the coalition's values are no longer perfectly persistent - nor should they be.

This is not just a technical quibble. The CEV of mankind is a composite value representing a coalition with a changing membership.

Comment author: red75 19 November 2010 11:37:01PM 1 point
  1. The case of agents in conflict. Keep your values and be destroyed, or change them and get the world partially optimized for your initial values.

  2. The case of an unknown future. You know the class of worlds you want to be in. What you don't know yet is that to reach them you must make choices incompatible with your values. And, to make things worse, all the choices you can make ultimately lead to worlds you definitely don't want to be in.

Comment author: wedrifid 20 November 2010 05:31:39AM 1 point
  1. Yes. That is the general class that includes 'Omega rewards you if you make your decision irrationally'. It applies whenever the specific state of your cognitive representation interacts significantly with the environment by means independent of your behaviour.

  2. No. You don't need to edit yourself to make unpleasant choices. Whenever you wish you were a different person than who you are so that you could make a different choice, you just make that choice.

Comment author: red75 21 November 2010 08:36:39AM 0 points

[...] you just make that choice.

That works for a pure consequentialist, but if one's values have some deontology in the mix, then your suggestion effectively requires changing one's values.

And I doubt that an instrumental value that changes terminal values can be called instrumental. An agent that adopts this value (persistence of values) will end up with different terminal values than an agent that does not.

Comment author: wedrifid 19 November 2010 08:06:41PM 0 points

Could LessWrong become such a place, if Eliezer relinquished control of the coveted green button?

No, it's the red button that makes the biggest difference.