Comment author: roland 20 July 2017 06:38:13PM 0 points [-]

My experience attending classes in universities was extremely negative. They didn't work for me.

Comment author: Alexandros 27 November 2016 10:40:52AM *  66 points [-]

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are its central material, he has punctuated its existence with his explosions (and refusal to apologise), and then he upped and left the community without ever acknowledging that his experiment ("Well-Kept Gardens" etc.) has failed. As far as I know he is still the "owner" of this website and retains ultimate veto on a bunch of stuff. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner -- is it them?), who the moderators are, or who is working on it in general. I know Tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason we shun discussing politics. Too difficult, we're told. A mind-killer! This cost us Yvain/Scott, who cited it as one of his reasons for starting Slate Star Codex, which now dwarfs LW. Oddly enough, I recently saw SSC linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed SSC, it may actually be drawing in people who care about genuine insight into this extremely complex, highly interesting space.

  3. the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.

  4. The codebase -- This website carries tons of complexity inherited from the Reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to (Telescope seems decent these days).

  5. Brand rust -- LessWrong is now kinda like MySpace or Yahoo: it used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support from as many interested parties as possible, but only to the extent that they don't dilute the focus.

In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people who want to do something about LW with vague promises of something nice in the future (which still suffers from problem #1, AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a clear plan. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail: strong writers enjoy their independence. LW as an aggregator first (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think that if you want to unify the community, what needs to be done is the creation of an HN-style aggregator with a clear, accepted, willing, opinionated, involved BDFL, with input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others), and with the current lesswrong.com archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and the broadest possible support for a simple, well-articulated vision. LessWrong tried to be too many things with too little in the way of backing.

Comment author: roland 02 December 2016 03:58:26PM 3 points [-]

What explosions from EY are you referring to? Could you please clarify? Just curious.

Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good part is that I analyze what I did wrong and learn something from it; the bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: roland 20 September 2016 02:29:38PM 1 point [-]

Let E stand for the observation of sabotage

Didn't you mean "the observation of no sabotage"?

Comment author: roland 03 May 2016 04:38:02PM *  0 points [-]

The error in the reasoning is that it is not you who makes the decision, but the COD (collective of the deciders), which might be composed of different individuals in each round and might number one or nine depending on the coin toss.

In every round the COD will be told that they are deciders, but they gain no new information from this, because it was already known beforehand:

P(Tails| you are told that you are a decider) = 0.9

P(Tails| COD is told that COD is the decider) = P(Tails) = 0.5

To make it easier to understand why the "yes" strategy is wrong: if you say yes every time, you will be wrong on average only once in every ten rounds in which you are a decider, namely when the coin comes up heads and you are the sole decider. This sounds like a good strategy until you realize that every time the coin comes up heads, someone (usually someone other than you) will be the sole decider and make the wrong choice by saying yes. So the COD will end up with 0.5*1000 + 0.5*100 = 550 expected donation.
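A minimal Monte Carlo sketch (Python) makes the COD accounting concrete. It assumes the standard statement of Psy-Kosh's problem: ten people, a fair coin, nine random deciders on tails and one on heads, and a unanimous "yea" donating $1000 on tails and $100 on heads.

    import random

    def always_yea(rounds=100_000):
        # Average donation when every decider says "yea". Since everyone
        # says "yea", the donation in each round depends only on the coin:
        # $1000 on tails, $100 on heads (assumed payoffs, see above).
        total = 0
        for _ in range(rounds):
            tails = random.random() < 0.5
            total += 1000 if tails else 100
        return total / rounds

    print(always_yea())  # ~550, matching 0.5*1000 + 0.5*100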

Comment author: roland 20 April 2016 02:32:40PM *  0 points [-]

I'm retracting this one in favor of my other answer:

http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/d9r4

So saying "yea" gives 0.9 * 1000 + 0.1 * 100 = 910 expected donation.

This is simply wrong.

If you are a decider then the coin is 90% likely to have come up tails. Correct.

But it simply doesn't follow from this that the expected donation if you say yes is 0.9*1000 + 0.1*100 = 910.

To the contrary, the original formula is still true: 0.5*1000 + 0.5*100 = 550

So you should still say "nay" and of course hope that everyone else is as smart as you.
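Both halves of this can be checked by simulation. The sketch below (Python; it again assumes the standard statement of Psy-Kosh's problem, including that a unanimous "nay" donates $700) tracks one fixed person across many rounds: P(tails | you are a decider) comes out around 0.9, yet the always-"yea" donation still averages about 550 per round, less than the 700 from always saying "nay".

    import random

    def simulate(rounds=100_000):
        # Assumed setup (standard Psy-Kosh statement): ten people, fair coin;
        # tails -> nine random deciders, heads -> one. A unanimous "yea"
        # donates $1000 on tails and $100 on heads; a unanimous "nay", $700.
        you = 0
        decider_rounds = 0
        tails_when_decider = 0
        yea_total = 0
        for _ in range(rounds):
            tails = random.random() < 0.5
            deciders = random.sample(range(10), 9 if tails else 1)
            if you in deciders:
                decider_rounds += 1
                tails_when_decider += tails
            # With a uniform strategy, the donation depends only on the coin.
            yea_total += 1000 if tails else 100
        print("P(tails | you are a decider):", tails_when_decider / decider_rounds)  # ~0.9
        print("avg donation, always 'yea':", yea_total / rounds)                     # ~550
        print("avg donation, always 'nay':", 700)                                    # fixed by assumption

    simulate()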

Comment author: roland 15 March 2016 09:52:54PM 0 points [-]

Eliezer Yudkowsky is AlphaGo.

Comment author: roland 25 February 2016 10:42:35AM 0 points [-]

From http://lesswrong.com/lw/js/the_bottom_line/

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.

I remember a similar quotation regarding actions as opposed to thoughts. Does anyone remember how it went?

Comment author: roland 10 January 2016 02:56:24PM 4 points [-]

From a mere act of the imagination we cannot learn anything about the real world. To suppose that the resulting probability assignments have any real physical meaning is just another form of the mind projection fallacy. In practice, this diverts our attention to irrelevancies and away from the things that really matter (such as information about the real world that is not expressible in terms of any sampling distribution, or does not fit into the urn picture, but which is nevertheless highly cogent for the inferences we want to make). Usually, the price paid for this folly is missed opportunities; had we recognized that information, more accurate and/or more reliable inferences could have been made.

-- E. T. Jaynes, Probability Theory: The Logic of Science

Comment author: RichardKennaway 13 December 2015 02:05:26PM 1 point [-]

Shorn of context, it could be. But what is the context? I gather from the Wikipedia plot summary that Chigurh (the killer) is a hit-man hired by drug dealers to recover some stolen drug money, but instead kills his employers and everyone else who stands in the way of getting the money himself. To judge by the other quotes on IMDb, when he's about to kill someone he engages them in word-play that should not take in anyone in possession of their rational faculties for a second, in order to frame what he is about to do as the fault of his victims.

Imagine someone with a gun going out onto the street and shooting at everyone, while screaming, "If the rule you followed brought you to this, of what use was the rule?" Is it still a rationality quote?

Comment author: roland 13 December 2015 02:45:01PM 1 point [-]

I saw the movie, and the context of the quote was that the killer was about to kill a guy who had been chasing him. So we could say that the victim underestimated the killer; he was not randomly selected.
