
In response to How Much Thought
Comment author: roland 05 December 2017 01:19:33PM 0 points

"thinking has higher expected utility when you're likely to change your mind and thinking has higher expected utility when the subject is important."

Conditional on you changing your mind from incorrect to correct.
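One minimal way to formalize that qualifier (my notation, not from the post or the comment): let p be the probability that further thought flips you from an incorrect conclusion to a correct one, ΔU the value of acting on the correct conclusion rather than the incorrect one, and c the cost of the extra thinking. Then, as a sketch:

```latex
% Sketch with hypothetical notation (not from the original post):
%   p        probability that further thought corrects a wrong conclusion
%   \Delta U value of acting on the correct conclusion over the wrong one
%   c        cost of the additional thinking
% Think further exactly when the expected correction beats the cost:
p \cdot \Delta U \;>\; c
```

"Likely to change your mind" pushes p up, "the subject is important" pushes ΔU up, and the qualifier matters because a change from correct to incorrect would make the relevant gain negative.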

Comment author: roland 20 November 2017 12:40:21PM 0 points

Intuitive explanation of why entropy is maximized by the uniform distribution?

0 roland 23 September 2017 09:43AM

What is the best intuitive mathematical explanation of why entropy is maximized by the uniform distribution? I'm looking for a short proof using the most elementary mathematics possible.

Please, no explanations along the lines of "because entropy was designed this way", etc.

https://en.wikipedia.org/wiki/Entropy_%28information_theory%29#Definition
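For what it's worth, one standard short argument (added here as a sketch, not taken from the thread) rests on a single elementary fact, Gibbs' inequality, i.e. the non-negativity of the KL divergence, which is itself just Jensen's inequality applied to the concave logarithm:

```latex
% Sketch: the uniform distribution maximizes entropy over n outcomes.
% Let p = (p_1, ..., p_n) be any distribution, and let u_i = 1/n be uniform.
% Gibbs' inequality (D_KL >= 0, by Jensen applied to the concave log) gives:
\begin{align*}
0 \;\le\; D_{\mathrm{KL}}(p \,\|\, u)
  = \sum_{i=1}^{n} p_i \log\frac{p_i}{1/n}
  = \sum_{i=1}^{n} p_i \log p_i + \log n
  = \log n - H(p).
\end{align*}
% Hence H(p) <= log n = H(u), with equality iff D_KL(p || u) = 0, i.e. iff p = u.
```

So the bound H(p) ≤ log n falls out in three lines, and the equality condition identifies the uniform distribution as the unique maximizer.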

Comment author: roland 20 July 2017 06:38:13PM 0 points

My experience attending university classes was extremely negative. They didn't work for me.

Comment author: Alexandros 27 November 2016 10:40:52AM 66 points

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are central material, he has punctuated its existence with his explosions (and refusal to apologise), and then he upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) has failed. As far as I know he is still the "owner" of this website and retains ultimate veto on a bunch of stuff. If that has changed, there is no clarity on who the owner is (I see three logos in the top banner; is it them?), who the moderators are, or who is working on it in general. I know Tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason we shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough, I recently saw SSC linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed it, it may actually be drawing in people who care about genuine insight into this extremely complex space that is of very high interest.

  3. the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.

  4. The codebase -- This website carries tons of complexity inherited from the reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to (Telescope seems decent these days).

  5. Brand rust -- Lesswrong is now kinda like MySpace or Yahoo: it used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support as possible from as many interested parties as possible, but only to the extent that they don't dilute the focus.

In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people who want to do something about LW with vague promises of something nice in the future (which still suffers from problem #1, AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining the ownership and the plan clearly. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail: strong writers enjoy their independence. LW as an aggregator first (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think that if you want to unify the community, what needs to be done is the creation of an HN-style aggregator with a clear, accepted, willing, opinionated, involved BDFL; input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others); and the archiving of the current lesswrong.com in favour of that new aggregator. But even if it's something else, it will not succeed without three basic ingredients: clear ownership, dedicated leadership, and support as broad as possible for a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

Comment author: roland 02 December 2016 03:58:26PM 3 points

What explosions from EY are you referring to? Could you please clarify? Just curious.

Comment author: roland 10 October 2016 12:20:15PM 3 points

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good part is that I analyze what I did wrong and learn something from it; the bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: roland 20 September 2016 02:29:38PM 1 point

"Let E stand for the observation of sabotage"

Didn't you mean "the observation of no sabotage"?

Comment author: roland 03 May 2016 04:38:02PM 0 points

The error in the reasoning is that it is not you who makes the decision but the COD (collective of the deciders), which might be composed of different individuals in each round and might number one or nine depending on the coin toss.

In every round the COD is told that they are the deciders, but this gives them no new information, because it was already known beforehand that this would happen.

P(Tails| you are told that you are a decider) = 0.9

P(Tails| COD is told that COD is the decider) = P(Tails) = 0.5

To make it easier to see why the "yea" strategy is wrong: if you say yea every time, then among the rounds in which you are a decider you will be wrong, on average, only once for every nine times you are right, namely the one case where the coin comes up heads and you are the sole decider. This sounds like a good strategy until you realize that every time the coin comes up heads, someone (on average, someone else) will be the sole decider and make the wrong choice by saying yea. So the COD ends up with an expected donation of 0.5*1000 + 0.5*100 = 550.
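For anyone who wants to check this empirically, here is a quick Monte Carlo sketch (my code, not from the thread; the $700 payoff for a unanimous "nay", which the comment does not restate, is taken from the usual statement of Psy-Kosh's problem):

```python
# Monte Carlo sketch of the setup above (not from the thread). Assumed
# parameters, following the usual statement of Psy-Kosh's problem:
# 10 people; fair coin; tails -> 9 random deciders, heads -> 1 random
# decider. If all deciders say "yea": $1000 on tails, $100 on heads.
# If all say "nay": $700 (this figure is an assumption here).
import random

def simulate(trials: int = 200_000) -> None:
    yea_total = 0.0          # total donations under the all-"yea" policy
    nay_total = 0.0          # total donations under the all-"nay" policy
    decider_rounds = 0       # rounds in which a fixed person (id 0) decides
    tails_when_decider = 0   # ...and of those, how many had tails

    for _ in range(trials):
        tails = random.random() < 0.5
        deciders = set(random.sample(range(10), 9 if tails else 1))

        yea_total += 1000 if tails else 100   # all-"yea" payoff this round
        nay_total += 700                      # all-"nay" payoff this round

        if 0 in deciders:                     # person 0's point of view
            decider_rounds += 1
            tails_when_decider += tails

    print("E[donation | all yea] ~", yea_total / trials)   # ~ 550
    print("E[donation | all nay] ~", nay_total / trials)   # ~ 700
    print("P(tails | you decide) ~",
          tails_when_decider / decider_rounds)              # ~ 0.9

if __name__ == "__main__":
    simulate()
```

It reproduces all three numbers at once: 550 for all-"yea", 700 for all-"nay", and the 0.9 posterior that drives the temptation to say "yea".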

Comment author: roland 20 April 2016 02:32:40PM 0 points

I'm retracting this one in favor of my other answer:

http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/d9r4

So saying "yea" gives 0.9 * 1000 + 0.1 * 100 = 910 expected donation.

This is simply wrong.

If you are a decider, then the coin is 90% likely to have come up tails. Correct.

But it simply doesn't follow from this that the expected donation if you say yes is 0.9*1000 + 0.1*100 = 910.

On the contrary, the original formula still holds: 0.5*1000 + 0.5*100 = 550.

So you should still say "nay" and, of course, hope that everyone else is as smart as you.
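One compact way to reconcile the two numbers (my framing, added as a sketch): both calculations are correct, but they answer different questions. Writing D for the event "you, a fixed individual, are a decider" and assuming everyone follows the all-"yea" policy:

```latex
% Sketch (D = "you, a fixed individual, are a decider"):
\begin{align*}
E[\text{donation} \mid D] &= 0.9 \cdot 1000 + 0.1 \cdot 100 = 910,\\
E[\text{donation}]        &= 0.5 \cdot 1000 + 0.5 \cdot 100 = 550.
\end{align*}
% Conditioning on D tilts the odds toward tails, because you are nine
% times likelier to be a decider in a tails round; but the donation is
% paid once per round, not once per decider, so the unconditional 550
% is what the choice of policy actually controls.
```

Since the policy must be fixed before anyone learns whether they are a decider, the ex-ante 550 governs the decision, which is exactly the COD argument above.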

Comment author: roland 15 March 2016 09:52:54PM 0 points

Eliezer Yudkowsky is AlphaGo.
