Comment author: twanvl 20 February 2015 10:40:58PM 7 points [-]

But alas, I fear that Professor Riddle would not have found lasting happiness in Hogwarts."

"Why not?"

"Because I still would've been surrounded by idiots, and I wouldn't have been able to kill them," Professor Quirrell said mildly.

The solution seems obvious (albeit hard and dangerous): make the students smarter so they are no longer idiots.

Comment author: twanvl 17 February 2015 11:00:06AM 5 points [-]

I cannot be truly killed by any power known to me, and lossing Sstone will not sstop me from returning, nor sspare you or yourss my wrath. Any impetuous act you are contemplating cannot win the game for you, boy.

The last sentence was not said in parseltongue. Could it be that Quirrell used English because it is a lie, and he believes that there is something that Harry could do to win?

Comment author: sixes_and_sevens 19 January 2015 10:50:57AM 8 points [-]

Tell us about your feed reader of choice.

I've been using Feedly since Google Reader went away, and it has enough faults (buggy interface, terrible bookmarking, an awkward phone app that needs to be online all the time) to motivate me towards a new one. Any recommendations?

Comment author: twanvl 19 January 2015 05:46:47PM 2 points [-]

I switched to The Old Reader, which, as the name suggests, is pretty close to Google Reader in functionality.

Comment author: Alsadius 06 January 2015 02:25:34AM *  8 points [-]

Imagine we're looking at an obscure group called the Axiom of Choice which has the following policy agenda and weights them with the following relative importances:

  • Anthropogenic global warming is real and dangerous: weight 20
  • Jesus is the reason for the season: weight 30
  • Farms should be collectively owned: weight 15
  • Monopoly should be played with no house rules: weight 12
  • The Beatles are overrated: weight 20
  • Yogic flying is real: weight 3

If I'm on board with everything but the yogic flying, then I'd refer to myself as being 97% in agreement with them.

Now, of course, exact numbers of this sort are practically impossible to come up with. But with sufficient interaction with a group, you can get a sense of what they're about and what's really important to them, and come up with a fairly decent approximation. (Then of course you halve the distance between that and 100%, to add rhetorical weight to your discussion of your differences with them)
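The arithmetic above can be made explicit. A minimal sketch, using the made-up weights from the hypothetical "Axiom of Choice" example (the position names and weights are the ones listed above, nothing more):

```python
# Weighted agreement with the hypothetical "Axiom of Choice" group.
# Positions and weights are taken from the example above.
weights = {
    'global warming is real and dangerous': 20,
    'Jesus is the reason for the season': 30,
    'farms should be collectively owned': 15,
    'Monopoly with no house rules': 12,
    'the Beatles are overrated': 20,
    'yogic flying is real': 3,
}
disagreements = {'yogic flying is real'}

# Agreement = share of total weight on positions you agree with.
agreement = 100 * sum(w for p, w in weights.items()
                      if p not in disagreements) / sum(weights.values())
print(agreement)  # 97.0

# The rhetorical "halve the distance to 100%" adjustment:
print((agreement + 100) / 2)  # 98.5
```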

Comment author: twanvl 06 January 2015 11:13:48AM 0 points [-]

You can measure this by looking at the spoken or written works of the group. When talking to an Axiomist of Choice, you would on average agree 97% of the time with what they are saying, since the other 3% of the time they would be talking about yogic flying.

Of course in real life people also make a lot of smalltalk, which is probably not ideological at all. This is less of an issue when looking at writing.

Comment author: erratio 23 October 2014 08:45:00PM *  2 points [-]

I take 10 000 units of Vit D each day. Partly because I'm a pasty nerd who never goes out and partly because large doses are anecdotally helpful for mood.

I take around 1.5mg of melatonin each night. Would have preferred 1 or less but it's too difficult to find them in smaller quantities so I make do with halving 3mg tablets. When I take them I find it significantly easier to get to sleep.

Comment author: twanvl 26 October 2014 05:41:41PM 1 point [-]

Wikipedia lists the safe upper limit of vitamin D as 4000 IU (100 µg), so taking 10,000 IU could be unhealthy.

Comment author: Metus 05 September 2014 11:58:08AM 6 points [-]

A good portion of LessWrong is unreadable for me as it is based on some kind of altruistic axiom. Personally, I care about myself, my immediate family and a few friends. I will feel a pang of suffering when I see people suffering but I do not feel that pang when I hear about people I don't know suffering, so I conclude that I don't care about other people beyond some abstract measure of proximity and their economic utility for me.

Comment author: twanvl 08 September 2014 10:58:33AM 1 point [-]

If everyone (or just most people) think like you, then seeing people suffer makes them suffer as well. And that makes their friends suffer, and so on. So, by transitivity, you should expect to suffer at least a little bit when people who you don't know directly are suffering.

But I don't think it is about the feeling. I also don't really feel anything when I hear about some number of people dying in a far away place. Still, I believe that the world would be a better place if people were not dying there. If I am in a position to help people, I believe that in the long run the result is better if I just shut up and multiply and help many far away people, rather than caring mostly about a few friends and neighbors.

Comment author: zslastman 31 August 2014 09:29:35AM *  14 points [-]

Agree. The road from the creation of life to the creation of any nervous system at all is an extremely long and fraught one.

Life on our planet has a very specific chemistry. It's possible that almost all possible chemistries limit complexity more than ours does, leading to many planets of very simple organisms. A very large number of phyla on Earth reach evolutionary dead ends: both archaea and bacteria are stuck as single-celled organisms (or very simple aggregates), plants cannot develop movement because of their cell walls, and insects cannot grow bigger because their lungs and exoskeletons do not scale upwards.

Genetics is an entire optimization layer underlying our own, neural one. I think the fact that it had to throw up an entire new, viable optimization layer represents a filter.

Comment author: twanvl 04 September 2014 08:55:15AM 8 points [-]

This is another good explanation instead of / in addition to the Great Filter.

It could be that there are many local optima to life, that are hard to escape. And that intelligence requires an unlikely local optimum. This functions like an early Great Filter, but in addition, failing this filter (by going to a bad local optimum) might make it impossible to start over.

For example, you could imagine that it were possible to evolve a gray goo like organism which eats everything else, but which is very robust to mutations, so it doesn't evolve further.

Comment author: James_Miller 22 July 2014 09:40:56PM *  1 point [-]

In a classical game all the players move simultaneously.

I'm not sure what you mean by "classical game" but my game is not a simultaneous move game. Many sequential move games do not have equivalent simultaneous move versions.

"I hope you agree that the fact that player 2 gets to make a (useless) move in the case that player 1 chooses A doesn't change the fundamentals of the game."

I do not agree. Consider these payoffs for the same game:

  • (A, –): 3, 0 [And Player Two never got to move.]
  • (B, X): 2, 10000
  • (B, Y): 2, 2
  • (C, X): 0, 1
  • (C, Y): 4, 4

Now although Player 1 will never pick A, its existence is really important to the outcome, because it convinces Player 2 that, if he gets to move, C has been played.

Comment author: twanvl 23 July 2014 09:29:12AM 0 points [-]

I do not agree. Consider these payoffs for the same game: ...

Different payoffs imply a different game. But even in this different game, the simultaneous move version would be equivalent. With regard to choosing between X and Y, the existence of choice A still doesn't matter, because if player 1 chose A, X and Y have the same payoff. The only difference is how much player 2 knows about what player 1 did, and therefore how much player 2 knows about the payoff he can expect. But that doesn't affect his strategy or the payoff that he gets in the end.

Comment author: James_Miller 22 July 2014 03:01:44AM 0 points [-]

Could you explain what you mean? What uncertainty is there?

If Player 2 gets to move he is uncertain as to what Player 1 did. He might have a different probability estimate in the game I gave than one in which strategy A did not exist, or one in which he is told what Player 1 did.

I'm not convinced that the game has any equilibrium unless you allow for trembling hands. For A,A to be an equilibrium you have to tell me what belief Player 2 would have if he got to move, or tell me that Player 1's belief about Player 2's belief can't affect the game.

Comment author: twanvl 22 July 2014 10:08:18AM 0 points [-]

If Player 2 gets to move he is uncertain as to what Player 1 did. He might have a different probability estimate in the game I gave than one in which strategy A did not exist, or one in which he is told what Player 1 did.

In a classical game all the players move simultaneously. So to repeat, your game is:

  • player 1 chooses A, B or C
  • then, player 2 is told whether player 1 chose B or C, and in that case he chooses X or Y
  • payoffs are (A,–) → (3,0); (B,X) → (2,0); (B,Y) → (2,2); (C,X) → (0,1); (C,Y) → (6,0)

The classical game equivalent is:

  • player 1 chooses A, B or C
  • without being told the choice of player 1, player 2 chooses X or Y
  • payoffs are as before, with (A,X) → (3,0); (A,Y) → (3,0)

I hope you agree that the fact that player 2 gets to make a (useless) move in the case that player 1 chooses A doesn't change the fundamentals of the game.

In this classical game player 2 also has less information before making his move. In particular, player 2 is not told whether or not player 1 chose A. But this information is completely irrelevant for player 2's strategy, since if player 1 chooses A there is nothing that player 2 can do with that information.

I'm not convinced that the game has any equilibrium unless you allow for trembling hands.

If the players choose (A,X), then the payoff is (3,0). Changing his choice to B or C will not improve the payoff for player 1, and switching to Y doesn't improve the payoff for player 2. Therefore this is a Nash equilibrium. It is not stable, since player 2 can switch to Y without getting a worse payoff.
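The equilibrium check in the paragraph above can be brute-forced over all pure-strategy profiles. A minimal sketch, assuming the payoff pairs transcribed in the comment above ((A,·) → (3,0); (B,X) → (2,0); (B,Y) → (2,2); (C,X) → (0,1); (C,Y) → (6,0)):

```python
# Payoff table for the simultaneous-move game described above.
# Each entry maps (player 1's move, player 2's move) to (u1, u2).
payoffs = {
    ('A', 'X'): (3, 0), ('A', 'Y'): (3, 0),
    ('B', 'X'): (2, 0), ('B', 'Y'): (2, 2),
    ('C', 'X'): (0, 1), ('C', 'Y'): (6, 0),
}

def is_nash(p1, p2):
    """A profile is a Nash equilibrium if neither player can gain
    by unilaterally deviating."""
    u1, u2 = payoffs[(p1, p2)]
    if any(payoffs[(d, p2)][0] > u1 for d in 'ABC'):
        return False
    if any(payoffs[(p1, d)][1] > u2 for d in 'XY'):
        return False
    return True

equilibria = [(p1, p2) for p1 in 'ABC' for p2 in 'XY' if is_nash(p1, p2)]
print(equilibria)  # [('A', 'X')]
```

Note that (A,Y) fails the check: with player 2 committed to Y, player 1 would deviate to C for a payoff of 6, which matches the "not stable" observation above.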

Comment author: James_Miller 20 July 2014 09:04:32PM 2 points [-]

It's not equivalent because of the uncertainty.

Also, even if it were, lots of games have Nash equilibria that are not reasonable solutions, so saying "this is a Nash equilibrium" doesn't mean you have found a good solution. For example, consider the simultaneous move game where we each pick A or B. If we both pick B we both get 1. If anything else happens we both get 0. Both of us picking A is a Nash equilibrium, but is also clearly unreasonable.

Comment author: twanvl 21 July 2014 10:51:43AM 0 points [-]

It's not equivalent because of the uncertainty.

Could you explain what you mean? What uncertainty is there?

Also, even if it were, lots of games have Nash equilibria that are not reasonable solutions so saying "this is a Nash equilibrium" doesn't mean you have found a good solution.

For example, consider the simultaneous move game where we each pick A or B. If we both pick B we both get 1. If anything else happens we both get 0. Both of us picking A is a Nash equilibrium, but is also clearly unreasonable.

This game has two equilibria: a bad one at (A,A) and a good one at (B,B). The game from this post also has two equilibria, but both involve player one picking A, in which case it doesn't matter what player two does (or in your version, he doesn't get to do anything).
