
Comment author: kilobug 18 August 2014 04:04:12PM 8 points [-]

Hmm, I'm skeptical about all that. It might be part of the evolutionary process behind humor, yes, but I don't think it really describes modern humor.

  1. You can very well laugh when you actually expect something bad. Take all the "hotline jokes" about people calling a support hotline because their computer doesn't work and, after a while, admitting that they never plugged in the cable: we (people working in IT) do expect that level of "lameness" from some users, and yet we still find the jokes funny.

  2. There are many cases where this formula holds but doesn't generate humor. If someone bakes a cake for me and it turns out not very good when I was expecting it to be, that rarely leads to humor.

  3. Something that has nothing to do with failure or bad quality can also produce humor. If, during a casual conversation, a friend suddenly started using very elaborate language, it would likely make me smile, even though there is no failure or lower-than-expected quality; in fact, it's the higher-than-expected quality that makes the humor arise.

  4. Anxiety, like many other negative feelings (anger, tiredness, pain, ...), can make humor (and other positive feelings) harder, but it's not as clear-cut as you present it. Many people (myself included, sometimes) actually use humor as a shield against anxiety. A friend of mine recently had to undergo surgery and was anxious; I made a few silly jokes, and while they didn't lower her anxiety by much, they did help a bit and made her smile.

I honestly don't think humor can be summarized by such a simple formula. Humor is a very complicated cluster in thingspace, different people draw its boundaries differently, and lots of different things can contribute positively or negatively to it. Trying to capture all of humor in a single formula seems like trying to capture all of human values in a single explanation ("humans want wealth" or whatever) and then inventing ad-hoc, twisted justifications for people sacrificing themselves for a loved one or for altruism, instead of acknowledging that human values are complicated, because "we are godshatter".

Comment author: kilobug 17 July 2014 07:56:14AM 17 points [-]

For those interested, Douglas Hofstadter (of Gödel, Escher, Bach fame) recently wrote a book called Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, which develops the thesis that analogy is the core and fuel of thinking, and does so quite brilliantly.

I'm only halfway through the book, but so far I like it very much. The first part on language, for example, develops ideas somewhat similar to the "A Human's Guide to Words" sequence on Less Wrong, but from quite a different viewpoint, and the two complement each other well.

Comment author: Maybe_a 10 July 2014 09:23:21AM 7 points [-]

I don't care, because there's nothing I can do about it. The same applies to all large-scale problems, like national elections.

I do understand that this point of view creates a 'tragedy of the commons', but there's no way I can force millions of people to do my bidding on this or that.

I also don't make changes to my lifestyle, since I expect AGW effects to be dominated by socio-economic changes over the next half-century.

Comment author: kilobug 13 July 2014 07:55:14AM 1 point [-]

I think that's a common misconception that comes from not actually running the numbers. We individually have a very low chance of changing anything about large-scale problems, but the effect of changing anything in a large-scale problem is enormous. When dealing with a very small chance of a very major change, we can't just use our intuitions (which break down); we need to actually run the numbers.

And when that's done, as it was in this post, the answer is that we should care: the order of magnitude of the changes is higher than the order of magnitude of our powerlessness.
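A minimal back-of-the-envelope sketch of that kind of calculation, with entirely made-up illustrative numbers (none of them come from the post):

```python
# Hypothetical numbers, purely to illustrate the expected-value logic.
p_influence = 1e-8        # assumed chance that my individual action shifts the outcome
value_at_stake = 1e12     # assumed size of the harm averted if the outcome shifts
personal_cost = 1e2       # assumed cost to me of acting

expected_benefit = p_influence * value_at_stake
print(expected_benefit, personal_cost)   # 10000.0 vs 100.0: acting still comes out ahead
```

The point is only that a tiny probability times a huge stake can easily outweigh a small personal cost; whether it actually does depends on the real numbers.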

In response to Too good to be true
Comment author: kilobug 12 July 2014 02:17:33PM 5 points [-]

I don't think the "95% confidence" works that way. It's a lower bound: you never try to publish anything with less than 95% confidence (and if you do, your publication is likely to be rejected), but you don't always need to have exactly 95% (about 2 sigma).

Hell, I play enough RPGs to know how often you roll a 1 or a 20 on a d20 ;) 95% is quite a low confidence level; it's really a minimum at which you can start working, not something optimal.

I'm not sure exactly how it works in medicine, but in physics it's common to have results at 3 sigma (99.7%) or higher. The detection of the Higgs boson at the LHC, for example, was announced at 5 sigma (less than one chance in a million of being a statistical fluke).
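For a rough sense of how fast the corresponding p-values shrink, here's a small sketch using SciPy's normal distribution (two-sided values; particle physics usually quotes the one-sided figure for discoveries):

```python
from scipy.stats import norm

# Two-sided probability of a fluke at least this extreme, for common sigma levels.
for sigma in (2, 3, 5):
    p = 2 * norm.sf(sigma)   # sf = survival function = 1 - CDF
    print(f"{sigma} sigma: p ≈ {p:.1e}")
# 2 sigma: p ≈ 4.6e-02, 3 sigma: p ≈ 2.7e-03, 5 sigma: p ≈ 5.7e-07
```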

Especially in a field with a high risk of the data being abused by ill-intentioned people, such as the "vaccines and autism" link, it would really surprise me if everyone just stuck happily with 95% confidence and didn't aim for something much higher.

Comment author: kilobug 06 July 2014 07:38:02AM 4 points [-]

You often get this kind of problem when playing strategy games, especially 4X (civ-like) games, in the tradeoff between developing your cities/bases and producing military units. The main difference from your "toy problem" is that in games N isn't fixed but probabilistic, which makes it much harder.

I tend to spend most of the time developing my production capacity (as you said, that's the most efficient thing to do with a fixed N), but sometimes I overdo it and get caught unprepared by an attack...
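A toy simulation of that difference (my own simplified model, not the post's exact setup): grow your production until some switch turn, then build units with whatever output you have, and see how late you can afford to switch when the attack turn is known versus random.

```python
import random

def units_at_attack(switch_turn, attack_turn):
    """Grow production until switch_turn, then build units until the attack lands."""
    production, units = 1, 0
    for turn in range(attack_turn):
        if turn < switch_turn:
            production += 1          # develop the economy
        else:
            units += production      # build military with current output
    return units

THRESHOLD = 30   # hypothetical army size needed to survive the attack

# Fixed, known attack turn: you can safely switch quite late.
latest_safe_fixed = max(s for s in range(20) if units_at_attack(s, 20) >= THRESHOLD)

# Random attack turn: find the latest switch that still survives ~95% of the time.
def survival_rate(switch_turn, samples=5000):
    return sum(units_at_attack(switch_turn, random.randint(12, 28)) >= THRESHOLD
               for _ in range(samples)) / samples

latest_safe_random = max(s for s in range(20) if survival_rate(s) >= 0.95)
print(latest_safe_fixed, latest_safe_random)   # uncertainty forces you to switch earlier
```

With these made-up numbers the known attack turn lets you keep growing until around turn 18, while the uncertain one forces the switch back to around turn 9, which matches the "caught unprepared" feeling.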

Comment author: kilobug 04 July 2014 07:44:31AM 6 points [-]

First, thanks Kaj for doing your best in a complicated situation. I'm an op on some IRC channels, so I also know how difficult it is to make such decisions.

I don't think the ban was a mistake as a penalty (nothing prevents Eugine from creating another account, so it's not that harsh a penalty), but I do think it doesn't solve the main problem. The most important remedy would be to undo all of Eugine's mass downvotes, or, if that isn't easily possible, all of Eugine's votes. Any chance of that happening?

Comment author: kilobug 23 June 2014 02:24:52PM 5 points [-]

Relativity also implies the lack of absolute time: it doesn't make sense to speak of "before" or "faster" or "now" in absolute terms. What matters for the pleasure/pain of a sentient entity is relative to its frame of reference, its subjective time.

And while observers in different frames will disagree on "what time is it?", they will agree on the subjective experience each person has. And the only way to "sum" pain/happiness across different inertial frames is to consider when they can mutually agree on something: when a signal from one can reach the other.

If person A is on Earth suffering and person B is on a spaceship being happy, it only makes sense to sum the suffering of one with the happiness of the other when a signal from Earth to the ship (or vice versa) can reach its destination, and you'll find that, doing all the calculations, the predictions of the two frames are the same.
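A minimal numerical sketch of that agreement (my own toy setup, not from the post): send a light signal from Earth to the ship, compute the emission and arrival events in the Earth frame, then Lorentz-transform them into the ship's frame and check that the ship's frame tells the same physical story.

```python
import math

c = 1.0                    # units where the speed of light is 1
v = 0.6 * c                # assumed ship speed
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

# Earth frame: ship passes Earth at t = 0, signal is emitted from Earth at t0.
t0 = 10.0
t_arrival = c * t0 / (c - v)        # when the light signal catches up with the ship
x_arrival = v * t_arrival

def to_ship_frame(t, x):
    """Lorentz transform of an event into the ship's rest frame."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t_e, x_e = to_ship_frame(t0, 0.0)                # emission event
t_a, x_a = to_ship_frame(t_arrival, x_arrival)   # arrival event

# Both frames agree on the physics: the signal reaches the ship (x ≈ 0 in its
# own frame) and it travelled at exactly c in that frame too.
print(abs(x_a) < 1e-9, math.isclose(x_a - x_e, c * (t_a - t_e)))
```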

The only point where this gets tricky is if the ship goes beyond our cosmological event horizon: if, due to the expanding universe, it reaches the point where signals can no longer reach Earth because it's receding faster than light. There, I have to admit, it ties my head in knots.

Comment author: kilobug 11 June 2014 03:52:20PM 5 points [-]

I think part of the solution lies in how you ask the question. It may feel a bit silly for rationalists like us to put such importance on form rather than content, but for a lot of people (and even, I have to admit, for myself to a point), it's very easy to fall into "group 1" or "group 2" depending on how the question is formulated. It can come across either as genuine interest and a desire for details, or as an aggressive (trust-questioning) move, depending on the exact wording.

As for the main issue, rich people paying lower taxes: when that's the case, it's often because rich people have ways to exploit various loopholes in the law to avoid paying part of their taxes, or it's a negative side effect of tax incentives. For example, here in France you can deduct from your taxes part of the money you spend improving the thermal insulation of your primary home (to encourage energy savings), and the combination of many similar schemes makes it possible, at the end of the year, for upper-class or upper-middle-class people to pay less income tax (even though the base rate is highly progressive) than lower-middle-class people (really poor people don't pay any).

Comment author: kilobug 08 June 2014 08:39:30AM 10 points [-]

My own view is:

  1. Mass downvoting of most or all of what a user wrote, regardless of content, defeats the purpose of the karma/score system and is therefore harmful to the community.

  2. Mass downvoting is rude and painful for the target, and therefore is harmful to the community.

So we should have an official policy forbidding it. For the current case, I would support first using option 1 (it's always good to ask for the reasons behind an act before taking coercive action), and then applying any of options 2, 4 and 5 depending on the answer (or lack of one).

Comment author: V_V 14 May 2014 09:12:59PM *  2 points [-]

A personal or privately operated self-driving car should probably minimize the passenger's travel time, as this probably best aligns with the customer's and, in a reasonably competitive market, the manufacturer's interests.
The crash case is more complicated because there are ethical and legal liability issues.

Comment author: kilobug 15 May 2014 08:17:36AM 8 points [-]

I think there is some confusion going on here. "Should" refers to what is ethical, what would be the best option, and I don't see how the manufacturer's interests really matter for that. Self-driving cars should cooperate with each other in the various prisoner's dilemmas they face, not defect against each other, and more generally they should behave in a way that smooths traffic globally (which, over a year, would mean less travel time for everyone if all cars do so), not behave selfishly and minimize only their own passenger's travel time.
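A toy payoff matrix (my own made-up travel times, purely illustrative) showing the prisoner's-dilemma structure:

```python
# Hypothetical travel times in minutes (lower is better) for two self-driving
# cars that can either "cooperate" (yield, merge smoothly) or "defect"
# (selfishly minimise their own passenger's time).
travel_time = {
    ("cooperate", "cooperate"): (10, 10),
    ("cooperate", "defect"):    (14, 8),
    ("defect",    "cooperate"): (8, 14),
    ("defect",    "defect"):    (13, 13),
}

# Defecting is individually tempting whatever the other car does (8 < 10 and
# 13 < 14), so defect/defect is the Nash equilibrium, yet mutual cooperation
# gives everyone a shorter trip (10 < 13) and the lowest total travel time.
for moves, times in travel_time.items():
    print(moves, times, "total:", sum(times))
```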

Now, in a competitive market, given the manufacturers' interests, it is indeed unlikely they would do so. But that is different from should. It's a case of a pure market leading to a suboptimal solution (as often happens with Nash equilibria), but there might be ways to fix it, either by manufacturers negotiating with each other outside the market channel to implement more globally efficient algorithms (as many standards bodies do), or by the state imposing it on them (like the EU imposing the same charger for all cell phones).

Of course there are drawbacks and potential pitfalls with all those solutions, but that's a separate matter from the should issue.
