Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to On Caring
Comment author: kilobug 09 October 2014 12:27:04PM 12 points [-]

Interesting article; it sounds like a very good introduction to scope insensitivity.

Two points where I disagree:

  1. I don't think birds are a good example of it, at least not for me. I don't care much about individual birds. I definitely wouldn't spend $3, nor any significant time, to save a single bird. I'm not a vegetarian, so it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird deaths, damage to natural reserves, threats to a whole species, ... So a massive die-off of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.

  2. I know it's quite taboo here, and most will disagree with me, but to me the answer to problems this big is not charity, even "efficient" charity (which seems a very good idea on paper, though I'm quite skeptical about its reliability in practice), but structural change - politics. I can't help noticing that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another person often labeled "especially virtuous", Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."

Comment author: kilobug 02 September 2014 09:48:23AM 6 points [-]

Interesting, but I have two more things to add:

  1. Both dolphins and octopuses seem to be a "dead end" for the purpose of technological civilization. The main reason, I would say, is that they are water-based, and being water-based makes early civilization much harder (tools are harder to make and use underwater, you can't make fire, ...).

  2. The evolution of dolphins and octopuses from our common ancestor isn't completely independent of our own evolution. They all depend on Earth being globally stable enough: gravity strong enough to hold the atmosphere (unlike Mars), a big Moon that stabilizes the climate, the Sun being roughly constant in heat output (it won't stay so for very much longer on cosmic timescales), the Earth being far from any novas, ...

So far, I think that's mostly where the so-called "Great Filter" lies: not in a single filter, but in the fact that evolving a technological civilization takes a lot of time, requires a lot of trial and error, can end up in many dead ends, and, to finally succeed, requires a very long period of stable conditions, which isn't that frequent.

If you take the last picture, I wouldn't draw a single great red line; I would draw many yellow lines (as there are), each adding a lot of time to the "average" development speed, with some very early factors (like a big Moon) influencing how hard some of those filters are. For a technological civilization to happen, you need the planet to stay stable enough until all the yellow filters are passed, and that's just very rare, because the planet will lose its atmosphere like Mars, or get blasted by a nearby nova, or its star will become too warm, or ...

Comment author: kilobug 30 August 2014 07:13:25AM 3 points [-]

I'm still highly skeptical of the existence of the "Great Filter". It's one possible explanation for "why don't we see any hint of anyone else's existence", but not the only one.

The most likely explanation to me is that intelligent life is just so damn rare. Life itself is probably frequent enough - we know there are a lot of exoplanets, many have the conditions for life, and life seems relatively simple. But intelligent life? It seems to me it required a great deal of luck to appear on Earth, and it seems somewhat likely that it's rare enough that we are alone not only in the galaxy, but in a large sphere around us. The universe is so vast that there probably is intelligent life elsewhere, but if we assume an AI can colonize at 10% of c, and the closest one is 100 million light years away and has existed for only 1 billion years, it hasn't reached us yet.
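A quick sanity check of that back-of-the-envelope claim (the distance, speed, and age figures are the same illustrative assumptions as in the paragraph, not real measurements):

```python
# Illustrative numbers only, matching the assumptions above.
distance_ly = 100e6       # assumed distance to the nearest civilization, in light years
speed_fraction_c = 0.1    # assumed colonization speed, as a fraction of c
age_years = 1e9           # assumed age of that civilization, in years

# Time to cross the distance: 100 million ly at 0.1c takes 1 billion years.
travel_time_years = distance_ly / speed_fraction_c
print(travel_time_years)

# The civilization's entire lifetime so far only just equals the crossing
# time, so its expansion wave would at best be arriving right about now.
print(travel_time_years >= age_years)
```

So under these assumptions the conclusion holds, though only barely: the travel time exactly matches the civilization's age.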

This whole "we compute how likely intelligent life is using numbers coming from nowhere, we don't detect any intelligence, so we conclude there is a Great Filter" seems like very fishy reasoning to me. Not detecting any intelligence should, first of all, make us revise down the probability of the hypothesis "intelligent life is frequent", before making us create new "epicycles" by postulating a Great Filter.

A few elements making it unlikely for intelligent life to exist frequently - and these are just a few:

  • life, especially technological civilization, requires lots of heavy elements, which didn't exist too early in the universe, meaning only stars of about the same generation as the Sun have a chance of hosting it;

  • it took 5 billion years after the planet formed for intelligent life to evolve on Earth, out of the 6 billion it has before the Sun becomes too hot and vaporizes its water;

  • the dinosaur phase shows how easy it was for evolution to settle into a local minimum that didn't include intelligence, and it took a great deal of luck to have a cataclysm powerful enough to throw it out of that local minimum without doing so much damage that it killed all complex life;

  • the Sun is lucky to be in a mostly isolated region, where very few nearby supernovas blast life on Earth; I don't think intelligent life could develop around a star too close to the galactic center - a single nova too close to it, and all complex life on Earth would be wiped out;

  • the Moon, which is unusual, seems to have played a major role in allowing intelligent life to appear, from stabilizing Earth's motion (and therefore its climate) to easing the transition from sea to land through tides.

Comment author: kilobug 18 August 2014 04:04:12PM 11 points [-]

Hmm, I'm skeptical about all that. It might be part of the evolutionary process behind humor, yes, but I don't think it really characterizes modern humor.

  1. You can very well laugh when you actually expect something bad. Take all the "hotline jokes" about people calling a hotline because their computer doesn't work and, after a while, admitting that they didn't plug in the cable: we (people working in IT) do expect that level of "lameness" from some users, and yet we still find them funny.

  2. There are many cases where this formula holds but doesn't generate humor. If someone bakes a cake for me and the cake isn't very good while I was expecting it to be, that would rarely lead to humor.

  3. Something that has nothing to do with failure or bad quality can also lead to humor. If, during a casual conversation, a friend suddenly started using very elaborate language, it would likely make me smile, even though there is no failure or lower-than-expected quality - in fact, it's precisely the higher-than-expected quality that produces the humor.

  4. Anxiety, like many other negative feelings (anger, tiredness, pain, ...), can make humor (and other positive feelings) harder, but it's not as clear-cut as you present it. Many people (myself included, sometimes) actually use humor as a shield against anxiety. A friend of mine recently had to undergo surgery and was anxious; I made a few silly jokes, and while they didn't lower her anxiety much, they did help a bit and made her smile.

I honestly don't think humor can be summarized by such a simple formula. Humor is a very complicated cluster in thingspace, different people draw its boundaries differently, and lots of different things can contribute positively or negatively to it. Trying to summarize all of humor with a single formula seems like trying to summarize all of human values with a single explanation ("humans want wealth" or whatever) and then inventing ad-hoc, twisted justifications for people sacrificing themselves for a loved one, or for altruism, instead of acknowledging that human values are complicated, because "we are godshatter".

Comment author: kilobug 17 July 2014 07:56:14AM 16 points [-]

For those interested, Douglas Hofstadter (of the famous Gödel, Escher, Bach) recently wrote a book called Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, which develops the thesis that analogy is the core and fuel of thinking, and does it quite brilliantly.

I'm only halfway through the book, but so far I've liked it very much. The first part, on language, develops ideas somewhat similar to the "A Human's Guide to Words" sequence on Less Wrong, but from a quite different viewpoint, and the two complement each other well.

Comment author: Maybe_a 10 July 2014 09:23:21AM 7 points [-]

I don't care, because there's nothing I can do about it. The same applies to all large-scale problems, like national elections.

I do understand that this point of view creates a 'tragedy of the commons', but there's no way I can force millions of people to do my bidding on this or that.

I also don't make changes to my lifestyle, since I expect AGW effects to be dominated by socio-economic changes over the next half-century.

Comment author: kilobug 13 July 2014 07:55:14AM 0 points [-]

I think that's a common misconception that comes from not actually running the numbers. Individually, we have a very low chance of changing anything about large-scale problems, but the effect of changing anything about a large-scale problem is enormous. When dealing with a very small chance of a very major change, we can't just rely on our intuitions (which break down); we need to actually run the numbers.

And when that's done, as it was in this post, it says we should care: the order of magnitude of the changes is higher than the order of magnitude of our powerlessness.
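A minimal sketch of that kind of calculation, with entirely made-up numbers (both the pivotal probability and the number of people affected are assumptions for illustration only):

```python
# Toy expected-value calculation: tiny probability times huge impact.
p_pivotal = 1e-8           # assumed chance one person's action changes the outcome
people_affected = 1e8      # assumed number of people affected if it does

expected_impact = p_pivotal * people_affected
print(expected_impact)     # comparable to directly helping one person
```

The point is not the particular numbers, but that the product can be large even when the probability alone looks negligible.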

In response to Too good to be true
Comment author: kilobug 12 July 2014 02:17:33PM 5 points [-]

I don't think the "95% confidence" works that way. It's a lower bound: you never try to publish anything with lower than 95% confidence (and if you do, your publication is likely to be rejected), but you don't always settle for exactly 95% (2 sigma).

Hell, I play enough RPGs to know that rolling a 1 or a 20 on a d20 happens often enough ;) 95% is quite a low confidence level - really a minimum at which you can start working, not an optimum.

I'm not sure exactly how it goes in medicine, but in physics it's common to have studies at 3 sigma (99.7%) or higher. The detection of the Higgs boson at the LHC, for example, was done at 5 sigma (about one chance in 3.5 million of such a result arising by chance).
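For reference, the correspondence between sigma levels and tail probabilities can be computed from the normal distribution with just the standard library (a quick sketch; particle physics usually quotes the one-sided figure, which is where the "one in 3.5 million" for 5 sigma comes from):

```python
import math

def two_sided_p(sigma):
    # Probability of a standard normal landing beyond +/- sigma.
    return math.erfc(sigma / math.sqrt(2))

for s in (2, 3, 5):
    p = two_sided_p(s)
    print(f"{s} sigma: two-sided p = {p:.2e}, one-sided p = {p/2:.2e}")
# 2 sigma -> two-sided p ~ 4.6e-02 (the ~95% threshold)
# 3 sigma -> two-sided p ~ 2.7e-03 (the ~99.7% level)
# 5 sigma -> one-sided p ~ 2.9e-07 (about 1 in 3.5 million)
```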

Especially in a field with a high risk of the data being abused by ill-intentioned people, such as the "vaccines and autism" link, it would really surprise me if everyone just happily kept to 95% confidence and didn't aim for much higher.

Comment author: kilobug 06 July 2014 07:38:02AM 3 points [-]

You often run into this kind of problem when playing strategy games, especially 4X (civ-like) games, balancing developing your cities/bases against producing military units. The main difference from your "toy problem" is that in games N isn't fixed but probabilistic, which makes it much harder.

I often tend to spend most of the time developing my production capacity (as you said, it's the most efficient thing to do with a fixed N), but sometimes I overdo it and get caught unprepared by an attack...

Comment author: kilobug 04 July 2014 07:44:31AM 5 points [-]

First, thanks Kaj for doing your best in a complicated situation. I'm an op on some IRC channels, and I know how difficult it is to make such decisions.

I don't think the ban was a mistake as a penalty (nothing prevents Eugine from creating another account, so it's not that harsh a penalty), but I do think it doesn't solve the main problem. The most important remediation would be to undo all of Eugine's mass downvotes - and if that's not easily possible, all of Eugine's votes. Any chance of that happening?

Comment author: kilobug 23 June 2014 02:24:52PM 4 points [-]

Relativity also implies the lack of an absolute time - it doesn't make sense to speak of "before" or "faster" or "now" in an absolute sense. What matters for the pleasure/pain of a sentient entity is relative to its frame of reference, its subjective time.

And while observers in different frames will disagree on "what time is it?", they will agree on the subjective experience each person has. And the only way to "sum" pain/happiness between different inertial frames is to consider the moment when they can mutually agree on something - when a signal from one can reach the other.

If person A is suffering on Earth and person B is happy on a spaceship, it only makes sense to sum the suffering of one with the happiness of the other once a signal from Earth to the ship (or vice versa) can reach its destination, and if you do the full calculation you'll find that the predictions of the two frames are the same.

The only point where this gets tricky is if the ship goes beyond our event horizon - if, due to the expanding universe, it reaches the point where it can no longer signal Earth because it's receding faster than light. There, I have to admit, it ties my head in knots.
