Comment author: rwallace 13 March 2012 12:59:37AM -2 points [-]

This used to be an interesting site for discussing rationality. It was bad enough when certain parties started spamming the discussion channel with woo-woo about the machine Rapture, but now we have a post openly advocating terrorism, and instead of being downvoted to oblivion, it becomes one of the most highly upvoted discussion posts, with a string of approving comments?

I think I'll stick to hanging out on sites where the standard of rationality is a little better. Ciao, folks.

Comment author: lsparrish 16 February 2012 02:04:12AM 7 points [-]

Why Life Extension is Immoral

Summary: Years of life are in finite supply. It is morally better that these be spread among relatively more people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.

The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacrificed to save a mature person (e.g. a mother), because early development represents a costly investment on the part of adults for which it is fair for them to expect a payoff (at least for adults who contribute to the rearing of offspring -- which could be indirect, etc.).

I think my rejection of the argument is that I don't think of future humans as objects of moral concern in quite the same respects as existing humans, even though they qualify in some ways. While I think future beings are entitled not to be tortured, I think they are not (at least not out of fairness with respect to existing humans) entitled to be brought into existence in the first place. Perhaps my reason for thinking this is that most humans who could exist do not, and many (e.g. those who would be in constant pain) probably should not.

On the other hand, I do think it is valuable for there to be people in the future, and this holds even if they can't be continuations of existing humans. (I would assign fairly high utility to a Star Trek kind of universe where all currently living humans are dead from old age or some other unstoppable cause but humanity is surviving.)

Comment author: rwallace 17 February 2012 02:31:02AM 8 points [-]

Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.

As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.

Comment author: moshez 07 February 2012 09:22:51PM 1 point [-]

It's not that costly if you do it with university students: get two groups of 4 university students. One group is told "test early and often". The other group is told "test after the code is integrated". For every bug they fix, measure the effort it takes to fix it (by having them "sign a clock" for every task they do). Then analyze when each bug was introduced (this seems easy once the bug has been fixed, especially if they use something like Trac and SVN). All it takes is a month-long project that a group of 4 software engineering students can do. It seems like any university with a software engineering department could run it as the project for a single course. Seems to me it's under $50K to fund?
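To make the analysis step concrete, here is a minimal sketch of how the logged data from such a course project might be summarized. It assumes a hypothetical CSV export of the students' time logs; the file name and column names are made up for the example, not taken from any real tool.

```python
# Hypothetical analysis sketch for the proposed student experiment.
# Assumes a made-up CSV file "bug_log.csv" with one row per fixed bug and columns:
#   bug_id, phase_introduced, phase_detected, fix_hours
import csv
from collections import defaultdict
from statistics import mean

def mean_fix_effort(path="bug_log.csv"):
    """Average logged fix effort, grouped by where each bug was introduced and detected."""
    buckets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["phase_introduced"], row["phase_detected"])
            buckets[key].append(float(row["fix_hours"]))
    return {key: mean(hours) for key, hours in buckets.items()}

if __name__ == "__main__":
    for (introduced, detected), avg in sorted(mean_fix_effort().items()):
        print(f"introduced: {introduced:12s} detected: {detected:12s} mean fix: {avg:.1f}h")
```

Comparing the two groups would then come down to comparing these averages (and their spread) between the "test early" and "test late" teams.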

Comment author: rwallace 07 February 2012 11:09:32PM 3 points [-]

That would be cheap and simple, but wouldn't give a meaningful answer for high-cost bugs, which don't manifest in such small projects. Furthermore, with only eight people total, individual ability differences would overwhelmingly dominate all the other factors.

Comment author: [deleted] 07 February 2012 08:21:03PM 0 points [-]

Could you give me some data or link? I would very much like to see it.

Comment author: rwallace 07 February 2012 09:09:12PM 0 points [-]

Sorry, I have long forgotten the relevant links.

Comment author: Polymeron 06 February 2012 10:07:30PM *  6 points [-]

This strikes me as particularly galling because I have in fact repeated this claim to someone new to the field. I think I prefaced it with "studies have conclusively shown...". Of course, it would have been unreasonable of me to suspect that what is being touted by so many as well-researched was not, in fact, so.

Mind, it seems to me that defects do follow both patterns: introducing defects earlier and/or fixing them later should come at a higher dollar cost; that just makes sense. However, it could be the same kind of "makes sense" that led Aristotle to conclude that heavy objects fall faster than light ones. Getting actual data would be much better than reasoning alone, especially as it would tell us just how much costlier, if at all, late fixes really are - an actual precise tool rather than a crude (and uncertain) rule of thumb.

I do have one nagging worry about this example: These days a lot of projects collect a lot of metrics. It seems dubious to me that no one has tried to replicate these results.

Comment author: rwallace 07 February 2012 09:08:36PM 1 point [-]

We know that late detection is sometimes much more expensive, simply because, depending on the domain, some bugs can do harm (letting bad data into the database, making your customers' credit card numbers accessible to the Russian Mafia, delivering a satellite to the bottom of the Atlantic instead of into orbit) that costs far more than fixing the code itself. So it's clear that, on average, cost does increase with time of detection. But are those high-profile disasters part of a smooth graph, or is it a step function, where the cost of fixing the code typically doesn't increase very much, but once bugs slip past final QA all the way into production, there is suddenly the opportunity for expensive harm to be done?

In my experience, the truth is closer to the latter than the former, so that instead of constantly pushing for everything to be done as early as possible, we would be better off focusing our efforts on e.g. better automatic verification to make sure potentially costly bugs are caught no later than final QA.
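To make the distinction concrete, here is a toy sketch of the two shapes. The phase names and multipliers are invented purely for illustration; they are not measured costs.

```python
# Toy illustration of the two hypotheses about cost vs. detection phase; all numbers are invented.
PHASES = ["design", "coding", "unit test", "integration", "final QA", "production"]

def smooth_cost(i, base=1.0, factor=2.0):
    """Smooth hypothesis: fix cost keeps multiplying with each phase of delay."""
    return base * factor ** i

def step_cost(i, base=1.0, production_penalty=100.0):
    """Step hypothesis: cost stays roughly flat until the bug escapes final QA,
    then jumps once real-world harm becomes possible."""
    return base + (production_penalty if PHASES[i] == "production" else 0.0)

for i, phase in enumerate(PHASES):
    print(f"{phase:12s} smooth: {smooth_cost(i):6.1f}   step: {step_cost(i):6.1f}")
```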

But obviously there is no easy way to measure this, particularly since the profile varies greatly across domains.

Comment author: NancyLebovitz 05 February 2012 09:32:48AM 1 point [-]

Anyone want to come up with a theory about why not bothering to get things right was optimal in the ancestral environment?

Comment author: rwallace 05 February 2012 05:02:38PM 11 points [-]

Because you couldn't. In the ancestral environment, there weren't any scientific journals where you could look up the original research. The only sources of knowledge were what you personally saw and what somebody told you. In the latter case, the informant could be bullshitting, but saying so might make enemies, so the optimal strategy would be to profess belief in what people told you unless they were already declared enemies, while basing your actions primarily on your own experience - which is roughly what people actually do.

Comment author: shminux 01 February 2012 06:24:30AM *  -3 points [-]

Why I think that the MWI is belief in belief: buy a lottery ticket, suicide if you lose (a version of the quantum suicide/immortality setup), thus creating an outcome pump for the subset of the branches where you survive (the only one that matters). Thus, if you subscribe to the MWI, this is one of the most rational ways to make money. So, if you need money and don't follow this strategy, you are either irrational or don't really believe what you say you do (most likely both).

(I'm not claiming that this is a novel idea, just bringing it up for discussion.)

Possible cop-out: "Oh, but my family will be so unhappy in all those other branches where I die." LCPW: say, no one really cares about you all that much, would you do it?
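For what it's worth, a toy restatement of the arithmetic behind the setup (the probabilities are arbitrary and this just makes the conditioning step explicit, not an endorsement):

```python
# Toy restatement of the quantum-suicide lottery setup; numbers are arbitrary.
p_win = 1e-8                 # made-up chance that a given ticket wins

# Without the suicide rule, the branch weight on "you are rich" is just p_win.
p_rich = p_win

# With the rule "die in every losing branch", the only branches in which you
# remain to observe anything are the winning ones, so conditional on survival
# you always won.
p_survive = p_win
p_rich_given_survival = p_rich / p_survive   # = 1.0

print(f"unconditional: {p_rich:.0e}   conditional on survival: {p_rich_given_survival:.0%}")
```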

Comment author: rwallace 01 February 2012 12:35:14PM 4 points [-]

That's not many worlds, that's quantum immortality. It's true that the latter depends on the former (or would if there weren't other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.

Comment author: MixedNuts 26 January 2012 08:22:14PM 3 points [-]

I haven't seen a discussion of the concept of intellectual property that did not include a remark to the effect of "Wait, whence the analogy between property of unique objects and control of easily copied information?".

Comment author: rwallace 26 January 2012 08:28:53PM 2 points [-]

True. The usual reply to that is "we need to reward the creators of information the same way we reward the creators of physical objects," and that was the position I had accepted until recently realizing that, while we certainly need to reward the creators of information, it need not be in the same way - by the same kind of mechanism - that we reward the creators of physical objects. (Probably not by coincidence, I grew up during the time of shrink-wrapped software, and only re-examined my position on this matter after that time had passed.)

Comment author: NihilCredo 26 January 2012 02:55:36PM 3 points [-]

there are better justified and less harmful ways to accomplish this than intellectual property law.

Such as?

Comment author: rwallace 26 January 2012 08:23:40PM 3 points [-]

To take my own field as an example, as one author remarked, "software is a service industry under the persistent delusion that it is a manufacturing industry." In truth, most software has always been paid for by people who had reasons other than the projected sale of licenses for wanting it to exist, but this was obscured for a couple of decades by shrinkwrap software, shipped on floppy disks or CDs, being the only part of the industry visible to the typical nonspecialist. But the age of shrinkwrap software is passing - outside entertainment, how often does the typical customer buy a program these days? - yet the industry is doing fine. We just don't need copyright law the way we thought we did.

Comment author: cousin_it 26 January 2012 12:25:53PM *  8 points [-]

A funny unrelated question that just occurred to me: how can one define property rights in a mathematical multiverse which isn't ultimately based on "matter"?

Comment author: rwallace 26 January 2012 08:14:44PM 0 points [-]

We can't. We can only sensibly define them in the physical universe which is based on matter, with its limitations of "only in one place at a time" and "wears out with use" that make exclusive ownership necessary in the first place. If we ever find a way to transcend the limits of matter, we can happily discard the notion of property altogether.
