The previous open thread has already exceeded 300 comments – new Open Thread posts should be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Open Thread: April 2010, Part 2

The wonderful Ben Goldacre shares a fantastic 1971 story from the "king of evidence based medicine" Archie Cochrane.

It’s 1971, he’s part way through a randomised trial comparing Coronary Care Units against home care, and the time has come to share some results with the cardiologists.

I am not asking you to appreciate the results: this was a long time ago, and the findings will not be generalisable to modern CCU’s.

I am inviting you to appreciate the mischief.

The results at that stage showed a slight numerical advantage for those who had been treated at home. I rather wickedly compiled two reports: one reversing the number of deaths on the two sides of the trial. As we were going into the committee, in the anteroom, I showed some cardiologists the results. They were vociferous in their abuse: “Archie,” they said “we always thought you were unethical. You must stop this trial at once.”

I let them have their say for some time, then apologized and gave them the true results, challenging them to say as vehemently, that coronary care units should be stopped immediately. There was dead silence and I felt rather sick because they were, after all, my medical colleagues.

I'm taking part in the SIAI Visiting Fellows program, and have been keeping a diary of the trip. If anyone's interested in the details of what people actually do in the program, the two most recent entries contain some stuff.

0Matt_Simpson
Very interesting. This makes me want to do the program even more. I look forward to times when I can just pursue whatever interests me (intellectually) at the moment instead of focusing on coursework.
0andreas
Thanks! Please keep on posting, this is interesting.

http://vigilantcitizen.com/?p=3563

When will anti-transhumanism become a serious political issue?

1Kevin
When genetic engineering becomes practical
1arundelo
What a cool Black Eyed Peas video! Edit: Oh, and what a weird essay.
0RobinZ
When the population of transhumanists become a prominent demographic.
2Jack
Nah, transhumanists are weird enough it will happen before then.
2RobinZ
We might have different definitions of "serious political issue" and "prominent demographic" - I'm talking about the level at which candidates for political office make demonizing you a part of their campaign.
2Jack
My prediction is that the demonization will begin long before transhumanists have the popularity, clout or resources to alter the established order in a significant way. Atheists are probably the best parallel. Edit: How would you define prominent?
2RobinZ
Atheists became prominent (again) in the United States around the time that The End of Faith came out and became popular. I think the Stonewall riots brought homosexuals into public prominence in the United States, but I have a poor grasp of history.

Non Sequitur presents The Bottom Line literally.

ETA: Reposted to the Bottom Line thread, for better future findability.

Just an idea: what about putting a "number of votes" next to the "vote total" score for posts and comments? That would distinguish cases where a subject was highly controversial from those where no one really cares.

0RobinZ
That would be nice - better than raw #+/#-, actually, because it immediately gives you the score. Are there any programmers listening?
2Morendil
Yep. I'm wondering how this should be formatted - something like 0 (2) maybe? The implementation looks relatively straightforward from what I've already seen of the code. But while working on other changes, namely an integrated Anti-Kibitz script that works under IE, I have discovered that it's non-trivial to write unit tests for things like how a single comment is rendered. The design of the Reddit codebase has some rough spots, like the use of globals for HTML rendering. It's the sort of thing that could be done without tests but that I'd hate to do without tests because that would be adding to a technical debt which has already grown into the danger zone. That takes it from straightforward to moderately hard.
7AlanCrowe
Format suggestion: 12 - 5 = 7 points
0RobinZ
I would say "Score: 2/6" (or whatever the numbers come to). I wish I could help with the rest of it.
4wedrifid
Misleading. "0/6" sounds far worse than it is, given that it implies a simple positive fraction when plus or minus 6 was the actual limit.
0RobinZ
You're right - AlanCrowe's proposal is much better.
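For concreteness, here is a minimal sketch of the two display formats proposed above (AlanCrowe's and Morendil's), in Python. This is illustrative only, not the actual Reddit/LW rendering code, and the function names are made up:

```python
def format_score_verbose(up, down):
    # AlanCrowe's suggestion: "12 - 5 = 7 points"
    return "%d - %d = %d points" % (up, down, up - down)

def format_score_compact(up, down):
    # Morendil's suggestion: net score with total vote count, e.g. "0 (2)"
    return "%d (%d)" % (up - down, up + down)

print(format_score_verbose(12, 5))  # 12 - 5 = 7 points
print(format_score_compact(1, 1))   # 0 (2)
```

Either way, the underlying data needed is the same: upvote and downvote counts rather than just their difference.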

String theory derives entropy for astrophysical black holes. Some references here.

For physics, I think this news is of fundamental significance. This is a huge step towards describing the real world in terms of string theory. The backstory is that almost 40 years ago, Bekenstein and Hawking came up with a formula for black hole entropy, but it was based on macroscopic behavior (like the Hawking temperature) and not on a counting of microscopic states. In the mid-90s you had the first microscopic derivation of black hole entropy in string theory, but it wa... (read more)

0RobinZ
That is excellent news!
5Mitchell_Porter
Actually, it's even better than I realized when I posted that comment. I hadn't yet grasped the "F-theory" model building program in string theory, which is about two years old. I've been studying that lately, and it was mentally apocalyptic to realize how all the details of the standard model could be expressed as a configuration of branes in hyperspace. The morning after, life went on, and I still have heaps to learn, but there's no turning back after an experience like that.

A father talks cryonics with his two daughters.

The always interesting Eric Falkenstein on Risk Taking.

Risk taking, I argue, is uncompensated on average. There is no simple form of risk taking such that, if you can tie yourself to some intellectual mast and bear this psychic pain you should expect a higher return. There is a mistaken syllogism at the bottom of portfolio theory, as just because you have to take risk to get rich, or if you take risk you might get rich, this does not mean if you take risk you will become richer on average.

Any tips on efficiently gathering information on controversial, non-technical subjects, such as "how to raise your kids" or "pros and cons of spanking your kids"? (Those are relatively good examples because a lot of people have strong opinions on them.)

I usually look on Wikipedia first, but while it's good at giving a basic overview of a question, it's quite bad at presenting evidence in a properly organized way (I learnt first hand that improving a controversial article is hard).

Research papers are more rigorous and more likely to conta... (read more)

6wedrifid
It is a difficult question to answer. I can point to various studies but I must keep in mind that those that made me aware of such studies are not necessarily unbiased. That depends. You need to know just what you want your kids to be like. And that has to be what you really want your kids to be like, not what it sounds good to say you want your kids to be like. For example, spanking kids will make them more likely to be physically aggressive. But this may well benefit them in the long run, teaching them tactics for maintaining higher status and so improving their health and happiness. There is a clear negative correlation between childhood spanking and IQ. But given that your child's genetic heritage is already determined, your own choice of behaviour quite possibly has no causal influence on the IQ outcome. Low IQ parents are more likely to be physically (rather than verbally or socially) aggressive and also more likely to pass on genes for low IQ so causal influence from the spanking is doubtful.
2jimmy
I'm also interested in hearing other people's tricks, but I'll share mine. The first thing I'd do is to check LW to see if I got lucky (google "spanking kids site:lesswrong.com", for example). I don't really have any good tricks for finding good sources, but you might want to try adding some related technical words in your search to filter your results towards smarter people. Once I find a source, the main thing I look for is "does this person understand the opposing arguments?" If they say something that suggests that they don't understand the idea that different forms of disutility might be interchangeable, or if they ever take the "it's bad because it's wrong!" stance, then I'll move on.
6ata

I noticed an apparently self-defeating aspect of the Boltzmann brain scenario.

Let's say I do find the Boltzmann brain scenario to be likely (specifically, that I find it likely that I myself am a Boltzmann brain), based on my knowledge of the laws of physics. Then my knowledge of the laws of physics is based on the perceptions and memories that I, as a Boltzmann brain, am arbitrarily hallucinating... in which case there is no reason for me to believe that the real universe (that is, whichever one houses the actual physical substrate of my mind) runs on tho... (read more)

2Jack
Let me see if I can formalize. This might not be quite what you had in mind, but I think it will be similar. For clarity we can reduce the possible worlds to two: either there are many, many more Boltzmann brains than human brains (H1), or there are few if any Boltzmann brains (H2). In H2, approximately everyone who learns of the Boltzmann brain hypothesis (and the evidence in favor) is not a Boltzmann brain. In H1, very, very few Boltzmann brains will learn of the Boltzmann brain hypothesis (and the evidence in favor); a significantly larger percentage of the non-Boltzmann brains capable of conceiving the hypothesis will learn of it (and the evidence in favor). So independent evidence of H1 means (1) H1 is more likely to whatever degree that evidence dictates, (2) if H1, you are more likely than most brains to be non-Boltzmann, (3) by the self-indication assumption H2 is more likely, because in that world most or all brains are non-Boltzmann. The inference from (2) to (3) seems problematic to me. I'm not sure. Questions:

1. How the hell do we evaluate the evidence, since any evidence of H1 is also evidence of H2 (if we like the SIA)?
2. What the hell is the proper reference class?
3. If new evidence came in against H1, would we have to say we were more likely to be Boltzmann brains?
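A toy illustration of the SIA mechanics in step (3), with invented observer counts. The reference class chosen here ("brains that learn of the hypothesis") is just one possible reading, not an answer to question 2, and both counts are assumptions:

```python
# Toy SIA update over the two hypotheses above (all counts invented).
# SIA weights each world by its number of reference-class observers;
# here the reference class is "brains that learn of the Boltzmann
# brain hypothesis".

prior = {"H1_many_boltzmann": 0.5, "H2_few_boltzmann": 0.5}

# Assumed counts of reference-class observers in each world. The
# H2-heavy numbers encode the claim that almost no Boltzmann brains
# ever learn of the hypothesis.
learners = {"H1_many_boltzmann": 1_000, "H2_few_boltzmann": 10_000}

unnormalized = {h: prior[h] * learners[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}
print(posterior)  # H2 favored ~10:1, illustrating step (3)
```

Whether the counts should look anything like this is exactly what questions 1 and 2 are asking.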

Consider the Oyster: Why even strict vegans should feel comfortable eating oysters by the boatload.

http://www.slate.com/id/2248998/?from=rss

Don't Choke ("performing below skill level due to performance related anxieties"): http://scienceblogs.com/cortex/2010/04/dont_choke.php

Today I heard a radio interviewer talking with a politician about House seats that could go to Republicans. It went like this:

Politician: "I think there may be 100 contested seats."

Reporter: "So you think 100 seats could go to the Republicans?"

Followed by confusion due to the fact that neither of them could work out how to use English to distinguish "there are 100 Democratic-held seats, each of which could individually be won by Republicans in the next election" from "the Republicans could gain 100 seats in the next election".
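The ambiguity is a scope ambiguity between a possibility operator and a counting quantifier. A rough rendering in modal notation (my own formalization, one way to carve it, with \Diamond read as "possibly"):

```latex
% Reading 1: there are 100 seats, each individually winnable:
\bigl|\{\, s \mid \Diamond\,\mathrm{RepWins}(s) \,\}\bigr| = 100
% Reading 2: it is possible that Republicans win 100 seats at once:
\Diamond\,\bigl( \bigl|\{\, s \mid \mathrm{RepWins}(s) \,\}\bigr| = 100 \bigr)
```

Reading 1 can be true while Reading 2 is wildly improbable, which is presumably what the politician meant and the reporter missed.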

A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week's Finds. The most recent of these mentions topics such as using icosahedra to solve quintic equations, an isomorphism (described in terms of category theory) between processes in chemistry, electronics, thermodynamics, and other domains, and some speculation about applications of category-theoretic constructs to physics.

Which is all well and ... (read more)

The map image in the masthead confused me when I found LW, and might reduce the probability that casual Web-browsing would-be-rationalists would take the time to understand what LW actually is before moving on.

I'm new to the community; this post may not be structured like the ones you're used to. Bear with me.

If LW is anything like the few sites whose analytics numbers I've seen, a significant portion of traffic comes from Web searches (I would wildly guess 10-30% of their pageviews). According to the analytics I've seen on my own site, out of those landin... (read more)

10Rain

An anecdote:

When I've had people shoulder surf while I was visiting the site, everyone asked, "LessWrong? What's that supposed to mean?" (5+ people). When I explained that it was a rational community where people tried to improve their thinking, they immediately began status attacks against me. One used the phrase "uber-intellectual blog" in a derogatory context and another even asked, "Are you going to come into work with a machine gun?" They often laughed at the concept.

Nobody commented on the graphic.

5pjeby
You probably need a new set of friends/relatives/coworkers.
6wedrifid
New set of relatives. I suppose that is one reason to parent children: "My last family members were a bunch of @#%s. I'm making some new ones!"
2Rain
I consider them to be "normal people." Anti-intellectualism is very popular, and I'm already known for being interested in unusual topics. Once I've trained them to respond appropriately, we tend to have good discussions.
1ata
It didn't deter me, but I didn't get it until someone explained it just recently. For a while, I was just thinking "What's that a map of? Is that where FHI is based? Is it the area in Santa Clara surrounding the SIAI House? Whatever it's a map of, is it relevant enough to put it at the top of every page?" (Actual answer from a minute googling street names: it's in San Francisco, but I don't know if there's any reason this particular location was chosen.) O'course, even for those who get it, it may not be the best illustration of the map/territory distinction, because the lower half isn't the territory either. It's just a more detailed map than the top half. Ceci n'est pas le territoire! Anyway, I doubt it will actively deter many people, but there are probably better possibilities.
3ata
Actually, regarding "Ceci n'est pas...", The Treachery of Images is a pretty good illustration of the map/territory distinction. But it probably wouldn't make a great masthead.
0Kevin
There's also a significant percentage of traffic that comes from Stumble Upon. Not sure how we can better optimize for people arriving from Stumble Upon, but certainly the current state is not ideal. There is a possibility of presenting different pages to people depending on their referrers...
0Jack
The map-territory metaphor is pretty central to what goes on here, so I kind of like it. I don't really know if it is a deterrent. Any alternatives in mind? I do think the logo could be a map of somewhere more interesting than Candlestick Park! And maybe a cooler place would keep googlers around. Or make it look like a dojo.
2Nevin
The first thing that comes to mind is having no masthead image. Any image will presumably be misunderstood by some fraction of visitors, but the text alone is very clear. I can see why people like the current image; perhaps a solution is to replace it with a solid color for people arriving from Google or StumbleUpon.
0mattnewport
I have to admit I'd never really consciously noticed the image until someone recently pointed out that it symbolizes the map/territory distinction. I guess that is evidence that it is not very eye-catching or distinctive, but neither is it particularly off-putting, in my opinion.

Does the market for sperm and egg donors violate supply and demand?

“From compensation rates to the smallest details of donor relations, sperm donors are less valued than egg donors,” Almeling said. “Egg donors are treated like gold, while sperm donors are perceived as a dime a dozen.”

The inequities persist despite the fact that profiles of hundreds of potential egg donors languish on agency Web sites, far outstripping recipient demand, while suitable sperm donors are quite rare, Almeling found. In fact, only a tiny fraction of the male population possess

... (read more)
0wedrifid
Fascinating. If I thought that genetic propagation was still an efficient way to maximise my influence on the future universe I would look into just what these other reasons are so that I could effectively game the system.
3ata

There's this upcoming meetup called Baloney Detection Workshop in Mountain View. It will probably be fairly basic compared to what's covered on LW, but I might go just for fun. Anyone else thinking of going? They're looking for people to give 10-minute talks on related subjects — maybe someone (possibly me, possibly not) could do one that introduces some of LW's material, something that can build off the usual skepticism repertoire and perhaps lead some people to LW. Maybe something on motivated/undiscriminating skepticism, really applying the techniques o... (read more)

3[anonymous]

Around here, we seem to have a tacit theory of ethics. If you make a statement consistent with it, you will not be questioned.

The theory is that though we tend to think that we're selfless beings, we're actually not, and the sole reason we act selflessly at all is to make other people think we really are selfless, and the reason we think we're selfless is that thinking we're selfless makes it easier to convince others that we're selfless.

The thing is, I haven't seen much justification of this theory. I might have seen some here, some there, but I don't recall any one big attempt at justifying this theory once and for all. Where is that justification?

8Tyrrell_McAllister
I agree with khafra. If "selfish" means "pursuing things if and only if they accord with one's own values", then most people here would say that every value-pursuing agent is selfish by definition. But, for that very reason (among other things), that definition is not a useful one. A useful definition of "selfish" is closer to "valuing oneself above all other things." And this is not universally agreed to be good around here. I might value myself a great deal, but it's highly unlikely that I would, upon reflection, value myself over all other things. If I had to choose between destroying either myself or the entire rest of the universe (beyond the bare minimum that I need to stay alive), I would obliterate myself in an instant. I expect that most people here would make the same choice in the same situation.
7khafra
I think the general view is more nuanced. If there is a LW theory of selflessness/selfishness, Robin Hanson would be able to articulate it far better than I; but here's my shot: "Selflessness" is an incoherent concept. When you think of being selfless, you think of actions to make other people better off by your own value system. Your own value system may dictate that fulfilling other people's value systems makes them better off, or yours may say that changing others' value systems to "believing in Jesus is good" makes them better off. The latter concept is actually more coherent than the first, because if one of those other systems includes a very high utility for "everyone else dies," you cannot make everyone better off. Many LW members place a high value on altruism, but they don't call themselves selfless; they understand that they're fulfilling a value system which places a high utility on, for lack of a better word, universal eudaimonia.
7Morendil
That's news to me. That doesn't describe me. I sometimes act in ways that are detrimental to me and beneficial to others, out of a broader conception of my own self-interest: I figure that those actions are beneficial to my own projects, properly conceived. I most specifically don't want people to think I am exploitable (which is one interpretation of "selfless"). I do want people to think of me as someone with whom it is desirable to cooperate.
5JamesAndrix
I don't think that's the tacit theory of ethics around here. Genes may be selfish, but primates who had other related primates looking out for them, or who showed that they were caring, survived better. It could well be that some simple mutations led to primates that showed they were caring because they actually were caring. (Edit: It seems to me that this must be the case for at least part of our value system.) This is relevant: http://lesswrong.com/lw/uu/why_does_power_corrupt/ but the benefits to the genes can just as easily come from more subtle situational differences, and assistance by related others, rather than a major status change and change in attitudes.
3Amanojack
One would be hard-pressed to find a more perfect example of doublethink than the popular notion of selflessness. Selflessness is supposed to be praiseworthy, but if we try to clarify the meaning of "selfless person" we get either:

1. A person whose greatest (or only) satisfaction comes from helping others, or
2. A person who derives no pleasure at all from helping others (not even anticipated indirect future pleasure), but does it anyway.

Neither of these is generally considered praiseworthy: (1) is clearly someone acting for purely selfish reasons, and (2) is just a robotic servant. Yet somehow a sort of "quantum superposition" of these two is held to be both possible and praiseworthy.* (*The common usage of "selfish" is an analogous kind of doublethink/newspeak.) ETA: I, and probably many others, consider (1) praiseworthy, but if that's the definition of selfless then the standard LW argument you mentioned applies to it.
2knb
1. I don't think that people think they are selfless. They usually think they're more selfless than they actually are, though.
2. I suspect most people at Less Wrong have a more complex view than this description. People also behave selflessly for reasons of inclusive fitness and reciprocal altruism. People also engage in "selfless" behavior for the same reason a "forgiving" tit-for-tat strategy wins in iterated prisoner's dilemmas.
2pjeby
ISTM that any other theory would be the one that requires justification. How do your genes selfishly reproduce if you're genuinely selfless?
0Jonathan_Graehl
This seems obviously true, except that there are certain regimes where genuine cooperation isn't ruled out by selfish genes (typically requiring a sort of altruistic willingness to undertake costly detection and punishment of cheaters). So I would not at all rule out instances of genuine altruism if a case can be made that it's positive-sum enough and widespread enough.

From Hal Daume's blog:

If you believe A => B, then you have to ask yourself: which do I believe more? A, or not B?

Let's say a weak compressor is one that always reduces a (non-empty) file's size by one bit. A strong compressor is one that cuts the file down to one bit. I can easily prove to you that if you give me a weak compressor, I can turn it into a strong compressor by running it N-1 times on files of size N. Trivial, right? But what do you conclude from this? You're certainly not happy, I don't think. For what I've proved really is that weak comp

... (read more)
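A minimal sketch of the construction in the quote, for concreteness. The weak_compress argument is hypothetical by design: no lossless compressor can shrink every non-empty input by a bit (pigeonhole: 2^N possible N-bit files, only 2^(N-1) possible (N-1)-bit outputs), which is exactly what the reductio is driving at.

```python
def strong_compress(bits: str, weak_compress) -> str:
    """Turn a (supposed) shrink-by-one-bit compressor into a
    compress-to-one-bit compressor by iterating it, as in the quote:
    N-1 passes on an N-bit input leave a single bit."""
    while len(bits) > 1:
        shrunk = weak_compress(bits)
        assert len(shrunk) == len(bits) - 1, "must drop exactly one bit"
        bits = shrunk
    return bits

# A lossless weak_compress cannot actually exist: this stand-in
# "works" but obviously cannot be inverted to recover the input.
print(strong_compress("110101", lambda b: b[:-1]))  # '1'
```

So proving "weak compressor implies strong compressor" is really just exhibiting how absurd the premise was, which is the A-versus-not-B point above.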

If Airedale and I organized a meetup in the Chicago area, would anyone come? If there's nontrivial interest and we decide on going through with it, we'll make a top-level post with a place and time.

2Unnamed
I'd be interested, and you could PM these folks to find others who may not be reading this open thread.
0arundelo
I maybe would come. (I live near Detroit.)

I have discovered myself to be in need of a statistical tool I do not possess. I am confident that a frequentist formula exists, based on the nature of the task to be executed, but it occurs to me that there may be people who would like to prove some point about Bayesianism vs. Frequentism - so here's a challenge for you all:

I am a mechanical engineer - numerate, literate, and reasonably intelligent - educated to the extent of one college course in basic probability and statistics. I have also been reading EY's essays for years, and am familiar (approachin... (read more)

5Matt_Simpson
What, in particular, is the tool you are looking for? A First Course in Bayesian Methods is ~$50 used, and covers what I take to be the basics. I'm currently using it in a grad class in Bayesian statistics (with a companion text for computing in R) and have no complaints - well, other than that it's not an all-encompassing text. The first edition of Gelman's text is going for ~$35 used (~$50 for the second edition) and has the added advantage of actually being in UM's library (both editions). I've not read either edition, but I hear it's the general Bayesian text to get.
2RobinZ
Thanks for the recommendations! I am being intentionally vague about the nature of the task I need to perform, but it is not esoteric. I would expect the problem to be discussed in any good textbook and many undergraduate statistics courses. Edit: I think I see the chapter in the table of contents of Gelman from Amazon's preview.

I'm looking for a good textbook or two on Bayesian design of experiments. Any suggestions?

While I'm on the topic of Bayesian textbooks, is the difference between the 1st and 2nd edition of Gelman's text big enough to be worth buying the 2nd edition over the 1st? (I have a couple of short texts already for one of my courses this semester, but I think the depth is lacking.)

Wikipedia page on causal decision theory says:

In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as maximization of the expected utility U of an action A, "calculated from probabilities of counterfactuals":

U(A) = \sum_j P(A > O_j) D(O_j),

where D(O_j) is the desirability of outcome O_j and P(A > O_j) is the counterfactual probability that, if A were done, then O_j would hold.

David Lewis proved that the probability of a conditional P(A > O_j) does not always equal the conditional probabi

... (read more)
1JGWeissman
An important aspect of a decision theory is how it defines counterfactuals. Anna Salamon wrote a good sequence on this topic.
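A minimal numeric sketch of the Gibbard-Harper formula above, with invented outcomes and numbers. The variable names encode the key distinction in the quote: P(A > O_j), the probability of the counterfactual, is a different object from the conditional probability P(O_j | A).

```python
def causal_expected_utility(p_counterfactual, desirability):
    """U(A) = sum_j P(A > O_j) * D(O_j), per Gibbard and Harper (1981).
    p_counterfactual[o] stands for P(A > o), NOT the conditional P(o | A)."""
    return sum(p_counterfactual[o] * desirability[o] for o in desirability)

# Hypothetical outcomes and numbers, purely for illustration:
D = {"good_outcome": 100.0, "bad_outcome": -50.0}
P_if_A_were_done = {"good_outcome": 0.75, "bad_outcome": 0.25}
print(causal_expected_utility(P_if_A_were_done, D))  # 62.5
```

In Newcomb-style problems the two probability assignments come apart, which is where evidential and causal decision theory give different answers.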

Are crush videos, as mentioned in http://www.overcomingbias.com/2010/04/truetoleranc.html , actually bad, and if so, why?

I theorize that they are, based on what I've read about sex addiction and serial killers, but I'm not really prepared to rigorously defend that position.

0[anonymous]
Torturing animals is bad. But I wouldn't have a problem with, say, a CGI version.

3Jack
This was a reply to another copy of this comment: "Torturing animals is bad. But I wouldn't have a problem with, say, a CGI version."
0Strange7
Really? Even if, down the line, somebody credited some better-than-reality gorn as their inspiration for raping and murdering a dozen people?
8thomblake
I'd even not have a problem with Shakespeare, or Less Wrong, or physics textbooks, if someone down the line credited it as their inspiration for raping and murdering a dozen people.
2Matt_Simpson
The question isn't "will this cause one person to go on a murdering rampage?" but rather "what is the net effect on murdering rampages (and everything else we care about)?" There is some evidence that violent movies reduce violent crime in the short run, serving as a substitute for actually committing violence. I wouldn't be surprised if the same were true for crush videos.
2Strange7
So, if easy access to violent entertainment leads to desensitization, which leads to increased demand, and real/simulated violence are substitutes for this purpose, what happens when the supply of simulated violence is interrupted? That's the situation ancient Rome had: politicians compelled to maintain the gladiator games under threat of mass rioting.
5thomblake
If the best horror story you can come up with is people being 'forced' to create entertainment that there's a market for, then I'm not concerned.
4Strange7
No, the horror story is that the entertainment is interrupted for some reason, and then a bunch of bored people go torture real animals the same way they saw it on TV.
0cupholder
Extremely unlikely, unless movie theaters, DVDs and the Internet were obliterated. And if that happened, the (theoretical) resulting uptick in violence would be the least of most people's worries.
0Strange7
DVD players and computers both depend on centralized power generation, and movie theaters don't show crush videos. It's not necessary that the supply be permanently eliminated, just unexpectedly cut back for some reason. Even if the supply is constant, desensitization means there will be ongoing problems as a result.
0cupholder
A centralized power generation failure would probably be even more of a distraction from reenacting violent entertainment than the loss of DVD players and computers! My mistake - I had thought you'd broadened what you were talking about to 'violent entertainment' and 'simulated violence' in general as that's what your parent comment refers to. Fair enough - let's suppose that theaters/DVDs/computers are just temporarily inaccessible in some localized region. I suspect that most potential violent entertainment (or crush video, if we're staying specific) imitators in that region would be too concerned with regaining access to theaters/DVDs/computers to do violence themselves. A constant supply is inconsistent with an 'unexpectedly cut back' supply; I wouldn't expect a constant supply to boost violence if it's decreasing supply that's supposed to boost violence.
0Strange7
In the short term, demand for violence is effectively fixed, so decreases in the supply of simulated violence lead to increases in actual violence as a substitution effect. In the long term, exposure to violence leads to desensitization, so demand for simulated violence expands to meet the supply. Given two otherwise-identical societies, in which one strictly limits the supply of violent imagery and the other does not, I predict that the latter will (eventually, due to desensitization) have a higher demand for violence, leading to more actual, physical violence during blackouts. I've heard it argued that the one time when large-scale censorship would be morally justified is if a "Langford basilisk," that is, an image which kills the viewer, were found to exist. What if there were such an image, but it only killed a tiny percentage of the people who saw it, or required a long cumulative exposure to be effective? What if, rather than killing directly, it compelled the viewer to hurt others, or made those already considering such a course of action more likely to follow through on it? This isn't a fully-general argument for censorship of any given subject that provokes disgust; it's quite specific to violent pornography.
3cupholder
This is undoubtedly possible, though I'd expect far less of a substitution effect than you because of the distraction effects I suggested above. Ultimately I suppose this is an empirical issue. I suspect that once the level of simulated violence in a real society is above some saturation point, further increases in its supply would not be met by increased demand. Ideally there'd be some way to empirically test this too. I smell a Freakonomics chapter! Seriously, if there are any economists or sociologists reading this comment, I think something like this could make a cute topic for a paper. Some quick googling makes me think that the effect of blackouts in general on crime hasn't been researched rigorously - I'm mostly seeing offhand claims like 'looting during blackouts blah blah blah' or studies of individual blackouts like New York '77. I see even less about using blackouts to assess the effect of violent media specifically, but I'd be very interested in the results of such a study. At any rate, your own prediction is an interesting one, if only in terms of thinking about how one could test it, or approximate testing it. As for which variations on the Langford basilisk I'd be OK with banning: I'd work it out by putting on my utilitarian hat and plugging in numbers. More than that; it's specific to media that (1) desensitize some viewers and (2) have actual violence as a substitute good, which arguably includes violent non-porn as well as violent porn.
3mattnewport
If the supply of virtual violence is increasing faster than demand so that real violence is going down would you still support banning virtual violence for fear of this potential uptick? Presumably you would want to try and determine the expected value of virtual violence given the relative effects and probabilities? If it helps your estimates, evidence suggests that increasing exposure to virtual violence and pornography correlates with reduced rates of real world violence and sexual violence.
0Strange7
In this case, my goal is to minimize the expected future amount of real violence, so yes, I'd like to see the math, including the long-term black-swan risks: that interruptions to non-critical infrastructure could create an unanticipated surge of sadism. Other evidence, not of an increase in violence, but of hard-to-measure, slow-developing side effects.
2mattnewport
This just sounds like one more potential reason, near the bottom of an already long list of reasons, to guard against such interruptions. This argument looks analogous to the claim that making bullets out of lead is bad because someone who is shot multiple times will end up with an unhealthy dose of lead in their bloodstream.
0NancyLebovitz
Very interesting link-- I'm not sure that avoiding superstimuli is part of rationality, but it might be part of the art of living well.
2thomblake
I didn't notice this on the first read-through, but cupholder's comment brought this to my attention - the actual content seems to be an irrelevant factor in your general principle, especially the 'pornography' part. Surely we could say the same thing about non-pornographic violent media. Furthermore, if reading the Oxford English Dictionary or looking at Starry Night increases violent tendencies in the same way, then your argument works just as well.
0Strange7
Indeed it would. I am concerned about this because of the risks, not because of a moral objection to pornography (some kinds are rather pleasant). For that matter, I think the moral revulsion evolved as a means to mitigate the risks associated with superstimuli, fascination with violence, etc.
1PhilGoetz
I think you mean "I predict the latter will", since desensitization occurs more in the society with more violent imagery.
0Strange7
Thank you. Fixed.
0Matt_Simpson
Well, perhaps, though I would expect the effect to be much smaller today - see, for example, parts of this post.
0Jack
I mean for mental health reasons I wouldn't watch them and wouldn't let my children watch them. But they aren't morally wrong, which is the kind of thing that would lead me to want a state intervention. Are you surprised to get the standard left-libertarian response here?
0Strange7
By definition, standard responses shouldn't be surprising. Disappointment is a separate issue. You've presented a canned NIMBY opinion, not a reasoned argument. Is the statement 'crush videos aren't morally wrong' falsifiable?
6Jack
I guess you can call it that. In my own words, I'd say I am applying a general principle, namely the harm principle, to a specific case. I find the harm principle intuitively moral, and when applied to a society it describes the kind of place I would like to live in. I don't really go for unified normative theories, but the harm principle is consistent with most deontological ethics, an excellent rule of thumb for consequentialists (which is why Mill is the guy who named it), those who follow this rule possess the virtue "tolerance", and it is the bedrock of the liberal political order. Edit: Oh, and contractualism. I might be someone with preferences that others will find obscene, so it is in my interest to agree to this principle. Indeed, I have preferences that others probably find obscene, so I don't have a lot of trouble thinking this way. I'm not a moral realist. I'm expressing my preference that people be free to fulfill their preferences so long as they don't hurt anyone.
4khafra
The harm principle is good in common cases, but I fear this may be an edge case, and the harm principle tends to break down when the meaning of "harm" or "hurt" is called into question. By the standards of Western Civilization, siphoning money from Joe's bank account is harm to Joe, although any physical effect on Joe is very indirect; making out with someone of the same gender is not harm to Joe, even if the sight of it makes him violently ill. By the standards of Islam, drawing certain pictures can be harm to everyone of their faith. It seems to me that there's a narrow range of value congruence where the harm principle is applicable; go further and it is incoherent, closer and it is redundant.
2Jack
I agree with this. "Harm" is too vague to make the harm principle a fully general argument for the Western liberal order - and it certainly wouldn't do to try and program an AI with it. One thing a liberal society must wrestle with is what kinds of behavior are considered harmful. Usually, we define harm to include some behaviors beyond physical harm, like theft or slander. But watching computer-generated images of any kind, in the privacy of your own home, is pretty solidly in the "doesn't harm anyone" category, as defined by the liberal/libertarian tradition. Part of my point is that there isn't really much of an argument to be had. I suppose if someone demonstrated that the existence of computer-generated snuff actually threatened our civilization or something, I could be swayed. But basically I think people should do things that make them happy so long as they avoid hurting others: if that isn't a terminal value it is awfully close.

I intend to start playing World of Warcraft when the summer break begins. Does anyone actually want to do this?

Heh, that is a topic that is very relevant to an article I was intending to post to Less Wrong today.

I've written it, but then noticed I have 17/20 of the required karma points.

Any three people wanna upvote this comment of mine so I can post my article?

If you had a million universes tiled with computronium, what would you do?

Is Pascal's wager terribly flawed and is this controversial?

6Vladimir_Nesov
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one thing; everything is connected, one thing being true implies other things being true, other things being false. You won't be seeing the world as you currently believe it to be after accepting such a change; you will be seeing a strange magical version of it, a version you are certain doesn't correspond to reality. Mutilating your mind like this has enormous destructive consequences for your ability to understand the real world, and hence for your ability to make the right choices, even if you forget about the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal's wager. Belief in belief is a situation where you claim to have a belief, and you believe in having the belief, but you act in a way that can only be explained by working from an understanding of reality in which the belief in question is wrong. Belief in belief keeps human believers out of most of the trouble, but that's not what Pascal's wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion. You are being offered an option to actually believe, but this is not what people have experience observing in others. You only see other people believing in belief, which is not as bad as actually believing. Hence, while you believe in belief that Pascal's wager offers you an option to believe in God, actually you believe that you are offered an option to believe in belief in God. (Phew!)
-1byrnema
Regarding the first paragraph, I don't see that Pascal's wager requires all these contortions. It only requires estimating the utility of belief in God, and then makes a positive assertion about what you should do with that utility. Would you agree that your arguments are arguments for why the utility of believing in God should be low? Regarding the second paragraph, I agree there is a weird double-think aspect to Pascal's Wager - someone converted by PW would be admitting that they believe something just because it was convenient to do so. Can you really believe something for that reason, knowing that is the reason? So this is an argument in the category 'you can't really choose your beliefs as an act of will'.
1cupholder
(Edit - it's probably a good idea to avoid reading this comment until you try RobinZ's suggestion.) I looked at the parent of the comment of yours, and I think I can see why you disagreed with MatthewB and JGWeissman about Pascal's Wager: the three of you may be thinking of PW differently to each other. It sounds to me that MatthewB was evaluating PW in terms of how well it gets you to the truth, whereas you were evaluating PW in terms of whether it helps you win the +∞ reward for belief. PW is misguided for the first purpose, but could work for the second, depending on the situation. And JGWeissman, I think, was considering PW-as-applied-to-theism whereas you were thinking of PW-in-general - but you identified that difference yourself.
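To make the structure of the wager (and byrnema's "utility of belief" framing) concrete, here is a toy payoff matrix in Python. Every number is invented; the negative "believe, no god" entry is one way to encode Vladimir_Nesov's epistemic-cost point, which the classic wager sets to roughly zero:

```python
import math

# Toy decision matrix for Pascal's wager (all payoffs invented).
p_god = 1e-9
payoff = {
    ("believe",    "god"):    math.inf,  # the wager's infinite reward
    ("believe",    "no_god"): -100.0,    # epistemic cost of a warped world-model
    ("disbelieve", "god"):    -1000.0,
    ("disbelieve", "no_god"): 0.0,
}

def expected_utility(action):
    return (p_god * payoff[(action, "god")]
            + (1 - p_god) * payoff[(action, "no_god")])

print(expected_utility("believe"))     # inf: any nonzero p_god dominates
print(expected_utility("disbelieve"))  # ~ -1e-06
```

The infinite entry swamping every finite consideration is precisely why objections to the wager target the setup (the probability assignment, the symmetric anti-God hypothesis, and what "believing" actually costs) rather than the arithmetic.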
0Jack

"Magic everywhere in this bitch."

(For those who aren't aware of this act, yes, they're sincere and have a very sizeable following [the album this track is from peaked at #4 on the Billboard 200.])

0thomblake
response from Cracked

Applied Rationality April Edition Take 2. Different technique this time.

http://www.reddit.com/r/AskReddit/comments/bu3yg/i_have_pancreatic_cancer_and_i_am_probably_going/c0okwsf

6saturn
Shouldn't you focus your efforts on people who are still eligible for life insurance?
0Jack

Sampling bias may have led paleontologists to believe that North American dinosaurs had much smaller ranges than they actually did. Link.

Same questions, new formulation.

It seems that here at Less Wrong, we discourage map/territory discrepancies and mind projection fallacies, etc.

However, "winning" is in the map not the territory.

In one extreme aesthetic, we could become agents that have no subjective beliefs about the territory. But then there would be no "winning"; we'd have to give up on that.

So instead we'd like to have our set of beliefs minimally include enough non-objectively-true stuff to make "winning" coherent. Given this, how can we draw a line abou... (read more)

3Jack
Same answer, new formulation. Nah. Winning isn't determined by the map; it's like a highlighted endpoint (like drawing on a map with a marker). You win when you get there. Note that a little red x or circle on a map isn't really part of the map. There is nothing there that we expect to correspond to the territory (imagine arriving at your destination and everything turns the color of the marker you used!). The theistic move is like not finding any destination on the map that you're happy with, so you draw in a really cool mountain and make it your endpoint. Winning isn't in the map because winning conditions are defined by desires, not beliefs.
0byrnema
Thanks for responding. I'm not sure about the other stuff, but you have to agree that winning is in the map. You can define your win as an objective fact about reality (winning = getting to the mountain), but deciding that any objective fact is a win is subjective. My problem is that I'm trying to identify any lasting, real difference between deciding that a feature of the territory indicated on your map is 'pretty cool' and deciding that aspects of your map are pretty cool in and of themselves, even if they don't map to real features in the terrain. OK. But just to check: are you pretty sure this is a real distinction?
2Jack
It is subjective and it isn't in the territory... but that isn't the extent of our ontology. The map corresponds to your beliefs, the territory to external reality. Your desires are something else. Right. I don't think I have a new way of answering this question. :-) "Pretty cool" is at most an intersubjectively determined adjective. To say something is pretty cool in and of itself is a category error. Put it this way: what would it possibly mean for something to be pretty cool in a universe without anyone to find it cool? (Same goes for finding things moral, just so we're on the same page.) As certain as I get about anything. Beliefs are accountable to reality; if reality changes, beliefs change. From the Less Wrong wiki on the map and territory: desires don't generate predictions. In fact, they have exactly the opposite orientation from beliefs. If reality doesn't match our beliefs, our beliefs are wrong and we have to change them. If reality doesn't match our desires, reality is wrong and we have to change it.
0NancyLebovitz
I think there are maps associated with rewards. The reason you want a reward is that you're expecting something good, whether it's a sensation or a chance at further rewards, to be associated with it. If this has been a difficult question, it suggests that you didn't have your mind (or perhaps your map of your mind) as part of the territory.
1Jack
Do you mind clarifying this?
1NancyLebovitz
I can try, but I'm not sure exactly what's unclear to you, so this is an estimate of what's needed. It looks to me as though the metaphor is a human looking at a road map, and what's being discussed is whether the human's destination is part of the landscape represented on the map. If you frame it that way, I'd say the answer is no. However, the map in hand isn't the only representation the human has of the world. The human has a destination, and ideas about what will be accomplished by getting to the destination. I'm saying that the ideas about the goal are a map of how the world works. From the root of this thread: This is a means, not an end. The purpose of Less Wrong is to live as well as possible-- we can't live without maps because the world is very much larger than our minds, and very much larger than any possible AI. The "extreme aesthetic" of eliminating as much representation as possible doesn't strike me as what we're aiming at, but I'm interested in other opinions on that. If I understand The Principles of Effortless Power correctly, it's about eliminating (conscious?) representation in martial arts fighting, and thereby becoming very good at it. However, the author puts a lot of effort into representing the process.
0Jack
Pretty much all of it, but that might just be me. It is a little clearer now. Was there something in my comment in particular you were responding to? My puny human brain might just be straining at the limitations of metaphorical reasoning. I think we have maps for how to reach our goal but the fact that you have picked goal x instead of any other goal doesn't appear to me to be the product of any belief. Your last three paragraphs still confuse me. In particular, while they all sound like cool insights I'm not entirely sure what they mean exactly and I don't understand how they relate to each other or anything else.
0NancyLebovitz
What caught me was your idea that goals are completely unexaminable. Ultimate goals might be, but most of the goals we live with are subordinate to larger goals. I was trying to answer the root post in this thread, and looking at the question of whether we're trying to eliminate maps. I don't think we are. The last paragraph was the best example I could find of a human being using maps as little as possible.
0Jack
Got it. And you're right that my claim should be qualified in this way. I see (I think). I guess my position is that a free-floating belief (that is, one that doesn't constrain anticipated experience) or a desire is like a map inscription which doesn't correspond to anything in the territory. And there is a sense in which such things aren't really part of the map. They're more like an overlay than the map itself. You can take the compass rose off a map; it might make the map harder to use or less cool to stare at, but it doesn't make the map wrong. And not recognizing that this is the case is a serious error! There is no crazy four-pointed island in the middle of the South Pacific. Desires and free-floating beliefs are like this. I don't really want them gone; I just want people to realize that they aren't actually in the territory, and so in some sense aren't really part of the ideal map (even if you keep them there because it is convenient).
0byrnema
This is as much a response to Morendil as a response to you and Nancy. While it is certainly true that many or most of our desires "come with" the territory, these desires are 'base' or 'instinctual' goals that at times we would like to over-ride. The desire to be free of pain, for example. So-called "ultimate goals" can be more cerebral (and perhaps more fictional) and depend much more on beliefs. For example, the desire to help humanity, avoid existential risks, populate the universe, are all desires based more upon beliefs than the territory.
2Jack
So if we take the view from nowhere: there are brains which do this thing called being a mind. The minds have things called beliefs and things called desires, but all of this is just neuron activity. These minds have a metaphor for relating their neuronal activities called beliefs with the universe that they observe: the map-territory metaphor. The map-territory distinction is only understandable from the subjective perspective. There is something "outside me" which generates sensory experiences. This is the territory. There is something that is somehow a part of me, or at least more proximate to me. These are my expectations about future sensory experiences, my beliefs. This is the map. Desire is a third thing (which of course is in the same universe as everything else, apropos the view from nowhere): it neither generates sensory experiences nor constrains our expectations about future sensory experiences. It isn't in the territory, or in the map. From the subjective perspective desires are simply given. Now of course there are actually complex causal histories for these things, but from the subjective perspective a desire just arises. Now, through reasoning with our map, what are initially terminal desires throw off sub-desires (like if I desire food I will also desire getting a job to pay for food). Perhaps we can also have second-order desires: desires about our desires. Of course, like beliefs, desires exist in the territory as aspects of our brain activity. But in the perspective in which the map-territory metaphor is operative, desires are sui generis.
-1byrnema
(Status: so what happened at this point is that I gave up. You think that desires are a third thing, which I understand, but I think desires (and beliefs) are something you choose and that you modify in order to be more rational. I didn't realize I gave up until I realized I had stopped thinking about this.)
0Morendil
Our sense of "winning" isn't entirely up for grabs: we prefer sensory stimulation to its absence, we prefer novel stimulations to boring old ones, we prefer to avoid protracted pain, we generally prefer living in human company rather than on desert islands, and so on. In one manner of thinking, our sense of "winning" - considered as a set of statistically reliable facts about human beings - is definitely part of the territory. It's a set of facts about human brains. "Winning" more reliably entails accumulating knowledge about what constitutes the experience of winning, and it seems that it has to be actual knowledge - it's not enough to say "I will convince myself that my sense of winning is X", where X is some not necessarily coherent predicate which seems to match the world as we see it. That may work temporarily and for some people, but be shown up as inadequate as circumstances change.
2byrnema
Yeah, most desires are part of the territory, and not really influenced by our beliefs. As a child I was very drawn to asceticism. I thought that by not qualifying any of my natural desires as 'winning', I could somehow liberate myself from them. I think that I did feel liberated, but I was also very religious and so I imagined there was something else (something transcendent) that I was fulfilling. In later years, I developed a sense that I needed to "choose" earthly desires in order to learn more about the world and cope with existential angst. I considered it a necessary 'selling-out' that I would try for 10 years. All this to explain why I don't tend to think of desires as a given, but as a choice. But I suppose desires are given after all, and in my ascetic years I just believed that being unhappy was winning.
2NancyLebovitz
I believe asceticism is just another human drive, and possibly one not shared with other animals. In any case, it needs as much examination to see whether it fits into the context of a life as any other drive. I have a similar take on the desire to help people.
0NancyLebovitz
I think there's a lot of variation. Some people choose very stable lives, and I don't know of anyone who wants everything to change all the time.

In the Next Industrial Revolution, Atoms Are the New Bits

http://www.wired.com/magazine/2010/01/ff_newrevolution/all/1

Since I don't generally consider myself better informed than the market, I usually invest in index funds. At the moment, though, I find Thiel's diagnosis of irrational exuberance to be pretty reasonable, and I'd like to shift away from stocks for the moment.

My question: Is there an equivalent to index funds for bond markets— i.e. an investing strategy (open to small investors) which matches market performance rather than trying to beat it (at the risk of black-swan blowups)? Or alternately, is there a better investment strategy that I can put into place now and not worry about?

3Rain
Beware trying to time the market. Make sure you're taking this action, not because you feel that the time is right to switch, but because you've carefully analyzed your risk/reward preferences. That said, yes, there are 'index fund' bond investment vehicles, outside of the ETFs mentioned by mattnewport. They generally track the time frame (short, medium, long term) and type of bond (corporate, state, federal). Here are some examples from Vanguard: VBISX (Short Term Index), VBIIX (Intermediate Term Index), VBLTX (Long Term Index), and VBMFX (Total Bond Market Index). What you're talking about is Asset Allocation, and it's the number one predictor of your long term investment results. This generally involves determining your own risk profile and picking bonds vs. stocks appropriately. A rule of thumb is to pick 100 - (your age) as a percentage of stocks, since the younger you are, the more growth you'll need. If you have less tolerance for risk, then you could go lower. I'm currently invested 20% bonds and 80% stocks, but the bonds I have access to are the safest in the world (Federal employee G Fund, the same thing that Social Security invests in). Further breaking down AA, general categories include foreign vs. domestic, index vs. actively managed, taxable vs. non-taxed.

Example Asset Allocation:

20% bonds:
* 100% Medium-Term Securities (G Fund)

80% equities:
* 25-35% International Index Funds (EAFE, VEIEX)
* 55-75% Wilshire 5000 Index Funds (C/S Fund 3:1, VTSMX)

Tax-efficient fund placement:
1. Put your most tax-inefficient funds in TSP, 401ks, 403bs, Traditional IRAs and similar retirement accounts.
2. Put your next most tax-inefficient funds in your Roth(s).
3. Put what's left into your taxable account. Try to use only tax-efficient funds in taxable accounts.

List of securities from least to most tax efficient:
1. Hi-Yield Bonds
2. Taxable Bonds
3. TIPS
4. REIT Stocks
5. Stock trading accounts
6. Small-Value stocks
7. Small-Cap sto
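A tiny sketch of the "100 minus your age" heuristic from the comment above, in Python. Illustrative only, not financial advice; as the 80/20 split above shows, people adjust it for personal risk tolerance:

```python
def rule_of_thumb_allocation(age: int) -> dict:
    """Heuristic stock/bond split: hold (100 - age)% in stocks."""
    stocks = max(0, min(100, 100 - age))
    return {"stocks_pct": stocks, "bonds_pct": 100 - stocks}

print(rule_of_thumb_allocation(30))  # {'stocks_pct': 70, 'bonds_pct': 30}
print(rule_of_thumb_allocation(60))  # {'stocks_pct': 40, 'bonds_pct': 60}
```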
0SilasBarta
You forgot to add:

- Everyone else is trying to do the same thing, so look at your actually expected real rate of return on all this saving you're planning (negative even before taxes on withdrawals or dividends, over the last 10 years, and with high volatility; see the sketch below), and then hang your head and ask why you even bother.

I bring this up because I save a lot and use the tax-advantaged options, but when I look at the numbers, I have to ask, what's the point? After taxes (which will have to go up as the tidal wave of unfunded obligations comes due) and inflation, you barely get anything out of saving. (Yes, there's the no-tax Roth, but you get to invest very little in it.) Plus, if you save it for long enough not to be penalized on withdrawal, you have to put off consumption until waaaaay into the future, when it will do less for you.

It just seems like you'd be better off buying durable assets or investing in marketable job skills, which are more robust against the kinds of things that punish your savings.

I've been exploring the "infinite banking" option: mutual universal whole life insurance that you can borrow against, which gets a steady, relatively high rate of return, is tax-shielded, and has a long pedigree. Seems a lot better than following the herd into IRAs, which will probably have their promises violated at some point.
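To make "barely get anything" concrete, here is a rough sketch of a real after-tax return calculation; the rates are made-up illustrations, and real accounts tax gains in more complicated ways:

```python
def real_after_tax_return(nominal, tax_rate, inflation):
    """Deflate an after-tax nominal return by inflation.

    Simplification: the whole nominal gain is taxed in the same year,
    as in an ordinary taxable account.
    """
    after_tax_growth = 1 + nominal * (1 - tax_rate)
    return after_tax_growth / (1 + inflation) - 1

# Illustrative only: 5% nominal return, 25% tax, 3.5% inflation
print(round(real_after_tax_return(0.05, 0.25, 0.035), 4))  # 0.0024, i.e. ~0.24% real
```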
3Rain
I don't believe they are. The vast majority of people I see investing and saving do so in a reactive manner, choosing on a whim, and with a risk horizon of less than a year. They pull out when the market goes down and pile on when hot tips become common ("Real estate can't lose!"). Even the big firms are doing a significant amount of trading and reformulating on a daily basis (evidence: the financial "crisis"). I put my trust in the people who seem to understand what's really going on, like Warren Buffett, who says that a passively managed Index Fund is the way 99 percent of people should invest. And if you're ready to say that IRA promises will be broken (which I also consider a good probability), then your "infinite banking" scheme is even less likely to remain stable, as it's backed by private companies rather than the US government.
0SilasBarta
Nice stereotype, but I didn't do any of that, and still lost a lot from the time I started investing (mid '06), despite concentrating on low-cost index funds (to the extent permitted by the 401k). As did anyone else who started in the decade before that. Keep in mind, there's a certain cognitive capture going on here: in the popular mind, long-term saving is equated with using the 401k/Roth options, which require you to invest in a very specific class of assets. Even with all the whimsy you refer to, that's building in an unjustifiably low risk premium that has to change eventually.

Wha? What "backing" are you referring to, and is your comparison apples-to-apples? The government doesn't "back" IRAs; it just promises that they will have certain tax privileges. The assets in the IRAs, which are where they get their value, are managed by private companies, just like mutual whole life insurance (whose issuers are member-owned, if that matters). Yes, the government could lift their tax privileges too, but this would require breaking an even stronger, longer tradition of not taxing life insurance benefits, which is the (ostensible) purpose of these plans.

ETA: Buffett hasn't actually worked out the nuts and bolts of how to get the meaningful diversification you need when starting with much smaller sums than he has, while adhering to account minimums and contribution limits. That advice seems like more of a vague pleasantry than something you can benefit from. And it's not what he does.
2Rain
I don't like arguing with you, SilasBarta. It feels very combative, and sets off emotional responses in me, even when I think you have a valid point. As such, I'm tapping out.
6Morendil
In case it may help you to know, I've felt the same on a couple of occasions when I engaged Silas in argument. I've chalked it up to poor skill at positive-sum self-esteem transactions on Silas' part, at least when mediated by text. I don't think it's deliberate, as on some other occasions I've concluded there was a genuine desire to help on his part.
0SilasBarta
Could you please at least explain what you had in mind by your claim that infinite banking is backed by private companies rather than the US government (as you presumably meant to say IRAs are)? I promise not to reply to that comment.
0Rain
I was incorrect about the government's impact on each type of investment, given that private companies manage both. At the time, I was thinking that the government created IRAs through law, and I didn't think that was the case with insurance, and thus the insurance plans seemed more likely to be subject to change by profit motive. However, I don't know enough about the particular form of life insurance you're suggesting to feel comfortable making further claims.
2mattnewport
As Rain said, asset allocation is important. The standard advice to put most of your savings in low cost index funds has the merit of simplicity and is not bad advice for most people, but it is possible to do better with a bit more diversification than that implies. Rain suggests a percentage allocated to international index funds, which is a good start. US savers with exposure to foreign index funds, emerging market funds, commodities and foreign currencies (either directly or through foreign indexes) would have done better over the last 10 years than savers with all their exposure concentrated in US equities.

Diversification is the only true free lunch in investing. By selecting an asset allocation that includes assets that are historically uncorrelated or negatively correlated with US equities, it is possible to get equal or better average returns with lower volatility over the long term (a numerical sketch follows at the end of this comment).

If I were in the US I would share your concerns about future tax increases and raids on currently tax protected retirement accounts, but I'd argue that just suggests a broader view of diversification that includes non-traditional savings approaches that are less exposed to such risks.

I think most investors (and particularly US investors) are over-invested in their home countries. Since most people's individual economic circumstances are correlated with the performance of the economy as a whole, this is poor diversification. Similarly, I think it is unwise for people to have significant weighting in sectors or asset classes strongly correlated with the industry they personally earn a living in. Programmers should probably not be over-weighted in tech related investments, for example, and it is probably a bad idea for most employees to retain significant stock in their own employer. I believe Rain is a government employee, so in that situation I would suggest a lower than normal allocation to government backed investments, for example.
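The "free lunch" claim can be illustrated with the textbook two-asset portfolio volatility formula; the numbers below are made up for illustration, not a recommendation:

```python
from math import sqrt

def portfolio_vol(w1, sigma1, sigma2, corr):
    """Volatility of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1 - w1
    variance = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * corr * sigma1 * sigma2
    return sqrt(variance)

# Two assets, each with 20% volatility and equal expected return, held 50/50:
print(round(portfolio_vol(0.5, 0.20, 0.20, corr=1.0), 3))   # 0.2   (no benefit)
print(round(portfolio_vol(0.5, 0.20, 0.20, corr=0.0), 3))   # 0.141 (uncorrelated)
print(round(portfolio_vol(0.5, 0.20, 0.20, corr=-0.5), 3))  # 0.1   (negatively correlated)
```

Same average return, lower volatility: that is the sense in which diversification across uncorrelated assets is "free".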
2SilasBarta
Okay, but in a 401k, you're stuck with the choices your employer gives you, which may not have those options. (Usually the choices are moronic and don't even include more than one index fund. Mine has just one, and I reviewed my cousin's and found that it didn't have any. Commodity trades? You jest.)

And if you're talking about a Roth, well, no mutual fund company, not even Vanguard, will let you start out your saving by dividing up that $4000 between five different funds; each one has a minimum limit. You'd have to be investing for a while first, complicating the whole process. And if you mean taxable accounts, the taxable events incurred gore most of the gains.

The US is not alone in that respect -- other, long-developed countries have it even worse.

Right, that's what I was referring to: investing in job skills so you can high-tail it to another country if things become unbearable (and hope they don't seize your assets on the way out).
2mattnewport
I'm not in the US so I'm not fully familiar with the retirement options available there. Here in Canada we have what seems to me a pretty good system whereby I can have a tax sheltered brokerage account for retirement savings. In many cases it is hard to argue with the 'free money' of employer matched retirement plans and the tax advantages of particular schemes, but I think it is wise to be mindful of all the advantages and disadvantages of a particular scheme (including things like counterparty risk regarding who ultimately backs up your investments) and take that into consideration when weighing options.

This is definitely an issue when starting out. Transaction costs can make broad diversification prohibitively expensive when your total assets are modest. I see it as something to aim for over time, but you are absolutely right to be mindful of these issues. If you have a reasonable choice of mutual funds you can look for ones that are diversified at least internationally, if not across asset classes outside of equities and fixed income.

This is why I like the options available in Canada. Between self-directed RRSPs and the new TFSA, the tax-friendly saving options are pretty good.

Indeed, and this is one reason I'm working towards my Canadian citizenship. It has relatively healthy finances compared to the UK where I grew up. I don't think the necessity for some form of default on the obligations of most developed countries' governments is widely appreciated yet.

This is in line with the broader view of diversification I am advocating. Over the typical individual's expected lifespan this is an important consideration. I think it is a sensible long term goal to diversify in a broad sense, so that you maintain options to take your capital (human and otherwise) wherever you can expect the best return on it. Assuming that this will always be the same country you happen to have been born in is short sighted in my opinion.

On a diversification related note, this propos
0RobinZ
Back up: you can make maximum Traditional & Roth IRA contributions in the same year? (I live in the U.S., and have only been putting funds into my traditional IRA.)
0Rain
No, you cannot max both a Traditional and a Roth; it's either/or. Which one you choose depends on several factors, including length of investment and the income you predict you'll have during disbursement. Traditional is better if you expect low income in retirement, or a shorter time frame until retirement; Roth is better if you expect higher income or a longer time frame.
0RobinZ
How do you pick when to switch, then? I assume tax-efficiency, but how tax-efficient should income be before you put it into Roth rather than Traditional? And how do you measure tax-efficiency of income? I apologize if this is overly off-topic, of course.
3Rain
It's been a while since I did primary research on the topic; I decided on a Roth for my personal circumstances and dumped most of the other knowledge afterward, so I'll be deferring to references: here are a couple articles about the topic of choosing between them, one which links to a calculator. You measure tax efficiency by what percentage of the money you get to keep after it's been taxed in the context of your other income and investments. Putting tax-inefficient funds in tax-efficient formats like an IRA lets you keep a (hopefully much) larger percentage. And I don't see how it's off topic in an Open Thread.
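A toy comparison of the two account types, under strong simplifying assumptions (one lump-sum contribution, flat tax rates, identical investments); it reproduces the rule of thumb above that the choice hinges on your tax rate now versus in retirement:

```python
def traditional_vs_roth(pretax, growth_factor, tax_now, tax_retirement):
    """After-tax value of one pre-tax lump sum invested either way.

    Traditional: contribute pre-tax, pay tax_retirement on withdrawal.
    Roth: pay tax_now up front, then withdrawals are tax-free.
    """
    traditional = pretax * growth_factor * (1 - tax_retirement)
    roth = pretax * (1 - tax_now) * growth_factor
    return traditional, roth

# $5,000 pre-tax that triples by retirement; 25% tax rate now:
print(traditional_vs_roth(5000, 3.0, 0.25, 0.15))  # (12750.0, 11250.0) -> Traditional wins
print(traditional_vs_roth(5000, 3.0, 0.25, 0.30))  # (10500.0, 11250.0) -> Roth wins
```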
0mattnewport
ETFs offer small investors access to a number of alternative investments to stocks. There are lots of bond/fixed income index ETFs available. You can also use ETFs to diversify out of US stocks (assuming you're a US investor) through international indexes. It is also possible to invest in other asset classes such as commodities and foreign currencies through ETFs but there are a number of caveats and potential hidden costs to many of these so you should do some research before going that route.
0Cyan

I wanted to ask the LW commentariat what they thought of the morality of the "false time constraint" PU ploy. I'm hereby prefacing that discussion with a meta-inquiry as to whether that conversation should even be opened at all. (The contentious ongoing discussion I found when I came here to make the query has made me gun-shy.)

4Jack
How about you ask this again when the present PUA-type discussion (which has already devolved into some flame warring, in places) calms down?
0Cyan
OK.
2wedrifid
And do so without asking for @#@#$ permission! Less supplication!
0[anonymous]
Hey screw you pal -- I'll be as unassertive as I want to be! ;-)

Help me, LessWrong. I want to build a case for

  1. Information is a terminal value without exception.
  2. All information is inherently good.
  3. We must gather and preserve information for its own sake.

These phrasings should mean the exact same thing. Correct me if they don't.

Elaboration: Most people readily agree that most information is good most of the time. I want to see if I can go all the way and build a convincing argument that all information is good all of the time, or as close to it as I can get. That misuse of information is a problem about the misuser a... (read more)

7Scott Alexander
You probably don't mean trivial information eg the position of every oxygen atom in my room at this exact moment. But if you eliminate trivial information and concentrate only on useful information, you've turned it into a circular argument - all useful information is inherently useful. Further, saying that we "must" gather and preserve information ignores opportunity costs. Sure, anything might eventually turn out to be useful, but at some point we have to say the resources invested in disk space would be better used somewhere else. It sounds more like you're trying to argue that information can never be evil, but you can't even state that meaningfully without making a similar error. Certainly giving information to certain people can be evil (for example, giving Hitler the information on how to make a nuclear bomb). See this discussion for why I think calling something like "information" good is a bad idea.
6khafra
One thing you may want to address is what you mean by "gather and preserve information." The maximum amount of information possible to know about the universe is presently stored and encoded as the universe. The information that's useful to us is reductions and simplifications of this information, which can only be stored by destroying some of the original set of information.
1Document
In other words, "information" in this case might be an unnatural category.
1khafra
Yes. CannibalSmith's usage sounded to me somewhere indeterminately in between the information theoretic definition and the common meaning which is indistinct but similar to "knowledge." My request for clarification assumes the strictly information theoretic definition isn't quite what he wanted.
0CannibalSmith
My mom complains I take things too literally. Now I know what she means. :) Seriously though, I mean readable, usable, computable information. The kind which can conceivably be turned into knowledge. I could also say: we want to lossily compress the Universe, like an mp3, with as good a ratio as possible.
5FAWS
Do you mean that information already is a terminal value for (most) humans? Arguing that something should be a terminal value makes only a limited amount of sense, terminal values usually don't need reasons, though they have (evolutionary, cultural etc.) causes.
1CannibalSmith
Neither. I guess I shouldn't have used the term "terminal value". See the elaboration - how do you think I should generalize and summarize it?
3Jack
It sounds like you're trying to say information is an instrumental value, without exception.
4wedrifid
I don't make arguments for terminal values. I assert them. Arguments that make any (epistemic) sense in this instance would be references to evidence to something that represents the value system (eg. neurological, behavioural or introspective observations about the relevant brain).
0CannibalSmith
Looks like I've been using "terminal values" incorrectly.
4NancyLebovitz
Information takes work to produce, to filter, and to receive, and more work to evaluate it and (if genuinely new) to understand it. There's a strong case that information isn't a terminal value because it's not the only thing people need to do with their time. You wouldn't want your inbox filled with all the things anyone believes might be information for you. Another case of limiting information: rules about what juries are allowed to know before they come to a verdict. There might be an important difference between forbidding censorship vs. having information as a terminal value.
3Rain
I very much doubt that we have enough understanding of human values / preferences / utility functions to say that anything makes the list, in any capacity, without exception. In this case, I think that information is useful as an instrumental value, but not as a terminal value in and of itself. It may lie on the path to terminal values in enough instances (the vast majority), and be such a major part of realizing those values, that a resource-constrained reasoning agent might treat it like a terminal value, just to save effort. I look at it like a genie bottle: nearly anything you want could be satisfied with it, or would be made much easier with its use, but the genie isn't what you really want.
0CannibalSmith
Well, all agents are resource-constrained. But I get what you mean.
2[anonymous]
* Storing information has an inherent cost in resources, and some information might be so meaningless that no matter how abundant those resources are, there will always be a better or more interesting use for them. I'm not sure if that's true.
* "Information" might be an unnatural category in the way you're using it. Why are the bits encoded in an animal's DNA worth more than the bits encoded in the structure of a particular rock? Doesn't taking any action erase some information about the state the world was in before that action?
* EY might call information bad that prevents pleasant surprise.
2Morendil
A straightforward counter-argument is that forgetting, i.e. erasing information, is a valuable habit to acquire; some "information" is of little value and we would burden our minds uselessly, perhaps to the point of paralysis, by hanging on to every trivial detail. If that holds for an individual mind, it could perhaps hold for a society's collective records; perhaps not all of YouTube as it exists now needs to be preserved for an indefinite future, and a portion of it may be safely forgotten.
2Document
That's a good point, but rather than Youtube I'd suggest something like the exact down-to-the-molecule geography and internal structure of Mercury; or better yet, the output of a random number generator that you accidentally left running for a year. For the record, the wording I came up with originally was "Storing information has an inherent cost in resources, and some information might be so meaningless that no matter how abundant those resources are (even if they seem to be unlimited), there will always be a better or more interesting use for them.". (Edit 4/11: I was thinking of trying to come up with something like torture versus scrambling 3^^^3 bits of useless information, but that probably wouldn't be a good line of argument anyway.)
0NancyLebovitz
Forgetting is crucial for my ability to do dual n-back.
1gwern
That's a fact about the human mind, though; DNB is designed to stress fuzzy human WM's weaknesses. DNB is trivially doable by a computer (look at all the implementations).
0NancyLebovitz
Computers have memory limits. They're just much higher than human limits. WM?
4gwern
It's not just quantity; it's quality. Human WM is qualitatively different from RAM. Yes, you could invent a 'dual 4-gigabyte back', and the computer would do just as well. Bits don't change in RAM. If it needs to compare 4 billion rounds back, it will compare as easily as if it were 1 round back. Computer 'attention' doesn't drift, while a human can still make mistakes on D1B. And so on. You could cripple a computer to make mistakes like a human, but the word 'cripple' is exactly what's going on and demonstrates that the errors and problems of human WM have nothing interesting to say about the theoretical value (if any) of forgetting. You only need to forget in DNB because you have so little WM. If you could remember 1000 items in your WM, what value would forgetting have on D10B? It would have none; forgetting is a hack, a workaround for your limits, an optimization akin to Y2K.
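For illustration, a perfect N-back scorer really is a few lines of code (a sketch; real DNB implementations also handle the audio/visual streams and timing):

```python
from collections import deque

def n_back_matches(stream, n):
    """Return the positions where an item equals the item n steps back.

    A fixed-size buffer never decays or drifts the way human working
    memory does, so this 'player' is perfect at any n.
    """
    buffer = deque(maxlen=n)
    matches = []
    for i, item in enumerate(stream):
        if len(buffer) == n and buffer[0] == item:
            matches.append(i)
        buffer.append(item)
    return matches

print(n_back_matches("ABABCAC", 2))  # [2, 3, 6]
```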
1cupholder
Working memory.
1[anonymous]
Reading what you have said in this thread, I was confident that you were committing the fallacy of rationalization. Your statement is simple, and it seems like reality can be made to fit it, so you do so. But your name looked familiar, and so I clicked on it, and found that your karma is higher than mine, which seems to be strong evidence that you would not commit such a fallacy, using phrases so revealing as "I want to build a case for . . .". Your words say you are rationalizing; your karma says you are not. I am confused.
0Morendil
Argument screens off karma. ;) I agree with you about "I want to build a case", the phrasing is unfortunate. However I note that the OP asked for arguments on both "sides".
0[anonymous]
The OP asked for a specific thing to be done with arguments on both sides. "Please place garbage in the bin in the corner" doesn't mean I want the bin to contain more garbage. Or maybe you're not referring to "Please post arguments and . . ."
1Morendil
May I suggest adding to your list of test cases the blueprints for a non-Friendly AI? By that I mean any program which is expected to be a General Intelligence but which isn't formally or rigorously proven to be Friendly. (I still haven't come to definite conclusions about the plausibility of an AI intelligence explosion, therefore about the urgency of FAI research and that of banning or discouraging the dissemination of info leading to non-F, but given this blog's history it feels as if this test case should definitely be on the list.)
1Jack
Some counter-arguments. What exactly is the pro-information position here? Cause I'm against this being produced and agree with bans on its distribution and possession as a way of hurting its purveyors. The way such laws are enforced, at least in America, is sometimes disgraceful. But I don't think it is an inherently bad policy.

Biological, computer and memetic? The last one looks like an open and shut case to me. If learning information (being infected by a meme) can damage me, then I should think that information should be destroyed. Maybe we want some record around so that we can identify them to protect people in the future? Maybe this stuff is too speculative to flesh out.

For the IQ issue, here is my read of the status quo: most people believe the science says there is no innate racial difference in IQ. This is probably what it says, but if we really want to know for sure we'd need to gather more data. If we gathered more data there are three possible outcomes:

1. We find out conclusively there is no innate IQ difference. Most people's beliefs do not change. An impassioned minority continues to assert that there is an IQ difference and questions the science, perpetuating the controversy. This is socially the status quo, but some people paying attention have actually learned something.
2. We don't learn anything conclusive one way or the other. The status quo continues.
3. We learn there are innate racial differences in IQ. All hell breaks loose.
0Strange7
If the purveyors are revealed to the public, I think we'll find better ways to stop them, instead of creating a black-market environment which makes their product more valuable. There's also the non-negligible side benefit of turning fewer innocent people into lifelong pariahs.
0Jack
Well yes, that would be great information. But I don't see how letting people own and distribute child porn is going to reveal that information. The market is always going to be black in some respect if it is illegal to produce it. The reason I asked what the position was is that it isn't obvious to me that producing child pornography isn't gathering information. If you legalize possession but not production you've lowered the cost of consuming (increased the demand) while not affecting the supply. This will drive up prices. Just adjust the laws so that someone who decides to download a huge pornfile that happens to include a few illegal photos doesn't get convicted...
3Strange7
There is this thing called 'peer-to-peer file sharing.' If possession is legal, any possessor can also be a supplier by sharing what they've already got, but the original producers can't claim copyright without incriminating themselves. That drastically increases the supply, driving the price down close to zero.
0Jack
Close to zero? Really? There is already negligible enforcement of copyright, and for a number of years there was zero enforcement of copyright. Media industries, porn and otherwise, have been doing fine. If necessary the industry will start only streaming video and uploading decoy files. Not to mention that groups of people who just produce it for each other with no money changing hands will be able to operate unhindered. I'm not an expert, but I imagine it is drastically more difficult to put someone away for production than distribution, and that's how the industry would end up working: shielding the producers while legal distributors buy and sell.
0Strange7
If producers work closely with specific distributors, it would be possible to get the distributors for 'aiding and abetting' or RICO sorts of things. Customers would also be more willing to cooperate with law enforcement if they knew they wouldn't be punished for doing so, and limited enforcement resources could be concentrated on the actual producers instead of randomly harassing anyone who happens to have it on their HD. Groups of people who produce it for each other with no money involved would be hard to track down under any circumstances; I don't see how decriminalizing possession makes that worse.
0Jack
A lot harder to prove than distribution and possession. Well, you've just taken away law enforcement's entire bargaining position. Right now customers have to cooperate under threat of prosecution. What we want is for law enforcement to concentrate their resources on the producers without taking away the tools they need to do so effectively. The key is structuring the law and the incentives for law enforcement so that they have to go after the producers and not guys who accidentally download it. Maybe force prosecutors to demonstrate the possessor had intentionally downloaded it or has viewed it multiple times. Or offer institutional incentives for going after the big fish. Well again, it is a lot easier to prove possession and distribution than it is production.
5Strange7
So most of them avoid law enforcement entirely for fear of getting 'v&' instead of providing tips out of concern for the welfare of the children. I mean, once you've cooperated, what's law enforcement's incentive not to prosecute you? Justice is not necessarily best served by making the cop's job easier. So long as law enforcement is rewarded by the conviction, they'll go for low-hanging fruit: that is, the people who aren't protecting themselves because they think they're not doing anything wrong. Broad laws that anyone could violate unwittingly, and which the police enforce at their own discretion? That's not a necessary tool for some higher purpose, it's overwhelming power waiting to be abused.
-1Jack
You know what prosecutorial immunity is, right? Also, I don't know why you think pedophiles are itching to come forward with tips on their porn suppliers. If they were, there are always ways to make anonymous tips to the police. For the third time: make prosecuting the low-hanging fruit more difficult and lower the incentives to do so. That is my position. You don't have to handcuff law enforcement's investigation of the producers to do this. Edit: One other way to do this that I haven't mentioned: legalize possession of a small amount of child pornography, or make small amounts a misdemeanor.
1fburnaby
I'll attempt a counter-example. It's not definitive, but it at least makes me question your notion: does a spy want to know the purpose of his mission? What if (s)he gets caught? Is it easier for them to get through an interrogation not knowing the answers to the questions?
0Document
At first I thought you were saying that you wanted the comments to be flat rather than threaded; I figured that that was because you wanted inbox notification of each new reply. Then I saw you replying to replies yourself, so I was less sure. I take it you actually mean that (for example) I shouldn't include remarks on the main topic in this comment, or vice versa?
0jimrandomh
What would an unfriendly superintelligence that wanted to hack your brain say to you? Does knowing the answer to that have positive value in your utility function? That said, I do think information is a terminal value, at least in my utility function; but I think an exception must be made for mind-damaging truths, if such truths exist.
0FAWS
I don't think the idea of a conditional terminal value is very useful. If information is a terminal value for me, I'd want to know what the unfriendly superintelligence would say; but unless it's my only terminal value, and unless I think the result would have no influence on other information gathering, there would be other considerations speaking against learning that particular piece of information, probably outweighing it. There's no need to make any exceptions for mind-damaging truths, because to the extent that mind damage is a bad thing according to my terminal values, it will already be accounted for anyway.
0Amanojack
First of all, I recommend clearing away the moral language (value, good, and must) unless you want certain perennial moral controversies to muddy the waters. Example phrasings of the case you may be trying to make: I suppose this is true. If you've ever done a jigsaw puzzle, you can probably think of a counterexample to this.
6Nick_Tarleton
You've never done a jigsaw puzzle using optimal Bayesian methods.
0wedrifid
(Or he just believes you probably haven't!)
3[anonymous]
Here's a counterexample. There is an urn filled with lots of balls, each colored either red or blue. You think there's a 40% chance that the next ball you pull out will be red. You pull out a ball, and it's red; you put it back in and shake the urn. Now you think there's a 60% chance that the next ball you pull out will be red, and you announce this fact and bet on it. You pull out one more ball, and it's blue. If you hadn't seen that piece of evidence, your prediction would have been more accurate.
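One set of hypothetical numbers makes this counterexample exact: suppose the urn is either "mostly red" (80% red) or "mostly blue" (20% red), with prior 1/3 on "mostly red". Then the prior prediction is 40% red, and after one red draw the posterior prediction is exactly 60% red. A sketch:

```python
def update_on_red(prior_mostly_red, red_if_mostly_red=0.8, red_if_mostly_blue=0.2):
    """One Bayesian update on a two-hypothesis urn after drawing a red ball."""
    p_red = (prior_mostly_red * red_if_mostly_red
             + (1 - prior_mostly_red) * red_if_mostly_blue)
    posterior_mostly_red = prior_mostly_red * red_if_mostly_red / p_red
    return p_red, posterior_mostly_red

p_red_before, posterior = update_on_red(1 / 3)
p_red_after, _ = update_on_red(posterior)
print(round(p_red_before, 2))  # 0.4 - prediction before seeing the red draw
print(round(p_red_after, 2))   # 0.6 - prediction after; yet the next ball is blue
```

The update itself is correct; the point is that a correct update on an unlucky sample can still make the very next prediction worse.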
0CannibalSmith
We cannot know what information we might need in the future, therefore we must gather as much as we can and preserve all of it. Especially since much (most?) of it cannot be recreated on demand.
3Matt_Simpson
That's not an argument for information as a terminal value since it depends on the consequences of information, but it's a decent argument for gathering and preserving information.
0CannibalSmith
If that distinction exists, my three formulations are not identical. Yes?
2Document
Not sure. "Inherently good" could mean "good for its own sake, not good for a purpose", but it seems like it could also mean "by its very nature, it's (instrumentally) good". And the fact that you said "gather or preserve" makes me want to come up with a value system that only cares about gathering or only cares about preserving. I'm not sure one couldn't find similarly sized semantic holes in anything, but there they are regardless.
0Matt_Simpson
Your 3 formulations should be identical. Here's your argument: My first thought when I read this is, Why are we gathering information? The answer? Because we may need it in the future. What will we need it for? Presumably to attain some other (terminal) end, since if information was a terminal end the argument wouldn't be "we may need it in the future," it would be "we need it." Maybe I am just misunderstanding you?