Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open Thread, January 15-31, 2012

9 Post author: OpenThreadGuy 16 January 2012 12:56AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

(I plan to make these threads from now on. Downvote if you disapprove. If I miss one, feel free to do it yourself.)

Comments (240)

Comment author: NancyLebovitz 17 January 2012 06:36:14AM *  8 points [-]

Some thinking is easier in privacy.

In a fascinating study known as the Coding War Games, consultants Tom DeMarco and Timothy Lister compared the work of more than 600 computer programmers at 92 companies. They found that people from the same companies performed at roughly the same level — but that there was an enormous performance gap between organizations. What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed. Sixty-two percent of the best performers said their workspace was sufficiently private compared with only 19 percent of the worst performers. Seventy-six percent of the worst programmers but only 38 percent of the best said that they were often interrupted needlessly.

These are interesting results, but the research was from 1985--"Programmer Performance and the Effects of the Workplace," in Proceedings of the 8th International Conference on Software Engineering, August 1985. It seems unlikely that things have changed, but I don't know whether the results have been replicated.

Comment author: saturn 17 January 2012 09:16:59AM 1 point [-]

I don't know of any studies, but there are many anecdotal reports about this.

Comment author: gwern 28 January 2012 06:09:53PM 0 points [-]

Worth noting: this is correlational, not causal.

Comment author: gwern 28 January 2012 06:08:03PM 6 points [-]

As part of my work for Luke, I looked into price projections for whole-genome sequencing (as opposed to SNP genotyping, which I expect to pass the $100 mark by 2014). The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical it will be <$100 by 2020.


Starting point: $4k in bulk right now, from Illumina http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle_print&ID=1561106 (I ran into a ref saying knomeBASE did <$5k sequencing - http://hmg.oxfordjournals.org/content/20/R2/R132.full#xref-ref-106-1 - but after thoroughly looking through their site, I'm fairly sure what they are actually offering is interpretation of a sequence, possibly done by Illumina.)
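As a back-of-the-envelope sanity check on the projections below, one can extrapolate the historical decline, assuming (and this is only an assumption) that the roughly constant halving time continues; the anchors are the ~$4k 2012 bulk price above and the ~$28.7M 2004 figure Topol cites later in this thread:

```python
import math

# Anchor prices quoted in this thread (assumptions for this sketch):
# ~$28.7M per genome in 2004 (Topol), ~$4k in bulk in 2012 (Illumina).
c0, y0 = 28.7e6, 2004
c1, y1 = 4_000, 2012

halvings = math.log2(c0 / c1)
halving_time = (y1 - y0) / halvings  # years per factor-of-2 price drop

def year_at(price):
    """Year a given price is reached if the 2004-2012 rate continues."""
    return y1 + halving_time * math.log2(c1 / price)

print(f"halving time: {halving_time:.2f} years")
print(f"$1000 genome: ~{year_at(1000):.0f}")
print(f"$100 genome:  ~{year_at(100):.0f}")
```

Naive extrapolation lands the $1000 genome around 2013 and the $100 genome around 2015, i.e. close to the optimists quoted below; the skeptics' point is precisely that the past rate may not continue.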

Projections: "The advent of personal genome sequencing" Drmanac http://wch.org.au/emplibrary/ccch/CPH_D5_L4_Genome_Sequencing.pdf Genetics in Medicine (http://journals.lww.com/geneticsinmedicine/Abstract/2011/03000/The_advent_of_personal_genome_sequencing.4.aspx)

Experts predict that the consumer price to sequence a complete human genome will drop to $1000 in 2014.[9] In our opinion, this will be achieved with existing DNA nanoarray technologies. We further believe that the existing DNA nanoarray technologies, with expected engineering advances, are capable of driving the cost per genome to significantly below $1000 in the following years. By 2020, with improved technology and reduced cost, we may expect tens of millions of personal genomes to be sequenced worldwide....We expect that advances in electronics will allow permanent lifelong storage of personal genetic variants (1 GB/person) for less than $10. [see also my previous discussion of Kryder's law]

cite 9 = Metzker ML. Sequencing technologies—the next generation. Nature Rev. Genet. 2010;11:31–46 http://eebweb.arizona.edu/nachman/Further%20Interest/Metzker_2009.pdf Confusingly, on pg. 44:

Closing the gap between $10,000 and $1,000 will be the greatest challenge for current technology developers, and the $1,000 genome might result from as-yet-undeveloped innovations. A timetable for the $1,000 draft genome is difficult to predict, and even more uncertain is the delivery of a high-quality, finished-grade personal genome.

Where does 2014 come from? I suggest attributing it to Drmanac and not Metzker. (I've emailed him to ask where his 2014 came from.) Drmanac is commercially involved and seems very optimistic; compare his answers in http://www.clinchem.org/content/55/12/2088.full to the other experts. But there is general agreement it is possible (see also paragraph 3 in https://www.sciencemag.org/content/311/5767/1544.full ).

Here's a citation for 2013: http://content.usatoday.com/communities/sciencefair/post/2011/07/race-to-1000-human-genome-machine-intensifies/1 discussing the new sequencing device in http://www.nature.com/nature/journal/v475/n7356/full/nature10242.html (more media coverage: http://www.nature.com/news/2011/110720/full/475278a.html )

In e-mailed comments to USA TODAY, [Jonathan] Rothberg confirms his team has sequenced Moore's genes:

...Much like computing, sequencing directly on an ion chip enables the rapid and continual increase in speed and reduction in cost. At the rate of Ion's current technology improvements we will reach the $1,000 human genome in 2013 and continue to drop the cost from there.

A guy from GenomeQuest (http://www.crunchbase.com/company/genomequest) agrees with Rothberg, saying $100 (not $1000) will be hit within a decade, and $1000 by July 2013: http://blogs.discovermagazine.com/gnxp/2010/07/genomic-liftoff/#comment-27818

As well: Snyder M, Du J, Gerstein M. Personal genome sequencing: current approaches and challenges http://stanford.edu/class/gene210/files/readings/Snyder_GenesDev_2010.pdf - pg 3 has a nice graph of the super-exponential price decrease (left, blue) vs total number of sequenced genomes (right, red). Probably don't need that though for a footnote.

A promising lead would be journalist Kevin Davies's The $1,000 Genome: The Revolution in DNA Sequencing and the New Era of Personalized Medicine. I read a few reviews, including one in Nature, but unfortunately none of them quotes specific dates for price points, and the book is not on library.nu for me to search.

Hopefully that is enough for sequencing! Phew. (Something of an echo chamber.)

Comment author: gwern 15 June 2012 07:27:55PM 0 points [-]

"It beats Moore’s Law with a stick,” says [Raymond] McCauley, who believes that the $100 genome is only three years away.

--"Secrets of my DNA", Wired March 2011 (so 2014?)

Comment author: gwern 17 March 2013 02:57:39PM 1 point [-]

BGI quotes prices as low as $3,000 to sequence a person’s DNA. ...Zhang Yong, 33, a BGI senior researcher, predicts that within the next decade the cost of sequencing a human genome will fall to just $200 or $300

Inside China’s Genome Factory, Technology Review

Comment author: gwern 26 June 2013 06:59:51PM 0 points [-]

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3663089/

A few months ago, the National Human Genome Research Institute (NHGRI) updated their analysis of the cost of sequencing and, for the first time since records began, it got more expensive (Figure 1). You know the graph, the one which looks like the profile of an aqua-park waterslide, a gradual incline followed by a precipitous drop as next generation sequencing kicks in. Well, now the waterslide ends with a treacherous upward flick!

We have become so comfortable in the knowledge that DNA sequencing reduces in cost at a rate that makes each run cheaper than the last, that some of the scientific community are in denial. I have even seen people present this graph at meetings and explain how sequencing is getting cheaper every day despite the fact they are standing in front of a 10 foot PowerPoint slide showing clearly that this is not true. In fact, the cost of sequencing a human genome increased by $717 (an increase of 12%) between April 2012 and October 2012. This month the new figures showed that the price fell again, but the point remains - you can forget Moore's law! Some of you will think this merely means you need to replace the opening slide in your PowerPoint deck and tone down some of the rhetoric around $10 human genomes and the advent of free sequencing. I, however, think that the long-term ramifications may be more profound...

'But...', I hear you scream, '...this is a temporary blip. Soon we will be saved by new cool technology that will plug into my laptop and sequence a genome for $10 in an hour'. In reality, is this just something we simply want to believe? There really is no reason to think that sequencing methodology is about to undergo a revolution in the near future. I am always amazed at the self-inflicted hype that follows any hint of a story where some company has come across a new way of sequencing that is going to turn all our Illumina kits into oversized doorstops. Often this comes not from the companies themselves but the scientists who are so desperate to buy them. The hype is usually followed by hyper-critical twitter and blog commentaries when the machine in question does not appear to do what we want it to (see this revealing interview with Oxford Nanopore's Clive Brown), in a cycle that has repeated itself at least three times in the last 5 years. I begin to wonder why we don't learn from history.

http://biomickwatson.wordpress.com/2013/05/15/a-pedantic-look-at-the-cost-of-sequencing/

This graph may or may not tell a different story. The story is that yes, sequencing costs are coming down; but since late 2007, early 2008 the rate of change of that reduction has been following an upwards trend i.e. over time, the reduction in cost from one period to the next has been increasing.

http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/

I’m going to try and lay this out in a completely technology neutral way, though I will have to mention different sequencing technologies at some point. However, I am pretty convinced of this one fact: there is not a single sequencing technology out today that can deliver 30X of a human genome for anywhere near $1000....

None of the current sequencing companies can deliver 30x of a human genome for less than $1000 reagent costs (using list prices). Yes, that’s right – even ignoring points 2-5, even just buying reagents, the cost is greater than $1000 for a 30x human genome. Now, it’s possible Broad, BGI, Sanger etc. can get below $1000 for the reagents due to sheer economies of scale and special deals they have with sequencing companies – but then remember they have to add in those extra charges (2-5) above. Obviously, Illumina don’t charge themselves list price for reagents, and nor do LifeTech, so it’s possible that they themselves can sequence 30x human genomes and just pay whatever it costs to make the reagents and build the machines; but this is not reality and it’s not really how sequencing is done today.

http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/#comment-2031

In my recent talk at the NIH symposium to mark the 10th anniversary of the HGP: http://bit.ly/KDHGP10 … I quoted a personal communication from Illumina CSO David Bentley, who says that in batch mode, the HiSeq can currently sequence five human genomes (presumably to 30x or higher) for a reagents list price of $25,000 — or $5,000/genome. With negotiated discounts (or if you want to estimate the wholesale cost), take 1/3 or 1/2 of that figure. So for what it’s worth, we might be edging close to the $2,500 genome, but that’s as good as it gets for now.

http://www.utsandiego.com/news/2013/Jun/19/1000-genome-mirage/2/?#article-copy

"He's right," Topol said. "If you get a bunch of genomes done at Illumina, you can get 'em for $2,500 each -- today." But in 2004, Topol said, sequencing a human genome cost $28.7 million. "Now we already have a 99.8 percent plus price reduction," Topol said. "We don't have that much further to go to get from $2,500 to $1,000. Most everyone would forecast that in a couple of years we will get to that number, with deep coverage of 40-fold, so it's accurate. I think it's clearly within reach now." "And I want to take it a step further," Topol said. "It's going to go well below $1,000 a genome in the future."...Topol agreed with Watson's take on the PeerJ statement [$100]. "That's a little far-fetched," he said. But incremental progress in getting the price lower once that $1,000 mark has been reached will continue.
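Topol's figures check out, and are worth making explicit (a quick arithmetic sketch using only the numbers in his quote):

```python
import math

# Figures from Topol's quote above: ~$28.7M per genome in 2004, ~$2,500 today.
past, current = 28.7e6, 2_500

reduction = 1 - current / past
print(f"total reduction so far: {reduction:.4%}")  # "99.8 percent plus" indeed

# How far the price still has to fall, measured in factor-of-2 steps:
print(f"to $1000: {math.log2(current / 1_000):.1f} more halvings")
print(f"to $100:  {math.log2(current / 100):.1f} more halvings")
```

The remaining drop from $2,500 to $1,000 is only about 1.3 halvings, versus the ~13.5 already achieved since 2004, which is why "it's clearly within reach" is plausible even if the $100 genome is not.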

Comment author: Konkvistador 22 January 2012 06:14:32PM *  5 points [-]

When it comes to accepting evolution, gut feelings trump fact

“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts...

In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For the subjects of this study, belonging to a religion had almost no additional impact on beliefs about evolution, beyond subjects’ feelings of certainty....

For teaching evolution, the researchers suggest using exercises that allow students to become aware of their brains’ dual processing. Knowing that sometimes what their “gut” says is in conflict with what their “head” knows may help students judge ideas on their merits.

Seems to be classic System 1 vs. System 2. Also, religion's small impact didn't surprise me.

Comment author: MileyCyrus 16 January 2012 05:07:31AM 5 points [-]

What are some efficient ways to signal intelligence? Earning an advanced degree from a selective university seems rather cost intensive.

Comment author: dbaupp 16 January 2012 05:31:15AM *  11 points [-]

In a Dark-Arts-y way, glasses?

(A brief search indicates there are several studies that suggest wearing glasses increases perceived intelligence (e.g. this and this (paywall)), but there are also some that suggest that it has no effect (e.g. this (abstract only)))

Comment author: Jayson_Virissimo 17 January 2012 09:57:44AM *  4 points [-]

There definitely exists a stereotype that people that wear glasses are more intelligent. The cause of this common stereotype is probably that people that wear glasses are more intelligent.

Comment author: multifoliaterose 18 January 2012 10:34:23PM 3 points [-]

But what's the purported effect size?

Comment author: asr 16 January 2012 06:48:05AM *  8 points [-]

The best ways to signal intelligence are to write, say, or do something impressive. The details depend on the target audience. If you're trying to impress employers, do something hard and worthwhile, or write something good and get it published. If you're a techie and trying to impress techies, writing neat software (or adding useful features to existing software) is a way to go.

If you are asking about signalling intelligence in social situations, I suggest reading interesting books and thinking about them. Often, people use "does this person read serious books and think about them" as a filter for smarts.

Comment author: Grognor 16 January 2012 07:13:52AM *  11 points [-]

I figured someone would have said this by now, and it seems obvious to me, but I'm going to keep in mind the general principle that what seems obvious to me may not be obvious to others.

You said efficient ways to signal intelligence. Any signaling worth its salt is going to have costs, and the magnitude of these costs may matter less than their direction. So one way to signal intelligence is to act awkwardly, make obscure references, etc.; in other words, look nerdy. You optimize for seeming smart at the cost of signaling poor social skills.

Some less costly ones that vary intensely by region, situation, personality of those around you, and lots and lots of things, with intended signal in parentheses:

  • Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)
  • Talk quickly.
  • Quote famous people all the time. (He quotes people; therefore he is well-read; therefore he is intelligent.)
  • In general, do things quickly. Eating, walking, reacting to fire alarms. (Smart people have less time for sitting around.)
  • During conversations, make fun of beliefs that you mutually do not hold. Being clever about it is better, but I don't know how to learn cleverness. If you already have it, good. (He is part of my tribe and one of my allies. Therefore, because of the affect heuristic, he must be smart as well.)
  • Learn a little bit of linguistics.
  • Tutor people in things. (You have to be smart to teach other people things.)

It was not intentional that all of these related to conversation. Maybe that's not a coincidence and I've been unconsciously optimizing for seeming smart my entire life.

Comment author: faul_sname 16 January 2012 09:42:50AM 4 points [-]

Tutor people in things. (You have to be smart to teach other people things.)

Definitely this. Tutoring is a very strong signal of intelligence, but is really a matter of learned technique. I was able to tutor effectively in Statistics before I had taken any classes or fully understood the material by using tutoring techniques I had learned by teaching other subjects (notably Physics). The most common question I found myself asking was "what rule do we apply in situations like this," a question you do not actually need to know the subject material to ask.

Comment author: dbaupp 16 January 2012 09:33:13AM 2 points [-]

Learn a little bit of linguistics.

I'd be interested if you were to expand on this.

Comment author: Grognor 16 January 2012 06:50:04PM 1 point [-]

It has worked for me. People are impressed when I point out their own sentence structure, things like how many phonemes are in the word "she", etc. I don't know if this also helps signal intelligence, but I also rarely get confused by things people say. Instead of saying, "What?" I say "Oh, I get it. You're trying to say X even though you actually said Y."

Also, I guess it seems like a subject only smart people are interested in. And not even most of them. Guess I got lucky in that regard.

Comment author: Emily 18 January 2012 09:45:30PM 1 point [-]

I'm not the OP of that comment, but as a linguistics student I can corroborate. I think there are a couple of reasons that occasionally throwing a relevant piece of linguistic information into a conversation can produce the smartness impression. Firstly, conversations never fail to involve language, so opportunities to comment on language are practically constant if you're attuned to noticing interesting bits and pieces. This means that even occasional relevant comments mean you're saying something interesting and relevant quite frequently. This is an advantage that linguistics has over, say, marine biology. Secondly, I have the impression that most people are vaguely interested in language and under the equally vague impression that they know just how it works -- after all, they use it all the time, right? So even imparting a mundane little piece of extremely basic linguistics can create the impression that you're delivering serious cutting-edge expert-level stuff: after all, your listener didn't know that, and yet they obviously know a pretty decent amount about language!

Comment author: paper-machine 16 January 2012 07:40:28AM 1 point [-]

Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)

I perhaps should work on this one. It might improve my signal/noise ratio.

Your list is quite wisely written.

Comment author: [deleted] 16 January 2012 05:33:11AM 8 points [-]

Here's a few suggestions, some sillier than others, in no particular order:

  • Join organizations like Mensa
  • Look good
  • Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.
  • If your particular field has certifications you can get instead of a degree, these may be more cost-effective
  • Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)
  • Learn other languages--doing so not only makes you more employable, it can be a big status boost

Comment author: Prismattic 16 January 2012 05:55:48AM *  9 points [-]

Much depends on the audience one is signalling to.

Join organizations like Mensa

To stupid or average people, this is a signal of intelligence. To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.

Again this works as a signal to people who are at a remove from these activities, because the average player is smarter than the average human. People who themselves actually play, however, will have encountered many people who happen to be good at certain specific things that lend themselves to abstract strategy games, but are otherwise rather dim.

Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)

Agree with this one. It's especially useful because it has the opposite sorting effect of the previous two. Other intelligent people will pick up on it as a sign of intelligence. Conspicuously unintelligent people will fail to get it.

Learn other languages--doing so not only makes you more employable, it can be a big status boost

This one seems like it might vary by geography. It's a lot less of a distinction for a European than an American. In the US, the status signal from "speaks English and Spanish" is different from the status signal from "speaks English and some language other than Spanish".

Comment author: Viliam_Bur 16 January 2012 09:53:27AM 8 points [-]

To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

My experience seems to support this. The desire to signal intelligence is often so strong that it eliminates much of the benefit gained from high intelligence. It is almost impossible to have a serious discussion about anything, because people habitually disagree just to signal higher intelligence, and immediately jump to topics that are better for signalling. Rationality and mathematics are boring, conspiracy theories are welcome. And of course, Einstein was wrong; an extraordinarily intelligent person can see obvious flaws in the theory of relativity, even without knowing anything about physics.

Mensa membership will not impress people who want to become stronger and have some experience with Mensa. Many interesting people take the Mensa entry test, come to the first Mensa meeting... and then run away.

Comment author: Normal_Anomaly 16 January 2012 08:19:31PM 2 points [-]

My experience with Mensa was similar to yours. I joined, read a couple issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine they weren't worth the time. There was far less original thought in Mensa than I had expected.

Comment author: khafra 17 January 2012 03:27:26PM 5 points [-]

I joined, read a couple issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine they weren't worth the time. There was far less original thought in Mensa than I had expected.

Saying this about Mensa is a much better way to signal intelligence to other intelligent people than actually being a Mensa member.

Comment author: TheOtherDave 17 January 2012 03:50:57PM 3 points [-]

Well, it's worth being a little careful here. Saying dismissive things about an outgroup is an effective way to present myself as a higher-status member of the ingroup; that works as well for "us intelligent people" and "those Mensa dweebs" as any other ingroup/outgroup pairing. Which makes it hard to tell whether I'm really signalling intelligence at all.

Comment author: Normal_Anomaly 17 January 2012 10:16:03PM 1 point [-]

Yes, and I knew that when I said it. But it's also true.

Comment author: Viliam_Bur 17 January 2012 10:55:33AM *  5 points [-]

Right now my question is: Is abandoning Mensa the most useful thing, or can it be used to increase rationality somehow?

Seems to me that the selection process in Mensa has two steps. First, one must decide to take a Mensa entry test. Second, one must decide to be a Mensa member, despite seeing that Mensa is only good for signalling -- this is sometimes not so obvious to a non-member. For example, when I was 15, I imagined that Mensa would be something like... I guess like I now imagine the LW meetups. I expected to find people there who are trying to win, not only to signal intelligence to other members.

So I conclude that people who pass the first filter are better material than people who pass both filters. A good strategy could be this: Start a local rationalist group. Become a member of Mensa, so you know when Mensa does tests. Prepare a flyer describing your rationalist group and give it to everyone that completes the Mensa test -- they will probably come to the first following Mensa meeting, but many of them will not appear again.

This is what I want to do, when I overcome my laziness. Also I will give a talk in Mensa about rationality and LW, though (judging by reactions on our facebook group) most members will not be really interested.

Comment author: Manfred 16 January 2012 04:29:34PM 2 points [-]

Be interested in lots of things that other people might not find interesting. I think it's the way that I personally signal intelligence the most. For example, if someone has a herpolhode on their desk, I try to ask intelligent questions about it. Or if the rain on the window is dripping in nice straight lines because of the screen occasionally pressing against the glass, notice that.

Comment author: sixes_and_sevens 16 January 2012 02:38:55PM 4 points [-]

Do something prohibitively difficult that not a lot of people are competent enough to do.

Comment author: amcknight 17 January 2012 12:49:16AM 4 points [-]

Of course, make sure it's something people "know" is hard, like rocket science.

Comment author: D_Alex 16 January 2012 08:07:39AM 1 point [-]

I have a different perspective on this compared with other commenters... Intelligence is very hard to fake.

What's the best way to signal guitar playing skills? Play the guitar, and play it well!

The efficient way to signal intelligence is: to do worthwhile things, intelligently!

Comment author: faul_sname 16 January 2012 08:43:29AM 1 point [-]

How can you tell if someone is doing things intelligently?

Comment author: D_Alex 16 January 2012 09:02:12AM 1 point [-]

Fair question, but difficult to answer in brief, I might try to do this later. For now let me answer with a couple of questions:

How can you tell if someone is playing a guitar well?

In general, can YOU tell the difference between someone doing things intelligently, and doing things unintelligently?

Comment author: Viliam_Bur 16 January 2012 10:00:18AM 0 points [-]

How can you tell if someone is playing a guitar well?

a) Listen to them playing.

b) Do they have concerts, CDs, fans, other symbols of "being a successful guitar player"? Do they write blogs or books about guitar playing? Do people write guitar-playing-related blogs and books about them?

The second option is less reliable and easier to fake, but it is an option that even a deaf person can use.

Comment author: Solvent 17 January 2012 06:10:59AM 3 points [-]

a) Listen to them playing.

Speaking as a guitar and piano player: I can do things on guitar and piano that are fairly easy, but look very impressive to someone who doesn't play the instrument. You actually need to play an instrument before you can judge accurately how good someone is.

(Obviously, it's pretty obvious if someone is distinctly bad. But distinguishing different levels of "good" is hard.)

Comment author: faul_sname 16 January 2012 09:32:00AM *  0 points [-]

First question: A good guitar player keeps a steady rhythm and hits the appropriate notes with appropriate volume and tone. At a higher level, they improvise in a way that sounds good. Sounding good seems to involve sticking to a standard scale with only a few deviations, and varying the rhythms. At the level above that, I really don't know.

Second question: I really don't know, at least that generally. I think I may use proxies such as the ability to find novel (good) solutions to problems and draw on multiple domains, then aggregate them into one linear value that I call "intelligence". I am probably also influenced by the person's attractiveness and how close their solution is to the one I would have proposed. I would definitely like your take on this as well.

Comment author: multifoliaterose 18 January 2012 01:32:58PM 0 points [-]

Why are you asking?

Comment author: Prismattic 16 January 2012 05:58:49AM *  0 points [-]

Earning an advanced degree from a selective university seems rather cost intensive.

Depending on the selective university, an advanced degree might not cost much at all. Harvard, for example, only recently started paying the way of its undergraduates, but it has paid the way of its graduate students for a long time.

Comment author: grouchymusicologist 16 January 2012 06:44:05AM 10 points [-]

True, but free tuition or not, it's plenty costly in terms of opportunity.

(This is true to an almost hilarious extent if you're a humanities scholar like me: I'm not getting those ten (!!!!!!!) years of my life back.)

Comment author: Prismattic 16 January 2012 06:51:36AM 3 points [-]

Is that the reason for "grouchy"musicologist?

Comment author: grouchymusicologist 16 January 2012 06:57:41AM 2 points [-]

Haha, no. I'm only grouchy because people occasionally say ill-informed things about musicology. Other than that, I really like my job and my chosen field. I rarely think I'd be much happier if I had chosen to pursue some lucrative but non-musicological career.

Comment author: Solvent 17 January 2012 06:13:18AM 1 point [-]

What's it like being a musicologist? What do you spend your days doing?

How many instruments do you play?

What's better out of Mozart's Jupiter Symphony and Holst's Jupiter movement?

Comment author: grouchymusicologist 17 January 2012 07:47:30AM 6 points [-]

Well, I wrote a bit about what musicologists do here. In terms of research areas, I myself am the score-analyzing type of musicologist, so I spend my days analyzing music and writing about my findings. I'm an academic, so teaching is ordinarily a large part of what I do, although this year I have a fellowship that lets me do research full-time. Pseudonymity prevents me from saying more in public about what I research, although I could go into it by PM if you are really interested.

I am (well, was -- I don't play much any more) what I once described as a "low professional-level [classical] pianist." That is, I play classical piano really well by most standards, but would never have gotten famous. At a much lower level, I can also play jazz piano and Baroque harpsichord. I never learned to play organ, and never learned any non-keyboard instruments. Among professional musicologists, I'm pretty much average for both number of instruments I can play and level of skill.

As to pieces about Jupiter, I can only offer you my personal opinion -- being a musicologist doesn't make my musical preferences more valid than yours. Both pieces are great, and I had a special fondness for the Holst when I was a kid (I heard it in a concert hall when I was about 11, and spent the whole 40 minutes grinning hard enough I should have burst a blood vessel). But I'll take the Jupiter Symphony without the slightest hesitation. Here you have one of the greatest works of one of the tiny handful of greatest composers ever, versus an excellent piece by a one-hit wonder among classical composers.

Really, though, I don't much like picking favorites among pieces of music, and always want to preface my answers with "Thank goodness I don't really have to choose!"

Comment author: GabrielDuquette 31 January 2012 07:55:03PM *  0 points [-]

Weird. I, too, was super into "The Planets" when I was 11ish. I also had well-worn cassettes of Copland and a bunch of the Russian composers... and lots of comedy albums. That was pretty much it until grunge.

I blame Carl Stalling.

Comment author: endoself 17 January 2012 12:26:20AM *  4 points [-]

In Marcus Hutter's list of open problems relating to AIXI at hutter1.net/ai/aixiopen.pdf (this is not a link because markdown is behaving strangely), problems 4g and 5i ask what Solomonoff induction and AIXI would do when their environment contains random noise and whether they could still make correct predictions/decisions.

What is this asking that isn't already known? Why doesn't the theorem on the bottom of page 24 of this AIXI paper constitute a solution?

Comment author: rocurley 16 January 2012 07:12:15AM 8 points [-]

I sometimes run into a situation where I see a comment I'm ambivalent about, one that I would normally not vote on. However, this comment also has an extreme vote total, either very high or very low. I would prefer this comment's total to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have. What do you do in this situation?

Comment author: wedrifid 16 January 2012 07:54:46AM 9 points [-]

I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have.

You get to modify the karma rating by one in either direction. Do so in whatever manner seems most desirable to you.

You have too much voting power if you create a sock puppet and vote twice.

Comment author: rocurley 16 January 2012 09:13:24PM 4 points [-]

Do so in whatever manner seems most desirable to you.

This is my attempt to figure out what is most desirable to me. At the moment, I want to do whatever would be the best overall policy if everyone followed it, with "best" here being defined as "resulting in the best lesswrong possible" (with a very complicated definition of best that I don't think I can specify well).

Given that that's what I want, how best to achieve it? The karma system is valuable because it makes more visible posts that are highly upvoted, so it's valuable to the extent that the highest upvoted comments are the best.

It should be noted that only relative karma matters (for sorting within an article), and that the karma of other posts will tend to be rising (most posts wind up with positive karma). There is some number between 0 and 1 (call it x) that represents the expected vote of someone who votes.

Because karma is relative, if you've decided you care enough to vote, you should subtract x from your vote to determine whether it counts as evidence that the post is good or bad. Do you want to vote 1-x, -x, or -1-x? Note that 1-x > 0, and the other two (not voting and downvoting) are less than 0, downvoting by quite a bit. Which of these best corresponds to the sentiment "I liked this but think it's overrated"?
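The arithmetic here can be sketched in a few lines. Note that x is a hypothetical parameter, not a measured value; the 0.6 below is an arbitrary assumption chosen only to illustrate the signs.

```python
def relative_vote(vote, x):
    """Adjust a raw vote by the expected vote x of a random voter,
    so that the result measures evidence about post quality
    rather than the raw karma contribution."""
    return vote - x

# Assumed value: most votes are upvotes, so 0 < x < 1.
x = 0.6

upvote = relative_vote(1, x)    # 1 - x > 0: evidence the post is good
abstain = relative_vote(0, x)   # -x < 0: mild evidence the post is bad
downvote = relative_vote(-1, x) # -1 - x < 0: strong evidence the post is bad
```

On this model, abstaining is not neutral: it sits between an upvote and a downvote, which is the point of the "I liked this but think it's overrated" question.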

Comment author: shminux 16 January 2012 11:53:15PM *  0 points [-]

I roughly follow the following (prioritized) rules:

  1. Up-vote if I want to see more posts like this/down-vote if I don't want to see more posts like this, regardless of the current total.

  2. A comment that I do not feel very strongly about I may up- or down-vote based on what total karma I expect the comment of this kind to deserve.

  3. Very occasionally, I might like or dislike the author for unrelated reasons, and decide to up-/down-vote based on that.

Comment author: Alex_Altair 16 January 2012 06:32:47PM 1 point [-]

I have previously thought that maybe karma should be hidden until after you vote.

But then there's the problem where part of the point of karma is to tell you whether something is worth reading. If karma was hidden until after voting, users would still have their total karma to motivate them, and we could still hide sufficiently negative comments.

Maybe we should hide comment karma before voting, but not article karma?

Comment author: MixedNuts 16 January 2012 01:52:28PM 1 point [-]

You should vote without knowledge of total karma, otherwise it biases comments' karma scores towards 0 (except at extremes, where it creates bandwagon effects). Power doesn't enter into it, though.

Comment author: Manfred 16 January 2012 03:18:41PM 7 points [-]

You're assuming that biasing karma scores towards zero (relative to what they would be before) is bad. Sure, it could be, but I don't see any particular reason why.

Comment author: Solvent 17 January 2012 06:06:44AM 0 points [-]

otherwise it biases comments' karma scores towards 0 (except at extremes, where it creates bandwagon effects)

[citation needed]

Comment author: Konkvistador 20 January 2012 01:12:27PM *  7 points [-]

I'm reading Moldbug's Patchwork and considering it as a replacement for Democracy. I expected it to be a dystopia, but it actually sounds like a neat place to live; it is, however, a scary Eutopia.

Has anyone else read this recently?

Comment author: asr 21 January 2012 01:39:51AM *  1 point [-]

Every time I read Moldbug's stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem.

The reason we have government isn't that we sat down once upon a time in the state of nature to design a political system. We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it.

Uncontrolled violence turns out to be destructive to both the subject of the violence and also the wielder -- it turns out that it's potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord's entourage.

Politically, we don't do welfare spending and criminal justice purely for the fuzzies, or solely because they're ends in themselves. Every so often, we have organized and vigorous protests against the status quo. When this happens, those in power can either appease the protesters, use force to crush the protesters, or try to make them go away quietly without violence. If the protesters are determined enough, this last approach doesn't work. And the government can either use clubs, or buy off the protesters.

It turns out that power structures that become habitually brutal don't do too well. People who get in the habit of using force aren't good neighbors, aren't good police, and aren't trusty subordinates. Bystanders don't want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power -- or else overthrown.

Moldbug talking about cryptographically controlled weapons is missing the point: we don't want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.

Comment author: Konkvistador 21 January 2012 10:12:45AM *  4 points [-]

Bystanders don't want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power -- or else overthrown.

Just because governments often employ violence just before they lose power does not mean that employing violence was the cause of their downfall. Many sick people take medication just before they die. Sure, the violence may do them no good, just as aspirin does no good for a brain tumour, but it is hard to argue from that that the aspirin caused the death. The assertion is particularly dubious since, historically speaking, governments have used a whole lot of violence and it actually seems to have often saved them. Even in modern times we have plenty of examples of this.

This Robin Hanson post seems somewhat relevant:

Once upon a time, poor masses suffered under rich elites. Then one day the poor realized they could revolt, and since then, the rich help the poor, fearing the poor will revolt if they ever feel they suffer too much.

Revolution experts mostly reject this myth; famous revolutions happened after things had gotten better, not worse, for the poor.

Comment author: Konkvistador 21 January 2012 08:58:34AM *  4 points [-]

We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it.

The state can be thought of as a sedentary bandit who, instead of pillaging and burning a village of farmers, extorts them and eventually starts making sure no one else pillages or burns them, since that interferes with the farmers paying him. The roving bandit has no incentive to ensure the sustainability of a particular farming settlement he parasitizes. A stationary bandit, in a sense, farms the settlement.

Government can expediently be defined, beneath all the fluff, as a territorial monopolist of violence. There is a trade-off between government violence used to prevent anyone else from exercising violence and violence by other organized groups. How do we know we are at the optimal balance in a utilitarian sense?

Also, Moldbug doesn't want to do away with government; he wants to propose a different kind of government. And we have in the past had systems of government that were the result of people sitting down and trying to design a political system. To take modern examples (though I could easily pull out several Greek city-states): perhaps the Soviet Union was a bad design, but the United States of America literally took over the world. In any case this demonstrates that new forms of government (not necessarily very good government) can be designed and implemented.

Uncontrolled violence turns out to be destructive to both the subject of the violence and also the wielder --

Government violence is ideally more predictable than the violence it prevents (that's the whole reason we in the West think rule of law is a good idea). Sure, the government has other tools to prevent violence than just violence of its own, but ultimately all law is violence, in the sense of the WHO definition:

...as the intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment or deprivation.

You can easily make the violence painless by, say, sedating a would-be rapist with the stun setting on your laser gun, and you can also eliminate the suffering of imprisoning him by modifying his brain with advanced tools. But a person whose mind is changed without their consent, or who is given a choice between six years' imprisonment and brain modification, has surely just experienced violence according to the above definition.

it turns out that it's potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord's entourage

The point of the cryptographically controlled weapons is that you only need a very small group of people who think being a citizen-soldier is less fun than being paid handsomely by Blackwater.

Comment author: Jayson_Virissimo 21 January 2012 10:53:04AM *  3 points [-]

The reason we have government isn't that we sat down once upon a time in the state of nature to design a political system.

I believe the main thrust of Moldbug's writings is that we should be (but aren't) solving an engineering problem rather than moralizing when we engage in politics (although, he seems to fall into this trap himself what with all his blaming of "leftists" for everything under the sun).

Comment author: taelor 24 January 2012 05:28:54AM 1 point [-]

So much of Moldbug's belief system, and even his constructed identity as an "enlightened reactionary", rides on his complete rejection of whiggish historical narratives; however, he takes this to such an extent that he ends up falling into the very trap that the Whig Interpretation's original critic, Herbert Butterfield, warned of in his seminal work on the subject:

Further, it cannot be said that all faults of bias may be balanced by work that is deliberately written with the opposite bias; for we do not gain true history by merely adding the speech of the prosecution to the speech for the defence; and though there have been Tory – as there have been many Catholic – partisan histories, it is still true that there is no corresponding tendency for the subject itself to lean in this direction; the dice cannot be secretly loaded by virtue of the same kind of original unconscious fallacy.

Comment author: asr 22 January 2012 02:42:10AM 0 points [-]

I believe the main thrust of Moldbug's writings is that we should be (but aren't) solving an engineering problem rather than moralizing when we engage in politics (although, he seems to fall into this trap himself what with all his blaming of "leftists" for everything under the sun).

Except, none of his prescriptions are sensible engineering. Crypto-controlled weapons as a foundation for social order are more science fiction than a sensible design for controlling violence in society. It's much too easy for people to build or buy weapons, or else circumvent the protections. Pinning your whole society on perfect security seems pretty crazy from a design point of view.

Comment author: Jayson_Virissimo 22 January 2012 04:40:34AM 2 points [-]

Right, I don't think he succeeds either. I was merely trying to summarize his project as I think he sees it.

Comment author: gwern 28 January 2012 06:13:25PM 2 points [-]

Every time I read Moldbug's stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem.

Abba Lerner, "The Economics and Politics of Consumer Sovereignty" (1972):

"An economic transaction is a solved political problem... Economics has gained the title Queen of the Social Sciences by choosing solved political problems as its domain."

Comment author: Konkvistador 21 January 2012 10:37:50AM 2 points [-]

Moldbug talking about cryptographically controlled weapons is missing the point: we don't want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.

In raw utility, the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes where the leader being violent towards one person means the leader being violent towards a non-trivial fraction of the population.

Also, are you really that sure that people wouldn't want to live in a Neocameralist system? When you say efficiency, I don't think you realize how emotionally appealing clean streets, good schools, low corruption and near-perfect safety from violent crime or theft are. What would be the price of real estate there? It is not a coincidence that he gives Singapore as an example, a society that uses more violence against its citizens than most Western democracies.

Capital punishment is a legal form of punishment in Singapore. The city-state had the highest per-capita execution rate in the world between 1994 and 1999, estimated by the United Nations to be 1.357 executions per hundred thousand of population during that period.[1] The next highest was Turkmenistan with 0.143 (which is now an abolitionist country). Each execution is carried out by hanging at Changi Prison at dawn on a Friday.

Singapore has had capital punishment since it was a British colony and became independent before the United Kingdom abolished capital punishment. The Singaporean procedure of hanging condemned individuals is heavily influenced by the methods formerly used in Great Britain.

Furthermore, consider this:

Under the Penal Code,[12] the commission of the following offences may result in the death penalty:

  • Waging or attempting to wage war or abetting the waging of war against the Government*
  • Offences against the President’s person (in other words, treason)
  • Mutiny
  • Piracy that endangers life
  • Perjury that results in the execution of an innocent person
  • Murder
  • Abetting the suicide of a person under the age of 18 or an "insane" person
  • Attempted murder by a prisoner serving a life sentence
  • Kidnapping or abducting in order to murder
  • Robbery committed by five or more people that results in the death of a person
  • Drug trafficking
  • Unlawful discharge of firearms, even if nobody gets injured

Internal Security Act

The preamble of the Internal Security Act states that it is an Act to "provide for the internal security of Singapore, preventive detention, the prevention of subversion, the suppression of organised violence against persons and property in specified areas of Singapore, and for matters incidental thereto."[15] The President of Singapore has the power to designate certain security areas. Any person caught in the possession or with someone in possession of firearms, ammunition or explosives in a security area can be punished by death.

Arms Offences Act

The Arms Offences Act regulates firearms offences.[16] Any person who uses or attempts to use arms (Section 4) can face execution, as well as any person who uses or attempts to use arms to commit scheduled offences (Section 4A). These scheduled offences are being a member of an unlawful assembly; rioting; certain offences against the person; abduction or kidnapping; extortion; burglary; robbery; preventing or resisting arrest; vandalism; mischief. Any person who is an accomplice (Section 5) to a person convicted of arms use during a scheduled offence can likewise be executed.

Trafficking in arms (Section 6) is a capital offence in Singapore. Under the Arms Offences Act, trafficking is defined as being in unlawful possession of more than two firearms.

That sounds pretty draconian. But we also know Singapore is a pretty efficiently run government by most metrics. Is Singapore an unpleasant place to live? If so, why do so many people want to live there? If you answer economic opportunities or standard of living or job opportunities, well, then maybe Moldbug does have a point in his very economic approach to it.

Comment author: asr 22 January 2012 02:54:27AM 3 points [-]

In raw utility the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes where the leader being violent towards a person means the leader being violent to a non-trival fraction of the population.

I had assumed we were talking about government for [biased, irrational] humans, not for perfect utilitarians or some other mythical animal. I was saying that routine application of too much violence will upset humans, not that it should upset them.

Also are you really that sure that people wouldn't want to live in a Neocameralist system? When you say efficiency I don't think you realize how emotionally appealing clean streets, good schools, low corruption and perfect safety from violent crime or theft is. What would be the price of real-estate there? It is not a confidence that he gives Singapore as an example, a society that uses more violence against its citizens than most Western democracies.

I'm sure many people would live quite happily in Singapore. Clearly, it works for the Singaporeans. But I don't think that model can be replicated elsewhere automatically, nor do I think Moldbug has a completely clear notion of why it works.

Moldbug talks about splitting up the revenue generation (taxation) from the social-welfare spending. This seems like a recipe for absentee-landlord government. And historically that has worked terribly. The government of Singapore does have to live there, and that's a powerful restraint or feedback mechanism.

In the US (and I believe the rest of the world), the population would like to pay lower taxes, and pointing to the social welfare benefits is the thing that convinces them to pay and tolerate higher rates. I think once the separation between spending and taxation becomes too diffuse, you'll get tax revolts. Remember, we are designing a government for humans here -- short-sighted, biased, irrational, and greedy. So the benefits of unpleasant things have to be made as obvious as possible.

Comment author: Prismattic 21 January 2012 06:38:41PM 1 point [-]

Is Singapore an unpleasant place to live? If so why do so many people want to live there?

I'm open to being corrected on this, since I don't have a good source for Singaporean immigration statistics, but my prior is that people who choose to live in Singapore are coming there from other places that are much more corrupt while also still being rather draconian (China, Malaysia). I'm pretty sure well-educated Westerners could get a well-paying job in Singapore, and the reason few move there is not, in fact, about economics.

Comment author: TimS 20 January 2012 02:58:31PM *  0 points [-]

I've read through the pieces, and I'm struggling to come up with something to say that a reactionary absolutist like Moldbug would find interesting. For example, in the first piece linked, Moldbug says (Let's ignore that the last sentence is questionable as a matter of historical fact):

if you want stable government, accept the status quo as the verdict of history. There is no reason at all to inquire as to why the Bourbons are the Kings of France. The rule is arbitrary. Nonetheless, it is to the benefit of all that this arbitrary rule exists, because obedience to the rightful king is a Schelling point of nonviolent agreement. And better yet, there is no way for a political force to steer the outcome of succession - at least, nothing comparable to the role of the educational authorities in a democracy.

I don't disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there's a reason why mature absolute monarchies (like Louis XIV) invented "divine right of kings." I assert that you can't throw that away (as Moldbug does) and assume that nothing changes about the setup.

My next point would be that there is no reason to expect a government to make a profit. But Moldbug's commitment to accepting the verdict of history means that he wouldn't find this very persuasive. If one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there's every reason to expect that failing to continue winning will lead in short order to your replacement.

Comment author: Konkvistador 20 January 2012 04:43:21PM *  4 points [-]

My next point would be that there is no reason to expect a government to make a profit.

The idea is that it is possible to make the cake bigger by having efficient government. This is why he invokes Laffer curves as relevant concepts.

I find myself sympathetic to this. Say you give some amount of stock to foundations that provide free healthcare to those who can't afford it, preserve natural habitat, etc., matching current GDP spending; if you then come up with a government that is more efficient at generating funds for all these endeavours, you get more spent in an absolute sense on healthcare or environmentalism than otherwise.

If you want to do efficient charity, you don't work in a soup kitchen; you work hard where you have a comparative advantage to earn as much money as possible and then donate it to an efficient charity. Moldbug may not approve, but I actually think his design, with the right ownership structure and some properly designed foundations, might be a much better "goodness generating machine" than a democratic US or EU might ever be.

I also like the idea of being able to live in a society with laws that you can agree with, if you don't like it you just leave and go somewhere where you do agree with them.

The profit motive is transparent and easier to track than "doing good", which as the general goal of government is far less transparent. As a shareholder or employee in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals.

It also has the neat property of seemingly guaranteeing human survival in a Malthusian em future (check out Robin Hanson's writing on this). As long as humans own stocks, it wouldn't matter if they were made obsolete by technology; they could still collect a simply vast amount of rent which would continue growing at a rapid rate for millennia or even millions of years. The real problem is how these humans don't get hacked into being consumption machines by various transhuman service providers but optimize for Eudaimonia.

I don't disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there's a reason why mature absolute monarchies (like Louis XIV) invented "divine right of kings." I assert that you can't throw that away (as Moldbug does) and assume that nothing changes about the setup.

He says robot armies and cryptographically locked weaponry eliminate the need to care about what your population thinks. The technology simply wasn't there in the time of Louis XIV. The governing structure has no need to mess with people's minds in various ways to convince them it is a just system.

And the thing is, while such technology as ubiquitous surveillance or automated soldiers in the hands of government sounds scary, there seems to be no relevant reason at all to think other government types won't have this technology anyway. Worse, the technology to modify your mind in various ways will also be rapidly available (as if current brainwashing and propaganda technology weren't scary enough).

In other words, people living in such a Patchwork instead of the futuristic US or the PRC would trade political freedoms for freedom of thought and association. The last two are not really guaranteed in any sense, but he gives several strong reasons why a sovereign corporation might have an interest in preserving them, reasons that most other states, as self-stabilizing systems, don't seem to have.

But Moldbug's commitment to accepting the verdict of history means that he wouldn't find this very persuasive. if one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there's every reason to expect that failing to continue winning will lead in short order to your replacement.

He basically says that whether we like it or not, might does make right. The USA defeated Nazi Germany not because it was nobler but because it was stronger. This is why Germany is a democracy today. The US defeated the Soviet Union not because it was nobler but because its economy could support more military spending and the Soviet Communist party couldn't or wouldn't use military means as efficiently as, say, the Chinese to stomp out dissenting citizens. This is why Russia is a democracy today. Democracies won because they were better at convincing people that they were legitimate, their economies were better, and as a result of these two they were better at waging war than other forms of government.

He also seems very confident that if his proposed form of government was enacted somewhere it would drastically out-compete all existing ones.

Comment author: TimS 20 January 2012 06:00:39PM 1 point [-]

The profit motive is transparent and easier to track than "doing good", which as the general goal of government is far less transparent. As a citizen in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals.

Many government programs provide services to people who can't afford the value of the service provided. Police and public education provided to inner cities cannot be paid for from the wealth of the beneficiaries. Moldbug complains about the inefficiency of the post office, but that problem is entirely caused by non-efficiency-based commitments like delivering mail to middle-of-nowhere small towns. Without those constraints, USPS looks more like FedEx. That's not a Moldbuggian insight -- everyone who's spent a reasonable amount of time thinking about the issue knows this trade-off.

He says robot armies and cryptographically locked weaponry eliminate the need to care about what your population thinks. The technology simply wasn't there in the time of Louis XIV. The governing structure has no need to mess with people's minds in various ways to convince them it is a just system.

And I simply don't believe this is a likely outcome. There will be times when a realm does not want to use its full arsenal of unobtanium weapons (e.g. to deal with jaywalking and speeding). Anyway, isn't it easier (and more efficient) to use social engineering to suppress populist sedition?

The US defeated the Soviet Union . . .

I mostly agree with your analysis, in that I think we've been lucky in some sense that the good guys won. But doesn't Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon?

He also seems very confident that if his proposed form of government was enacted somewhere it would drastically out-compete all existing ones.

I think it likely that any system of government backed by unobtanium weapons would defeat any existing government system. It's not clear to me that a consent-of-the-governed system backed by the super weapons wouldn't beat Moldbug's absolutist system. And even if that isn't true, why should we want a return to absolutism? It's painfully obvious to me that my rejection of absolutism is the basis of most of my disagreement with Moldbug. I think government should provide "unprofitable" services, and he doesn't.

Comment author: Konkvistador 20 January 2012 06:20:15PM *  3 points [-]

I mostly agree with your analysis, in that I think we've been lucky in some sense that the good guys won.

The good guys did win, because I'm not a National Socialist or a Communist or a Muslim or a Roman. But I don't think we were lucky. "The Gift We Give Tomorrow" should illustrate why I don't think you can say we were "lucky". By definition, anyone who won would have made sure we viewed them as more or less the good guys.

But doesn't Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon?

That wasn't Moldbug's argument about the USSR, it was mine :)

Yes, if I recall right his model goes something like this: The State Department wanted to make the Soviet Union its client, much like say Britain or West Germany or Japan were; it viewed US society and Soviet society as on a converging path, with the Soviet Union's ruling class having its heart in the right place but sometimes going too far -- something it could never do with any truly right-wing regime. This is why it often basically sabotaged the Pentagon's efforts and attempts at client-making. The Cold War, and the Third World in general, would never have been as bloody if the State Department vs. Pentagon civil war by proxy hadn't been going on.

Anyway, isn't it easier (and more efficient) to use social engineering to suppress populist sedition?

Sure, but I don't want to live in a society that takes this logic to its general conclusion. I want to be able to dislike the government I'm living under even if I can't do anything about it. Many people might not want that either, and we may be willing to tolerate living in a different, less wealthy part of patch-land, or to pay higher taxes, for it.

consent-of-the-governed.

What is that? Can we unpack this concept?

I think government should provide "unprofitable" services, and he doesn't.

I'm trying to figure out what you mean by this. Can't we have a "Deliver mail to far-off corners" foundation and give it 0.5% of the stock of Neo-Washington Corp. when the thing takes off? Do you object in principle to government being for profit, or do you just think that nonprofits, funded by shares of the government equal to the GDP fractions they receive right now, couldn't provide services of equal quality? What is the government's mission, then? Which unprofitable services should it provide? All possible ones? Those that have the most eloquent rent-seekers? Those that are "good"? Can you define the mission of government in words that are a bit more specific than universal benevolence? And if democratic government is so good at that, why don't we have seed AI report to Congress for approval of each self-modification? Don't worry, the AI also gets one vote.

Comment author: TimS 20 January 2012 06:45:12PM 0 points [-]

So, Moldbug's Cold War explanation is total nonsense? I think the Cold War follows from WWII even if the USA were ruled by King Truman I and the USSR by King Stalin I. More formally, I think political realism is the empirically best description of international relations.


Anyway, you asked about patches and realms, and I said that governments do the unprofitable. If it were profitable, government wouldn't need to do it. Moldbug seems to say that we ought not to want government to do the unprofitable. That explains his move to a corporate form of government, but it doesn't justify the abandonment of the role that every government in history has decided it wanted to do.

Comment author: Konkvistador 20 January 2012 06:50:53PM *  4 points [-]

You completely missed my point. Who gets to decide what is unprofitable? Who decides which unprofitable things are worth doing? The set of all possible unprofitable activities is vastly larger than the set of profitable ones.

If it were profitable, government wouldn't need to do it.

You do realize we were talking about the USSR just a few seconds ago, right? I guess Russia was a bad place to make cars, so the government had to step in and do that.

Comment author: Konkvistador 20 January 2012 11:29:09PM *  3 points [-]

source

So we can separate California's expenses into two classes: those essential or profitable for California as a business; and those that are unnecessary and wasteful, such as feeding the poor, etc, etc. Let them starve! Who likes poor people, anyway? And as for the blind, bumping into lampposts will help them build character. Everyone needs character.

I am not Steve Jobs (I would be very ill-suited to the management of California), and I have not done the math. But my suspicion is that eliminating these pointless expenses alone - without any other management improvements - would turn California, now drowning in the red, into a hellacious, gold-spewing cash machine. We're talking dividends up the wazoo. Stevifornia will make Gazprom look like a pump-n-dump penny stock.

And suddenly, a solution suggests itself.

What we've done, with our separation of expenses, is to divide California's spending into two classes: essential and discretionary. There is another name for a discretionary payment: a dividend. By spending money to heal the lame, California is in effect paying its profits to the lame. It is just doing it in a very fiscally funky manner.

Thus, we can think of California's spending on good works as profits which are disbursed to an entity responsible for good works. Call it Calgood. If, instead of spending $30 billion per year on good works, California shifts all its good works and good-workers to Calgood, issues Calgood shares that pay dividends of $30 billion per year, and says goodbye, we have the best of both worlds. California is now a lean, mean, cash-printing machine, and the blind can see, the lame can walk, etc, etc.

Furthermore, Calgood's shares are, like any shares, negotiable. They are just financial instruments. If Calgood's investment managers decide it makes financial sense to sell California and buy Google or Gazprom or GE, they can go right ahead.

So without harming the poor, the lame, or the blind at all, we have completely separated California from its charitable activities. The whole idea of government as a doer of good works is thoroughly phony. Charity is good and government is necessary, but there is no essential connection between them.

Of course, in real life, the idea of Calgood is slightly creepy. You'd probably want a few hundred special-purpose charities, which would be much more nimble than big, lumbering Calgood. Of course they would be much, much more nimble than California. Which is kind of the point.

We could go even farther than this. We could issue these charitable shares not to organizations that produce services, but to the actual individuals who consume these services. Why buy canes for the blind? Give the blind money. They can buy their own freakin' canes. If there is anyone who would rather have $100 worth of free services than $100, he's a retard.

Some people are, of course, retards. Excuse me. They suffer from mental disabilities. And one of the many, many things that California, State of Love, does, is to hover over them with its soft, downy wings. Needless to say, Stevifornia will not have soft, downy wings. It will be hard and shiny, with a lot of brushed aluminum. So what will it do with its retards?

My suspicion is that Stevifornia will do something like this. It will classify all humans on its land surface into three categories: guests, residents, and dependents. Guests are just visiting, and will be sent home if they cause any trouble. Residents are ordinary, grownup people who live in California, pay taxes, are responsible for their own behavior, etc. And dependents are persons large or small, young or old, who are not responsible but need to be cared for anyway.

The basic principle of dependency is that a dependent is a ward. He or she surrenders his or her personal independence to some guardian authority. The guardian holds imperium over the dependent, ie, controls the dependent's behavior. In turn the guardian is responsible for the care and feeding of the dependent, and is liable for any torts the dependent commits. As you can see, this design is not my invention.

At present, a large number of Californians are wards of the state itself. Some of them are incompetent, some are dangerous, some are both. Under the same principle as Calgood, these dependents can be spun off into external organizations, along with revenue streams that cover their costs.

Criminals are a special case of dependent. Most criminals are mentally competent, but no more an asset to California than Jew-eating crocodiles. A sensible way to house criminals is to attach them as wards to their revenue streams, but let the criminal himself choose a guardian and switch if he is dissatisfied. I suspect that most criminals would prefer a very different kind of facility than those in which they are housed at present. I also suspect that there are much more efficient ways to make criminal labor pay its own keep.

And I suspect that in Stevifornia, there would be very little crime. In fact, if I were Steve - which of course I'm not - I might well shoot for the goal of providing free crime insurance to my residents. Imagine if you could live in a city where crime was so rare that the government could guarantee restitution for all victims. Imagine what real estate would cost in this city. Imagine how much money its owners would make. Then imagine that Calgood has a third of the shares. It won't just heal the lame, it will give them bionic wings.

This is why choosing the state as the actor that must bear unprofitable activities, regardless of on whose behalf, seems to me less an aesthetic choice, or one that should be based on historical preference, than an economic question that deserves some investigation. The losses of utility over such a trivial preference seem potentially large.

Comment author: Bugmaster 21 January 2012 12:57:41AM 0 points [-]

Charity is good and government is necessary, but there is no essential connection between them.

I suppose it depends on what you see as "charity". For example, free childhood vaccinations can be seen as charity -- after all, why shouldn't people just buy their own vaccines on the free market? -- but having a vaccinated population with herd immunity is, nonetheless, a massive public good. The same can be said of public education, or, yes, canes for blind people.

Comment author: TimS 21 January 2012 12:39:47AM *  0 points [-]

Let's do some [Edit: more abstract] analysis for a moment. [Edit: I suggest that] government is the entity that has been allocated the exclusive right to legitimate violence. And the biggest use of this threat of violence is compulsory taxation. Why do people put up with this threat of violence? As Thomas Hobbes says, to get out of the state of nature and into civil society. (As Moldbug says, land governed by the rule of law is more valuable than ungoverned land).

What does the government do with the money it receives? At core, it provides services to people who don't want them. The quote mentioned letting prisoners choose their jailers. It probably would increase prisoner utility to offer the choice. It might even save money (for example, some prison systems mandate completing a GED if the prisoner lacks a high school degree). But that's not what society wants to do to criminals. If the government uses compulsory power to fund prisons, I assert a requirement that the spending vaguely correspond to taxpayer desires for the use of the funds. (Moldbug seems to disagree.)

Consider another example, the DMV. At root, the government threatens violence if you drive on the road without the required government license, on the belief that the quality of driving improves when skill requirements are imposed, and that the requirements will not (or cannot) be imposed without the threat of violence. It is common knowledge that going to the DMV to get the license is a miserable experience because the lines are long and the workers are not responsive to customer concerns. By contrast, the McDonald's next door is filled with helpful people who quickly provide you with the service desired as efficiently as possible. Why the difference? In part, it is the compulsory nature of the license; in part, it is that the benefits of improved service at the DMV do not accrue to anyone working for or supervising the DMV. See James Wilson's insightful discussion (pages 113-115 & 134-136) (There's also an interesting discussion of the post office on pp. 122-25). I assert that much "inefficiency" in government is simply the deadweight loss inherent in compulsory taxation, which is one part of government Moldbug doesn't want to abolish.

And there's less justification for calling an entity with compulsory tax powers a profit making entity. In what way has Moldbug's Calgood acted in a competitive marketplace? Voting with your feet is just as possible in the United States or Western Europe today as it would be in the patch & realm system.

Comment author: Prismattic 21 January 2012 02:01:57AM 2 points [-]

For the libertarian, government is the entity that has been allocated the exclusive right to legitimate violence.

Max Weber was a libertarian?

Comment author: TimS 21 January 2012 03:42:57AM *  2 points [-]

Hmm. It's embarrassing to admit I'm not as well read as I'd like. I'd only ever heard the concept in libertarian discussions. Thanks.

Comment author: ahartell 16 January 2012 07:05:38PM 3 points [-]

Wow, 66 comments in 1 day. It looks like the idea of having a mid-month open thread was a good one.

Comment author: shminux 16 January 2012 07:51:05PM 2 points [-]

Seems like an indication that a third tier of posts, possibly karma-free, might be a good idea. Something like Stupid Questions, or Beginner's Corner, or Sandbox, or...

Comment author: Armok_GoB 23 January 2012 09:30:02PM 0 points [-]

I've been sporadically trying to get something like this done for AGES. There was even a forum made, but without official endorsement it got like 5 members and died within days.

Comment author: shminux 23 January 2012 09:45:29PM 1 point [-]

If you were to offer a tested contrib to the LW code base, Trike might agree to add it on a trial basis, provided EY&Co approve. Not sure what their policies are.

Comment author: Armok_GoB 23 January 2012 10:05:06PM 0 points [-]

No idea how to do that, and won't for the foreseeable future... I just don't have the attention span for coding or hacking anymore, for medical reasons.

Comment author: Grognor 16 January 2012 01:22:54AM *  3 points [-]

How did Less Wrong get its name?

I have two guesses that are not mutually exclusive, but do not depend on each other:
1. It was Michael Vassar's idea. He is my best guess for who came up with the name.
2. It was inspired by this essay. This is my best guess for what inspired the name.

I don't know if either of these is true, or both, or whatever. I want to know the real answer.

Searching this site and Google has been useless so far.

Comment author: XFrequentist 16 January 2012 03:14:48AM 6 points [-]

EY polled Overcoming Bias readers on their favorite from a list of several options, and "Less Wrong" was the overwhelming winner. Not sure how the options were generated.

Comment author: Grognor 16 January 2012 03:59:50AM 2 points [-]

Source?

Comment author: XFrequentist 16 January 2012 02:56:57PM 2 points [-]

Memory.

Comment author: Solvent 16 January 2012 01:41:01AM 1 point [-]

I remember Eliezer's post announcing LW. He didn't give any explanation of why it was called that, he just said "tentatively titled Less Wrong."

I'd be interested in hearing the answer to this. I suspect it was just a cool name that Eliezer came up with.

Comment author: Konkvistador 16 January 2012 07:16:43PM *  9 points [-]

Straw fascist ... has a point?

Comment author: Multiheaded 25 January 2012 02:57:14PM *  1 point [-]

Yes he does, and it's a Superhappy kind of point... if all the words in this video are taken at face value, "you'll never have to think again" near the end spells "wireheading".

It all comes down to the grand debate between inconvenient, uncertain "freedom" and better-grounded, more stable "happiness"; during our recent conversations, I've been leaning towards the former in some things, and you've been cautioning people about how they might prefer to trade that for the latter. But in the end it's all just skirting our terminal values, so there's certainly no "correct" or "incorrect" conclusion to arrive at.

Comment author: moridinamael 16 January 2012 08:24:06AM 6 points [-]

I've been incubating some thoughts for a while and can't seem to straighten them out enough to make a solid discussion post, much less a front page article. I'll try to put them down here as succinctly as possible. I suspect that I have some biases and blindspots, and I invite constructive criticism. In other cases, I think my priors are simply different than the LW average, because of my life experiences.

Probably because of how I was raised, I've always held the opinion that the path to world-saving should follow these general steps: 1) Obtain a huge amount of personal wealth. 2) Create and/or fund the types of organizations that you believe are likely to save the world.

Other pathways feel (to me) like attempts to be too clever. I admit a likely personal bias here, but it looks like it should be easier to become wealthy by any available means than it is to singlehandedly solve all the world's important problems. If you do not agree with this assessment, I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind. I think that generally speaking very few people are actually trying to become wealthy; most people just try to match their parents' socioeconomic tier and then stop.

Comment author: faul_sname 16 January 2012 08:48:34AM 10 points [-]

Might it not be even more effective to convince others to become ultra-rich and fund the organizations you want to fund? (Actually, this doesn't seem too far off the mark from what SIAI is doing).

Comment author: moridinamael 16 January 2012 07:03:37PM 2 points [-]

I agree completely. I stopped myself short of saying this in my first post because I wanted to keep it succinct. I would go a bit further to suggest that SIAI could be doing more than merely convincing people to take this path. For example, providing trustworthy young rationalists with a financial safety net in order to permit them to take more risks. (One tentative observation I've made is that nobody becomes wealthy without taking risk. The "self-made" wealthy tend to be risk-loving.)

Comment author: faul_sname 16 January 2012 08:25:15PM *  2 points [-]

This is likely worth doing, but I am fairly sure that LWers are for the most part not wealthy enough to create this financial safety net. This seems like a concept that is worth a discussion post: what would LWers do if they had a financial safety net?

Comment author: Gabriel 16 January 2012 03:51:58PM 7 points [-]

I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind.

Any arguments that legitimately push you towards that conclusion should be easily convertible into actual advice about how to become ultra-rich. I think you're underestimating the difficulty of turning vague good-sounding ideas into effective action.

Comment author: moridinamael 16 January 2012 06:55:42PM 0 points [-]

I think there's plenty of available advice on how to become ultra-rich. Just look at the Business section of any bookstore. The problem is that this advice typically takes you from a 0.001% chance of becoming ultra-rich, through sheer lucky accident or lottery, to a 0.1% chance, through strategy and calculated risks.

I'm not arguing that it's not really hard and really improbable. However, folks tend to assess P(becoming wealthy by any means) ~ P(winning the lottery).

Comment author: Anatoly_Vorobey 16 January 2012 08:41:34AM 4 points [-]

I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind. I think that generally speaking very few people are actually trying to become wealthy; most people just try to match their parents' socioeconomic tier and then stop.

What's ultra-rich? This claim isn't saying much unless you quantify it.

Intuitively, I find both your claims - that most people only try to match their parents' tier, and that it's easy to become ultra-rich if you focus on it - to be wrong, but it'd be interesting to see more arguments or evidence in their favor.

Comment author: moridinamael 16 January 2012 07:13:13PM 3 points [-]

What's ultra-rich? This claim isn't saying much unless you quantify it.

I don't know, a billion dollars?

Intuitively, I find both your claims - that most people only try to match their parents' tier, and that it's easy to become ultra-rich if you focus on it - to be wrong, but it'd be interesting to see more arguments or evidence in their favor.

A quick Googling turns up a few papers which suggest that parental expectations largely define a child's level of educational and financial achievement. On a more intuitive level, I can only point out that the clear majority of Americans either don't go to college because their financial ambitions are satisfied by blue collar work, or they go to college in pursuit of a degree with a clear Middle Class career path attached to it. Do you know anybody whose stated goal is to be wealthy, rather than to be a doctor or an engineer or some specific career? I don't.

Comment author: Nick_Roy 16 January 2012 11:42:07AM 2 points [-]

Personally, I figure I'm not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!

Comment author: dbaupp 16 January 2012 09:40:44AM 1 point [-]

it looks like it should be easier to become wealthy by any available means than it is to singlehandedly solve all the world's important problems

Doesn't becoming very wealthy for the purpose of saving the world (and then actually saving the world) count as singlehandedly solving all the problems?

Comment author: moridinamael 16 January 2012 06:57:33PM 1 point [-]

What I was getting at is that the cognitive effort required to actually solve a Millennium problem may be greater than the cognitive effort of making a billion dollars and hiring a thousand mathematicians to work on Millennium problems.

Comment author: faul_sname 16 January 2012 10:35:19AM *  0 points [-]

Who's counting?

Comment author: dbaupp 16 January 2012 11:33:28AM 2 points [-]

Is this a joke? (Serious question, I can't tell. FWIW, I was using "count" as "fit the definition of".)

Comment author: faul_sname 16 January 2012 07:03:22PM *  1 point [-]

Partly, but not entirely. I noticed that I was asking myself seriously if that counted, then wondered why it mattered if it fit the definition.

Comment author: David_Gerard 17 January 2012 11:56:39AM *  8 points [-]

An outside view of LessWrong:

I've had a passing interest in LW, but about 95% of all discussions seem to revolve around a few pet issues (AI, fine-tuning ephemeral utilitarian approaches, etc.) rather than any serious application to real life in policy positions or practical morality. So I was happy to see a few threads about animal rights and the like. I am still surprised, though, that there isn't a greater attempt to bring the LW approach to bear on problems that are relevant in a more quotidian fashion than the looming technological singularity.

As far as I can tell, the reason for this is that in practical matters, "politics is the mind killer" is the mind killer.

Comment author: steven0461 21 January 2012 10:59:31PM 4 points [-]

Is there an argument behind "quotidian" besides "I have a short mental time horizon and don't like to think weird thoughts"?

Why would LessWrong be able to come to a consensus on political subjects? Who would care about such a consensus if it came about?

Comment author: David_Gerard 22 January 2012 09:45:31AM *  3 points [-]

There's already enough geek-libertarian atmosphere that those of us who aren't really notice it. But yeah - as I said, I'm not actually sure it would be a good idea. But the shying away from practical application to that particular part of things people are actually interested in fixing in their daily lives is a noteworthy absence.

Your implied claim that quotidian thoughts are unworthy of attention is ... look, if you want to convince people all of this is actually a good idea, then when someone asks "so, OK. What are the practical applications of reading a million words of philosophy and learning probability maths?", answering "How dare you be so short-termist" strikes me as unlikely to work. I mean, I could be wrong ...

Comment author: J_Taylor 19 January 2012 02:01:23AM 2 points [-]

That's because in practice, "politics is the mind-killer" is the mind-killer.

If it is not too much trouble, could you explain further what you mean by that?

Comment author: David_Gerard 19 January 2012 09:06:18AM 3 points [-]

It seems to be treated as a thought stopper. "Do not go beyond this point." There are good reasons for it, but the behaviour looks just like shying away from a bad thought.

Comment author: steven0461 21 January 2012 11:11:58PM 1 point [-]

The thoughts are there, they're just not expressed on this particular site.

Comment author: J_Taylor 19 January 2012 10:53:53PM 0 points [-]

I always assumed it was more a discussion-stopper, meant to keep people polite and quiet. However, your interpretation is probably better.

Comment author: David_Gerard 19 January 2012 11:08:44PM *  1 point [-]

I assume that was the intention. I'm not actually convinced that it would improve the site for us to dive headfirst into politics ... but it's odd for the stuff discussed here not to be applied even somewhere else, or even in the discussion section, without a flurry of downvotes. There's a strong social norm that even the slightest hint of political discussion is inherently bad and must be avoided.

Comment author: J_Taylor 19 January 2012 11:13:49PM 1 point [-]

It should be noted that RationalWiki is not a website known to be, let us say, lacking in killed minds.

Comment author: David_Gerard 19 January 2012 11:51:13PM 3 points [-]

It is a very silly place.

Comment author: billswift 16 January 2012 02:43:44PM 6 points [-]

The biggest risk of "existential risk mitigation" is that it will be used by the "precautionary principle" zealots to shut down scientific research. There is some evidence that it has been attempted already, see the fear-mongering associated with the startup of the new collider at CERN.

A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.

Comment author: amcknight 17 January 2012 12:55:01AM *  3 points [-]

see the fear-mongering associated with the startup of the new collider at CERN.

Was there really deceptive fear-mongering? That's news to me. Fear was overblown, but I don't think anyone was using it for anything other than what they thought was safety.

A slowdown in new science is the one thing I am certain will increase future risks

I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?

Comment author: TimS 17 January 2012 01:55:07AM 6 points [-]

Fear was overblown, but I don't think anyone was using it for anything other than what they thought was safety.

I'm not highly read on the criticisms, but it wouldn't surprise me if someone vaguely influential invoked the CERN hysteria to argue for reducing the funding of basic research. But I don't have a cite for you.

I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?

It's not clear to me that asteroid impacts, major plagues, or becoming caught in a Malthusian trap are not x-risks on the same order of magnitude as man-made x-risks. (Yes, a Malthusian trap is man-made, but it can't necessarily be prevented by stopping scientific research). And for man-made x-risks, what is the mechanism for "seeing the disaster coming" that isn't essentially doing more research?

Comment author: vi21maobk9vp 17 January 2012 06:09:46AM 1 point [-]

A major plague is not, strictly speaking, an existential risk, although it would cause a lot of suffering. It would delay the Malthusian trap, though...

Comment author: vi21maobk9vp 17 January 2012 06:19:09AM *  3 points [-]

Making science slow down means the best and brightest no longer do their best work in research, which drives them toward, say, optimizing algorithmic trading.

Also, you would want to slow down research into new things and increase research into their implications; but how do you draw the line? Is the fact that a nuclear reactor can go critical and level a nearby city useful cautionary knowledge for building a power plant, or a "stop giving them ideas" thing?

ETA: I do not mean that any of the currently running reactors is that bad; I mean: how would you research nuclear fission in the years 1900-1925 so as to have a safe nuclear power plant before a nuclear bomb?

Comment author: faul_sname 16 January 2012 08:46:48PM *  0 points [-]

Will a halt in new science undercut our ability to deal with those disasters to a greater extent than it makes those disasters less likely? What if the halt were only in certain domains, like genetic engineering of deadly viruses?

Comment author: TimS 17 January 2012 02:01:38AM 4 points [-]

There's no reason to believe that we've reached the optimum point for ending scientific research in any particular field. If we'd stopped medical research in 1900, the 1918 flu pandemic would have been worse. And basic research doesn't have a label telling us how it's going to be useful, yet the evidence is pretty strong that basic research is worth the money.

Regarding your specific example, isn't it worth knowing that the mutations needed to make that virus (1) already exist in nature, and (2) aren't really that far from being naturally incorporated into a single virus? If it took 500 passes instead of 10, we'd be relieved to learn that, right? In short, it seems like this kind of research is likely to be of practical use in treating serious flu viruses in the relatively near future.

Comment author: faul_sname 17 January 2012 02:08:53AM *  0 points [-]

The question is not "Is it useful?" but "Is it useful enough to justify the risk?" In that case, the answer might well be yes, but there will probably be cases in the future where the knowledge is not worth the risk.

Comment author: TimS 17 January 2012 02:59:07AM 2 points [-]

I agree that you have identified the right question. I disagree with you on when the balance shifts. In particular, I think you've picked a bad example of "dangerous" research, because I don't think the virus research you identified is a close question.

(That said, not my downvotes)

Comment author: faul_sname 17 January 2012 05:49:07AM *  1 point [-]

Upon further research, you're right. The research appears not to be as dangerous as it seemed at first glance.

Comment author: tgb 16 January 2012 01:31:21PM 2 points [-]

An unusual answer to Newcomb's problem:

I asked a friend recently what he would do if encountering Newcomb's problem. Instead of giving either of the standard answers, he immediately attempted to create a paradoxical outcome and, as far as I can tell, succeeded. He claims that he would look inside the possibly-a-million-dollars box and do the following: If the box contains a million dollars, take both boxes. If the box contains nothing, take only that box (the empty one).

What would Omega do if he predicted this behavior or is this somehow not allowed in the problem setup?

Comment author: Gabriel 16 January 2012 03:31:25PM 11 points [-]

Not allowed. You get to look into the second box only after you have chosen. And even if both boxes were transparent, the paradox is easily fixed. Omega shouldn't predict what you will do (because that assumes you will ignore the content of the second box, and Omega isn't stupid like that) but what you will do if box B contains a million dollars. Then it would correctly predict that your friend would two-box in that situation, so it wouldn't put the million dollars into the second box, and your friend would take only the empty box, according to his strategy. So yeah.

Comment author: tgb 16 January 2012 04:50:23PM 0 points [-]

That's a nice simple way to reword it. Thanks.

Comment author: Manfred 16 January 2012 04:15:41PM 5 points [-]

There actually is a variant where you're allowed to look into the boxes - Newcomb's problem with transparent boxes.

And yes, it is undefined if you apply the same rules. However, there are two ways to re-define it.

1: Reduce the scope of the inputs. For example, Omega could operate on the following program: "If the contestant would take only one box when the million dollars is there, put the million dollars there." Before, Omega was looking at both situations, and now it's only looking at one.

2: Increase the scope of the program. There are two possible responses in two possible situations for a total of four inputs, so you just need to define Omega's response for all four. It's interesting that Omega now treats you differently depending on your thoughts, not just depending on which box you take, so this changes the genre of the problem.
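That first re-definition can be sketched in a few lines of Python. This is my own toy model, not anything from the thread; the strategy names are made up, and a strategy here is just a function from the observed content of box B to a choice:

```python
# Omega's rule (option 1 above): fill box B iff the contestant
# would one-box upon SEEING the million.
def omega_fills_b(strategy):
    # Omega only simulates the "million is there" branch.
    return strategy(1_000_000) == "one-box"

def payoff(strategy):
    b = 1_000_000 if omega_fills_b(strategy) else 0
    choice = strategy(b)
    return b if choice == "one-box" else b + 1_000

one_boxer  = lambda b: "one-box"
two_boxer  = lambda b: "two-box"
friend     = lambda b: "two-box" if b else "one-box"  # the "paradoxical" strategy
contrarian = lambda b: "one-box" if b else "two-box"

for name, s in [("one_boxer", one_boxer), ("two_boxer", two_boxer),
                ("friend", friend), ("contrarian", contrarian)]:
    print(name, payoff(s))
```

Under this rule the "paradoxical" strategy walks away with the empty box and nothing else, exactly as Gabriel describes: Omega simulates the million-dollars branch, sees two-boxing, leaves box B empty, and the strategy then takes only the empty box.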

Comment author: Viliam_Bur 16 January 2012 10:30:35AM 5 points [-]

At LW, religion is often used as a textbook example of irrationality. To some extent, this is correct. Belief in untestable supernatural is a textbook example of belief in belief and privileging the hypothesis.

However, religion is not only about belief in supernatural. A mainstream church that survives centuries must have a lot of instrumental rationality. It must provide solutions for everyday life. There are centuries of knowledge accumulated in these solutions. Mixed with a lot of irrationality, sure. Many religious people were pretty smart, for example Reverend Thomas Bayes, right? Also in my life I know religious people whose rationality is very high above average.

I am afraid that because of the halo effect we can miss a great source of rationality here. For example, I am pretty sure that there are many successful anti-akrasia tactics written by religious authors. Another example: the list of capital sins, if you replace the religious terminology with something more lesswrongian, is simply a list of mental biases. (Pride = refusing to use an outside view. Gluttony = using a scarcity mindset in an abundance environment.) So I guess we could sometimes reuse the wheel instead of reinventing it.

Comment author: TheOtherDave 16 January 2012 04:15:27PM 6 points [-]

I agree that religious organizations have developed many effective techniques for getting certain kinds of things done, and I endorse adopting those techniques where they achieve goals I endorse.

I'm not sure I agree that this isn't already happening, though.

Can you provide some examples of such techniques that aren't also in use outside of the religious organizations that developed them?

Incidentally, the word "rationality" seems to contribute nothing to this topic beyond in-group signalling effects.

Comment author: Nisan 16 January 2012 04:47:11PM 5 points [-]

Have you seen this sequence? It reveals how the LDS church gets things done: By providing a real community for its members, and making them feel like they belong by giving them responsibilities. I'm sure an aspiring-rationalist version of that would be even better.

This is the super-secret rationality technique of churches. It's the reason religious people are happier than nonreligious people in the US. It's the domain where religious people are correct when they say that nonreligious people are missing out on something good. Now we just have to implement it. It's not something that we can do individually.

Comment author: dbaupp 16 January 2012 11:30:36AM 4 points [-]

A mainstream church that survives centuries must have a lot of instrumental rationality

This isn't obviously true. Once a belief system is established it is easily continued via indoctrination, especially when the indoctrination includes the idea that indoctrinating others is a Good thing.

Comment author: NancyLebovitz 17 January 2012 06:16:53AM 0 points [-]

Acedia, an overview of Catholic (and other, if I remember correctly) writing about sloth, plus a personal memoir. As I recall, quite an interesting book, but not personally useful-- and this is backed up by the top three amazon reviews.

The fact that such a seriously researched book doesn't turn up much that's easily useful (a more careful or motivated reader might have found something) suggests that there may not be much practical advice in the tradition.

This is reminding me of Theodore Sturgeon's complaint that Christianity told people to be more loving, but didn't say anything about how. (From memory, I don't have a cite.)

Comment author: mstevens 16 January 2012 10:39:53AM *  5 points [-]

A current thought experiment I'm pondering:

Scientists discover evidence that <a social group> popularly discriminated against really does have all the claimed negative traits. The evidence is so convincing that everyone who hears it instantly agrees this is the case.

If you want to picture a group, I suggest the discovery that Less Wrong readers are evil megalomaniacs who want to turn you into paperclips.

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

I've heard that Peter Singer says useful and interesting things about this, but it hasn't yet reached the top of my book queue.

Comment author: TheOtherDave 16 January 2012 04:08:44PM 18 points [-]

I'm puzzled that you describe this as a hypothetical.

For example, the culture I live in is pretty confident that five-year-olds are so much less capable than adults of acting in their own best interests that the expected value to the five-year-olds of having their adult guardians make important decisions on their behalf (and impose those decisions against their will) is extremely positive.

Consequently we are willing to justify subjecting five-year-olds to profound inequalities.

This affects my ideas of equality quite a bit, and always has. It is indeed OK to discriminate "against" them, and to treat them differently legally, and to not invite them to dinner, and always has been.

Comment author: erratio 16 January 2012 07:19:57PM 5 points [-]

The practice in the US of alerting people in the neighbourhood to the presence of convicted child molesters (or was it rapists? I don't remember) seems to indicate that at least some people think that it's a great idea. I think that as we get better at testing people for sociopathy, we're likely to move towards certain types of legal discrimination against them too.

None of this affects my personal ideas of equality though. I would prefer not to be friends with an evil megalomaniac in the same way that I would prefer not to be friends with a drug addict, but if I met an interesting person and then discovered that they were an evil megalomaniacal drug addict I wouldn't necessarily cut them out of my life, either.

Comment author: Konkvistador 16 January 2012 02:50:27PM *  16 points [-]

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

We are actually, as a society, ok with discriminating against the vast majority of possible social groups. If this were not the case, life as we know it would simply become impossible, because we would have to treat everyone equally. That would be a completely crazy civilization to live in. Especially if it considered the personal to be political.

You couldn't like Alice because she is smart, since that would be cognitivist. You couldn't hang out with Alice because she has a positive outlook on life, because that would discriminate against the mentally ill (those who are currently experiencing depression, for starters). You couldn't invite Alice out for lunch because you think she's cute, because that would be lookist. Etc., etc.

Without the ability to discriminate, free of a bad conscience, between the people who have traits we find desirable or useful and those who don't, most people would be pretty miserable and perpetually repressed. Indeed, considering humans are social creatures, I'd say the repression and psychological damage would dwarf anything ever caused by even the most puritanical sexual norms.

Comment author: Multiheaded 21 January 2012 02:41:38PM 1 point [-]

See faul_sname's comment below; "discrimination" should really be tabooed and replaced with "prejudice based on weak prior evidence without any personal contact" in this discussion.

Comment author: faul_sname 16 January 2012 10:49:02AM 8 points [-]

"Discrimination" usually just means "applying statistical knowledge about the group to individuals in the group" and is a no-no in our society. If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

Comment author: Konkvistador 16 January 2012 04:53:13PM *  6 points [-]

If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

The problem is that one of the only ways to prove someone is indeed using statistical knowledge, on the handful of cases that we have forbidden it, is to analyse their patterns of behaviour, basically look at the recorded statistics of their interactions. Both the records and the results of such an analysis which can be easily faked and misinterpreted.

Which means that if the forbidden statistical knowledge is indeed useful and reliable enough to be economical to use, and someone else is very, very serious about preventing it from being used, the knowledge will both be employed in a clandestine way and most of the economic gains from it will be eaten up by the cost of avoiding detection. This leads to a net loss of wealth.

Say a for-profit company spends 90% of the gains from forbidden knowledge on avoiding detection, and the government spends half or a third of that amount to monitor the company. The company would be indirectly paying for government monitoring regardless of whether it used the knowledge or not. It is therefore irrational for the company not to use the particular forbidden set of statistical knowledge in such a situation.

Comment author: Konkvistador 16 January 2012 05:06:16PM *  4 points [-]

BTW, to get the full suckiness hidden in the bland phrase "net loss of wealth", most people need some aid to fix their intuitions. Converting "wealth" to happy productive years or dead-child currency sometimes works.

Comment author: TheOtherDave 16 January 2012 05:50:36PM 1 point [-]

(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge."

Once we convert everything to Expected Number of Happy Productive Years (for example), it's easier to ask whether we'd prefer system A, in which Sum(ENoHPY) = N1 and Standard Deviation(ENoHPY) = N2, or system B, where Sum(ENoHPY) = (N1 - X) and Standard Deviation(ENoHPY) = (N2 - Y).

Comment author: Konkvistador 16 January 2012 06:56:05PM *  3 points [-]

(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge."

That is kind of the point of being a utilitarian. And remembering to consider opportunity cost let alone estimate it often is the hard part when it comes to policy.

Comment author: vi21maobk9vp 17 January 2012 06:52:32AM 3 points [-]

There are two problems: statistical knowledge being easily faked or misinterpreted and life being a multiple-repetition game.

It is hard to apply knowledge of the form "many X are Y, and that is bad" when X is easier to check than Y, without diminishing the return on investment for those X who work hard not to be Y. The same goes for the positive case: if you think that MBA programs teach something useful, and so believe "many MBAs have learnt useful things in an MBA program", then getting into the program and not learning starts making sense. And we see that effect!

http://www.freakonomics.com/2011/10/12/why-do-only-top-mba-programs-practice-grade-non-disclosure/

Comment author: mstevens 16 January 2012 01:10:33PM 2 points [-]

But don't people talking about discrimination often claim that the statistical trends aren't there?

Comment author: fubarobfusco 19 January 2012 01:54:59AM *  1 point [-]

Yes. For instance, the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal drugs arrests, convictions, and prison sentences. The arrest rates indicate that the law-enforcement system "believes" that black Americans use illegal drugs more — a statistical trend which isn't there.

Another way of thinking about these issues, rather than talking about "discrimination against <minority group>", is "privilege held by <majority group>". This can describe the same thing but in terms which can cast a different (and sometimes useful) light on it.

For instance, one could say "<minority> people are harassed by police when they hang out in public parks." However, this could be taken as raising the question of what those <minority> people are doing in those parks to attract police attention — which would be privileging the hypothesis (no pun intended). Another way of describing the same situation, without privileging the hypothesis, is "<majority> people get to hang out in public parks without the police taking interest."

Comment author: Alicorn 19 January 2012 03:15:57AM 4 points [-]

the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal drugs arrests, convictions, and prison sentences.

Where does the data about the actual proportion come from, since it can't be the legal system's data?

Comment author: fubarobfusco 19 January 2012 03:51:34AM 4 points [-]

Having re-checked the above from, e.g. the National Survey on Drug Use and Health, done by the Department of Health & Human Services, I retract the claim that black Americans use drugs less than white Americans.

Rather, it appears to be the case that white Americans are well overrepresented in lifetime illegal drugs use, but black Americans are slightly overrepresented in current illegal drugs use; which is what would feed into arrests — after all, you don't get arrested for snorting coke two decades ago. The white:black ratio in the population as a whole is 5.7, according to the Census. In lifetime illegal drugs use, 6.6; in last-month illegal drugs users, 5.1.

However, from the Census data on arrests, the white:black ratio in illegal drugs arrests is 1.9. Now, this doesn't break down by severity of alleged offenses, e.g. possession vs. dealing; or quantities; or aggravating factors such as school zones.

Comment author: Multiheaded 21 January 2012 02:48:36PM *  0 points [-]

Rather, it appears to be the case that white Americans are well overrepresented in lifetime illegal drugs use, but black Americans are slightly overrepresented in current illegal drugs use; which is what would feed into arrests — after all, you don't get arrested for snorting coke two decades ago.

Sorry, I don't understand that. Does it simply mean that white people in general as seen here used to do more drugs some years/decades ago, but now their proportion dropped below that of blacks?

Comment author: fubarobfusco 21 January 2012 03:27:49PM 0 points [-]

Maybe but not necessarily. It would be consistent with, for instance, there being proportionally more white people who tried illegal drugs once and didn't continue using.

Illegal drugs are an interesting place to try some Bayescraft.

Comment author: billswift 22 January 2012 02:35:02AM *  0 points [-]

The arrest rates indicate that the law-enforcement system "believes" that black Americans use illegal drugs more — a statistical trend which isn't there.

In fact, your interpretation is wrong. It is not that the law-enforcement system "believes" that blacks use more. It is that blacks are more often dealers, and it is easier to get a conviction or plea bargain from a user than from a dealer, since the latter requires intent as well as possession and will be fought harder because of the higher penalties.

Comment author: TimS 24 January 2012 02:19:22AM 0 points [-]

I suspect that blacks are not over-represented as drug dealers. Rather, blacks live in urban areas, which can be policed at lower cost than rural areas for population density reasons.

Comment author: mstevens 16 January 2012 05:58:54PM 1 point [-]

As vague context, the whole area of equality and discrimination is something that nags at me as not making enough sense. I hope, with enough pondering, to come up with a clear view on things, but it's failing so far.

Comment author: faul_sname 16 January 2012 08:17:59PM *  2 points [-]

Something has been bothering me about Newcomb's problem, and I recently figured out what it is.

It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you pick the million dollar box.

In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.

Comment author: amcknight 17 January 2012 01:07:31AM 4 points [-]

You probably know this, but just in case:
In Newcomb's problem Omega predicts prior to you choosing. Omega is just really good at this. The chooser doesn't repeatedly observe backwards causality, even if they might be justified in thinking they did.

Comment author: faul_sname 17 January 2012 01:46:29AM 1 point [-]

How is that observably different from backwards causality existing? Perhaps we need to taboo the word "cause".

Comment author: TimS 17 January 2012 01:52:20AM *  0 points [-]

It seems very intuitive to me that being very good at predicting someone's decision (probably by something like simulating the decision-process) is conceptually different from time travel. Plus, I don't think Newcomb's problem is an interesting decision-theory question if Omega is simply traveling (or sending information) backward in time.

Comment author: faul_sname 17 January 2012 02:06:06AM 1 point [-]

This is intuitive to me as well, but I suspect that it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe and sending information back in the 'same' universe if the simulation is identical to the 'real' universe?

Comment author: Alejandro1 18 January 2012 05:35:43AM 3 points [-]

Newcomb's problem doesn't lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.

Comment author: TimS 17 January 2012 02:56:15AM 2 points [-]

Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible?

But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of "pre-commitment." Pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation, but not admitting you are changing the topic, is perceived as rude.

Comment author: khafra 17 January 2012 03:37:00PM 3 points [-]

Short answer: Yup. Because Omega is a perfect or near-perfect predictor, your decision is logically antecedent, but not chronologically antecedent, to Omega's decision. People like Michael Vassar, Vladimir Nesov, and Will Newsome think and talk about this sort of thing more often than the average lesswronger.

Comment author: shminux 16 January 2012 11:42:50PM 1 point [-]

In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.

In the standard formulation (a perfect predictor) one-boxers always end up winning and two-boxers always end up losing, so there is no issue with causality, except in the mind of a confused philosopher.

Comment author: ahartell 25 January 2012 03:54:08AM 1 point [-]

So I was reading a book in the Ender's Game series, and at one point it talks about the idea of sacrificing a human colony for the sake of another species. It got me thinking about the following question. Is it rational to protect 20 "piggies" (which are morally equivalent to humans) and sacrifice 100 humans if the 20 piggies constitute 100% of their species' population and the humans represent a very very small fraction of the human race. At first, it seemed obvious that it's right to save the "piggies," but now I'm not so sure. Having tried to think of why saving them is right (for a few minutes), all I came up with was that diversifying investments in intelligent life makes intelligent life safer from extinction. But is diversity of life inherently valuable? What makes a future with "piggies" and humans better than one with just one or the other?

While writing this, I noticed one other reason: the valuable information that the "piggies" have. If this is eliminated, is it still worth saving them? And how many human lives can the "good of diversity" and the "loss of information" overcome? These are basically rhetorical questions (i.e. I'm not looking for answers like "53,243 humans per 'piggy'"), so I'm really just looking for your thoughts on this issue.

Comment author: shminux 25 January 2012 05:24:00AM 3 points [-]

Is it rational to protect 20 "piggies"...and sacrifice 100 humans

Depends on your goal... If it is the survival of the human colony, then no. If it is the survival of the human race and the piggies hold a key to it, then yes (they do not, in this story). If it is the survival of the pequenino race, then yes. It does not make sense to ask which of the goals is rational, unless you can measure them against something else.

Comment author: ahartell 25 January 2012 05:39:25AM 1 point [-]

Right. Let's say that you just value "intelligent life," though, rather than the humans or pequeninos in particular. Say you're the hive queen. A piggy is equal to a human and the human race is equal to a human race.

(I worry that I'm still missing the point and the question is moot without first resolving whether you value "diversity" in its own right or not, and that such valuing is a preference independent of rational decision making. Still, I feel as if some preferences can be irrational.)

Comment author: Craig_Heldreth 17 January 2012 03:29:52PM 1 point [-]
Comment author: multifoliaterose 18 January 2012 01:35:13PM 4 points [-]

Why do you bring this up?

For what it's worth my impression is that while there exist people who have genuinely benefited from the book; a very large majority of the interest expressed in the book is almost purely signaling.

Comment author: Craig_Heldreth 18 January 2012 02:33:49PM 1 point [-]

It would be easier to discuss the merits (or lack) of the book if you specify something about the book you believe lacks merit. The opinion that the book is overly hyped is a common criticism, but is too vague to be refuted.

It was a bestseller. Of course many of those people who bought it are silly.

Comment author: multifoliaterose 18 January 2012 04:24:21PM 1 point [-]

I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.

Comment author: David_Gerard 17 January 2012 01:06:22PM 1 point [-]

Stephen Law on his new book, Believing Bullshit:

Intellectual black holes are belief systems that draw people in and hold them captive so they become willing slaves of claptrap. Belief in homeopathy, psychic powers, alien abductions - these are examples of intellectual black holes. As you approach them, you need to be on your guard because if you get sucked in, it can be extremely difficult to think your way clear again.

Comment author: billswift 16 January 2012 02:43:08PM 0 points [-]

Utility functions do a terrible job of modelling our conscious wants and desires. Our conscious minds are too non-continuous to be modeled effectively. But our total minds are far more continuous; radical changes are rare, which is why "character" and "personality" are recognizable over time, often despite our conscious desires, even quite strong conscious desires.

Comment author: ahartell 16 January 2012 11:05:25PM 1 point [-]

Does anyone know how one would go about suggesting a new feature for predictionbook.com? I think it would be better if you could tag predictions so that then you could see separate ratings for predictions in different domains. Like, "Oh look, my predictions of 100% certainty about HPMOR are correct 90% of the time but my predictions of 100% certainty about politics are right 70% of the time." Also, you could look at recent predictions for only a specific topic, or see how well calibrated another user is in a specific area.
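The per-tag calibration report such a feature could produce is a small computation once the tags exist. A sketch (my own, with entirely made-up data and field names):

```python
from collections import defaultdict

# Hypothetical data: (tag, stated confidence, whether the prediction came true).
predictions = [
    ("hpmor",    1.0, True), ("hpmor",    1.0, True),  ("hpmor",    1.0, False),
    ("politics", 1.0, True), ("politics", 1.0, False), ("politics", 1.0, False),
]

# Group outcomes by tag; a real report would also bucket by stated confidence.
by_tag = defaultdict(list)
for tag, conf, outcome in predictions:
    by_tag[tag].append(outcome)

for tag, outcomes in by_tag.items():
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{tag}: stated 100%, actual {hit_rate:.0%}")
```

This is just the "how well calibrated am I per domain" half; filtering recent predictions by tag would be the same grouping applied to a date field.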

Comment author: gwern 28 January 2012 06:38:47PM *  0 points [-]

Does anyone know how one would go about suggesting a new feature for predictionbook.com?

http://github.com/tricycle/predictionbook/issues

As Anubhav pointed out, PB is not important to Trike since it's orders of magnitude less popular than LW (as useful as I may find it). If you really want tagging for per-domain calibration, you either need to get your hands dirty or put up a bounty.

Comment author: Anubhav 21 January 2012 11:58:06AM 0 points [-]

PB has a severe manpower shortage. New features not coming any time soon, AFAICT.

Comment author: TimS 29 January 2012 01:45:28AM 0 points [-]

Depressing article opposing life extension research is depressing. Brief summary: In the least convenient possible world, human research trials would be unethically exploitative. And this is presented as an argument against attempting to end aging. <sigh>

Comment author: Normal_Anomaly 27 January 2012 03:58:08PM 0 points [-]

I've found a video that would be really cool if it were true, but I don't know how to judge its truth and it sounds ridiculous. This talk by Rob Bryanton deals with higher spatial dimensions, and suggests that different Everett branches are separated in the 5th dimension, universes with different physical laws are separated in the 6th dimension, etc. I can't find much info about the creator online, but one site accuses him of being a crank. Can somebody who knows something about physics tell me if there is any grain of truth to this possibility?

Comment author: gwern 28 January 2012 06:39:54PM 0 points [-]

That reminds me of Tegmark's multi-level classification of multiverses, but that classification doesn't make sense as a spatial set of dimensions, IIRC.

Comment author: ahartell 26 January 2012 10:01:40PM 0 points [-]

In what ways do Frequentists and Bayesians disagree?

Comment author: Oscar_Cunningham 28 January 2012 10:57:34PM 0 points [-]

For a Bayesian a random quantity is just an unknown one. For example a coin not yet flipped is random (because I don't know which way it will land), and so is the population of Colorado (because I don't know what it is). Frequentists treat randomness as an inherent property of things, so that the coin flip would still be random (because it's not predetermined) but the population of Colorado isn't (because it's already fixed).

So given the problem of estimating the population of Colorado, a Bayesian would just hand you back a probability distribution (i.e. tell you how probable each population was). This option wouldn't be available to the Frequentist, who would refuse to put a probability distribution on a variable that wasn't random. Instead the Frequentist would give you an estimate and then tell you that the algorithm that generated the estimate had desirable properties, like being "unbiased".
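The contrast can be made concrete with a toy coin-flip example (my own sketch, not from the comment; the numbers are arbitrary). The frequentist hands back a single unbiased point estimate; the Bayesian hands back a whole posterior distribution:

```python
# Estimate a coin's heads-probability p from k heads in n flips.
n, k = 10, 7

# Frequentist: a point estimate (the sample mean), justified by properties
# of the estimating procedure such as unbiasedness.
freq_estimate = k / n

# Bayesian: a posterior distribution over p. With a uniform prior the
# posterior is Beta(k+1, n-k+1); evaluate it on a grid of candidate values.
grid = [i / 100 for i in range(101)]
unnorm = [p ** k * (1 - p) ** (n - k) for p in grid]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

posterior_mean = sum(p * w for p, w in zip(grid, posterior))
print(freq_estimate, round(posterior_mean, 3))
```

Note the outputs even disagree slightly: the posterior mean is (k+1)/(n+2) rather than k/n, because the uniform prior pulls the estimate toward 1/2.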

Comment author: naturelover7 25 January 2012 05:25:45PM 0 points [-]

I am interested in guidance on coping with loved one's irrationality.

Comment author: lessdazed 18 January 2012 10:05:18PM *  0 points [-]

"My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,"

"I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,"

"Wot evah! I [believe] what I want!"

Comment author: tgb 17 January 2012 05:13:12PM 0 points [-]

Question regarding the quantum physics sequence:

This article tells me that the amplitudes for a photon leaving a half-mirror in the two directions are 1 and i (for going straight and for turning, respectively), given an amplitude of 1 for the photon reaching the half-mirror. This must be a simplification; otherwise, two half-mirrors in a line would give an amplitude of i for the photon turning at the first mirror, an amplitude of i for it turning at the second mirror, and an amplitude of 1 for it passing through both. The squared-modulus ratio would then be 1:1:1, all three events would be equally likely, and hence the existence of the second (possibly very distant) half-mirror would reduce the intensity of light leaving the first half-mirror from 1/2 to 1/3. I would be shocked to find that such a result is reality, since it would, among other things, allow transmission of information faster than the speed of light.

Okay, so the obvious fix is to say that Eliezer simplified things and the real rule puts a factor of 1/sqrt(2) on each amplitude. Then the squared-modulus ratio in the above example is 1/2:1/4:1/4, as expected.

But then I run into my second problem: suppose a photon is headed at a half-mirror. Turning at the half-mirror leads to a detector. Going straight leads to a set of four mirrors which brings the photon back to the starting point, introducing a loop into the system. What is the amplitude of the light reaching the detector? Intuitively, I would expect this to be 1, or possibly less than 1. Assuming that my factor of 1/sqrt(2) above is correct, we get an infinite sum 1/sqrt(2) + 1/sqrt(4) + ..., which converges to 1 + sqrt(2). This seems very wrong - we would need a factor of 1/2 per term to converge to 1, but then the previous situation gives a squared-modulus ratio of 1/4:1/16:1/16, or 4:1:1, which is again unexpected.

So is there a factor on each half-mirror term, and if so, what is it? Since no single factor agrees with both of these setups, what have I done wrong?

Comment author: Oscar_Cunningham 18 January 2012 11:05:39AM 2 points [-]

What dbaupp said. But in particular you square first and then add because arriving at a different time makes the possibilities distinguishable, and so there is no interference (you don't add the complex amplitudes).

Comment author: tgb 18 January 2012 01:32:30PM 0 points [-]

Ah good. This is a good explanation and I had been wondering how the different timing would affect it. Thanks to you and dbaupp.

Comment author: dbaupp 18 January 2012 03:37:17AM *  2 points [-]

Assuming that my above factor of 1/sqrt(2) is correct, then we get an infinite sum 1/sqrt(2) + 1/sqrt(4) + ...

To get the ratio, one needs to add the squared moduli, so 1/2+1/4+..., and that gives 1.
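A quick numerical sanity check (my own sketch, not part of the comment): with a 1/sqrt(2) amplitude factor per pass through the half-mirror, the squared moduli of the successive detector paths sum to 1, while naively summing the raw amplitude magnitudes reproduces the 1 + sqrt(2) figure from the question:

```python
import math

# Path n passes through the half-mirror n times before reaching the detector,
# picking up n factors of 1/sqrt(2) (phase factors of i are dropped, since
# they don't change the modulus - and the paths don't interfere anyway,
# because they arrive at distinguishable times).
amplitudes = [(1 / math.sqrt(2)) ** n for n in range(1, 60)]

total_probability = sum(a ** 2 for a in amplitudes)  # 1/2 + 1/4 + ... -> 1
naive_amplitude_sum = sum(amplitudes)                # -> 1 + sqrt(2)

print(total_probability, naive_amplitude_sum)
```

Adding squared moduli (correct here, since the arrival times are distinguishable) gives a total probability of 1; adding the amplitudes first, as in the question, is what produces the impossible-looking 1 + sqrt(2).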

Comment author: TimS 18 January 2012 07:00:23PM -1 points [-]

What is the rational case for having children?

One can tell a story about how evolution made us not simply to enjoy the act that causes children but to want to have children. But that's not a reason; that's a description of the desire.

One could tell a story about having children as a source of future support or cost-controlled labor (e.g. farmhands). But I think the evidence is pretty strong that children are not wealth-maximizing in the modern era.

And if there is no case for having children, shouldn't that bother us on "Our morality should add up to normal, ceteris paribus" grounds?

Comment author: jimrandomh 18 January 2012 07:50:04PM *  5 points [-]

Rationality helps you map out the relations between actions and goals, and between goals and subgoals; and it can help us better understand the structure of the goals we already have. We can say that doing something is good because it helps achieve goals, or bad because it hinders them; and we can say that certain things are also goals (subgoals), if achieving them helps with our original goals. However, this has to bottom out somewhere; and we call the places where it bottoms out - goals that're valued in and of themselves, not just because they help with some other goal - terminal values.

Rationality has nothing whatsoever to say about what terminal values you should have. (In fact, those terminal values are implicit when you use the word "should".) For people who want children, that is usually a terminal value. You cannot argue that it's good because it achieves something else, because that is not why people think it's good.

Comment author: TimS 18 January 2012 08:15:57PM 2 points [-]

You are right. And that's at least the second time I've made that mistake, so hopefully I'll learn from it.

Let me ask the sociological question I should have asked: It appears that many of the folks invested enough in "rationality" to be active participants in LW not only don't have children, but think that having children is not a good goal. That constellation of beliefs suggests that there is some selection pressure that links those two beliefs. Should the existence of that selection pressure worry us on "Add up to normal" grounds?

Comment author: torekp 22 January 2012 09:17:08PM 1 point [-]

However, this has to bottom out somewhere; and we call the places where it bottoms out - goals that're valued in and of themselves, not just because they help with some other goal - terminal values.

This seems to be a near-consensus here at LessWrong. But I'm not convinced that "it bottoms out in goals that're valued in and of themselves" follows from "this has to bottom out somewhere". I grant the premise but doubt the conclusion. I doubt that where-it-bottoms-out needs to be, specifically, goals -- it could be some combination of beliefs, habits, experiences, and/or emotions, instead.

But you say, we call the places where it bottoms out goals ... (emphasis added). Of course, you can do that, and it's even true that people will pretty well understand what you mean. You can call these things goals, and do so without doing terrible violence to the language, but I'm not convinced that this is the most felicitous way of speaking about motivation and ethical learning. Whether these bottom-level items are best described as goals, or habits, or beliefs, or something quite different, depends on psychological facts which may not yet be in (sufficient) evidence.

Comment author: Alicorn 20 January 2012 06:52:01AM 0 points [-]

I wish it to be known that the next person to sign on as a beta for my fiction is entitled to the designation "pi".