If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

(I plan to make these threads from now on. Downvote if you disapprove. If I miss one, feel free to do it yourself.)


An outside view of LessWrong:

I've had a passing interest in LW, but about 95% of all discussions seem to revolve around a few pet issues (AI, fine-tuning ephemeral utilitarian approaches, etc.) rather than any serious application to real life in policy positions or practical morality. So I was happy to see a few threads about animal rights and the like. I am still surprised, though, that there isn't a greater attempt to bring the LW approach to bear on problems that are relevant in a more quotidian fashion than the looming technological singularity.

As far as I can tell, the reason for this is that in practical matters, "politics is the mind killer" is the mind killer.

Is there an argument behind "quotidian" besides "I have a short mental time horizon and don't like to think weird thoughts"?

Why would LessWrong be able to come to a consensus on political subjects? Who would care about such a consensus if it came about?

5David_Gerard12y
There's already enough geek-libertarian atmosphere that those of us who aren't, really notice it. But yeah - as I said, I'm not actually sure it would be a good idea. But the shying away from practical application to that particular part of things people are actually interested in fixing in their daily lives is a noteworthy absence. Your implied claim that quotidian thoughts are unworthy of attention is ... look, if you want to convince people all of this is actually a good idea, then when someone asks "so, OK. What are the practical applications of reading a million words of philosophy and learning probability maths?", answering "How dare you be so short-termist" strikes me as unlikely to work. I mean, I could be wrong ...
3J_Taylor12y
If it is not too much trouble, could you explain further what you mean by that?
3David_Gerard12y
It seems to be treated as a thought stopper. "Do not go beyond this point." There are good reasons for it, but the behaviour looks just like shying away from a bad thought.
2steven046112y
The thoughts are there, they're just not expressed on this particular site.
0J_Taylor12y
I always assumed it was more a discussion-stopper, meant to keep people polite and quiet. However, your interpretation is probably better.
2David_Gerard12y
I assume that was the intention. I'm not actually convinced that it would improve the site for us to dive headfirst into politics ... but it's odd for the stuff discussed here not to be applied even somewhere else, or even in the discussion section, without a flurry of downvotes. There's a strong social norm that even the slightest hint of political discussion is inherently bad and must be avoided.
1J_Taylor12y
It should be noted that RationalWiki is not a website known to be, let us say, lacking in killed minds.
6David_Gerard12y
It is a very silly place.
0multifoliaterose12y
I agree

I sometimes run into a situation where I see a comment I'm ambivalent about, one that I would normally not vote on. However, this comment also has an extreme vote total, either very high or very low. I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have. What do you do in this situation?

I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have.

You get to modify the karma rating by one in either direction. Do so in whatever manner seems most desirable to you.

You have too much voting power if you create a sock puppet and vote twice.

7rocurley12y
This is my attempt to figure out what is most desirable to me. At the moment, I want to do whatever would be the best overall policy if everyone followed it, with "best" here being defined as "resulting in the best LessWrong possible" (with a very complicated definition of best that I don't think I can specify well). Given that that's what I want, how best to achieve it? The karma system is valuable because it makes highly upvoted posts more visible, so it's valuable to the extent that the highest upvoted comments are the best. It should be noted that only relative karma matters (for sorting within an article), and the karma of other posts will tend to be rising (most posts wind up with positive karma). There is some number between 0 and 1 (call it x) that represents the expected vote of someone who votes. Because karma is relative, if you've decided you care enough to vote, you should subtract x from your vote to determine whether it counts as evidence that the post is good or bad. Do you want to vote 1-x, -x, or -1-x? Note 1-x>0, and the other two (not voting and downvoting) are less than 0, downvoting by quite a bit. Which of these best corresponds to the sentiment "I liked this but think it's overrated"?
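A minimal sketch of the arithmetic above, in Python; the value of x and the action encoding are illustrative assumptions, not actual site mechanics:

```python
# Relative effect of each possible action, given x = the expected vote
# of a random voter (a number between 0 and 1, since most cast votes
# are upvotes). Because only relative karma matters for sorting, an
# action's evidential weight is (your vote) - x.

def relative_vote(action: int, x: float) -> float:
    """action: +1 for upvote, 0 for abstaining, -1 for downvote."""
    return action - x

x = 0.6  # hypothetical: the average cast vote is +0.6
for name, action in [("upvote", 1), ("abstain", 0), ("downvote", -1)]:
    print(f"{name:8s} -> relative effect {relative_vote(action, x):+.1f}")
# upvote   -> relative effect +0.4
# abstain  -> relative effect -0.6
# downvote -> relative effect -1.6
```

On this reading, even abstaining is mild negative evidence relative to the rising average, and a downvote is a far stronger negative signal than an upvote is a positive one.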
0shminux12y
I roughly follow these (prioritized) rules:

1. Up-vote if I want to see more posts like this; down-vote if I don't want to see more posts like this, regardless of the current total.
2. A comment that I do not feel very strongly about I may up- or down-vote based on what total karma I expect a comment of this kind to deserve.
3. Very occasionally, I might like or dislike the author for unrelated reasons, and decide to up-/down-vote based on that.
2MixedNuts12y
You should vote without knowledge of total karma, otherwise it biases comments' karma scores towards 0 (except at extremes, where it creates bandwagon effects). Power doesn't enter into it, though.
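A toy simulation of the claimed effect, under voter models that are entirely my own construction for illustration: "blind" voters vote their own opinion independently, while "anchored" voters see the running total and nudge it toward what they personally think the comment deserves.

```python
import random

def blind_score(quality: float, n_voters: int) -> int:
    # Each voter upvotes with probability equal to the comment's quality.
    return sum(1 if random.random() < quality else -1 for _ in range(n_voters))

def anchored_score(quality: float, n_voters: int) -> int:
    # Each voter sees the running total and nudges it toward a modest
    # personal target instead of voting independently.
    total = 0
    for _ in range(n_voters):
        target = 3 if quality > 0.5 else -3  # "this deserves about +/-3"
        if total < target:
            total += 1
        elif total > target:
            total -= 1
    return total

random.seed(0)
n, trials = 100, 1000
good = 0.8  # a comment that 80% of readers like
print("blind   :", sum(blind_score(good, n) for _ in range(trials)) / trials)
print("anchored:", sum(anchored_score(good, n) for _ in range(trials)) / trials)
# Blind voting scales with the number of voters (about +60 here); anchored
# voting pins the score near the shared target (+3) - compressed toward 0.
```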

You're assuming that biasing karma scores towards zero (relative to what they would be before) is bad. Sure, it could be, but I don't see any particular reason why.

0Solvent12y
[citation needed]
1Alex_Altair12y
I have previously thought that maybe karma should be hidden until after you vote. But then there's the problem where part of the point of karma is to tell you whether something is worth reading. If karma were hidden until after voting, users would still have their total karma to motivate them, and we could still hide sufficiently negative comments. Maybe we should hide comment karma before voting, but not article karma?
0Wrongnesslessness12y
Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and extremes irritate you? In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.
1rocurley12y
I meant that the intrinsic value of the comment does not justify its vote count.

Some thinking is easier in privacy.

In a fascinating study known as the Coding War Games, consultants Tom DeMarco and Timothy Lister compared the work of more than 600 computer programmers at 92 companies. They found that people from the same companies performed at roughly the same level — but that there was an enormous performance gap between organizations. What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed. Sixty-two percent of the best performers said their workspace was sufficiently private compared with only 19 percent of the worst performers. Seventy-six percent of the worst programmers but only 38 percent of the best said that they were often interrupted needlessly.

These are interesting results, but the research was from 1985--"Programmer Performance and the Effects of the Workplace," in Proceedings of the 8th International Conference on Software Engineering, August 1985. It seems unlikely that things have changed, but I don't know whether the results have been replicated.
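For a rough sense of effect size, the quoted percentages can be turned into odds ratios (a quick sketch using only the numbers in the excerpt above):

```python
def odds_ratio(p1: float, p2: float) -> float:
    """Odds of a trait in group 1 relative to group 2."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Sufficiently private workspace: 62% of best performers vs 19% of worst.
print(f"privacy odds ratio: {odds_ratio(0.62, 0.19):.1f}")       # ~7.0
# Needless interruptions: 76% of worst performers vs 38% of best.
print(f"interruption odds ratio: {odds_ratio(0.76, 0.38):.1f}")  # ~5.2
```

Both gaps are large as survey results go, though, as noted below, correlational rather than causal.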

2saturn12y
I don't know of any studies, but there are many anecdotal reports about this.
0gwern12y
Worth noting: this is correlational, not causal.
[-][anonymous]12y100

Straw fascist ... has a point?

1Multiheaded12y
Yes he does, and it's a Superhappy kind of point... if all the words in this video are taken at face value, "you'll never have to think again" near the end spells "wireheading". It all comes down to the grand debate between inconvenient uncertain "freedom" and more founded, more stable "happiness"; during our recent conversations, I've been leaning towards the former in some things and you've been cautioning people about how they might prefer to trade that for the latter - but in the end it's all just skirting our terminal values, so there's certainly no "correct" or "incorrect" conclusion to arrive at.

The biggest risk of "existential risk mitigation" is that it will be used by the "precautionary principle" zealots to shut down scientific research. There is some evidence that it has been attempted already, see the fear-mongering associated with the startup of the new collider at CERN.

A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.

3amcknight12y
Was there really deceptive fear-mongering? That's news to me. Fear was overblown, but I don't think anyone was using it for anything other than what they thought was safety. I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?
9TimS12y
I'm not highly read on the criticisms, but it wouldn't surprise me if someone vaguely influential invoked the CERN hysteria to argue for reducing the funding of basic research. But I don't have a cite for you. It's not clear to me that asteroid impacts, major plagues, or becoming caught in a Malthusian trap are not x-risks on the same order of magnitude as man-made x-risks. (Yes, a Malthusian trap is man-made, but it can't necessarily be prevented by stopping scientific research). And for man-made x-risks, what is the mechanism for "seeing the disaster coming" that isn't essentially doing more research?
1vi21maobk9vp12y
A major plague is not, strictly speaking, an existential risk, although it would inflict a lot of suffering. It will delay the Malthusian trap, though...
5vi21maobk9vp12y
Making science slow down means making the best and brightest not do their best in research, which drives them to optimizing algorithmic trading instead. Also, you would want to slow down the research of new things and increase the research of implications; but how do you draw the line? Is the fact that a nuclear reactor can go critical and level a nearby city useful cautionary knowledge about building a power plant, or a "stop giving them ideas" thing? ETA: I do not mean that any of the currently running reactors is that bad — I mean: how do you research nuclear fission in the years 1900-1925 so as to have a safe nuclear power plant before a nuclear bomb?
0fubarobfusco12y
If you claim that a modern nuclear reactor can level a nearby city, you are telling a falsehood.
0vi21maobk9vp12y
I was slightly unclear. Your statement is true. I do not say that a modern nuclear reactor can level a city. I don't even claim or disclaim that the worst currently running nuclear reactor can level a city under reasonably imaginable conditions (I tend to agree that the fallout would be a problem; a full-scale nuclear explosion is very unlikely, but I have not enough evidence and knowledge to be sure either way). I was describing the research situation for nuclear fission. Imagine that someone knows that a bigger pile of uranium emits more radiation and wants to build a power plant based on this in 10–20 years. Some research is done to be able to predict the behaviour of such a system — of course, there are no power plant designs from Earth-2010-our-timeline. How should one do the research to prevent Chernobyl-type disasters, minimize the risk of Fukushima-type disasters, and not find something that makes the military build a nuclear bomb before the first nuclear power plant is built? Note that one needs to do enrichment both for a power plant and for a bomb. It is true that simply piling up even warhead-grade enriched uranium will not lead to a weapon-scale explosion, but the results of building a reactor without careful research into implications are not likely to be good.
-1faul_sname12y
Will a halt in new science undercut our ability to deal with those disasters to a greater extent than it makes those disasters more likely? What if the halt were only in certain domains, like genetic engineering of deadly viruses?
7TimS12y
There's no reason to believe that we've reached the optimum point for ending scientific research in any particular field. If we'd stopped medical research in 1900, the 1918 flu pandemic would have been worse. And basic research doesn't come with a label telling us how it's going to be useful, yet the evidence is pretty strong that basic research is worth the money. Regarding your specific example, isn't it worth knowing that the mutations needed to make that virus (1) already exist in nature, and (2) aren't really that far from being naturally incorporated into a single virus? If it took 500 passes instead of 10, we'd be relieved to learn that, right? In short, it seems like this kind of research is likely to be of practical use in treating serious flu viruses in the relatively near future.
0faul_sname12y
The question is not "Is it useful?" but "Is it useful enough to justify the risk?" In that case, the answer might well be yes, but there will probably be cases in the future where the knowledge is not worth the risk.
3TimS12y
I agree that you have identified the right question. I disagree with you on when the balance shifts. In particular, I think you've picked a bad example of "dangerous" research, because I don't think the virus research you identified is a close question. (That said, not my downvotes)
1faul_sname12y
Upon further research, you're right. The research appears not to be as dangerous as it seemed at first glance.

As part of my work for Luke, I looked into price projections for whole genome sequencing (as opposed to SNP genotyping, which I expect to pass the $100 mark by 2014). The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical it will be <$100 by 2020.


Starting point: $4k in bulk right now, from Illumina http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle_print&ID=1561106 (I ran into a ref saying knomeBASE did <$5k sequencing - http://hmg.oxfordjournals.org/content/20/R2/R132.full#xref-ref-... (read more)

0gwern12y
--"Secrets of my DNA", Wired March 2011 (so 2014?)
2gwern11y
Inside China’s Genome Factory, Technology Review
0gwern11y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3663089/ http://biomickwatson.wordpress.com/2013/05/15/a-pedantic-look-at-the-cost-of-sequencing/ http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/ http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/#comment-2031 http://www.utsandiego.com/news/2013/Jun/19/1000-genome-mirage/2/?#article-copy
5gwern5y
Consumer WGSes hit ~$1000 with Veritas in 2016. In 2018, Dante Labs began offering WGS at ~$600, with a sale of $350. And we now have a rumor that Illumina will announce a $100 genome in a few months (presumably in early 2019): https://twitter.com/coregenomics/status/1058790189752049664 $100 might be a little questionable here (apparently Illumina has a history of making the most favorable possible assumptions about volume/amortization) but revisiting my original prediction from 7 years ago: I was too pessimistic about SNP genotyping (it was actually more like $50 in 2014, I was completely unaware of UK Biobank at the time or its scale or savings), definitely right about '<$1000 by 2020', and I think I will turn out to be somewhat wrong about WGS being <$100 by 2020: even if Illumina is fudging some numbers for early 2019 at $100, it'll have almost a whole year to drop the cost a little more, and honestly, even if it's actually $110 does it make a difference considering how many things you can use whole genomes for & general medical overhead? You can hardly get some prescription aspirin these days for $100... Overall self-assessment: I was more right than I had any right to be in that set of predictions given I was using some simple extrapolating and adding some pessimism/mean-reversion. Not bad, past-self!
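As an illustration of the "simple extrapolating" mentioned above, here is a sketch that fits an exponential decline to the price points quoted in this thread ($4k bulk in 2011, ~$1000 in 2016, ~$600 in 2018, a rumored $100 in 2019) and reads off the implied halving time. The fit is my own illustration, not gwern's actual method:

```python
import math

# (year, USD) price points for consumer WGS, as quoted in this thread
points = [(2011, 4000), (2016, 1000), (2018, 600), (2019, 100)]

# Least-squares fit of log2(price) = a + b * year
n = len(points)
xs = [year for year, _ in points]
ys = [math.log2(price) for _, price in points]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

print(f"halving time: {-1 / b:.1f} years")                # ~1.8 years
print(f"implied 2020 price: ${2 ** (a + b * 2020):.0f}")  # ~$146
```

On these four points the price halves roughly every 1.8 years, and the implied 2020 price lands just above $100 — consistent with the self-assessment above of being somewhat wrong about "<$100 by 2020".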

Just got a Veritas-related email:

Veritas Genetics will be offering their MyGenome product (30x whole genome sequencing) normally $999, for $199 to the first 1000 customers, starting tomorrow, Monday, 9 AM ET.

Even allowing for promotional discounts, I'm still impressed. EDIT: Dante Labs too!

2Wei Dai5y
Thanks! I've been conflicted about which SNP service to use, and now I don't have to decide. :) Do you know if there are any potential downsides to consenting to let Veritas use the data for research? Would you tick that box?
2gwern5y
Yes. In fact, I am already a PGP participant. I am not sure you necessarily want to use Veritas/Dante Labs (Veritas might be sold out already, based on their Twitter), as WGS reports are usually pretty raw and you won't get all of the interpretive services somebody like 23andMe would provide. I don't believe 23andMe or the other major services let you just upload sequencing data either, only download. Offhand, I'm not sure how easy it would be to even use Promethease (not that Promethease is very worthwhile, as most of their report is candidate-gene junk). Personally, I am holding off on getting a WGS done. I don't know what I would do with mine, and the price should keep getting lower.
2Wei Dai5y
Oh, I misunderstood the purpose of your comment and thought you were recommending people to take advantage of the sale. I knew it was going to sell out quickly so I made the order prior to posting my question. (I gave consent for research since it said that I could withdraw that consent at any time.) It looks like Veritas offers VCF file download so it's compatible with Promethease but the format it uses only gives 16,000 genotypes. Also apparently Veritas used to provide the full BAM raw data, but no longer does, which is disappointing, so I'll probably cancel my order and take advantage of the Dante $199 sale instead which does offer BAM. Looks like sequencing.com lets you upload a BAM file and offers a bunch of apps to do different analyses on it.
2gwern5y
No, I was mentioning the sales because they offer a measurement of what WGS costs end to end now - presumably Veritas/Dante or Nebula are offering at close to their marginal cost (as they aren't big or wealthy enough to afford to give it away and WGSes aren't exactly a repeat-customer business). As far as Dante goes, I have seen some complaints about very slow or inconsistent service; on IRC, one of us did a previous sale and their original spit didn't work, so they sent him another tube and forgot the postage. Not sure if he's gotten his WGS yet either.
2Wei Dai5y
I see. Given that I haven't done a genotype yet, would you suggest that I go through with Dante anyway, or wait until the price comes down further? (Presumably it would definitely be worth doing at $100?)
2gwern5y
Well, do you have anything in mind specifically to do with it? If you do, it may not be worthwhile to wait. But if you don't have something which needs to be done with a WGS right now, you probably aren't going to be struck with inspiration once you get your download either.
[-][anonymous]12y80

I'm reading Moldbug's Patchwork and considering it as a replacement for democracy. I expected it to be a dystopia, but it actually sounds like a neat place to live; it is, however, a scary Eutopia.

Has anyone else read this recently?

1TimS12y
I've read through the pieces, and I'm struggling to come up with something to say that a reactionary absolutist like Moldbug would find interesting. For example, in the first piece linked, Moldbug says (let's ignore that the last sentence is questionable as a matter of historical fact): I don't disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there's a reason why mature absolute monarchies (like Louis XIV's) invented the "divine right of kings." I assert that you can't throw that away (as Moldbug does) and assume that nothing changes about the setup. My next point would be that there is no reason to expect a government to make a profit. But Moldbug's commitment to accepting the verdict of history means that he wouldn't find this very persuasive. If one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there's every reason to expect that failing to continue winning will lead in short order to your replacement.
5[anonymous]12y
The idea is that it is possible to make the cake bigger by having efficient government. This is why he invokes Laffer curves as relevant concepts. I find myself sympathetic to this. If you, say, give some amount of stock to foundations that provide free healthcare to those who can't afford it, or preserve natural habitat, etc., matching current GDP spending, but come up with a government that is more efficient at providing funds for all these endeavours, you get more spent in an absolute sense on healthcare or environmentalism than otherwise. If you want to do efficient charity, you don't work in a soup kitchen; you work hard where you have a comparative advantage to earn as much money as possible and then donate it to an efficient charity. Moldbug may not approve, but I actually think his design, with the right ownership structure and some properly designed foundations, might be a much better "goodness generating machine" than a democratic US or EU might ever be. I also like the idea of being able to live in a society with laws that you can agree with; if you don't like them, you just leave and go somewhere where you do agree with them. The profit motive is transparent, and it is easier to track than "doing good", which, as the general goal of government, is far less transparent. As a shareholder or employee in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals. It also has the neat property of seemingly guaranteeing human survival in a Malthusian em future (check out Robin Hanson's writing on this). As long as humans own stocks it wouldn't matter if they were made obsolete by technology; they could still collect a vast amount of rent which would continue growing at a rapid rate for millennia or even millions of years. The real problem is how these humans don't get hacked into being consum
2TimS12y
Many government programs provide services to people who can't afford the value of the service provided. Police and public education provided to inner cities cannot be paid for from the wealth of the beneficiaries. Moldbug complains about the inefficiency of the post office, but that problem is entirely caused by non-efficiency-based commitments like delivering mail to middle-of-nowhere small towns. Without those constraints, USPS looks more like FedEx. That's not a Moldbuggian insight - everyone who's spent a reasonable amount of time thinking about the issue knows this trade-off. And I simply don't believe this is a likely outcome. There will be times when a realm does not want to use its full arsenal of unobtanium weapons (i.e. to deal with jaywalking and speeding). Anyway, isn't it easier (and more efficient) to use social engineering to suppress populist sedition? I mostly agree with your analysis, in that I think we've been lucky in some sense that the good guys won. But doesn't Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon? I think it likely that any system of government backed by unobtanium weapons would defeat any existing government system. It's not clear to me that a consent-of-the-governed system backed by the super weapons wouldn't beat Moldbug's absolutist system. And even if that isn't true, why should we want a return to absolutism? It's painfully obvious to me that my rejection of absolutism is the basis of most of my disagreement with Moldbug. I think government should provide "unprofitable" services, and he doesn't.
3[anonymous]12y
The good guys did win, because I'm not a National Socialist or a Communist or a Muslim or a Roman. But I don't think we were lucky. "The Gift We Give Tomorrow" should illustrate why I don't think you can say we were "lucky". By definition, anyone that won would have made sure we viewed them as the more or less good guys. That wasn't Moldbug's argument about the USSR, it was mine :) Yes, if I recall right, his model goes something like this: The State Department wanted to make the Soviet Union its client, much like say Britain or West Germany or Japan were; it viewed US society and Soviet society as on a converging path, with the Soviet Union's ruling class having its heart in the right place but sometimes going too far. Something they could never do with any truly right-wing regime. This is why they often basically sabotaged the Pentagon's efforts and attempts at client-making. The Cold War and the Third World in general would never have been as bloody if the State Department vs. Pentagon civil war by proxy hadn't been going on. Sure, but I don't want to live in a society that takes this logic to its general conclusion. I want to be able to dislike the government I'm living under even if I can't do anything about it. Many people might not either, and we may be willing to tolerate living in a different, less wealthy part of patch land, or paying higher taxes, for it. What is that? Can we unpack this concept? I'm trying to figure out what you mean by this. Can't we have a "Deliver mail to far off corners foundation" and give it 0.5% of the stock of Neo-Washington corp. when the thing takes off? Do you object in principle to government being for profit, or do you just think that nonprofits funded by shares of the government, at the same GDP fractions as they have right now, couldn't provide services of equal quality? What is the government's mission then? Which unprofitable services should it provide? All possible ones? Those that have the most eloquent r
-1TimS12y
So, Moldbug's Cold War explanation is total nonsense? I think the Cold War follows after WWII even if the USA was ruled by King Truman I and the USSR was ruled by King Stalin I. More formally, I think political realism is the empirically best description of international relations. ---------------------------------------- Anyway, you asked about patches and realms, and I said that governments do the unprofitable. If it were profitable, government wouldn't need to do it. Moldbug seems to say that we ought not to want government to do the unprofitable. That explains his move to a corporate form of government, but it doesn't justify the abandonment of the role that every government in history has decided it wanted to do.
4[anonymous]12y
You completely missed my point. Who gets to decide what is unprofitable? Who decides which unprofitable things are worth doing? The set of all possible unprofitable activities is vastly larger than the set of profitable ones. You do realize we were talking about the USSR just a few seconds ago, right? I guess Russia was a bad place to make cars, so the government had to step in and do that.
-2TimS12y
Communism (and socialism in general) have inefficient (i.e. not wealth-maximizing) preferences for wealth distribution. So no, it doesn't surprise me that that massive government planning was required to try to implement the communist preference. If equal wealth distribution were wealth-maximizing, then the government wouldn't have needed to intervene to make it happen. This isn't a groundbreaking point. It falls out straightforwardly from the economic definition of efficiency.
5[anonymous]12y
I repeat myself: Unless you are arguing Communist preferences of wealth redistribution, and the opportunity cost that entails, were automatically representative of those of "the Russian people" because, duh, they had the October revolution and a civil war in which Communists won. In which case I will ask why the same would not hold in North Korea, and would also ask: if all regimes deciding things are representative of "the people", why do we even need this democracy thing? Obviously Ancient Egyptian peasants wanted to be involved in the unprofitable business of building Pyramids for Pharaoh. If we are not sure the ancient Egyptian monarchies captured people's preferences for unprofitable activities that should be done according to the values of those indirectly funding them, if the same cannot be said of Rome, if the same cannot be said of Communism ... why do you think it can be said of, say, the US government? Why do you think this is more efficient than having government be a money-making machine that gives its citizens free money because they own stock and lets them spend it on whatever charity (which also by definition does unprofitable things) or indulgence (which often is also unprofitable - whenever I stop to smell the flowers or go watch a movie I don't do this to maximize my profit in currency, but to hopefully maximize my utility) they want? Or, if it interferes with the operation of the state, why not have the stockholders spend it in some other part of Patchland that specializes in being a great place to spend your money for good causes or fun? And if you don't think people's preferences even matter when deciding what unprofitable stuff to spend resources on ... well, whose preferences should then? I want unprofitable stuff that I like done too. Like helping people not have to die if they don't want to. All else being equal I don't however much care who does them. BTW I'm not too sure about Moldbug's government type either, I wouldn't volunteer to live the
0TimS12y
For Moldbug, the answer is . . . not you. Unless the CEO of the realm put your charity on the cleared list. But I suspect that most of the things I would want to do with my dividends would be prohibited as security risks. Political control without thought control has never happened, and I don't think that super weapons could make it happen.
5[anonymous]12y
I'm interested in your answer. That is a good argument. Overall I think Moldbug does a better job of giving decent explanatory power for the modern world than providing workable solutions (if there are any) for its ills. :)
-1Multiheaded12y
Please elaborate on how, completely disregarding political realism in favor of an overarching conspiracy theory (as already mentioned above) and just ignoring the whole iceberg of neuroscience, evolutionary psychology, etc, one can arrive at a decent explanation for it all. "The leftist social sciences professor down the street is a witch, she did it" is not up to my standards of "decent".
3[anonymous]12y
That is not Moldbug's model. How much have you read? He has decent models, to my mind, for many things, including the genesis of the leftward social movement of the past few decades or centuries, the genesis of modern morality, US foreign policy, the sociological aspects of the development of ideology, etc. I don't think I'm that much of an outlier in my estimation here; I've heard many people I know from LessWrong express interest in his thought (for example gwern, or Vladimir_M). He even had a live recorded debate with Robin Hanson back in 2010 on Futarchy (though he lost; everyone loses debates to RH ;) ). Top posters like Yvain and Eliezer also seem to have read some of Moldbug, since they refer to his writing occasionally, etc. People sometimes agree and other times disagree with him, but I think they generally don't view him as a "crank". I really don't have the time right now to discuss all of this, but there are a few older discussions in the comment sections of various LessWrong articles (just search for "Moldbug" on the site) that may interest you if you'd like to learn more about his stuff and why people find it interesting. My recent thread on one of his posts also had some discussion.
0Multiheaded12y
I have read all of that, at first glance expecting a fun and intriguing contrarian ride. It came across as considerably more insane (in the LW/OB sense) and less grounded in reality than the milder forms of good ol' fascism to me.
5[anonymous]12y
I generally don't see what's so insane about WASP Blue State Protestantism being the sociological, philosophical and cultural predecessor of WASP Blue State progressivism. Or saying that modern ethics aren't the product of pure reason and moral progress but a clear descendant of older Western morality. Or that US foreign policy is often crazy and mixed up because the US isn't a monolithic entity, and that more specifically the interests of the State Department and the Pentagon diverge. Or that in a modern parliamentary democracy power is wielded by opinion makers (academia and journalists) who create the intellectual fashions that the rich and well-positioned subscribe to, and, with a twenty-or-so-year lag, the general population (they adopt them not just to copy the elites but because legislation and education are updated to push new beliefs on them), which then votes for representatives that are supposed to keep the unelected elites in check and working for their interests. Culturally, any ethical ideas or value sets adopted by elite academia are assured long-term victory. I think that covers my examples. Meh, fascists are often too mystical for my tastes (try reading Julius Evola). Religious paleocons are a bit better, but their axioms are all messed up, believing in God and all that. The few irreligious ones are often lots of fun.
3[anonymous]12y
source This is why choosing the state as the actor that must bear unprofitable activities, regardless of on whose behalf, seems to my sentiments less an aesthetic choice, or one that should be based on historic preference, than an economic question that deserves some investigation. The losses of utility over such a trivial preference seem potentially large.
1Bugmaster12y
I suppose it depends on what you see as "charity". For example, free childhood vaccinations can be seen as charity -- after all, why shouldn't people just buy their own vaccines on the free market ? -- but having a vaccinated population with herd immunity is, nonetheless, a massive public good. The same can be said of public education, or, yes, canes for blind people.
-1TimS12y
Let's do some [Edit: more abstract] analysis for a moment. [Edit: I suggest that] government is the entity that has been allocated the exclusive right to legitimate violence. And the biggest use of this threat of violence is compulsory taxation. Why do people put up with this threat of violence? As Thomas Hobbes says, to get out of the state of nature and into civil society. (As Moldbug says, land governed by the rule of law is more valuable than ungoverned land.) What does the government do with the money it receives? At core, it provides services to people who don't want them. The quote mentioned letting prisoners choose their jailors. It probably would increase prisoner utility to offer the choice. It might even save money (for example, some prison systems mandate completing a GED if the prisoner lacks a high school degree). But that's not what society wants to do to criminals. If the government uses compulsory power to fund prisons, I assert a requirement that the spending vaguely correspond to taxpayer desires for the use of the funds. (Moldbug seems to disagree.) Consider another example, the DMV. At root, the government threatens violence if you drive on the road without the required government license, on the belief that the quality of driving improves when skill requirements are imposed, and the requirements will not (or cannot) be imposed without the threat of violence. It is common knowledge that going to the DMV to get the license is a miserable experience because the lines are long and the workers are not responsive to customer concerns. By contrast, the McDonald's next door is filled with helpful people who quickly provide you with the service desired as efficiently as possible. Why the difference? In part, it is the compulsory nature of the license; and in part, it is that the benefits of improved service at the DMV do not accrue to anyone working for or supervising the DMV. See James Wilson's insightful discussion (pages 113-115 & 134-136). (There's also
3Prismattic12y
Max Weber was a libertarian?
3TimS12y
Hmm. It's embarrassing to admit I'm not as well read as I'd like. I'd only ever heard the concept in libertarian discussions. Thanks.
0asr12y
Every time I read Moldbug's stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem. The reason we have government isn't that we sat down once upon a time in the state of nature to design a political system. We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it. Uncontrolled violence turns out to be destructive to both the subject of the violence and also the wielder -- it turns out that it's potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord's entourage. Politically, we don't do welfare spending and criminal justice purely for the fuzzies, or solely because they're ends in themselves. Every so often, we have organized and vigorous protests against the status quo. When this happens, those in power can either appease the protesters, use force to crush the protesters, or try to make them go away quietly without violence. If the protesters are determined enough, this last approach doesn't work. And the government can either use clubs, or buy off the protesters. It turns out that power structures that become habitually brutal don't do too well. People who get in the habit of using force aren't good neighbors, aren't good police, and aren't trustworthy subordinates. Bystanders don't want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power -- or else overthrown. Moldbug talking about cryptographically controlled weapons is missing the point: we don't want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.
5Jayson_Virissimo12y
I believe the main thrust of Moldbug's writings is that we should be (but aren't) solving an engineering problem rather than moralizing when we engage in politics (although, he seems to fall into this trap himself what with all his blaming of "leftists" for everything under the sun).
0taelor12y
So much of Moldbug's belief system, and even his constructed identity as an "enlightened reactionary", rides on his complete rejection of whiggish historical narratives; however, he takes this to such an extent that he ends up falling into the very trap that the Whig Interpretation's original critic, Herbert Butterfield, warned of in his seminal work on the subject:
-1asr12y
Except none of his prescriptions are sensible engineering. Crypto-controlled weapons as a foundation for social order are more science fiction than a sensible design for controlling violence in society. It's much too easy for people to build or buy weapons, or else circumvent the protections. Pinning your whole society on perfect security seems pretty crazy from a design point of view.
2Jayson_Virissimo12y
Right, I don't think he succeeds either. I was merely trying to summarize his project as I think he sees it.
5[anonymous]12y
Just because governments often employ violence just before they lose power does not mean that employing violence was the cause of their downfall. Many sick people take medication just before they die. Sure, violence may do them no good, like an aspirin does no good for a brain tumour, but it is hard to therefore argue that aspirin is the cause of death. The assertion is particularly dubious since, historically speaking, governments have used a whole lot of violence and this actually seems to have often saved them. Even in modern times we have plenty of examples of this. This Robin Hanson post seems somewhat relevant:
4[anonymous]12y
The state can be thought of as a stationary bandit who, instead of pillaging and burning a village of farmers, extorts them, and eventually starts making sure no one else pillages or burns them, since that interferes with the farmers paying him. The roving bandit has no incentive to assure the sustainability of a particular farming settlement he parasitizes; a stationary bandit in a sense farms the settlement. Government can expediently be defined, ultimately, beneath all the fluff, as a territorial monopolist of violence. There is a trade-off between government violence used to prevent anyone else from exercising violence, and violence by other organized groups. How do we know we are at the optimal balance in a utilitarian sense? Also, Moldbug doesn't want to do away with government; he wants to propose a different kind of government. And we have in the past had systems of government that were the result of people sitting down and then trying to design a political system. To take modern examples of this (though I could easily pull out several Greek city-states): perhaps the Soviet Union was a bad design, but the United States of America literally took over the world. In any case this demonstrates that new forms of government (not necessarily very good government) can be designed and implemented. Government violence is ideally more predictable than the violence it prevents (that's the whole reason we in the West think rule of law is a good idea). Sure, the government has other tools to prevent violence than just violence of its own, but ultimately all law is violence. In the sense of the WHO definition: You can easily make the violence painless by, say, sedating a would-be rapist with the stun setting on your laser gun, and you can also easily eliminate the suffering of imprisoning him by modifying his brain with advanced tools. But changing a person's mind without their consent, or by giving them a choice between 6 years' imprisonment and modifying their brain, has surely ju
3gwern12y
Abba Lerner, "The Economics and Politics of Consumer Sovereignty" (1972):
2[anonymous]12y
In raw utility, the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes where the leader being violent towards a person means the leader being violent to a non-trivial fraction of the population. Also, are you really that sure that people wouldn't want to live in a Neocameralist system? When you say efficiency, I don't think you realize how emotionally appealing clean streets, good schools, low corruption, and perfect safety from violent crime or theft are. What would be the price of real estate there? It is not a coincidence that he gives Singapore as an example, a society that uses more violence against its citizens than most Western democracies. Furthermore, consider this: That sounds pretty draconian. But we also know Singapore is a pretty efficiently run government by most metrics. Is Singapore an unpleasant place to live? If so, why do so many people want to live there? If you answer economic opportunities or standard of living or job opportunities, well, then maybe Moldbug does have a point in his very economic approach to it.
4asr12y
I had assumed we were talking about government for [biased, irrational] humans, not for perfect utilitarians or some other mythical animal. I was saying that routine application of too much violence will upset humans, not that it should upset them. I'm sure many people would live quite happily in Singapore. Clearly, it works for the Singaporeans. But I don't think that model can be replicated elsewhere automatically, nor do I think Moldbug has a completely clear notion why it works. Moldbug talks about splitting up the revenue generation (taxation) from the social-welfare spending. This seems like a recipe for absentee-landlord government. And historically that has worked terribly. The government of Singapore does have to live there, and that's a powerful restraint or feedback mechanism. In the US (and I believe the rest of the world), the population would like to pay lower taxes, and pointing to the social welfare benefits is the thing that convinces them to pay and tolerate higher rates. I think once the separation between spending and taxation becomes too diffuse, you'll get tax revolts. Remember, we are designing a government for humans here -- short-sighted, biased, irrational, and greedy. So the benefits of unpleasant things have to be made as obvious as possible.
1Prismattic12y
I'm open to being corrected on this, since I don't have a good source for Singaporean immigration statistics, but my prior is that people who choose to live in Singapore are coming there from other places that are much more corrupt while also still being rather draconian (China, Malaysia). I'm pretty sure well-educated Westerners could get a well-paying job in Singapore, and the reason few move there is not, in fact, about economics.

At LW, religion is often used as a textbook example of irrationality. To some extent, this is correct. Belief in the untestable supernatural is a textbook example of belief in belief and privileging the hypothesis.

However, religion is not only about belief in supernatural. A mainstream church that survives centuries must have a lot of instrumental rationality. It must provide solutions for everyday life. There are centuries of knowledge accumulated in these solutions. Mixed with a lot of irrationality, sure. Many religious people were pretty smart, for example... (read more)

8Nisan12y
Have you seen this sequence? It reveals how the LDS church gets things done: by providing a real community for its members, and making them feel like they belong by giving them responsibilities. I'm sure an aspiring-rationalist version of that would be even better. This is the super-secret rationality technique of churches. It's the reason religious people are happier than nonreligious people in the US. It's the domain where religious people are correct when they say that nonreligious people are missing out on something good. Now we just have to implement it. It's not something that we can do individually.
8TheOtherDave12y
I agree that religious organizations have developed many effective techniques for getting certain kinds of things done, and I endorse adopting those techniques where they achieve goals I endorse. I'm not sure I agree that this isn't already happening, though. Can you provide some examples of such techniques that aren't also in use outside of the religious organizations that developed them? Incidentally, the word "rationality" seems to contribute nothing to this topic beyond in-group signalling effects.
4dbaupp12y
This isn't obviously true. Once a belief system is established it is easily continued via indoctrination, especially when the indoctrination includes the idea that indoctrinating others is a Good thing.
2curiousepic12y
This TED talk is relevant: http://blog.ted.com/2012/01/17/atheism-2-0-alain-de-botton-on-ted-com/
0NancyLebovitz12y
Acedia, an overview of Catholic (and other, if I remember correctly) writing about sloth, plus a personal memoir. As I recall, quite an interesting book, but not personally useful -- and this is backed up by the top three Amazon reviews. The fact that such a seriously researched book doesn't turn up much that's easily useful (a more careful or motivated reader might have found something) suggests that there may not be much practical advice in the tradition. This is reminding me of Theodore Sturgeon's complaint that Christianity told people to be more loving, but didn't say anything about how. (From memory, I don't have a cite.)
[-][anonymous]12y70

When it comes to accepting evolution, gut feelings trump fact

“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts...

In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For th... (read more)

A current thought experiment I'm pondering:

Scientists discover evidence that a group that is popularly discriminated against really does have all the claimed negative traits. The evidence is so convincing that everyone who hears it instantly agrees this is the case.

If you want to picture a group, I suggest the discovery that Less Wrong readers are evil megalomaniacs who want to turn you into paperclips.

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

I've heard... (read more)

I'm puzzled that you describe this as a hypothetical.

For example, the culture I live in is pretty confident that five-year-olds are so much less capable than adults of acting in their own best interests that the expected value to the five-year-olds of having their adult guardians make important decisions on their behalf (and impose those decisions against their will) is extremely positive.

Consequently we are willing to justify subjecting five-year-olds to profound inequalities.

This affects my ideas of equality quite a bit, and always has. It is indeed OK to discriminate "against" them, and to treat them differently legally, and to not invite them to dinner, and always has been.

[-][anonymous]12y250

How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?

We are actually, as a society, OK with discriminating against the vast majority of possible social groups. If this were not the case, life as we know it would simply become impossible, because we would have to treat everyone equally. That would be a completely crazy civilization to live in, especially if it considered the personal to be political.

You couldn't like Alice because she is smart, since that would be cognitivist. You couldn't hang out with Alice because she has a positive outlook on life, because that would discriminate against the mentally ill (those who are currently experiencing depression, for starters). You couldn't invite Alice out for lunch because you think she's cute, because that would be lookist. Etc., etc.

Without the ability to discriminate, without a bad conscience, between people who have traits we find desirable or useful and those who don't, most people would be pretty miserable and perpetually repressed. Indeed, considering humans are social creatures, I'd say the repression and psychological damage would dwarf anything ever caused by even the most puritanical sexual norms.

1Multiheaded12y
See faul_sname's comment below; in this discussion, "discrimination" should really be tabooed and replaced with "prejudice based on weak prior evidence, without any personal contact".

"Discrimination" usually just means "applying statistical knowledge about the group to individuals in the group" and is a no-no in our society. If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.

8[anonymous]12y
The problem is that one of the only ways to prove someone is indeed using statistical knowledge, in the handful of cases where we have forbidden it, is to analyse their patterns of behaviour - basically, look at the recorded statistics of their interactions. Both the records and the results of such an analysis can be easily faked and misinterpreted. Which means that if the forbidden statistical knowledge is indeed useful and reliable enough to be economical to use, and someone else is very, very serious about preventing it from being used, the knowledge will both be employed in a clandestine way and most of the economic gains from it will be eaten up by the cost of avoiding detection. This leads to a net loss of wealth. Say a for-profit company spends 90% of the gains from forbidden knowledge on avoidance of detection, and the government spends half or a third of that amount to monitor the company. The company would be indirectly paying for government monitoring regardless of whether it used the knowledge or not. It is therefore irrational for the company not to use the particular forbidden set of statistical knowledge in such a situation.
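A back-of-the-envelope version of the numbers above (the 90% and half-to-a-third figures are the comment's own; everything is in units of G, the gains from using the forbidden knowledge):

```python
G = 1.0                       # gains from using the forbidden knowledge
avoidance = 0.9 * G           # company's spending on avoiding detection
monitoring = 0.45 * G         # government monitoring, ~half of avoidance

company_net = G - avoidance              # +0.10 G
social_net = G - avoidance - monitoring  # -0.35 G

print(f"company nets {company_net:+.2f} G -> still rational to use it")
print(f"society nets {social_net:+.2f} G -> net loss of wealth")
```

The company still comes out ahead, so it uses the knowledge; but most of the gains are burned on hiding and monitoring, which is the net loss of wealth the comment describes.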
4[anonymous]12y
BTW, to get the full suckiness hidden in the bland phrase "net loss of wealth", most people need some aid to fix their intuitions. Converting "wealth" to happy productive years, or to a dead-child currency, sometimes works.
2TheOtherDave12y
(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge." Once we convert everything to Expected Number of Happy Productive Years (for example), it's easier to ask whether we'd prefer system A, in which Sum(ENoHPY) = N1 and Standard Deviation(ENoHPY) = N2, or system B, where Sum(ENoHPY) = (N1 - X) and Standard Deviation(ENoHPY) = (N2 - Y).
4[anonymous]12y
That is kind of the point of being a utilitarian. And remembering to consider opportunity cost, let alone estimate it, is often the hard part when it comes to policy.
1mstevens12y
I read an interesting article on the legal side of this in the USA; annoyingly, despite being sure I'd saved it, I can't find anything.
4vi21maobk9vp12y
There are two problems: statistical knowledge being easily faked or misinterpreted, and life being a multiple-repetition game. It is hard to apply the knowledge "many X are Y, and it is bad" when X is easier to check than Y, in such a way as not to diminish the return on investment for Xs who work hard not to be Y. The same with the positive case: if you think that MBA programs teach something useful, and think "many MBAs have learnt the useful things from an MBA program", then getting into the program and not learning starts making sense. And we have that effect! http://www.freakonomics.com/2011/10/12/why-do-only-top-mba-programs-practice-grade-non-disclosure/
2mstevens12y
But don't people talking about discrimination often claim that the statistical trends aren't there?
3fubarobfusco12y
Yes. For instance, the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal drug arrests, convictions, and prison sentences. The arrest rates indicate that the law-enforcement system "believes" that black Americans use illegal drugs more — a statistical trend which isn't there. Another way of thinking about these issues, rather than talking about "discrimination against [one group]", is "privilege held by [another group]". This can describe the same thing, but in terms which can cast a different (and sometimes useful) light on it. For instance, one could say "[disfavored-group] people are harassed by police when they hang out in public parks." However, this could be taken as raising the question of what those people are doing in those parks to attract police attention — which would be privileging the hypothesis (no pun intended). Another way of describing the same situation, without privileging the hypothesis, is "[favored-group] people get to hang out in public parks without the police taking interest."
4Alicorn12y
Where does the data about the actual proportion come from, since it can't be the legal system's data?
6fubarobfusco12y
Having re-checked the above against, e.g., the National Survey on Drug Use and Health, done by the Department of Health & Human Services, I retract the claim that black Americans use drugs less than white Americans. Rather, it appears to be the case that white Americans are well overrepresented in lifetime illegal drug use, but black Americans are slightly overrepresented in current illegal drug use, which is what would feed into arrests — after all, you don't get arrested for snorting coke two decades ago. The white:black ratio in the population as a whole is 5.7, according to the Census. Among lifetime illegal drug users it is 6.6; among last-month users, 5.1. However, from the Census data on arrests, the white:black ratio in illegal-drug arrests is 1.9. Now, this doesn't break down by severity of alleged offenses, e.g. possession vs. dealing; or quantities; or aggravating factors such as school zones.
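Putting those ratios side by side makes the mismatch easier to see. A minimal sketch, using only the figures quoted above:

```python
# White:black ratios quoted above, from the Census and the National Survey
# on Drug Use and Health.
population   = 5.7   # whole population
lifetime_use = 6.6   # lifetime illegal drug use
last_month   = 5.1   # last-month illegal drug use
arrests      = 1.9   # illegal drug arrests

# Dividing each ratio by the population ratio normalizes for group size:
# above 1 means whites are overrepresented, below 1 means blacks are.
for label, ratio in [("lifetime use", lifetime_use),
                     ("last-month use", last_month),
                     ("arrests", arrests)]:
    print(f"{label}: {ratio / population:.2f}")
# lifetime use: 1.16 -- last-month use: 0.89 -- arrests: 0.33
```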
0Multiheaded12y
Sorry, I don't understand that. Does it simply mean that white people in general, as seen here, used to do more drugs some years/decades ago, but that their proportion has now dropped below that of blacks?
1fubarobfusco12y
Maybe but not necessarily. It would be consistent with, for instance, there being proportionally more white people who tried illegal drugs once and didn't continue using. Illegal drugs are an interesting place to try some Bayescraft.
0billswift12y
In fact your interpretation is wrong. It is not that the law-enforcement system "believes" that blacks use more. It is that blacks are more often dealers, and it is easier to get a conviction or plea bargain as a user than as a dealer, since the latter requires intent as well as possession and will be fought harder because of the higher penalties.
0TimS12y
I suspect that blacks are not over-represented as drug dealers. Rather, blacks live in urban areas, which can be policed at lower cost than rural areas for population density reasons.
-3Multiheaded12y
Hell, that seems to be an understatement to me. There's a particular reason that racial discrimination is by far the most taboo and reviled form of discrimination, beyond the memory of Nazism: real current political groups - very nasty ones - are always hoping for the chance to pounce on the issue once they're allowed to get close to it.
7erratio12y
The practice in the US of alerting people in the neighbourhood to the presence of convicted child molesters (or was it rapists? I don't remember) seems to indicate that at least some people think that it's a great idea. I think that as we get better at testing people for sociopathy we're likely to move towards certain types of legal discrimination towards them too. None of this affects my personal ideas of equality though. I would prefer not to be friends with an evil megalomaniac in the same way that I would prefer not to be friends with a drug addict, but if I met an interesting person and then discovered that they were an evil megalomaniacal drug addict I wouldn't necessarily cut them out of my life, either.
1mstevens12y
As vague context: the whole area of equality and discrimination is something that nags at me as not making enough sense. I hope with enough pondering to come up with a clear view of things, but so far I'm failing.

What are some efficient ways to signal intelligence? Earning an advanced degree from a selective university seems rather cost intensive.

I figured someone would have said this by now, and it seems obvious to me, but I'm going to keep in mind the general principle that what seems obvious to me may not be obvious to others.

You said efficient ways to signal intelligence. Any signaling worth its salt is going to have costs, and the magnitude of these costs may matter less than their direction. So one way to signal intelligence is to act awkwardly, make obscure references, etc.; in other words, look nerdy. You optimize for seeming smart at the cost of signaling poor social skills.

Some less costly ones that vary intensely by region, situation, personality of those around you, and lots and lots of things, with intended signal in parentheses:

  • Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)
  • Talk quickly.
  • Quote famous people all the time. (He quotes people; therefore he is well-read; therefore he is intelligent.)
  • In general, do things quickly. Eating, walking, reacting to fire alarms. (Smart people have less time for sitting around.)
  • During conversations, make fun of belie
... (read more)
7faul_sname12y
Definitely this. Tutoring is a very strong signal of intelligence, but is really a matter of learned technique. I was able to tutor effectively in Statistics before I had taken any classes or fully understood the material by using tutoring techniques I had learned by teaching other subjects (notably Physics). The most common question I found myself asking was "what rule do we apply in situations like this," a question you do not actually need to know the subject material to ask.
2dbaupp12y
I'd be interested if you were to expand on this.
2Emily12y
I'm not the OP of that comment, but as a linguistics student I can corroborate. I think there are a couple of reasons that occasionally throwing a relevant piece of linguistic information into a conversation can produce the smartness impression. Firstly, conversations never fail to involve language, so opportunities to comment on language are practically constant if you're attuned to noticing interesting bits and pieces. This means that even occasional relevant comments mean you're saying something interesting and relevant quite frequently. This is an advantage that linguistics has over, say, marine biology. Secondly, I have the impression that most people are vaguely interested in language and under the equally vague impression that they know just how it works -- after all, they use it all the time, right? So even imparting a mundane little piece of extremely basic linguistics can create the impression that you're delivering serious cutting-edge expert-level stuff: after all, your listener didn't know that, and yet they obviously know a pretty decent amount about language!
2Grognor12y
It has worked for me. People are impressed when I point out their own sentence structure, things like how many phonemes are in the word "she", etc. I don't know if this also helps signal intelligence, but I also rarely get confused by things people say. Instead of saying, "What?" I say "Oh, I get it. You're trying to say X even though you actually said Y." Also, I guess it seems like a subject only smart people are interested in. And not even most of them. Guess I got lucky in that regard.
1amcknight12y
It, of course, depends who you're signalling to. These sound to me like ways of signalling that you are intelligent to the unintelligent. (If that. They're good possibilities but I'm skeptical of about half of them.)
1[anonymous]12y
I perhaps should work on this one. It might improve my signal/noise ratio. Your list is quite wisely written.

In a Dark-Arts-y way, glasses?

(A brief search indicates there are several studies suggesting that wearing glasses increases perceived intelligence (e.g. this and this (paywall)), but there are also some suggesting that it has no effect (e.g. this (abstract only)).)

6Jayson_Virissimo12y
There definitely exists a stereotype that people that wear glasses are more intelligent. The cause of this common stereotype is probably that people that wear glasses are more intelligent.
4multifoliaterose12y
But what's the purported effect size?
[anonymous]12y130

Here are a few suggestions, some sillier than others, in no particular order:

  • Join organizations like Mensa
  • Look good
  • Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.
  • If your particular field has certifications you can get instead of a degree, these may be more cost-effective
  • Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)
  • Learn other languages--doing so not only makes you more employable, it can be a big status boost

Much depends on the audience one is signalling to.

Join organizations like Mensa

To stupid or average people, this is a signal of intelligence. To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.

Again this works as a signal to people who are at a remove from these activities, because the average player is smarter than the average human. People who themselves actually play, however, will have encountered many people who happen to be good at certain specific things that lend themselves to abstract strategy games, but are otherwise rather dim.

Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)

Agree with this one. It's especially useful because it has the opposite sorting effect of the previous two. Other intelligent people will pick up on it as a sign of intelligence. Conspicuously unintelligent people will fail to get it.

Learn other languages--doing so not on

... (read more)

To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".

My experience seems to support this. The desire to signal intelligence is often so strong that it eliminates much of the benefit gained from high intelligence. It is almost impossible to have a serious discussion about anything, because people habitually disagree just to signal higher intelligence, and immediately jump to topics that are better for signalling. Rationality and mathematics are boring; conspiracy theories are welcome. And of course, Einstein was wrong: an extraordinarily intelligent person can see obvious flaws in the theory of relativity, even without knowing anything about physics.

Mensa membership will not impress people who want to become stronger and who have some experience with Mensa. Many interesting people take the Mensa entry test, come to the first Mensa meeting... and then run away.

2Normal_Anomaly12y
My experience with Mensa was similar to yours. I joined, read a couple issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine they weren't worth the time. There was far less original thought in Mensa than I had expected.
7khafra12y
Saying this about Mensa is a much better way to signal intelligence to other intelligent people than actually being a Mensa member.
4TheOtherDave12y
Well, it's worth being a little careful here. Saying dismissive things about an outgroup is an effective way to present myself as a higher-status member of the ingroup; that works as well for "us intelligent people" and "those Mensa dweebs" as any other ingroup/outgroup pairing. Which makes it hard to tell whether I'm really signalling intelligence at all.
2Normal_Anomaly12y
Yes, and I knew that when I said it. But it's also true.
7Viliam_Bur12y
Right now my question is: Is abandoning Mensa the most useful thing, or can it be used to increase rationality somehow? Seems to me that the selection process in Mensa has two steps. First, one must decide to take a Mensa entry test. Second, one must decide to be a Mensa member, despite seeing that Mensa is only good for signalling -- this is sometimes not so obvious to a non-member. For example, when I was 15, I imagined that Mensa would be something like... I guess like I now imagine the LW meetups. I expected to find people there who were trying to win, not only to signal intelligence to other members. So I conclude that people who pass the first filter are better material than people who pass both filters. A good strategy could be this: Start a local rationalist group. Become a member of Mensa, so you know when Mensa holds tests. Prepare a flyer describing your rationalist group and give it to everyone who completes the Mensa test -- they will probably come to the first Mensa meeting that follows, but many of them will not appear again. This is what I want to do, when I overcome my laziness. Also I will give a talk in Mensa about rationality and LW, though (judging by reactions in our Facebook group) most members will not be really interested.

The best ways to signal intelligence are to write, say, or do something impressive. The details depend on the target audience. If you're trying to impress employers, do something hard and worthwhile, or write something good and get it published. If you're a techie and trying to impress techies, writing neat software (or adding useful features to existing software) is a way to go.

If you are asking about signalling intelligence in social situations, I suggest reading interesting books and thinking about them. Often, people use "does this person read serious books and think about them?" as a filter for smarts.

5sixes_and_sevens12y
Do something prohibitively difficult that not a lot of people are competent enough to do.
4amcknight12y
Of course, make sure it's something people "know" is hard, like rocket science.
-1sixes_and_sevens12y
I have to admit, I'm mystified as to why this one got downvoted.
0Multiheaded12y
Likely because it could be read as a sarcastic remark resolving to "become intelligent for real, and you wouldn't need to fake anything, you lazy cheating bastard". I wouldn't have downvoted for that, but such a reading had indeed occurred to me at first, before I remembered that I'm at a website of a better sort.
0vi21maobk9vp12y
I guess there are two different questions: signalling intelligence to top-intelligence people, and signalling intelligence to everyone of above-average intelligence and up. In the first case, it is a good plan. In the second case, you would fail.
-1duckduckMOO12y
upvoted. I am also confused.
2Manfred12y
Be interested in lots of things that other people might not find interesting. I think it's the way that I personally signal intelligence the most. For example, if someone has a herpolhode on their desk, I try to ask intelligent questions about it. Or if the rain on the window is dripping in nice straight lines because of the screen occasionally pressing against the glass, notice that.
2D_Alex12y
I have a different perspective on this compared with other commenters... Intelligence is very hard to fake. What's the best way to signal guitar playing skills? Play the guitar, and play it well! The efficient way to signal intelligence is: to do worthwhile things, intelligently!
2faul_sname12y
How can you tell if someone is doing things intelligently?
1D_Alex12y
Fair question, but difficult to answer in brief, I might try to do this later. For now let me answer with a couple of questions: How can you tell if someone is playing a guitar well? In general, can YOU tell the difference between someone doing things intelligently, and doing things unintelligently?
0Viliam_Bur12y
a) Listen to them playing. b) Do they have concerts, CDs, fans, other symbols of "being a successful guitar player"? Do they write blogs or books about guitar playing? Do people write guitar-playing-related blogs and books about them? The second option is less reliable and easier to fake, but it is an option that even a deaf person can use.
4Solvent12y
Speaking as a guitar and piano player: I can do things on guitar and piano that are fairly easy, but look very impressive to someone who doesn't play the instrument. You actually need to play an instrument before you can judge how good someone is accurately. (It's pretty obvious if someone is distinctly bad, of course. But distinguishing different levels of "good" is hard.)
0faul_sname12y
First question: A good guitar player keeps a steady rhythm and hits the appropriate notes with appropriate volume and tone. At a higher level, they improvise in a way that sounds good. Sounding good seems to involve sticking to a standard scale with only a few deviations, and varying the rhythms. At the level above that, I really don't know. Second question: I really don't know, at least not that generally. I think I may use proxies such as the ability to find novel (good) solutions to problems and draw on multiple domains, then aggregate them into one linear value that I call "intelligence". I am probably also influenced by the person's attractiveness and how close their solution is to the one I would have proposed. I would definitely like your take on this as well.
0multifoliaterose12y
Why are you asking?
0Prismattic12y
Depending on the selective university, an advanced degree might not cost much at all. Harvard, for example, only recently started paying the way of its undergraduates, but it has paid the way of its graduate students for a long time.

True, but free tuition or not, it's plenty costly in terms of opportunity cost.

(This is true to an almost hilarious extent if you're a humanities scholar like me: I'm not getting those ten (!!!!!!!) years of my life back.)

4Prismattic12y
Is that the reason for the "grouchy" in "grouchymusicologist"?
2grouchymusicologist12y
Haha, no. I'm only grouchy because people occasionally say ill-informed things about musicology. Other than that, I really like my job and my chosen field. I rarely think I'd be much happier if I had chosen to pursue some lucrative but non-musicological career.
2Solvent12y
What's it like being a musicologist? What do you spend your days doing? How many instruments do you play? What's better out of Mozart's Jupiter Symphony and Holst's Jupiter movement?

Well, I wrote a bit about what musicologists do here. In terms of research areas, I myself am the score-analyzing type of musicologist, so I spend my days analyzing music and writing about my findings. I'm an academic, so teaching is ordinarily a large part of what I do, although this year I have a fellowship that lets me do research full-time. Pseudonymity prevents me from saying more in public about what I research, although I could go into it by PM if you are really interested.

I am (well, was -- I don't play much any more) what I once described as a "low professional-level [classical] pianist." That is, I play classical piano really well by most standards, but would never have gotten famous. At a much lower level, I can also play jazz piano and Baroque harpsichord. I never learned to play organ, and never learned any non-keyboard instruments. Among professional musicologists, I'm pretty much average for both number of instruments I can play and level of skill.

As to pieces about Jupiter, I can only offer you my personal opinion -- being a musicologist doesn't make my musical preferences more valid than yours. Both pieces are great, and I had a special fondness for the H... (read more)

0[anonymous]12y
.

In Marcus Hutter's list of open problems relating to AIXI at hutter1.net/ai/aixiopen.pdf (this is not a link because markdown is behaving strangely), problems 4g and 5i ask what Solomonoff induction and AIXI would do when their environment contains random noise and whether they could still make correct predictions/decisions.

What is this asking that isn't already known? Why doesn't the theorem on the bottom of page 24 of this AIXI paper constitute a solution?

I've been incubating some thoughts for a while and can't seem to straighten them out enough to make a solid discussion post, much less a front page article. I'll try to put them down here as succinctly as possible. I suspect that I have some biases and blindspots, and I invite constructive criticism. In other cases, I think my priors are simply different than the LW average, because of my life experiences.

Probably because of how I was raised, I've always held the opinion that the path to world-saving should follow the following general steps: 1) Obtain ... (read more)

Might it not be even more effective to convince others to become ultra-rich and fund the organizations you want to fund? (Actually, this doesn't seem too far off the mark from what SIAI is doing).

2moridinamael12y
I agree completely. I stopped myself short of saying this in my first post because I wanted to keep it succinct. I would go a bit further to suggest that SIAI could be doing more than merely convincing people to take this path. For example, providing trustworthy young rationalists with a financial safety net in order to permit them to take more risks. (One tentative observation I've made is that nobody becomes wealthy without taking risk. The "self-made" wealthy tend to be risk-loving.)
2faul_sname12y
This is likely worth doing, but I am fairly sure that LWers are for the most part not wealthy enough to create this financial safety net. This seems like a concept that is worth a discussion post: what would LWers do if they had a financial safety net?
[anonymous]12y100

I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind.

Any arguments that legitimately push you towards that conclusion should be easily convertible into actual advice about how to become ultra-rich. I think you're underestimating the difficulty of turning vague good-sounding ideas into effective action.

0moridinamael12y
I think there's plenty of available advice on how to become ultra-rich. Just look at the Business section of any bookstore. The problem is that this advice typically takes you from a 0.001% chance of becoming ultra-rich, through sheer lucky accident or lottery, to a 0.1% chance, through strategy and calculated risks. I'm not arguing that it's not really hard and really improbable. However, folks tend to assess P(becoming wealthy by any means) ~ P(winning the lottery).
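For what it's worth, even those small probabilities change the expected-value picture a lot. A toy calculation (both probabilities from the comment above; the $1B payoff is an assumption, borrowing the figure floated later in this thread):

```python
# Probabilities from the comment above; the payoff is an assumed $1B.
p_luck     = 0.001 / 100   # 0.001% chance by sheer luck
p_strategy = 0.1 / 100     # 0.1% chance with strategy and calculated risks
payoff     = 1_000_000_000

print(f"by luck:     ${p_luck * payoff:,.0f} expected")      # $10,000
print(f"by strategy: ${p_strategy * payoff:,.0f} expected")  # $1,000,000
```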
6Anatoly_Vorobey12y
What's ultra-rich? This claim isn't saying much unless you quantify it. Intuitively, I find both your claims - that most people only try to match their parents' tier, and that it's easy to become ultra-rich if you focus on it - to be wrong, but it'd be interesting to see more arguments or evidence in their favor.
3moridinamael12y
I don't know, a billion dollars? A quick Googling turns up a few papers which suggest that parental expectations largely define a child's level of educational and financial achievement. On a more intuitive level, I can only point out that the clear majority of Americans either don't go to college because their financial ambitions are satisfied by blue collar work, or they go to college in pursuit of a degree with a clear Middle Class career path attached to it. Do you know anybody whose stated goal is to be wealthy, rather than to be a doctor or an engineer or some specific career? I don't.
2Nick_Roy12y
Personally, I figure I'm not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!
2dbaupp12y
Doesn't becoming very wealthy for the purpose of saving the world (and then actually saving the world) count as singlehandedly solving all the problems?
1moridinamael12y
What I was getting at is that the cognitive effort required to actually solve a Millennium problem may be greater than the cognitive effort of making a billion dollars and hiring a thousand mathematicians to work on Millennium problems.
0faul_sname12y
Who's counting?
2dbaupp12y
Is this a joke? (Serious question, I can't tell. FWIW, I was using "count" as "fit the definition of".)
2faul_sname12y
Partly, but not entirely. I noticed that I was asking myself seriously if that counted, then wondered why it mattered if it fit the definition.

Wow, 66 comments in 1 day. It looks like the idea of having a mid-month open thread was a good one.

2shminux12y
Seems like an indication that a third tier of posts, possibly karma-free, might be a good idea. Something like Stupid Questions, or Beginner's Corner, or Sandbox, or...
0Armok_GoB12y
I've been sporadically trying to get something like this done for AGES. There was even a forum made, but without official endorsement it got like 5 members and died within days.
2shminux12y
If you were to offer a tested contrib to the LW code base, Trike might agree to add it on a trial basis, provided EY&Co approve. Not sure what their policies are.
0Armok_GoB12y
No idea how to do that, and won't have for the foreseeable future... I just don't have the attention span for coding or hacking any more, for medical reasons.

Stephen Law on his new book, Believing Bullshit:

Intellectual black holes are belief systems that draw people in and hold them captive so they become willing slaves of claptrap. Belief in homeopathy, psychic powers, alien abductions - these are examples of intellectual black holes. As you approach them, you need to be on your guard because if you get sucked in, it can be extremely difficult to think your way clear again.

Something has been bothering me about Newcomb's problem, and I recently figured out what it is.

It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you pick the million dollar box.

In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.

4khafra12y
Short answer: Yup. Because Omega is a perfect or near-perfect predictor, your decision is logically antecedent, but not chronologically antecedent, to Omega's decision. People like Michael Vassar, Vladimir Nesov, and Will Newsome think and talk about this sort of thing more often than the average lesswronger.
4amcknight12y
You probably know this, but just in case: In Newcomb's problem Omega predicts prior to you choosing. Omega is just really good at this. The chooser doesn't repeatedly observe backwards causality, even if they might be justified in thinking they did.
2faul_sname12y
How is that observably different from backwards causality existing? Perhaps we need to taboo the word "cause".
0TimS12y
It seems very intuitive to me that being very good at predicting someone's decision (probably by something like simulating the decision-process) is conceptually different from time travel. Plus, I don't think Newcomb's problem is an interesting decision-theory question if Omega is simply traveling (or sending information) backward in time.
1faul_sname12y
This is intuitive to me as well, but I suspect that it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe and sending information back in the 'same' universe if the simulation is identical to the 'real' universe?
4TimS12y
Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible? But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of "pre-commitment." Pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation, but not admitting you are changing the topic, is perceived as rude.
3Alejandro112y
Newcomb's problem doesn't lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.
1shminux12y
In the standard formulation (a perfect predictor) one-boxers always end up winning and two-boxers always end up losing, so there is no issue with causality, except in the mind of a confused philosopher.

How did Less Wrong get its name?

I have two guesses; they are not mutually exclusive, but neither depends on the other:

  1. It was Michael Vassar's idea. He is my best guess for who came up with the name.
  2. It was inspired by this essay. This is my best guess for what inspired the name.

I don't know if either of these is true, or both, or whatever. I want to know the real answer.

Searching this site and Google has been useless so far.

9XFrequentist12y
EY polled Overcoming Bias readers on their favorite from a list of several options, and "Less Wrong" was the overwhelming winner. Not sure how the options were generated.
3Grognor12y
Source?
4XFrequentist12y
Memory.
1Solvent12y
I remember Eliezer's post announcing LW. He didn't give any explanation of why it was called that, he just said "tentatively titled Less Wrong." I'd be interested in hearing the answer to this. I suspect it was just a cool name that Eliezer came up with.

An unusual answer to Newcomb's problem:

I asked a friend recently what he would do if encountering Newcomb's problem. Instead of giving either of the standard answers, he immediately attempted to create a paradoxical outcome and, as far as I can tell, succeeded. He claims that he would look inside the possibly-a-million-dollars box and do the following: if the box contains a million dollars, take both boxes; if the box contains nothing, take only that box (the empty one).

What would Omega do if he predicted this behavior or is this somehow not allowed in the problem setup?

[anonymous]12y140

Not allowed. You get to look into the second box only after you have chosen. And even if both boxes were transparent, the paradox is easily fixed. Omega shouldn't predict what you will do (because that assumes you will ignore the content of the second box, and Omega isn't stupid like that) but what you would do if box B contained a million dollars. Then it would correctly predict that your friend would two-box in that situation, so it wouldn't put the million dollars into the second box, and your friend would take only the empty box, according to his strategy. So yeah.

0tgb12y
That's a nice simple way to reword it. Thanks.
7Manfred12y
There actually is a variant where you're allowed to look into the boxes - Newcomb's problem with transparent boxes. And yes, it is undefined if you apply the same rules. However, there are two ways to re-define it. 1: Reduce the scope of the inputs. For example, Omega could operate on the following program: "If the contestant would take only one box when the million dollars is there, put the million dollars there." Before, Omega was looking at both situations, and now it's only looking at one. 2: Increase the scope of the program. There are two possible responses in two possible situations for a total of four inputs, so you just need to define Omega's response for all four. It's interesting that Omega now treats you differently depending on your thoughts, not just depending on which box you take, so this changes the genre of the problem.
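For what it's worth, re-definition 1 is easy to write down concretely. A minimal sketch (names hypothetical; box A is taken to hold the usual $1,000), showing how it handles the conditional strategy from the top of this thread:

```python
# Strategies map what the agent sees in (transparent) box B to a choice.
def conditional_agent(box_b):
    # The strategy from the top of this thread: two-box if the million
    # is there, take only box B if it's empty.
    return "both" if box_b == 1_000_000 else "one"

def one_boxer(box_b):
    return "one"

def omega_fills_box_b(agent):
    # Re-definition 1 above: Omega asks only "would they one-box if the
    # million were there?" and fills box B iff the answer is yes.
    return 1_000_000 if agent(1_000_000) == "one" else 0

for agent in (conditional_agent, one_boxer):
    box_b = omega_fills_box_b(agent)
    choice = agent(box_b)
    payout = box_b + 1_000 if choice == "both" else box_b
    print(agent.__name__, payout)
# conditional_agent ends up taking an empty box B: payout 0
# one_boxer gets the full box B: payout 1,000,000
```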

So I was reading a book in the Ender's Game series, and at one point it talks about the idea of sacrificing a human colony for the sake of another species. It got me thinking about the following question. Is it rational to protect 20 "piggies" (which are morally equivalent to humans) and sacrifice 100 humans if the 20 piggies constitute 100% of their species' population and the humans represent a very very small fraction of the human race. At first, it seemed obvious that it's right to save the "piggies," but now I'm not so sure. Ha... (read more)

4shminux12y
Depends on your goal... If it is the survival of the human colony, then no. If it is the survival of the human race and the piggies hold a key to it, then yes (they do not, in this story). If it is the survival of the pequenino race, then yes. It does not make sense to ask which of the goals is rational, unless you can measure them against something else.
1ahartell12y
Right. Let's say that you just value "intelligent life," though, rather than the humans or pequeninos in particular. Say you're the hive queen. A piggy is equal to a human, and the piggy race is equal to the human race. (I worry that I'm still missing the point, and that the question is moot without first resolving whether you value "diversity" in its own right or not, and that such valuing is a preference independent of rational decision making. Still, I feel as if some preferences can be irrational.)

Does anyone know how one would go about suggesting a new feature for predictionbook.com? I think it would be better if you could tag predictions so that then you could see separate ratings for predictions in different domains. Like, "Oh look, my predictions of 100% certainty about HPMOR are correct 90% of the time but my predictions of 100% certainty about politics are right 70% of the time." Also, you could look at recent predictions for only a specific topic, or see how well calibrated another user is in a specific area.
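In the meantime, the per-tag breakdown is easy to compute yourself on any predictions you can export or scrape. A minimal sketch with made-up data (PB has no tag field today, so the tags here are hypothetical):

```python
from collections import defaultdict

# (tag, stated confidence, did it come true?) -- all data here is made up.
predictions = [
    ("hpmor",    1.0, True), ("hpmor",    1.0, True),
    ("hpmor",    1.0, True), ("hpmor",    1.0, False),
    ("politics", 1.0, True), ("politics", 1.0, False),
    ("politics", 1.0, False),
]

outcomes_by_tag = defaultdict(list)
for tag, confidence, came_true in predictions:
    if confidence == 1.0:                 # one confidence bucket at a time
        outcomes_by_tag[tag].append(came_true)

for tag, outcomes in sorted(outcomes_by_tag.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{tag}: {hit_rate:.0%} of 100%-confidence predictions correct")
# hpmor: 75%, politics: 33% -- i.e. calibration differs by domain.
```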

0gwern12y
http://github.com/tricycle/predictionbook/issues As Anubhav pointed out, PB is not important to Trike since it's orders of magnitude less popular than LW (as useful as I may find it). If you really want tagging for per-domain calibration, you either need to get your hands dirty or put up a bounty.
0Anubhav12y
PB has a severe manpower shortage. New features not coming any time soon, AFAICT.

Moore's Law Won't Fade for Business Reasons

Some writers have claimed that excess computing power will reduce the effort put into designing new and more powerful chips. But even when most users can't make use of the additional power, fear of losing out to the competition will keep designers pushing. Eventually it will become too expensive to keep developing the new technology, but we are still a long way from those limits.

0fubarobfusco12y
This sounds like Marx's "overproduction" thesis: competition drives producers to make more and more regardless of demand. Generally, that sort of thing hasn't happened. Specifically in the computer processing market: really, only gamers and datacenters buy the fastest available general-purpose processors. Other folks buy computers with an eye to convenience, portability, appearance, battery life, etc. rather than raw processing power.
0saturn12y
Both home and datacenter markets seem to be shifting away from raw power and towards energy efficiency (i.e. maximizing computing power per watt) which increases battery life and decreases datacenter costs. This might actually end up propping up Moore's law anyway, as the more efficient transistors get, the more of them can be put on the same chip without overheating. This will bottom out too, eventually, when a battery charge lasts longer than the device itself, or datacenter power and cooling costs become negligible.

Depressing article opposing life extension research is depressing. Brief summary: In the least convenient possible world, human research trials would be unethically exploitative. And this is presented as an argument against attempting to end aging.

I've found a video that would be really cool if it were true, but I don't know how to judge its truth and it sounds ridiculous. This talk by Rob Bryanton deals with higher spatial dimensions, and suggests that different Everett branches are separated in the 5th dimension, universes with different physical laws are separated in the 6th dimension, etc. I can't find much info about the creator online, but one site accuses him of being a crank. Can somebody who knows something about physics tell me if there is any grain of truth to this possibility?

0gwern12y
That reminds me of Tegmark's multi-level classification of multiverses, but that classification doesn't make sense as a spatial set of dimensions, IIRC.

In what ways do Frequentists and Bayesians disagree?

0Oscar_Cunningham12y
For a Bayesian a random quantity is just an unknown one. For example a coin not yet flipped is random (because I don't know which way it will land), and so is the population of Colorado (because I don't know what it is). Frequentists treat randomness as an inherent property of things, so that the coin flip would still be random (because it's not predetermined) but the population of Colorado isn't (because it's already fixed). So given the problem of estimating the population of Colorado, a Bayesian would just hand you back a probability distribution (i.e. tell you how probable each population was). This option wouldn't be available to the Frequentist, who would refuse to put a probability distribution on a variable that wasn't random. Instead the Frequentist would give you an estimate and then tell you that the algorithm that generated the estimate had desirable properties, like being "unbiased".
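A minimal sketch of that contrast in code (toy numbers and a crude grid approximation; nothing here is rigorous):

```python
import numpy as np

# Estimating a fixed-but-unknown quantity (say, a state's population)
# from one noisy measurement. All numbers here are invented.
measurement = 5.1e6
noise_sd = 0.2e6

# Frequentist: the population is fixed, not random, so no distribution over
# it. Report a point estimate plus properties of the *procedure*, e.g. a
# 95% confidence interval of the estimator.
estimate = measurement
ci_95 = (estimate - 1.96 * noise_sd, estimate + 1.96 * noise_sd)

# Bayesian: "unknown" is enough to warrant a distribution. With a flat
# prior, the posterior is proportional to the likelihood.
grid = np.linspace(4e6, 6e6, 2001)
likelihood = np.exp(-0.5 * ((measurement - grid) / noise_sd) ** 2)
posterior = likelihood / likelihood.sum()

print("frequentist:", estimate, "95% CI:", ci_95)
print("Bayesian posterior mean:", (grid * posterior).sum())
# Only the Bayesian can say this -- a probability about a fixed quantity:
print("P(population < 5 million):", posterior[grid < 5e6].sum())
```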

I am interested in guidance on coping with a loved one's irrationality.

I wish it to be known that the next person to sign on as a beta for my fiction is entitled to the designation "pi".

4Solvent12y
I'd be happy to do so. I'm halfway through Summons at the moment, but will probably finish that today or tomorrow.
4Dorikka12y
Though I've also never beta'd before, I'm up to date on Elcenia and would be happy to try. If you want me to do so, just shoot me a PM. It'd also probably be a good idea to let me know what kind of feedback you're looking for.
3daenerys12y
I recently discovered, and devoured, Luminosity. Thank you for contributing to the "rationalist fiction" genre! I haven't started Elcenia yet, but if/when I get caught up, I'll let you know. I've never beta-d before, but I'd be happy to try!
2[anonymous]12y
What value is there in being this "pi"? Also, what's this fiction? (PS. Tau is the one true circle constant)
1Alicorn12y
Pi is a popular Greek letter. In the past this was the fiction, which makes me consider it potentially relevant here (fan density) but lately it is this instead. I'll designate a taubeta after acquiring a pibeta, rhobeta, and sigmabeta.

"My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,"

"I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,"

"Wot evah! I [believe] what I want!"

Question regarding the quantum physics sequence:

This article tells me that the amplitude for a photon leaving a half-mirror in each of the two directions is 1 and i (for going straight and for taking a turn, respectively), given an amplitude of 1 for a photon reaching the half-mirror. This must be a simplification; otherwise, two half-mirrors in a line would give an amplitude of i for the photon turning at the first mirror, an amplitude of i for it turning at the second mirror, and an amplitude of 1 for it passing through both. This means that the squared-modulus ratio is 1... (read more)

2Oscar_Cunningham12y
What dbaupp said. But in particular you square first and then add because arriving at a different time makes the possibilities distinguishable, and so there is no interference (you don't add the complex amplitudes).
0tgb12y
Ah good. This is a good explanation and I had been wondering how the different timing would affect it. Thanks to you and dbaupp.
2dbaupp12y
To get the ratio, one needs to add the squared moduli, so 1/2+1/4+..., and that gives 1.
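Spelling that out, assuming the usual 1/√2-per-mirror normalization that the article's bare "1 and i" omits:

```python
# Each half-mirror multiplies the amplitude by 1/sqrt(2) when the photon
# passes straight through and by i/sqrt(2) when it is deflected -- i.e. the
# article's "1 and i" with the normalization put back in (an assumption here).
straight = 1 / 2 ** 0.5
deflect = 1j / 2 ** 0.5

# Two half-mirrors in a line. The three outcomes arrive at different
# places/times, so they are distinguishable: square each amplitude's
# modulus separately, then add -- no adding of complex amplitudes.
amplitudes = {
    "deflected at mirror 1": deflect,
    "deflected at mirror 2": straight * deflect,
    "passed through both":   straight * straight,
}
probabilities = {k: abs(a) ** 2 for k, a in amplitudes.items()}
print(probabilities)                 # ~0.5, ~0.25, ~0.25 respectively
print(sum(probabilities.values()))   # ~1.0; with n mirrors, 1/2 + 1/4 + ... sums to 1
```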
5multifoliaterose12y
Why do you bring this up? For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in it is almost purely signaling.
1Craig_Heldreth12y
It would be easier to discuss the merits (or lack thereof) of the book if you specified something about the book that you believe lacks merit. The opinion that the book is overly hyped is a common criticism, but too vague to be refuted. It was a bestseller; of course many of the people who bought it are silly.
1multifoliaterose12y
I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.
-1Craig_Heldreth12y
Fair question, but not an easy one to answer. I signed up for the reading group along with the 2600 Redditors. It was previously posted about here. The book is an entry point to issues of Artificial Intelligence, consciousness, cognitive biases, and other subjects which interest me. I enjoy the book every time I read from it, but I believe I am missing something which could be provided in a group reading or a group study. As I stated in the previous thread, I am challenged by the musical references. The last time I read music notation routinely was when I sang in a choir in middle school; many of the references to Bach and to musical terms such as fugue, canon, fifths & thirds, &c. are difficult for me to grasp. If one of those 2600 Redditors felt moved to build some YouTube tutorials with a bouncing ball atop the Bach scores, illustrating Hofstadter's arguments, then I presume many others besides myself would enjoy seeing them. Have you seen that Feynman video where he says he usually dislikes answering "why" questions? If not that, perhaps the Louis C. K. standup routine where he talks about his daughter asking "why?" It is a discussion prompt, but it often does not point anywhere. I have the feeling now that I am rambling.
0multifoliaterose12y
I know Bach's music quite well from a listener's perspective, though not from a theoretician's perspective. I'd be happy to share some recordings of pieces that I've enjoyed / have found accessible. Your last paragraph is obscure to me, and I share your impression that you started to ramble :-).
0Grognor12y
http://predictionbook.com/predictions/5015

Utility functions do a terrible job of modeling our conscious wants and desires. Our conscious minds are too non-continuous to be modeled effectively. But our total minds are far more continuous: radical changes are rare, which is why "character" and "personality" are recognizable over time, often despite our conscious desires, even quite strong conscious desires.

What is the rational case for having children?

One can tell a story about how evolution made us not simply enjoy the act that causes children, but want to have children. But that's not a reason; that's a description of the desire.

One could tell a story about having children as a source of future support or cost-controlled labor (i.e. farmhands). But I think the evidence is pretty strong that children are not wealth-maximizing in the modern era.

And if there is no case for having children, shouldn't that bother us on "our morality should add up to normal, ceteris paribus" grounds?

8jimrandomh12y
Rationality helps you map out the relations between actions and goals, and between goals and subgoals; and it can help us better understand the structure of the goals we already have. We can say that doing something is good because it helps achieve goals, or bad because it hinders them; and we can say that certain things are also goals (subgoals), if achieving them helps with our original goals. However, this has to bottom out somewhere; and we call the places where it bottoms out - goals that're valued in and of themselves, not just because they help with some other goal - terminal values. Rationality has nothing whatsoever to say about what terminal values you should have. (In fact, those terminal values are implicit when you use the word "should".) For people who want children, that is usually a terminal value. You cannot argue that it's good because it achieves something else, because that is not why people think it's good.
4TimS12y
You are right. And that's at least the second time I've made that mistake, so hopefully I'll learn from it. Let me ask the sociological question I should have asked: It appears that many of the folks invested enough in "rationality" to be active participants in LW not only don't have children, but think that having children is not a good goal. That constellation of beliefs suggests that there is some selection pressure that links those two beliefs. Should the existence of that selection pressure worry us on "Add up to normal" grounds?
1torekp12y
This seems to be a near-consensus here at LessWrong. But I'm not convinced that "it bottoms out in goals that're valued in and of themselves" follows from "this has to bottom out somewhere". I grant the premise but doubt the conclusion. I doubt that where-it-bottoms-out needs to be, specifically, goals -- it could be some combination of beliefs, habits, experiences, and/or emotions, instead. But you say, we call the places where it bottoms out goals ... (emphasis added). Of course, you can do that, and it's even true that people will pretty well understand what you mean. You can call these things goals, and do so without doing terrible violence to the language, but I'm not convinced that this is the most felicitous way of speaking about motivation and ethical learning. Whether these bottom-level items are best described as goals, or habits, or beliefs, or something quite different, depends on psychological facts which may not yet be in (sufficient) evidence.