All of Nominull's Comments + Replies

Yeah, I feel like in real world situations, hypothesizing time travel when things don't make sense is not likely to be epistemically successful.

Wasn't there a proverb about generalizing from fictional evidence? Especially from fiction that intentionally doesn't make sense?

1Gunnar_Zarncke
Generalization from fictional evidence
3dxu
I don't think the quote is talking about "hypothesizing" anything; I read it more as "You have to update on evidence whether that evidence fits into your original model of the world or not". Instead of "hypothesizing time travel when things don't make sense", it'd be more like a stranger appears in front of you in a flash of light with futuristic-looking technology, proves that he is genetically human, and claims to be from the future. In that case it doesn't matter what your priors were for something like that happening; it already happened, and crying "Impossible!" is as illegal a move in Bayes as moving your king into check is in chess. Not that such a thing is likely to happen, of course, but if it did happen, would you sit back and claim it didn't because it "doesn't make sense"?

That seems like a failure of noticing confusion; some things that seem clear are actually false.

1khafra
No observation is false. Any explanation for a given observation may, with finite probability, be false, no matter how obvious and inarguable it may seem.
1VAuroch
That may be, but if you label them 'impossible' and dismiss them, you won't gather more evidence to prove it. And if something you consider impossible has actually happened, you're missing an opportunity to improve your model significantly. This is in fact what happens in-context. With a preposterously detailed description of observable events (via magic hypnosis; I didn't say the novel made sense), Gently concludes that something has happened which could not have happened as described, and that the only explanation for the results involves time travel; the other person says that it's impossible, to which Gently replies this.
7CCC
I saw it as more of a warning about the limits of maps - when something happens that you think is impossible, then it is time to update your map, and not rail against the territory for failing to match it. (Of course, it is possible that you have been fooled, somehow, into thinking that something has happened which has, in actual fact, not happened. This possibility should be considered and appropriately weighted (given whatever evidence you have of the thing actually happening) against the possibility that the map is simply wrong.)
Nominull140

This post is a good example of why LW is dying. Specifically, that it was posted as a comment to a garbage-collector thread in the second-class area. Something is horribly wrong with the selection mechanism for what gets on the front page.

0Gvaerg
[deleted]

Underconfidence is a sin. This is specifically about gwern's calibration. (EDIT: or his preferences)

Not everyone is in the same situation. I mean, we recently had an article disproving the theory of relativity posted in Main (later moved to Discussion). Texts with less value than gwern's comment regularly get posted as articles. So it's not like everyone is afraid to post anything. Some people should update towards posting articles, some people should update towards trying their ideas in Open Thread first. Maybe they need a little nudge from outside first.

H... (read more)

eeuuah120

Let's talk about the fact that the top two comments on a very nice contribution in the open thread are about how this is the wrong place for the post, or how it is why LW is dying. Actually let's not talk about that.

4NancyLebovitz
Is there a named fallacy of using words which radically downplay or upplay the seriousness of a situation? Teenagers sometimes get thrown out of their families for coming out. This is more than an inconvenience, and affects more than their educational plans.
5Prismattic
And the worst argument in the world rears its ugly head once more.
9wedrifid
You present a compelling argument that "scamming money out of people because it would be inconvenient not to" can be an entirely ethical and appropriate course of action. Lumping a particular scenario that has already been analysed on its merits (and seems reasonable) into a despised reference class serves to change the reference class, not the instance.
gjm130

This seems like a wilfully unfair description of Chris's position.

It's a scam if you take someone's money intending to do something other than what you tell them you'll do with it, or (maybe) intending to do it for very different reasons from the ones you give them, or with very different prospects of success. But Chris's hypothetical youngster is doing with the money exactly what his or her parents expect (getting educated), with the same purpose and the same likely outcomes as if s/he were straight. Where's the scam?

And the donors in question aren't gene... (read more)

Nominull140

Does that actually work better than just setting a higher bar for significance? My gut says that data is data and chopping it up cleverly can't work magic.

0ChristianKl
How do you decide how high to hang your bar for significance? It's very hard to estimate how high you have to hang it depending on how you go fishing in your data. The advantage of the two-step procedure is that you are completely free to fish how you want in the first step. There are even cases where you might want a three-step procedure.

Cross-validation is actually hugely useful for predictive models. For a simple correlation like this, it's less of a big deal. But if you are fitting a locally weighted linear regression, for instance, chopping the data up is absolutely standard operating procedure.
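To make the two-step idea concrete, here is a minimal sketch in Python; the data, variable names, and the p < 0.05 bar are all invented for illustration. You fish freely in one half of the data, then test only the single hypothesis you settled on against the held-out half.

```python
# Minimal sketch of the two-step (explore / confirm) procedure.
# All data and thresholds here are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10))      # 200 subjects, 10 candidate variables
outcome = rng.normal(size=200)

explore, confirm = data[:100], data[100:]            # split once, before looking
out_explore, out_confirm = outcome[:100], outcome[100:]

# Step 1: fish freely in the exploration half -- pick the strongest correlate.
best = max(range(explore.shape[1]),
           key=lambda j: abs(stats.pearsonr(explore[:, j], out_explore)[0]))

# Step 2: test only that one hypothesis on the held-out half, so an ordinary
# significance bar (say p < 0.05) still means what it says.
r, p = stats.pearsonr(confirm[:, best], out_confirm)
print(f"variable {best}: r = {r:.2f}, p = {p:.3f}")
```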

If I have no value as a human, I desire to believe I have no value as a human.

1NancyLebovitz
Would that desire be enough to qualify you as a human?
Nominull710

Are you planning to do any analysis on what traits are associated with defection? That could get ugly fast.

(I took the survey)

Kinsei140

Well, remember that that's a zero-sum game within the community, since it's coming out of Yvain's pocket. I was going to reflexively cooperate, then I remembered that I was cooperating in transferring money from someone who was nice enough to create this survey to people who were only nice enough to answer.

Nominull-10

Probably the only two things the True Patronus can look like are humans and snakes. Possibly flying squirrels?

-1TrE
What about parsley?

There's room for improvement, but this is just a rant. It's useless for the project of improvement, because he's attacking anything he can find a clever turn of phrase for, rather than the things that especially deserve attack.

It's not even useful to see where the PR failure is, because once something set him off, everything suddenly became a PR failure. Look for insight in saner places than this, please.

Nominull-30

It looks like he's turned the flawed methodology of the skeptic community (things like pattern-matching against surface features of known bullshit and mockery as an argument) on the skeptic community itself. I'd say "serves them right" except we're supposed to be virtuous identity-free robots who take no pleasure in or offense from anything.

-2MrMind
I agree wholeheartedly: I've often been misidentified as a skeptic, even though I have written several rebuttals of the Italian rationalist association's methods. Nonetheless, "I'm a virtuous identity-free robot who takes no pleasure in or offense from anything" is a great tag for a T-shirt.

It strikes me as a little awful to only care about bad people inasmuch as they're likely to become good people. Maybe I've been perverted by my Catholic upbringing, but I was taught to love everyone, including the sinners, including the people you'd never want to hang out with. This appeals to me in part because I sin and people don't want to hang out with me, and yet I want to be loved regardless.

It's possible that I am the weird one here, but shows with complex but evil characters such as Breaking Bad do seem largely popular. There is a large current i... (read more)

-2smk
Yes, shows like that are very popular, and I'm getting really sick of it. I don't understand it, but I don't really think that it's false sophistication. Or courageous self-examination.
3Eugine_Nier
I believe you're also supposed to hope/encourage the sinners to repent and stop sinning, i.e., you're supposed to root for them to become better people.
2Leonhart
I feel a little misrepresented, but that's my own fault. I think we'd have to do quite a bit more unpacking to continue this conversation - you seem to want to mean the same thing by "love", "care about" and "sympathize with", and I think they all come apart for me. Like, maybe (warning: I'm tired) "love" feels like a timeless relation to a particular person-moment, whereas "caring" is timeful and inherently about wanting a possible-future-person to be better than a present-person, including morally better - surely something like that has to be the substance of caring? Like, what else is caring supposed to do? Just give me warm fuzzies? I also think that I use different cognitive strategies to deal with real people I actually know, versus fictional characters (though I'm not necessarily endorsing that).

I think this is probably being too kind to my unhandicapped abilities. Rather than handicapping myself because I'm too powerful, I think the key issue is that I see things on a metalevel and analytically, such that I can notice that there is little difference between "social adeptitude" and "manipulation". And so, in order to avoid being manipulative, I consciously avoid developing social skills. I think reflectivity and pathological non-hypocrisy are the key dynamics, not inherent manipulative ability.

Right, but he seems to implicitly claim that characters who follow those disvirtues are necessarily unsympathetic. Some of us are sometimes disvirtuous.

2Leonhart
Well, yes, I'm often disvirtuous. I'm also often unsympathetic. These episodes reliably co-occur :) But seriously, I'm now confused and don't think I was addressing your point. Eliezer seemed to me to be talking mostly about "uninteresting", not "unsympathetic", though I'm not clear to what extent these are orthogonal for him. Can you unpack "sympathy" a bit? When I use it of Evil+Good character A, I think it means something like "I want to see A survive a bit longer, so that he/she can develop into character B, who is the happiest, healthiest, sanest extrapolation of A". I think Evil+Evil characters are unsympathetic/uninteresting in this sense; there's nothing there that I can extrapolate into someone I'd want to hang out with. My brain's come up with two other possible components of 'sympathy' that strike me as somehow bad ideas (not attributing them to you):
"I share some disvalued traits in common with character A, so not liking them makes me somehow hypocritical"
"I'll form an alliance with A for mutual defence against social opprobrium for our shared flaw X"
Nominull120

I suspect the thermostat is closer to the human mind than his conception of the human mind is.

Today we kneel only to hypocrisy.

Nominull-10

You're confusing "evil" with "unsympathetic". Maybe those mean the same thing to you, but we don't all have your unimpeachable moral character.

5Leonhart
I don't think he is. Perhaps "evil" here just means an object-level match against some entries in Nega-Frankena's list of disvalues, including: death, apathy and stasis, sickness and enervation, pain and frustration of all or certain kinds, unhappiness, blight, malcontent, etc.; untruth; delusion and lies of various kinds, incomprehension, folly; ugliness, discord, monstrosity in objects contemplated; numbing experience; morally bad dispositions or flaws; mutual contempt, hatred, enmity, defection; unjust distribution of goods and evils; mania and obsession in one's own life; helplessness and experience of impotence; pointless abnegation; enslavement; strife, terror; tedium and repetition; and bad reputation, disgrace, shame, etc.
Nominull160

Peace if possible, truth at all costs. -- Martin Luther

The fact that he started some really bloody wars over something that didn't even turn out to be true should maybe give us some pause before we endorse virtues like this.

-1fubarobfusco
His frankly obscene antisemitic fantasies don't speak very well for him, either.

That's harder to distinguish from the outside.

If you run in social circles where having well-calibrated beliefs is high-status, not gonna name any names.

0Qiaochu_Yuan
But it's easier to have well-calibrated beliefs about things that aren't politics. Also more useful (e.g. if those things are how to run a startup properly, or how to exercise properly, or...). Most people aren't in a position to test most political beliefs.

Wait, systematic cheating is far worse than systematic racism? That seems, uh, non-obvious to me.

Because when I introspect on my preferences it doesn't seem to hold.

1Fhyve
Can you give an example? I am having a hard time imagining preferences contradicting that axiom (which is a failure on my part).
4[anonymous]
Examples?

I don't trust the transitivity axiom of VNM utility. Thought I should mention this to make it clear that the "most of us" in your post is not a rhetorical device and there really are actual people who don't buy into the VNM hegemony.

8[anonymous]
Thanks for pointing that out. I did try to make it clear that the essay was about "if you trust VNM, here's what it means". I, for one, trust the transitivity axiom. It seems absurd to value going in circles, but only if you run into the right lotteries. Maybe you could give an example of a preference cycle you think is valuable, so the rest of us could see where our intuitions diverge?
2Fhyve
Out of curiosity, why don't you trust the transitivity axiom?

I dunno, I feel like judgments of awesomeness are heavily path-dependent and vary a lot from person to person. I don't hold out a lot of hope for the project of Coherent Extrapolated Volition, but I hold out even less for Coherent Extrapolated Awesomeness. So the vision of the future is people pushing back and forth, the chuunibyous trying to fill the world with dark magic rituals and the postmodernists wincing at their unawesome sincerity and trying to paint everything with as many layers of awesome irony as they can.

Also, from a personal perspective, I r... (read more)

0Multiheaded
Reference for those not keeping up with current anime.
6[anonymous]
This is the unextrapolated awesomeness. I think we would tend to agree on much more of what is awesome if we extrapolated. This is a serious bug. Non-exciting and comfortable can be awesome, even though the word doesn't bring it to mind. Thanks.

No. A is [1,3,5,7], B is [4,4,4,4]. A random member of A will be closer to a random member of B than to another random member of A.
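A quick numeric check of that claim, assuming "another random member of A" means a second, distinct draw:

```python
# Average distance from a random member of A to a random member of B,
# versus to a different random member of A.
from itertools import product

A = [1, 3, 5, 7]
B = [4, 4, 4, 4]

cross = [abs(a - b) for a, b in product(A, B)]
within = [abs(x - y) for x, y in product(A, A) if x != y]

print(sum(cross) / len(cross))    # 2.0 -- A-to-B average
print(sum(within) / len(within))  # 4.0 -- A-to-A average
```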

1ygert
I probably would say that that is because your two sets A and B do not carve reality at its joints. What I think army1987 intended to talk about is "real" sets, where a "real" set is defined as one that carves reality at its joints in one form or another.
8[anonymous]
Partial application, actually... Currying is the transform on the function that makes partial application easy. What we are doing here is partially applying relations... Maximum nitpick.
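For readers who haven't met the nitpick: a small sketch of the distinction, with made-up function names.

```python
# Partial application fixes some arguments of an existing function;
# currying rewrites the function so it takes its arguments one at a time.
from functools import partial

def distance(a, b):               # ordinary two-argument function
    return abs(a - b)

def curried_distance(a):          # curried form
    return lambda b: abs(a - b)

near_four = partial(distance, 4)        # partial application of `distance`
also_near_four = curried_distance(4)    # applying the curried form once

print(near_four(7), also_near_four(7))  # 3 3
```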
Nominull180

Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.

2twanvl
In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect posts on abstract violence have. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.
8kodos96
Passive-aggression level: Obi-Wan Kenobi
Nominull200

Given how anxious he is about the idea of romance I would think he would tend to shy away from anything that realistic. Snape is safe since a teacher/student relationship would be excluded on ethical grounds. Draco could actually happen, and so better not to think about.

I was basically quoting Quirrell from his first DDA lesson. He says that he's teaching defense against wizards because they can keep you from being able to run. From this I drew the conclusion that wizards can keep you from being able to run, and this is a problem you might have to worry about in practice, even when facing wizards less powerful than Voldemort.

0RolfAndreassen
Mmm. He says that, but it does not immediately imply that the anti-Apparation jinx is something that can be cast quickly, or under combat conditions. Most spells seem to require line of sight; if Voldemort has to be able to see you to jinx you, he might as well cast AK. The only times we actually see anti-Apparation in action, it is applied to places - Hogwarts, Ministry of Magic, Azkaban - not individuals.

Apparation can be blocked. That's what makes Dark Wizards more dangerous than any other monster you might fight - you can't just Apparate away.

8RolfAndreassen
It can be blocked, yes, but this appears to be a fairly major jinx, the equivalent of a lot of capital equipment. Hogwarts is known to be anti-Apparation jinxed, as is Azkaban, but I don't recall any other places where it's mentioned. (Ministry of Magic, perhaps; implied by the workers there commuting in a fairly standard fashion instead of just Apparating from their homes.) It's not clear that it can be installed on an average wizard's home. Anyway, you'd think it would prevent inward, but not outward, Apparation by default; you don't want to be suddenly attacked but you might want to make a quick escape. Dumbledore captures (in that fanfic by Rowling, that is, not the canon) some Death Eaters by preventing their Apparation, but he has to duel them first, so he might as well have cast AK. All that aside, she's an experienced combat wizard with a few seconds to spare. If she can't Apparate, she still has the option of grabbing Harry, blowing a hole in the wall, and running. "Accio Broomstick", anyone? Or whatever flight spell Voldemort uses to go up the stairs without footsteps. Come to think of it, why were they hiding in an apparently average home without special defenses, and relying on mere secrecy? Put them in Hogwarts, with its layers upon layers of magical fortifications. Draco states that Hogwarts is an impregnable fortress, and presumably Voldemort thinks so too since he doesn't attack Dumbledore there and end the war at a stroke.

"One of the dark truths of the Killing Curse, son, is that once you've cast it the first time, it doesn't take much hate to do it again."

"It damages the mind?"

Again Moody shook his head. "No. It's the killing that does that. Murder tears the soul - but that's just the same if it's a Cutting Hex. The Killing Curse doesn't crack your soul. It just takes a cracked soul to cast." If there was a sad expression on the scarred face, it could not be read. "But that doesn't tell us much about Monroe. The ones like Dumbledore who'

... (read more)
0Cakoluchiam
"It takes a cracked soul to cast." and "Murder tears the soul." just says that if you've gotten to the point where you could cast it once, that particular pre-requisite is already accomplished, so the work to crack your soul is already put in. It doesn't say anything about removing the requirement of wanting someone dead. Though, so long as we're looking at evidence, if we take Quirrell at his word, then his ability to cast the spell despite not wanting his opponent dead is pretty strong evidence that the requirement is in fact removed. In fact, we already know that some "requirements" to cast spells are not set in stone: from that same scene, Harry cast the true patronus without the carefully practiced stance and wand twitches, instead merely "one desperate wish that an innocent man should not die -"—but the constant requirement in this case seems to be the thought that accompanies the casting of the spell, which is why I'm hesitant to believe the wish of death is removed from AK's casting requirement.

Please don't learn anything from the black arts threads. That's why they're called "black arts", because you're not supposed to learn them.

2Nornagest
Is that why? I wonder, sometimes. Given our merry band's contrarian bent, it occurs to me that calling something a "dark art" would be a pretty good way of encouraging its study while simultaneously discouraging its unreflective use. You'd then need to come up with some semi-convincing reasons why it is in fact too Dark for school, though, or you'd look silly. On the other hand it doesn't seem to be an Eliezer coinage, which would have made this line of thinking a bit more likely. "Dark Side epistemology" is, but has a narrow enough meaning that I'm not inclined to suspect shenanigans.
4almkglor
Although it might be good to be aware that you shouldn't remove a weapon from your mental arsenal just because it's labeled "dark arts". Sure, you should be one heck of a lot more reluctant to use them, but if you need to shut up and do the impossible really really badly, do so - just be aware that the consequences tend to be worse if you use them. After all, the label "dark art" is itself an application of a Dark Art to persuade, deceive, or otherwise manipulate you against using those techniques. But of course this was not done lightly.
2JoshuaZ
Well, one could certainly learn from the dark arts threads what not to do and what to be aware of to watch out for.
Nominull110

No, not "of course". It only implies that if they're rational actors, which of course they are not. They are deal-averse and if they see you trying to pump them around in a circle they will take their ball and go home.

You can still profit by doing one step of the money pump, and people do. Lots of research goes into exploiting people's circular preferences on things like supermarket displays.
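A toy sketch of that one-step pump; the agent, the preference cycle (A over B, B over C, C over A), and the per-swap fee are all hypothetical, purely to show how a cycle leaks money:

```python
# Hypothetical agent with a circular preference: A > B, B > C, C > A.
# Each time it trades up to something it prefers, it pays a small fee.
preferences = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}

def prefers(held, offered):
    """True if the agent would pay a fee to swap `held` for `offered`."""
    pair = (held, offered) if (held, offered) in preferences else (offered, held)
    return preferences[pair] == offered

held, fee, paid = "A", 0.05, 0.0
for offered in ["C", "B", "A"]:     # walk the cycle once: A -> C -> B -> A
    if prefers(held, offered):
        held, paid = offered, paid + fee

print(held, round(paid, 2))         # back to "A", but 0.15 poorer
```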

1ygert
I think you are taking my point as something stronger than what I said. As you pointed out, with humans you can often money pump them once, but not more than that. So it can not truly be said that that preference is fully circular. It is something weaker, and perhaps you could call it a semi-circular preference. My point was that the thing that humans exhibit is not a “circular preference” in the fullest technical sense of the term.

There's a whole literature on preference intransitivity, but really, it's not that hard to catch yourself doing it. Just pay attention to your pairwise comparisons when you're choosing among three or more options, and don't let your mind cover up its dirty little secret.

4Nick_Tarleton
Can you give an example of circular preferences that aren't contextual and therefore only superficially circular (like Benja's Alice and coin-flipping examples are contextual and only superficially irrational), and that you endorse, rather than regarding as bugs that should be resolved somehow? I'm pretty sure that any time I feel like I have intransitive preferences, it's because of things like framing effects or loss aversion that I would rather not be subject to.
1A1987dM
That does happen to me from time to time, but when it does (and I notice that) I just think “hey, I've found a bug in my mindware” and try to fix that. (Usually it's a result of some ugh field.)
8lukeprog
Yup. Possible cause: motivations are caused by at least 3 totally different kinds of processes which often conflict.

People can't order outcomes from best to worst. People exhibit circular preferences. I, myself, exhibit circular preferences. This is a problem for a utility-function based theory of what I want.

0ygert
This would mean, of course, that humans can be money-pumped. In other words, if this is really true, there is a lot of money out there "on the table" for anyone to grab by simply money-pumping arbitrary humans. But in real life, if you went and tried to money-pump people, you would not get very far. But I accept a weaker form of what you are saying: that in the normal course of events, when people are not consciously thinking about it, we can exhibit circular preferences. But in a situation where we actually are sitting down and thinking and calculating about it, we are capable of "resolving" those apparently circular preferences.
4[anonymous]
Interesting. Example of circular preferences?

You talk like you've solved qualia. Have you?

0MrMind
I don't know what others accept as a solution to the qualia problem, but I've found the explanations in "How an algorithm feels from the inside" quite spot-on. For me, the old sequences have solved the qualia problem, and from what I see the new sequence presupposes the same.
7[anonymous]
Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.
CronoDAS140

"Qualia" is something our brains do. We don't know how our brains do it, but it's pretty clear by now that our brains are indeed what does it.

I'd pick the $1.03, so long as it was in the form of an electronic funds transfer and not more pennies to clutter up my pockets. I guess I probably qualify as a very strange person though?

29eB1
If you get rid of every possible transaction cost, even implicit costs of the inconvenience of dealing with the transaction and currency, then I would agree that many people would take the $1.03. In fact, I would too. It's only that these costs are neglected as not being real that I see a problem with. Utility of money just isn't that simple. For example, some people might actually prefer that the money be handed to them rather than transferred to their bank account even at the same moment, because if they have found money in-hand, they won't feel guilty about using it to purchase something "for themselves" rather than paying bills. In fact, someone might prefer $50 handed to them in cash over $55 transferred to and immediately available in their bank account. Is that irrational, or are they just satisfying their utility function in light of their cognitive limits to control their feelings of guilt? In some ways people want it to be answered in an ideal scenario, as if it's a physics problem, but I don't think that's how most people answer questions like this. Most people read the question, imagine a particular scenario (or set of scenarios, maybe), and answer for that scenario. If you want people to answer in an ideal scenario then the question is underspecified. Also, it's not clear to me that you can make it an ideal scenario and retain the effect in question, because the more people look at it like a physics problem with a right answer, the less likely they are to answer in a way that's in line with their behavior in the normal circumstances you are trying to predict.

My math was "do I need any more money than I have or could borrow in the next 60 days, no, ok I'll take the higher amount". I suppose this heuristic fails if the higher amount is higher by less than the interest rate I could earn over 60 days, but short term interest rates are effectively 0 right now.

Well, I tried to make a post once, got downvoted into oblivion, and decided not to put myself through that again. So yeah this happens for real, although perhaps in my case it is no big loss.

The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it's only because an online community is generally less important to people's lives than society at large.

A bad king can absolutely destroy an online community. Banning all the good people is actually one of the better things a bad king can do, because it can spark an organized exodus, which is just inconvenient. But by adding restrictions and terrorizing the community with the threat of bans, a bad king can make the good people self-deport. And then the community can't be revived elsewhere.

1Vaniver
There's a big difference between exile and prison, and the power of exile depends on the desirability of the place in question.
9[anonymous]
I admit, I have seen braindead moderators tear a community apart (/r/anarchism, for one). I have just as often seen lack of moderation prevent a community from becoming what it could (4chan, though I'm unsure whether 4chan is glorious or a cesspool). And I have seen strong moderation keep a community together. The thing is, death by incompetent dictator is much more salient to our imaginations than death by slow entropy and September effects. Incompetent dictators have a face, which makes us take that threat much more seriously than an unbiased assessment would warrant.

Well, I agree, but "mere" probably isn't a sensationalist enough way to describe a 7 point drop in IQ.

2A1987dM
Honestly, so long as the drop is due to lower-IQ people arriving rather than higher-IQ people leaving, I can't see why it's such a big deal -- especially if the “new” people mostly just lurk. Now, if the average IQ of only the people with > 100 karma in the last 30 days was also dropping with time...
4Kindly
Okay, I agree, maybe that was pushing it a little.

Maybe it just means Reddit-folk are surprisingly smart? I mean, IQ 130 corresponds to 98th percentile. The usual standard for surprise is 95th percentile.
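For reference, the 98th-percentile figure is just the usual normal model of IQ scores (mean 100, SD 15):

```python
# IQ 130 is two standard deviations above the mean on the usual scale.
from statistics import NormalDist

print(f"{NormalDist(mu=100, sigma=15).cdf(130):.1%}")   # ~97.7%
```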

Nominull350

The destruction of LW culture has already happened. The trigger was EY leaving, and people without EY's philosophical insight stepping in to fill the void by chatting about their unconventional romantic lives, their lifehacks, and their rational approach to toothpaste. If anything, I see things having gotten somewhat better recently, with EY having semi-returned, and with the rise of the hypercontrarian archconservative clique, which might be wrong about everything but at least they want to talk about it and not toothpaste.

0metatroll
tl;dr: The following is a non-profit fan-based parody. Less Wrong, the Singularity Institute, and the Centre for Applied Rationality are owned by Hogwarts School, Chancellor Ray Kurzweil, and the Bayesian Conspiracy. Please support the official release. Troll Wrongosphers with Baumeister and Eddington, not Benedict and Evola Wrongosophical trolling should be based on genuinely superior psychological insights ("Baumeister" for breakthroughs in social psychology such as those summarized in Vohs & Baumeister 2010) and on crackpot science that is nonetheless difficult to debunk ("Eddington" for the fundamental theory described in Durham 2006). Starting from reaction and religion, as many trolls still do, both (1) promotes unpleasant ideas like God and conservatism and (2) fails to connect with the pragmatic and progressive sensibility of 21st-century culture. Once young trollosophers are equipped with some of the best newthink and pseudoscience, then let them dominate the subversive paradigm. I'll bet they get farther than the other kind.
7John_Maxwell
Anna Salamon, 2009. So this "destruction" was at least semi-planned.
3[anonymous]
lulz. Why do I feel identity-feels for that phrase? I should watch out for that. But that's what I thought a few months ago. Then everything turned inside out and I realized there is no god. What a feeling! Now I see people confidently rationalizing the cultural default, and realize how far we have to go WRT epistemic rationality.
1[anonymous]
If EY didn't intend for said "destruction" to happen, he should have chosen a website model more suitable to that end.

Uploads would at least have them in the first place to be attached to.

Nominull210

And we independently observe that almost no one can do good philosophy at all, so the theory checks out.

Nothing better than a hypothesis that makes correct empirical predictions!

-4diegocaleiro
Besides the sciences that Luke mentioned, don't forget people also need to learn the subsets of philosophy which actually are consistent and compatible with science. In the case of philosophy of mind, I began a list here: http://lesswrong.com/lw/58d/how_not_to_be_a_na%C3%AFve_computationalist/ What seems needed is a group of creative 150-IQ people willing to take the MegaCourse and create good philosophy as fast as possible, so we can use it for whatever purposes. Probably that group should, like the best intellectual groups examined by Domenico de Masi in his "Creativity and Creative Groups", get a place to be together, and work earnestly and honestly. Finally, they must be sharp in avoiding biases, useless discussions, and counterfactual intuitions. This gets more likely every minute....
Nominull150

Yeah, it sucks that you can't do good philosophy without knowing a ton of other stuff, but that's life. We don't listen to electrical engineers when they complain about needing to know nitty-gritty calculus, and that's a year of study for someone with an IQ over 150. Sometimes fields have prerequisites.

4Bugmaster
You could do good programming without knowing too much physics. You could probably do good physics without knowing too much machine learning, assuming you have someone in your department who does know machine learning. You could do good biology with chemistry alone, though that requires minimal physics, as well. But lukeprog's curriculum / reading list suggests that you can't do good philosophy without knowing math, machine learning, physics, psychology, and a bunch of other subjects. If that is true, then virtually no one can do good philosophy at all, because absorbing all the prerequisites will take a large portion of most people's lifetimes.
2[anonymous]
Correction: you don't. Those of us who teach EEs (really, any class of engineers), do.
0Wei Dai
Agreed! What? Surely lots of electrical engineers have an IQ less than 150 (the average being approximately 126; ETA: actually that's the average for EE PhD students, but still). How did they pass their calculus courses?

I'd point to myself as a counterexample, I appreciate the DMV sticking it to those externality-creating motorists while I enjoy proper liberal low-emissions modes of transportation.

6Randy_M
But as that is not the purpose of the DMV, I find your appreciation only validates the complaints. That is, you share the view that the DMV creates some amount of misery for automobile drivers; you just don't happen to object to that group being that miserable.
Nominull-40

Is it really a modern city without conservatives whining about poor service at the DMV? Although I guess if you got rid of all the clerks service would probably get even worse.

2Sengachi
I would argue that everybody complains about poor service at the DMV.
Nominull410

the past is a third-world country

JoshuaZ210

The past is in some respects worse than a third world country. In the United States around 1900, life expectancy started at around 50 and climbed steadily to reach around 60 by 1930 (curiously, the Great Depression didn't cause a slump in life expectancy, although the rate of growth did slow). Source with related data (pdf). But, if one looks at current life expectancy in many countries in the developing world, most countries exceed the US-1900 numbers. Similar comparisons can be made for literacy and many other metrics of success. The middling develo... (read more)

Your mistake lies in using the word "I" like it means something. There is some mwengler-stuff, it has some properties, then there is a split and the mwengler-stuff is in two separate chunks. They both experience their "stream of consciousness" showing up in their particular branch, they both wonder how it is that they ended up in the one branch rather than the other.
