But is arrogance justified in an epistemic sense? If you regularly score at, say, the 99.9th percentile on standardized tests or [insert-preferred-method-of-evaluation-here], does that entitle you to be dismissive of someone's arguments (even if you don't give any outward sign of it) until you see sufficient evidence that that person is likewise exceptional? (I'm not being rhetorical here; this is a question that I'm genuinely undecided on.)
I'd say there are cases where it's reasonable to dismiss others' opinions out of hand (apart from politeness etc.) BUT it takes more than "I'm much smarter than them"; there should be a factor like "I have all the evidence they have / I know all they know on that topic" and of course "I have good reasons to believe I'm smarter than them and know more".
And even then it's the kind of thing that's reasonable "on average", i.e. it can be a decent time-saving heuristic if needed, but it can still go wrong. Say Alice is studying for a Masters degree in physics and Bob, a high-schooler who's not exceptionally bright (Alice had better science grades than he did in high school), disagrees with her on something about black holes. As a rule of thumb, Alice is probably right, BUT it happens that Bob just spent the summer camping with a family friend, Calvin, who's a physicist and just wouldn't shut up about black holes, explaining a bunch of concepts and controversies to Bob the best he could. Now it's pretty likely that Bob is actually right (though Alice might be justified in not listening anyway, depending on how good Bob is at explaining his position).
Am I the only one who thinks that there's some kernel of truth in this: that many people's perception of 'quality' is very strongly influenced by the perceived social status of the creator?
I think that for the specific case of Harry Potter Fanfic, this hypothesis has been disproved by [Yudkowsky, 2010].
Though for "many people's perception of 'quality'", there's probably some truth there.
Yeah, I get the whole weirdtopia thing. But like Mark says, it's probably not the best weird thing to choose.
And if people can't understand that and read any kind of far-off weirdness through the lens of this decade's petty tribal politics, then basically, fuck 'em.
In one way, I think this attitude is commendable - intellectual and artistic integrity, and not having to kowtow to people who are offended. But at the same time, 'anyone who disagrees can just fuck off' ... it's not the best PR. And I don't think 'not being scared of rape' is an important criterion for rationalists.
I've read a similar idea to legalised rape before, in the context of a future where it was considered extremely bad manners to refuse sex. I can kinda imagine this could work. But legalised violent rape...
What I imagine would happen is that one person would try to rape another, they would fight back, their friends would intervene, a full-blown bar fight would ensue, someone would smash a bottle, and people would end up in the hospital, or the mental asylum, or the morgue.
Or are they not allowed to fight back? Is it maybe just date rape which is legal, or can you, for instance, kidnap and rape someone who is on their way to an important business meeting? Do people say "First on the agenda, Mrs Brown and Mr Black give their apologies that they are unable to attend the emergency meeting on disaster relief for Proxima Centauri Alpha, as Mrs Brown has contracted space measles and Mr Black is otherwise engaged in being anally gang raped"?
I mean, even if everyone in the future is the sort of ultra-kinky person who enjoys being raped, and everyone is bisexual to avoid the problem of being raped by the wrong gender, it still doesn't make sense.
It might be worth rereading the passage in question:
The Confessor held up a hand. "I mean it, my lord Akon. It is not polite idealism. We ancients can't steer. We remember too much disaster. We're too cautious to dare the bold path forward. Do you know there was a time when nonconsensual sex was illegal?"
Akon wasn't sure whether to smile or grimace. "The Prohibition, right? During the first century pre-Net? I expect everyone was glad to have that law taken off the books. I can't imagine how boring your sex lives must have been up until then - flirting with a woman, teasing her, leading her on, knowing the whole time that you were perfectly safe because she couldn't take matters into her own hands if you went a little too far -"
"You need a history refresher, my Lord Administrator. At some suitably abstract level. What I'm trying to tell you - and this is not public knowledge - is that we nearly tried to overthrow your government."
"What?" said Akon. "The Confessors?"
"No, us. The ones who remembered the ancient world. Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees. When our children legalized rape, we thought that the Future had gone wrong."
Akon's mouth hung open. "You were that prude?"
The Confessor shook his head. "There aren't any words," the Confessor said, "there aren't any words at all, by which I ever could explain to you. No, it wasn't prudery. It was a memory of disaster."
"Um," Akon said. He was trying not to smile. "I'm trying to visualize what sort of disaster could have been caused by too much nonconsensual sex -"
"Give it up, my lord," the Confessor said. He was finally laughing, but there was an undertone of pain to it. "Without, shall we say, personal experience, you can't possibly imagine, and there's no point in trying."
The passage is very clearly about value dissonance, about how very different cultures can fail to understand each other (which is a major theme of the story). They don't go into details because the only reason characters bring it up is to show how values have changed.
And sticking to a less controversial example would have defeated the point. And for illustrating this point, I much prefer this approach (meta talk between characters about how much things have changed) to one that would go into the details of how the new system worked.
Counterexamples: Bill Gates and Arnold Schwarzenegger seem, respectively, 100% geek and 100% jock, and they are among the most successful people on earth. Which seems to show you can be extremely successful without "striking a balance".
Going 100% geek seems like a perfectly viable strategy, especially if you mostly care about success at geeky things (which amazingly a lot of geeks do).
Which isn't to say there aren't any "geek failure modes" to avoid, but "try to strike a balance between geek and jock" doesn't seem like a very useful rule of thumb.
Well, it is true that I think that to post effectively on LW I have to translate my rather diverse life experience into Americanese, and that translation is leaky. But it is nowhere near this leaky.
There are mind people and body people. Where it comes from is a good question, but this phenomenon goes back to tribal chieftains vs. shamans, warriors vs. clergy, and who knows what else.
Ultimately it comes from the fact that man is a smart animal, and these two aspects are in constant tension. Satoshi Kanazawa has put it this way: since IQ is a general problem-solver, it tends to suppress earlier-adapted problem-solving instincts, making intelligent people not too good at things like common sense, reading signs of romantic attraction, picking up social cues, and so on.
So this problem already arose in the earliest tribal societies, with the triangular chieftain - shaman - warrior dynamic. The warriors were the perfect primates, who represented what in humans is like every animal: they respected physical health and strength, liked such challenges, liked tackling problems head-on (preferably by clubbing them over the head), liked direct and impulsive action, and raised the fiercest of their number to chieftain. And the shaman was the intellectual who represented what is specifically human in man, the intelligence, which back then was probably interpreted as suggestions from spirits and gods, perhaps through a bicameral-mind setup.
If Kanazawa is right that the tension is inherent, because not only are stupid people not smart but smart people are also not good at being instinctive and sensible, then I think that explains it. But even if not, I think the basic tension can also be seen empirically.
Liberal-arts intelligentsia and hackers are of the same mind-people, ex-clergy, ex-shaman stock. The whole point is that with some influence from body people they are more likely to become hackers than postmodernists :-) (Intelligentsia is much more a French than Russian concept.)
And the get-shit-done attitude comes from the same body-orientation that machismo and sports come from: our heads can be in any kind of clouds, but our body is always in the here and now; it is through the body that minds have contact with reality. People who care that their ass is sitting on an uncomfortable chair will be practical and fix the chair. People who are not interested in their ass being uncomfortable, because the body is a mere lowly vessel of the mind, won't.
So this problem already arose in the earliest tribal societies, with the triangular chieftain - shaman - warrior dynamic.
Do you have good anthropological evidence that this "dynamic" actually exists / existed, and corresponds to what you're referring to?
"How our proud ancestors lived" in popular culture is full of bad/old science, romantic notions, nationalist/political propaganda (in either direction), and I trust it as much as I trust talk of "positive energy".
There are a bunch of stories (books, movies, games...) set in a fictional past, and they are often made understandable by projecting modern stereotypes there (because nobody has a good idea what life was like millennia ago, and more importantly people don't care; they prefer modern issues with an exotic backdrop). It seems that you're seeing the resemblance of modern social stereotypes to the "shamans" or "warriors" of popular culture, and acting as if it revealed some kind of profound truth about mankind.
(note that I don't know that much history or anthropology myself)
Thinking aloud here:
Say I'm an agent that wants to increase u, but not "too strongly" (this whole thing is about how to formalize "too strongly"). Couldn't I have a way of estimating how much other agents who don't care about u might still care about what I do, and minimize that? i.e. avoid anything that would make other agents want to model my working as something more than "wants to increase u".
(back in agent-designer shoes) So we could create a "moderate increaser" agent, give it a utility function u and inform it of other agents trying to increase v, w, x, y, and somehow have it avoid any strategies that would involve "decision theory interaction" with those other agents; i.e. threats, retaliation, trade ... maybe something like "those agents should behave as if you didn't exist".
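One very rough way to formalize that "moderate increaser" idea is to penalize the agent's "decision-theoretic footprint": how surprising its actions would look to other agents that model it only as "wants to increase u". Everything below is a made-up toy (the names `moderate_choice`, `footprint`, and the surprise functions are all my own illustration, not an established formalism):

```python
# Toy sketch of a "moderate increaser": pick the action maximizing u
# minus a penalty for how much other agents would need to change their
# model of the world to account for us.

def moderate_choice(actions, u, other_models, weight=1.0):
    """Pick an action balancing utility against decision-theoretic footprint.

    u(a)         -> utility of action a
    other_models -> functions m(a) giving how surprising action a is to an
                    agent that models us only as "wants to increase u"
    """
    def footprint(a):
        return sum(m(a) for m in other_models)
    return max(actions, key=lambda a: u(a) - weight * footprint(a))

# Example: three actions scored by u; action 9 has the highest u but is
# very surprising to a v-maximizer (say, it looks like a threat).
actions = [3, 7, 9]
u = lambda a: a
surprise_to_v = lambda a: 10 if a == 9 else 0
print(moderate_choice(actions, u, [surprise_to_v]))  # picks 7, not 9
```

The hard part, of course, is hidden inside the surprise functions: actually measuring "this action would make a v-maximizer model me as more than a u-increaser" is the whole open problem.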
I'm not saying he should remove the anime references, although they do go straight over my head. I'm saying that getting rid of the legalised rape would involve cutting one or two sentences - a tiny amount of the story which probably generates a hugely disproportionate amount of criticism.
For what it's worth, when I write fiction, I just write whatever inspires me, and then go back over it later and remove the bits which no-one else will get.
The bit on legalized rape is an important way of conveying that the future will seem weird and surprising and immoral to us, just like 2015 would seem weird and surprising and immoral to someone from a few centuries ago. I want my science-fiction to show how weird things are likely to be (even if the specific kind of weirdness is of course likely to be very wrong), I don't want it to be a bowdlerized soap opera with robots and lasers in the background.
And if people can't understand that and read any kind of far-off weirdness through the lens of this decade's petty tribal politics, then basically, fuck 'em. I don't want Eliezer or anybody else to bend backwards to avoid being misread by idiots.
And sure, it's bad PR, but it's a bit of a self-fulfilling policy, a bit like how "openly criticizing the Government" is a bad career move for a Chinese citizen.
I'm toying with the idea of programming a game based on The Murder Hobo Investment Bubble. The short version is that Old Men buy land infested with monsters, hire Murder Hobos to kill the monsters, and resell the land at a profit. I want to make something that models the whole economy, with individual agents for each Old Man, Murder Hobo, and anything else I might add. Rather than explicitly program the bubble in, it would be cool to use some kind of machine learning algorithm to figure everything out. I figure they'll make the sorts of mistakes that lead to investment bubbles automatically.
There are two problems. First, I have neither experience nor training with any machine learning except for Bayesian statistics. Second, it's often not clear what to optimize for. I could make some kind of scoring system where every month everyone who is still alive has their score increase by the log of their money or something, but that would still only work well if I just use scores from the previous generation, which is slower-paced than I'd like.
Old Men could learn whether or not Murder Hobos will work for a certain price, and whether or not they'll find more within a certain time frame, but if they buy a bad piece of land it's not clear how bad this is. They still have the land, but it's of an uncertain value. I suppose I could make it so they just buy options, and if they don't sell the land within a certain time period they lose it.
Murder Hobos risk dying, which has an unknown opportunity cost. I'm thinking of just having them base the expected opportunity cost of death on the previous generation, but then it would take them a generation to respond to the fact that demand is way down and they need to start taking risky jobs for low pay.
Does anyone have any suggestions? I consider "give up and do something else instead" to be a valid suggestion, so say that if you think it's what I should do.
Edit: I could have Murder Hobos work out the expected opportunity cost of death by checking what portion of Murder Hobos of each level died the previous year and how long it's taking them to level up.
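A minimal sketch of that generation-based loop, with log-money scoring as suggested above. All the numbers, names, and the decision rule here are made up for illustration; this isn't a proposed design, just the skeleton of "this generation acts on last generation's death rate":

```python
# Toy model: each Murder Hobo takes a job iff the pay beats the expected
# cost of death, estimated (crudely) from the previous generation's
# observed death rate. All constants are arbitrary placeholders.
import math
import random

def simulate_generation(n_hobos, death_rate_estimate, months=12,
                        true_risk=0.02, value_of_life=500.0):
    deaths = 0
    moneys = []
    for _ in range(n_hobos):
        money = 1.0
        alive = True
        for _ in range(months):
            pay = random.uniform(0, 100)
            death_cost = death_rate_estimate * value_of_life
            if pay > death_cost:          # accept the job
                money += pay
                if random.random() < true_risk:
                    deaths += 1
                    alive = False
                    break
        if alive:
            moneys.append(money)
    scores = [math.log(m) for m in moneys]   # log-money scoring of survivors
    observed_rate = deaths / n_hobos         # fed to the next generation
    return scores, observed_rate

random.seed(0)
rate = 0.0  # the first generation has no data, so it takes every job
for gen in range(3):
    scores, rate = simulate_generation(200, rate)
```

With these made-up numbers the estimate tends to overshoot: a reckless generation dies a lot, the next one refuses every job, and the one after that is reckless again, which is exactly the one-generation lag worried about above and a reason to prefer faster-updating estimates like the per-level ones in the edit.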
I'm toying with the idea of programming a game based on .
Are you missing a word there?
The impression I get when reading posts like these is that people should read up on the morality of self-care. If I'm not "allowed" to care for my friends/family/-self, not only would my quality of life decrease, it would decrease in such a way that it would be harder and less efficient to actively care about (e.g. donate to) people I don't know.
But is caring for yourself and your friends and family an instrumental value that helps you stay sane so that you can help others more efficiently, or is it a terminal value? It sure feels like a terminal value, and your "morality of self-care" sounds like a roundabout way of explaining why people care so much about it by making it instrumental.
[Cross-Posted from the "Welcome to LW" thread] I'm a long-time user of LW. My old account has ~1000 karma. I'm making this account because I would like it to be tied to my real identity.
Here is my blog/personal-workflowy-wiki. I'd like to have 20 karma, so that I can make cross-posts from here to the LW Discussion.
I'm working on a rationality power tool. Specifically, it's an open-source workflowy with revision control and general graph structure. I want to use it as a wiki to map out various topics of interest on LW. If anybody is interested in working on (or using) rationality power tools, please PM me, as I've spent a lot of time thinking about them, and can also introduce you to some other people who are interested in this area.
EDIT: Added the missing links.
EDIT: First cross-post: Personal Notes On Productivity (A categorization of various resources)
Yep, I'm interested!