If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Generalized versions of arguments I've seen on Reddit and Facebook:
If you oppose a government policy that personally benefits you, you are a hypocrite who bites the hand that feeds you.
If you support the policy that benefits you, you are a greedy narcissist whose loyalty can be bought and sold.
If you have political opinions on policies that don't affect your well-being, you are a meddler with no skin in the game. Without being personally affected by the policy, you cannot hope to understand.
A while back, David Chapman made a blog post titled "Pop Bayesianism: cruder than I thought?", expressing considerable skepticism towards the kind of "pop Bayesianism" that's promoted on LW and by CFAR. Yvain and I replied in the comments, which led to an interesting discussion.
I wasn't originally sure whether this was interesting enough to link to on LW, but then one person on #lesswrong specifically asked me to do so. They said that they found my summaries of the practical insights offered by some LW posts the most valuable/interesting.
I wish people here stopped using the loaded terms "many worlds" and "Everett branches" when the ontologically neutral "possible outcomes" is sufficient.
"Possible outcomes" is not ontologically neutral in common usage. In common usage, "possible" excludes "actual", and that connotation is strong even when trying to use it technically. "Multiple outcomes" might be an acceptable compromise.
This came up at yesterday's London meetup: activities for keeping oneself relatable to other human beings.
We were dissecting motives behind goals, and one of mine was maintaining interests that other people could relate to. I have more pedestrian interests, but they're the first to get dropped when my time is constrained (which it usually is), so if I end up meeting someone out in the wild, all I have to talk about is stuff like natural language parsing, utilitarian population ethics and patterns of conspicuous consumption.
Discussing it in a smaller group later, it turns out I'm not the only person who does this. It makes sense that insular, scholarly people of a sort found on LW may frequently find themselves withdrawn from common cultural ground with other people, so I thought I'd kick off a discussion on the subject.
What do you do to keep yourself relatable to other people?
EDIT: Just to clarify, this isn't a request for advice on how to talk to people. Please don't interpret it as such.
Richard Feynman was a theoretician as well as a 'people person'; his writings about his experiences with people illustrate quite well how he managed to do it.
One tactic that he employed was simply being mysterious. He knew few people could relate to a University professor and that many would feel intimidated by that, so when in the company of laypeople he never even brought it up. They would ask him what he did and he would say, "I can't say." If pressed, he would say something vague like, "I work at the University." Done properly, it's playful and coy, and even though people might think you're a bit weird, they definitely won't consider you unrelatable.
In my opinion there's no need to concern yourself with activities that you don't like, as very few people are actually interested in your interests. Whenever the topic of your interests comes up, just steer the conversation towards their life and their interests. You'll be speaking 10% of the time yet you'll appear like a brilliant conversationalist. If they ask you if you've read a particular book or heard a particular artist, just say no (but don't sound harsh or bored). You'll seem 'indie' and mysterious, and people like that. In practice, though, as one gets older, people rarely ask about these things.
It's a common mistake that I've seen often in intellectual people. They assume they have to keep up with popular media so that they can have conversations. That is not true at all.
While this seems like reasonable advice, I'm not sure it's universally good advice. Richard Feynman seemed to enjoy a level of charm many of us couldn't hope to possess. He also had a wide selection of esoteric interests unrelated to his field.
I would also claim that there's value in simply maintaining such an interest. During particularly insular periods where I'm absorbed in less accessible work, I find myself starting to exhibit "aspie" characteristics, losing verbal fluency and becoming socially insensitive. It's not just about having things to talk about, but maintaining my own faculties for relating to people.
Whenever the topic of your interests comes up, just steer the conversation towards their life and their interests. You'll be speaking 10% of the time yet you'll appear like a brilliant conversationalist.
This works.
I use the recaplets on Television without Pity to keep up with the basic plot and cliffhangers of TV shows I don't watch but most of my friends do. That way I don't drop out of conversations just because they're talking about True Blood.
Note: the only problem this strategy has caused for me is that my now-bf assumed I was a GoT fan (instead of having read the books and TWOP'd the show recaps), invited me over to watch, and assumed I turned him down because I wasn't interested in him instead of being indifferent to the show. We sorted it out eventually.
The de Broglie-Bohm theory is a very interesting interpretation of quantum mechanics. The highlights of the theory are: the wavefunction is real and never collapses, and in addition there are actual point particles with definite positions at all times, guided deterministically by the wavefunction.
At first it might seem to be a cop-out to assume the reality of both the wavefunction and of actual point particles. However, this leads to some very interesting conclusions. For example, you don't have to assume wavefunction collapse (as per Copenhagen) but at the same time, a single preferred Universe exists (the Universe given by the configuration of the point particles). But that's not all.
It very neatly explains double-slit diffraction and Bell's experiments in a purely deterministic way using hidden variables (it is thus necessarily a non-local theory). It also explains the Born probabilities (the one thing that is missing from pure MWI; Eliezer has alluded to this).
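To spell out the mechanism (this is my addition, a standard textbook sketch rather than part of the original comment): the wavefunction evolves by the ordinary Schrödinger equation, and each particle's actual position Q(t) follows the guidance equation:

    % Guidance equation for a single spinless particle of mass m:
    \frac{dQ}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{x=Q(t)}
    % If the particle positions start out distributed as |\psi|^2
    % ("quantum equilibrium"), they remain distributed as |\psi|^2 for all
    % later times, which is where the Born probabilities come from here.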
Among other things, De Broglie-Bohm theory allows quantum computers but doesn't allow quantum immortality - in this theory if you shoot yourself in the h...
It's absolutely the case that everything we are, evolved. But there's a certain gap between the hypothetical healthy field of evolutionary psychology and the one we actually have.
This sort of thing is why people make fun of ev psych. That's the 2008 study that claimed to find biological reasons for girls to like pink.
Of course, one bad study doesn't condemn a field - "peer reviewed" does not mean "settled science", it means "not-obviously-wrong request for comment." But this isn't a lone, outlier, rogue study - this shit's gathered 46 citations. (Compare citation averages for other fields.) (Edit: No, not all of the cites are positive.)
As it happens, we have full documentation that "girls=pink" dates back to the ... 1940s.
This sort of thing is why people make fun of ev psych. That's the 2008 study that claimed to find biological reasons for girls to like pink.
I think it deserves more fairness. The abstract only claims to have measured a "cross-cultural sex difference in color preference", making no claims about the sex difference's origin. They do speculate a bit about ev-psych in the body of the paper, but they begin this speculation with the words "We speculate" and then in the conclusion they say "Yet while these differences may be innate, they may also be modulated by cultural context or individual experience."
This, of course, isn't how it was reported in the mainstream media.
(By the way, thanks for actually linking to the paper you mentioned, it makes it a whole lot easier when people do this.)
When you're dying of malaria, I suppose you'll look up and see that balloon, and I'm not sure how it'll help you.
Bill Gates, when asked whether he thought bringing the internet to parts of the world would help solve problems.
Not very reassuring.
(Reddit comment: "You know what else doesn't cure malaria? Getting rid of the start menu.")
It's spam. The user's only contributions are this page and the FletcherEstrada user page.
One of the wiki admins will probably see this and do something about it.
(According to the MediaWiki documentation there's a way for a regular user to add a "delete label" to a page, but I couldn't figure out how.)
Edit: Eliezer has deleted the spammy page and user.
It looks like the way to mark a page for deletion is to put the following text on the page:
{{delete}}
Just a fun little thing that came to my mind.
If "anthropic probabilities" make sense, then it seems natural to use them as weights for aggregating different people's utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is a weighting of Alice's and Bob's.
If the "anthropic probability" of an observer-moment depends on its K-complexity, as in Wei Dai's UDASSA, then the simplest possible observer-moments that have wishes will have disproportionate weight, maybe more than all mankind combined.
If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let's call it Felix.
All parallel versions of mankind will discover the same Felix, because it's singled out by being the simplest.
Felix will be a utility monster. The average utilitarians who believe the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.
If you agree with that argument, you should start preparing for the arrival of Felix now. There's work to be done.
Where is the error?
That's the sharp version of the argument, but I think it's still interesting even in weakened forms. If there's a mathematical connection between simplicity and utility, and we humans aren't the simplest possible observers, then playing with such math can strongly affect utility.
Not sure if open thread is the best place to put this, but oh well.
I'm starting at Rutgers New Brunswick in a few weeks. There aren't any regular meetups in that area, but I figure there have to be at least a few people around there who read lesswrong. If any of you see this I'd be really interested in getting in touch.
A certain possible cognitive hazard, this webcomic strip, and the fact that someone has apparently made it privately known to someone else that it is desired by at least one person that I change my username due to apparent mental connections with that same cognitive hazard, all inspired me to think of the following scenario:
rot13'd for the protection of those who would prefer not to see it: Pbafvqre: vs ng nal cbvag lbh unir yrnearq bs gur angher bs gur onfvyvfx, gurer vf cebonoyl ab jnl sbe lbh gb gehyl naq pbzcyrgryl sbetrg vg jvgubhg enqvpny zvaq fhetre...
I'll be in NYC this Saturday giving a talk on strategies for having useful arguments (cohosted by the NYC LW meetup). For me, useful arguments tend to be ones where:
I'll be talking a bit about my experience running Ideological Turing Tests and what you can apply from them in day to day life. I'm also glad to answer questions about CFAR and/or the upcoming workshop in NYC in November.
I hope this is worth saying: I've been reading up a bit on philosophical pragmatism, especially Peirce, and I see a lot of parallels with the thinking on LW; since pragmatism has a lot in common with positivism, this is maybe not so surprising.
My reading of pragmatism does seem to yield a quite interesting critique of the metaphor of "Map and territory", though: the pragmatists seem to be saying that the territory does exist, just that when we point to the territory we are actually pointing to how an ideal observer (one who is somewhat like us?) would perceive the territory no...
Is there a name for the bias of choosing the action which is easiest (either physically or mentally), or takes the least effort, when given multiple options? Lazy bias? Bias of convenience?
I've found lately that being aware of this in myself has been very useful in stopping myself from procrastinating on all sorts of things, realizing that I'm often choosing the easier but less effective of the potential options, out of convenience.
Thinking, Fast and Slow by Kahneman
A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.
NY Times just posted an opinion piece on radical life extension, http://www.nytimes.com/2013/08/08/opinion/blow-radical-life-extension.html?ref=opinion
At one point the piece says: "Half thought treatments allowing people to live to be 120 would be bad for society, while 4 in 10 thought they would be good. Two-thirds thought that the treatments prolonging life would strain natural resources."
Personally, I doubt very many of them thought at all.
What techniques have you used for removing or beating Ugh Fields, with associated +/- figures?
(A search of LW reveals very few suggestions for how to do this.)
"Indifferent AI" would be a better name than "Unfriendly AI".
It would unfortunately come with misleading connotations. People don't usually associate 'indifferent' with 'is certain to kill you, your family, your friends and your species'. People already get confused enough about 'indifferent' AIs without priming them with that word.
Would "Non-Friendly AI" satisfy your concerns? That gets rid of those of the connotations of 'unfriendly' that are beyond merely being 'something-other-than-friendly'.
We could gear several names to have maximum impact with their intended recipients, e.g. the "Takes-Away-Your-Second-Amendment-Rights AI", or "Freedom-Destroying AI", "Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI" etc. All strictly speaking true properties for UFAIs.
I'm going to be in Baltimore this weekend for an anime convention. I expect to have a day or so's leeway coming back. Is there a LW group nearby I might drop in on?
I've never been to a meetup, but it seems likely there is one in that area; I see one in DC but it's meeting on the last day of the con. The LWSH experience has left me more interested in seeing people face to face.
Can anyone recommend a book on marketing analytics? Preferably not a textbook but I'll take what I can get.
I have a technical background but I recently switched careers and am now working as a real estate agent. I have very limited marketing knowledge at this point.
Just curious: has anyone explored the idea of utility functions as vectors, and then extended this to the idea of a normalized utility function dot product? Because having thought about it for a long while, and remembering after reading a few things today, I'm utterly convinced that the happiness of some people ought to count negatively.
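A minimal sketch of what a "normalized utility function dot product" could mean, assuming each person's utility function is represented as a vector of utilities over the same finite list of outcomes (the representation and the function name here are my own illustration, not an established construction):

    import numpy as np

    def normalized_utility_dot(u, v):
        # Cosine similarity between two utility vectors over the same outcomes:
        # +1 means perfectly aligned preferences, -1 means exactly opposed.
        u = np.asarray(u, dtype=float)
        v = np.asarray(v, dtype=float)
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Example: two agents rating the same three outcomes.
    alice = [1.0, 0.5, -2.0]
    bob = [-1.0, -0.5, 2.0]  # exactly opposed to Alice
    print(normalized_utility_dot(alice, alice))  # 1.0
    print(normalized_utility_dot(alice, bob))    # -1.0

On this representation, a strongly negative dot product against most other people's vectors would be one way of cashing out the intuition that someone's happiness "counts negatively", though whether to actually weight it that way is a separate ethical question.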
Can somebody explain a particular aspect of Quantum Mechanics to me?
In my readings of the Many Worlds Interpretation, which Eliezer fondly endorses in the QM sequence, I must have missed an important piece of information about when it is that amplitude distributions become separable in timed configuration space. That is, when do wave-functions stop interacting enough for the near-term simulation of two blobs (two "particles") to treat them independently?
One cause is spatial distance. But in Many Worlds, I don't know where I'm to understand thes...
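For what it's worth, the standard condition for the question above (my addition, not something from the QM sequence): two blobs can be simulated independently once the joint wavefunction approximately factorizes,

    % Separability of two subsystems with coordinates x_1 and x_2:
    \Psi(x_1, x_2, t) \approx \psi_1(x_1, t)\,\psi_2(x_2, t)
    % When the interaction terms coupling x_1 and x_2 are negligible, each
    % factor evolves under its own Schrödinger equation and the two blobs
    % no longer influence each other's dynamics.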
Watching The Secret Life of the American Teenager... (Netflix made me! Honest!) Its one redeeming feature is the good amount of comic relief, even when discussing hard issues. Its most annoying feature is its reliance on the Muggle Plot.
...And its least believable feature is that, despite the nearly instant in-universe feedback that no secret survives until the end of the episode (almost all doors in the show are open, or at least unlocked, and someone eavesdrops on every sensitive conversation), the characters keep hoping that their next indiscretion will remain hidden.
I've been reading a little about the constructed puzzle-language Randall Munroe created to use in Time, and I'm getting increasingly interested in helping translate it. Anyone else interested in helping to crack it?
Useful links:
The original wiki page
A blog that has recently popped up with good insight
The entire corpus
Has there been discussion here before on Cholesterol/heart disease/statin medication?
There's a lot of conflicting information floating around that I've looked at somewhat. It seems like the contrarian position (for example here: http://www.ravnskov.nu/myth3.htm ) has some good points and points to studies more than (just) experts, but I'm not all that deep into it, and there's a rather formidably held conventional wisdom that dietary saturated fat should be kept low or else blood cholesterol/LDL will be high and heart attacks will become more likely.
Edit: Yes, there has, as the search function reveals. And I've even commented to some of them...
If you had a Death Note, what would you do with it?
See if I could get some very old people, or people who otherwise have terminal illnesses, to volunteer to have their names written in it. We can use that data to experiment more with the note and figure out how it works. The existence of such an object implies massive things wrong with our current understanding of the universe, so figuring that out might be really helpful.
I believe it canonically can't run out of pages, so I'd think hard about how to leverage infinite free paper into world domination.
I don't think you can pull pages out of the Death Note infinitely fast, so I doubt that you can produce more paper per hour than the average paper factory.
Then it turns out that Death Note smoke particles retain the magic qualities of the source. Writing one's name in dust with a fingertip becomes fraught with peril.
If I found something I thought was a Death Note I would spend a long, long time meditating on the question of how and in what way I'd gone insane.
After finding a volunteer with a terminal illness, I'd test the limits of it. E.g. "The person will either write a valid proof of P=NP or a valid proof that P!=NP and then die of a heart attack."
Already tested by Light in the manga, IIRC; the limits of skill top out before things like 'escape from maximum-security prison', so P=NP is well beyond the doable.
He tries it in the anime too. (I watched that episode yesterday.) He tries things like "draw a picture of L on your cell wall and then die of a heart attack" on some evil prisoner. It doesn't work.
This probably violates a forum rule. Though I will speculate that Light's plan of trying to kill every criminal whose name he sees probably does way more harm than good, even if you ignore the fact that some are innocent.
That's likely to cause more collateral damage than merely taking out the leadership of one company. Cost/benefit analysis and whatnot.
Gambling on sporting events is probably another good way to use the Death Note for making money. It's probably far more ethical. Does the Death Note work on horses? If so, then you can bet on longshots while sabotaging the favorites by killing horses.
I commit to donating $50 to MIRI if EY or lukeprog watch this 4:15 video and comment about their immediate reaction.
Anyone else, feel free to raise the donation pool; get your fill of drama entertainment and assuage your guilty conscience with a donation!
I'll take the money. :)
IIRC this is a troll that followed me over from Common Sense Atheism. That video and a few others are fairly creepy, but The Ballad of Big Yud is actually kinda fun.
Feminism is what you get when you assume that all gender differences are due to society. The manosphere/"red pill"/whatever is what you get when you assume that all gender differences are due to biology. Normal-reasonable-person-ism is what you get when you take into account the fact that we're not sure yet.
Does this theory (or parts of it) seem true to you?
Feminism is one of those words that refers to such a diverse collection of opinions as to be practically meaningless.
For example, the kind of feminism that I tend to identify with is concerned with just removing inequalities regardless of their source and is also concerned with things like fat shaming, racism, the rights of the disabled, and other things that have nothing to do with gender, but there are certainly also people who identify as feminists and who would fit your description.
I'm pretty sure that some gender differences are due to society, and others are due to biology.
So feminism assumes that it is due to society that women can become pregnant and men can't? Most feminists I know are normal-reasonable-people on your dichotomy. You also ignore the fact that whether differences are desirable, and whether they can be influenced, are far more interesting and important questions than whether they are at present mostly due to society or biology. I know people have a strange tendency to act as if things due to society can be trivially changed by collective whim while biology is eternal and immutable, but however common such a view, it is clearly absurd. Medicine can make all sorts of adjustments to our biology, while social engineers have historically been more likely to produce unintended effects, or no effect at all, than to successfully transform their societies in the ways they desire.
What do you do to keep yourself relatable to other people?
One strategy: Take insular, scholarly interest in a broadly popular subject. For example, I'm interested in APBRmetrics and associated theoretical questions about the sport of basketball. One nice plus to this hobby is that it also leaves me with pretty up-to-date non-technical knowledge about NBA and college basketball.