Comment author:JoshuaZ
17 June 2013 01:11:20AM
16 points
[-]
There's been more recent work suggesting that planets are extremely common. Most recently, evidence for planets in unexpected orbits around red dwarfs has been found. See e.g. here. This is in addition to other work suggesting that even when restricted to Sun-like stars, planets are not just common but are frequently in the habitable zone. Source (pdf). It seems at this point that any aspect of the Great Filter stemming from planet formation must be declared completely negligible. Is this analysis accurate?
Is there a wiki or website that keeps track of things related to the Great Filter?
I guess I'm looking for something that enumerates all the possible major filters, and keeps track of data and arguments pertaining to various aspects of these filters.
Comment author:JoshuaZ
26 June 2013 07:14:57PM
0 points
[-]
I'm not aware of any such thing. It would be nice to have. There was a Boston meetup a few years ago where a few of us tried to brainstorm future filters, but we didn't come up with anything that wasn't already known (I think jimrandomh mentioned that there have been similar attempts at other meetups and the like). The set of proposed past filters, though, is large. I've seen almost every major step in the evolution of life labeled as a filter, and there are sometimes reference-class-tennis issues with them, especially when they're connected to developments that aren't obviously necessary for intelligent life.
I have a notion that the proportion of sociopaths becomes a filter as the tech level goes up -- spam is a problem, though more of a dead-weight loss than a disaster. If we get to the point of home build-a-virus kits, it might be a civilization-stopper. Was this on the list?
Comment author:shminux
17 June 2013 06:16:40AM
1 point
[-]
It seems at this point that any aspect of the Great Filter stemming from planet formation must be declared completely negligible. Is this analysis accurate?
I am not an astrophysicist, so not an authoritative voice here, but yes, almost every star is likely to host a bunch of planets, some probably in the habitable zone. Even our close neighbors, the three Centauri stars and Vega, have planets around them, or at least asteroid belts hinting at planets. So at least a couple of terms in the Drake equation are very close to unity.
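To make the point concrete, here is a minimal sketch of how near-unity planet terms affect the Drake equation. All the numeric values are made-up placeholders for illustration, not measured estimates:

```python
# Drake equation: N = R* · f_p · n_e · f_l · f_i · f_c · L.
# The planet-related terms f_p (fraction of stars with planets) and n_e
# (habitable planets per planetary system) are the ones the recent
# surveys push toward unity. All numbers below are placeholders.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# With the planet terms near 1, the estimate is dominated entirely by
# the biological and social terms:
n_planets_common = drake(R_star=7, f_p=1.0, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000)
n_planets_rare = drake(R_star=7, f_p=0.1, n_e=0.1, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000)
```

With the planet terms near unity, any remaining Filter has to live in the later, non-astronomical factors.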
A thought-terminating cliché is a commonly used phrase, sometimes passing as folk wisdom, used to quell cognitive dissonance. Though the phrase in and of itself may be valid in certain contexts, its application as a means of dismissing dissent or justifying fallacious logic is what makes it thought-terminating.
The term was popularized by Robert Jay Lifton in his 1961 book Thought Reform and the Psychology of Totalism. Lifton said, “The language of the totalist environment is characterized by the thought-terminating cliché. The most far-reaching and complex of human problems are compressed into brief, highly reductive, definitive-sounding phrases, easily memorized and easily expressed. These become the start and finish of any ideological analysis.”
Comment author:Oriane
23 June 2013 09:45:44PM
10 points
[-]
I was recently an intern at MIRI and I would like to start a new LW meetup in my city, but as I am still new on LW, I do not have enough karma points. Could you please upvote this comment so that I can get enough karma to post about a meetup? lukeprog suggested I do this. I only need 2 points to post in the discussion section. Thanks to you all!
Comment author:Viliam_Bur
25 June 2013 11:23:52AM
9 points
[-]
I just realized that willingness to update seems very cultish from outside. Literally.
I mean -- if someone joins a cult, what is the most obvious thing that happens to them? They update heavily; towards the group teachings. This is how you can tell that something wrong is happening.
We try to update on reasonable evidence. For example, we would update on a scientific article more than on a random website. However, from outside it seems similar to a willingness to update on your favorite (in-group) sources and an unwillingness to update on other (out-group) sources. Just as a Jehovah's Witness would update on The Watchtower but remain skeptical of Mormon literature. As if science itself were your cult... except that it's not really science as we know it, because most scientists behave outside the laboratory just like everyone else; and you are trying to do something else.
Okay, I guess this is nothing new for a LW reader. I just now realized, on an emotional level, how willingness to update, considered a virtue on LW, may look horrifying to an average person. And how willingness to update more on trustworthy evidence than on untrustworthy evidence probably seems like hypocrisy: a rationalization for preferring your in-group's ideas to out-group ideas.
Comment author:Viliam_Bur
25 June 2013 05:30:33PM
2 points
[-]
Almost surely, yes. If other people keep telling you crazy things, not updating is a smart choice. Not the smartest one, but it is a simple strategy that anyone can use cheaply (because we can't always afford verification).
Comment author:[deleted]
25 June 2013 10:18:58PM
*
1 point
[-]
I worry that this is a case of finding a 'secret virtue' in one's vices: I think we're often tempted to pick some outstandingly bad feature of ourselves or an organization we belong to and explain it as the necessary consequence of a necessary and good feature.
My reason for thinking this is going on here is that another explanation seems much more plausible. For one thing, you'd think the effect of seeing someone heavily update would depend on knowing them before and after. But how many people who think of LW this way do so because they knew someone before and after they became swayed by LW's ideas?
With Nancy, I think that the PR problem LW has isn't the impression people have that LWers are converts of a certain kind. Rather, I think what negative impression there is results from an extremely fixable problem of presentation: some of the most prominent and popular ways of expressing core LW ideas come 1) in the form of 'litanies', or pseudo-Asian mysticism. These are good ideas given a completely unnecessary and, for many, off-putting gilding; no one here takes the religious overtone seriously, but outsiders don't know that. Or 2) in the form of explicit expressions of contempt for outsiders, such as 'raising the sanity waterline', etc.
Comment author:Viliam_Bur
26 June 2013 07:38:12AM
*
5 points
[-]
I admit that I honestly do consider many people insane; and I always did. Even the smarter ones seem paralyzed by some harmful memes. I mean, people argue about words that have no connection with reality, while in other parts of the world children are dying of hunger. Hoaxes of every kind circulate by e-mail, and it's hard to find someone I know personally who hasn't sent me them repeatedly (even after I repeatedly explained that it was a hoax and gave them a pointer to sites that collect hoaxes). Smart people start speaking in slogans when difficult problems need to be solved, and seem unable to understand what the problem with this kind of communication is. People do bullshit that obviously doesn't and can't work, and insist that you have to do it harder and spend more money, instead of just trying something else for a while and observing what happens. So much stupidity, so much waste. -- And the few people who know better, or at least are able to know better, are often afraid to admit it even to themselves, because the idea that we live in an insane society is scary. So even they don't resist the madness; at best they don't join it, but they pretend they don't see it. This is how I saw the world for decades before I found LW.
And yes, it is bad PR. It is impolite toward the insane people, who may feel offended and then try to punish us. But even worse, it is a bad strategy toward the sane people, who are not yet emotionally ready to admit that the rest of the world is not sane. Because it goes against our tribal instincts: we must agree with the tribe, whether it is right or wrong; especially when it is wrong. If you are able to resist this pressure, it's probably not caused by higher rationality, but by lower social skills.
So how exactly should we communicate the inconvenient truths? Because we are trying to communicate truthfully, aren't we? Should we post the information openly and have bad PR? Should we have a secret forum for forbidden thoughts, appear cultish, and risk that someone exposes the information? Should we communicate certain thoughts only in person, never online? It seems to me that "bad PR" is the least wrong option.
Is there a way to disagree with the majority, be open to new members, and not seem dangerous? Perhaps we could downplay our ambitions: stop talking about improving the world, and pretend that we are just some kind of Mensa, a few geeks solving their harmless Bayesian equations, unconnected with the real world. Or we could make a semi-secret discussion forum; it would be open to anyone after overcoming a trivial inconvenience, and it would not be indexed by Google. Then the best articles (judged by quality and PR impact) would be published on a public forum. Perhaps the articles should not all appear in the same place: everyone (including Eliezer) would have their own blog with their own articles, and LW would just contain links to them (like Digg). This would be an inconvenience for publishing, but we could provide some technical help for people who have problems starting their own website. Perhaps we should split LW into multiple websites concerned with different topics: artificial intelligence, effective philanthropy, rationality, community forum, etc. -- All these ideas are about being less open, less direct. Which is dishonest per se, but perhaps this is what good PR means: lying in socially accepted ways; pretending what other people want you to pretend.
This could probably be a separate topic. And first we would have to figure out what we want to achieve, and only then discuss how.
Comment author:[deleted]
26 June 2013 10:00:53PM
*
3 points
[-]
I admit that I honestly do consider many people insane; and I always did.
I don't think you do; I think you consider most people to be (in some sense rightly) wrong or ignorant. Just the fact that you hold people to some standard (which you must, if you say that they fail it) means you don't think of them as insane. If you've ever known someone with depression or bipolar disorder, you know that you can't tell them to snap out of it, or learn this or that, or just think it through. Even calling people insane, as an expression of contempt, is a way of holding them to a standard. But we don't hold actually insane people to standards, and we don't (unless we're jerks) hold them in contempt. You don't communicate the inconvenient truth to the insane. You don't disagree or agree with the insane. The wrong, the ignorant, the evil, yes. But not the insane.
No one here (and I mean no one) actually thinks the world is full of insane people. That's a bit of metaphor and hyperbole. If anyone seriously thought that, their behavior would be so radically strange (think 'I am Legend' or something), you'd probably find them locked up somewhere.
Is there a way to disagree with the majority, be open to new members, and not seem dangerous?
The claim that everyone else is insane doesn't sound dangerous, it sounds resentful. Dangerous is not a problem. I don't think we need to implement any of your ideas, because the issue is purely one of rhetoric. None of the ideas themselves are a problem, because there's no problem with saying everyone else is wrong so long as you have either 1) results, or 2) good, persuasive, arguments. And if all you've got is (2), tone matters, because you can only persuade people who listen to you. There's no reason at all to hide anything, or lie, or pretend or anything like that.
Comment author:Viliam_Bur
27 June 2013 12:55:24PM
1 point
[-]
Speaking about typical individuals, ignorant is a good word; insane is not. As you say, it makes sense to try to explain things to an ignorant person, not to an insane person. Things can be explained to individuals with some degree of success. I agree with you on this.
The difference becomes less clear when dealing with groups of people, with societies. Explaining things to a group is more often (as an anthropomorphism) like dealing with an insane person. Literally, the kind of person who hears you and understands your words, but then also hears "voices in their head" telling them it's bad to think that way, that they should keep doing the stupid things they were doing regardless of the problems those brought, etc. Except that these "voices" are the other people. -- But this probably just proves that societies are not individuals.
there's no problem with saying everyone else is wrong so long as you have either 1) results, or 2) good, persuasive, arguments
Yeah, having results would be good. The Friendly AI would be the best, but until then, we need some other kind of results.
So, an interesting task would be to make a list of results of the LW community that would impress outsiders. Put that into a flyer, and we have a nice PR tool.
Comment author:[deleted]
27 June 2013 08:18:19PM
0 points
[-]
The difference becomes less clear when dealing with groups of people, with societies. Explaining things to a group is more often (as an anthropomorphism) like dealing with an insane person.
That's fair enough. I'd stay away from groups of people. Back in the day, they used to write without vowels, so that you could only really read something if you were either exceptionally literate or were being told what it said by a teacher. I say never communicate with more than a handful of people at once, but I suppose that's not possible a lot of the time.
Comment author:Error
27 June 2013 04:49:37PM
0 points
[-]
The difference becomes less clear when dealing with groups of people, with societies. Explaining things to a group is more often (as an anthropomorphism) like dealing with an insane person.
Perhaps it would be less confusing to treat a society as if it were a single organism, of which the people within it are analogous to cells rather than agents with minds of their own. I'm not sure how far such an approach would get but it might be interesting.
until then, we need some other kind of results.
CFAR might be able to demonstrate such after a few more years of their workshops. I'm not sure how they're measuring results, but I would be surprised if they were not doing so.
Comment author:Viliam_Bur
28 June 2013 08:27:02AM
*
1 point
[-]
CFAR planned to do some statistics about how the minicamp attendees' lives have changed after a year, using a control group of people who applied to minicamps but were not admitted. Not perfect, but pretty good. And the year from the first minicamps is approximately now (for me it will be in one month). But the samples are very small.
With regard to PR, I am not sure this will work. I mean, even if the results are good, only people who care about statistical results will be impressed by them. It's a circular problem: you need to already have some rationality to be impressed by rational arguments. -- Because you may also say: yeah, those guys are trying so hard, and I will just pray or think positively and the same results will come to me too. And if they don't, that just means I have to pray or think positively more. Or even: statistics doesn't prove anything; I feel in my heart that rationality is cold and can't make anyone happy.
Comment author:Viliam_Bur
29 June 2013 08:32:57PM
1 point
[-]
I agree. But optimizing for good storytelling is different from optimizing for good science. A good scientific result would be something like: "minicamp attendees are 12% more efficient in their lives, plus or minus 3.5%". A good story would be: "this awesome thing happened to a minicamp attendee" (ignoring the fact that an equivalent thing happened to a person in the control group).
Maybe the best would be to publish both, and let readers pick their favourite part.
Has anyone written a worthwhile utilitarian argument against transhumanism? I'm interested in criticism, but most of it is infested with metaphysical and metaethical claims I can't countenance.
Comment author:DanArmak
24 June 2013 11:18:50PM
5 points
[-]
What proposition are you looking for an argument against?
Transhumanism can mean a lot of things: the transcending of various heretofore human limits, conditions, or behaviors - which are many and different from one another.
And for those things, you might refer to the proposition that they are possible, or likely, or inevitable; (un)desirable or neutral; ethically (in)permissible or obligatory; and so on.
Comment author:orthonormal
25 June 2013 06:21:07PM
*
2 points
[-]
I'm looking for utilitarian arguments against the desirability of changing human nature by direct engineering. Basically, I'm wondering if there's any utilitarian case for the "it's fundamentally wrong to play God" position in bioethics. (I'm being vague in order to maximize my chance of encountering something.)
Comment author:Kaj_Sotala
29 June 2013 07:35:23PM
2 points
[-]
Not sure. I've been pessimistic about the Singularity for several years, but the general argument for human value being doomed-with-a-very-high-probability only really clicked sometime late last year.
Comment author:DanArmak
25 June 2013 08:47:49PM
0 points
[-]
Please be more specific and define "changes to human nature".
We already make many deliberate changes to people. We raise them in a culture, educate them, train them, fit them into jobs and social roles, make social norms and expectations into second nature for most people, make them strongly believe many things without evidence, indoctrinate them into cults and religions and causes, make them do almost anything we like.
We also make medical interventions that change human nature, which is to die of diseases easily treated today. We restore sight to the myopic and hearing to the hard of hearing, and lately even to the blind and deaf. We even transplant complex organs.
We have changed the experience of human life out of all recognition with the ancestral state, and we have grown used to it.
Where does the line between human and transhuman lie? We can talk about any specific proposed change, and some will be bad and some will be good. But any argument that says all changes are inherently bad might also say that all the changes that already occurred have been bad as well.
Thanks for bringing back the bright-colored edges for new comments.
The additional thing I'd like to see along those lines is bright color for "continue this thread" and "expand comments" if they include new comments. I'd also like to see it for "comment score below threshold", but I can understand if that isn't included for social engineering reasons.
Personal account of physical and emotional problems encountered by the author which were reversed when he went back to eating animal products. Much discussion of vitamins and dietary fats, not to mention genetic variation. Leaves the possibility open that some people thrive on a vegetarian diet, and possibly on a vegan diet.
Comment author:Lightwave
16 June 2013 08:59:53AM
*
7 points
[-]
So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?
It seems like most interested people end up practicing concentration or insight meditation by default (as indeed you will, if you read and follow the book). I would also recommend eventually looking into loving-kindness meditation. I've been trying it for a couple of weeks and I think it might be much more effective for someone who just wants a tool to improve quality of life (rather than wanting to be enlightened or something).
Loving-kindness meditation was one of the most easily accessible effective techniques for subverting intrusive anxiety I experimented with during my recovery. (There were more effective techniques, but I couldn't always do them reliably.)
You need a New York Times account to read it, but setting one up only takes a couple of minutes. Here are some excerpts in any case.
Obese people almost always regain weight after weight loss:
So Dr. Hirsch and his colleagues, including Dr. Rudolph L. Leibel, who is now at Columbia University, repeated the experiment and repeated it again. Every time the result was the same. The weight, so painstakingly lost, came right back. But since this was a research study, the investigators were also measuring metabolic changes, psychiatric conditions, body temperature and pulse. And that led them to a surprising conclusion: fat people who lost large amounts of weight might look like someone who was never fat, but they were very different. In fact, by every metabolic measurement, they seemed like people who were starving.
Before the diet began, the fat subjects’ metabolism was normal — the number of calories burned per square meter of body surface was no different from that of people who had never been fat. But when they lost weight, they were burning as much as 24 percent fewer calories per square meter of their surface area than the calories consumed by those who were naturally thin.
Thin people who are forced to gain weight find it easy to lose it again:
...His subjects were prisoners at a nearby state prison who volunteered to gain weight. With great difficulty, they succeeded, increasing their weight by 20 percent to 25 percent. But it took them four to six months, eating as much as they could every day. Some consumed 10,000 calories a day, an amount so incredible that it would be hard to believe, were it not for the fact that there were attendants present at each meal who dutifully recorded everything the men ate.
Once the men were fat, their metabolisms increased by 50 percent. They needed more than 2,700 calories per square meter of their body surface to stay fat but needed just 1,800 calories per square meter to maintain their normal weight.
When the study ended, the prisoners had no trouble losing weight. Within months, they were back to normal and effortlessly stayed there.
The body's metabolism changes with weight loss and weight gain:
The implications were clear. There is a reason that fat people cannot stay thin after they diet and that thin people cannot stay fat when they force themselves to gain weight. The body’s metabolism speeds up or slows down to keep weight within a narrow range. Gain weight and the metabolism can as much as double; lose weight and it can slow to half its original speed.
Genes and weight:
A few years later, in 1990, Dr. Stunkard published another study in The New England Journal of Medicine, using another classic method of geneticists: investigating twins. This time, he used the Swedish Twin Registry, studying its 93 pairs of identical twins who were reared apart, 154 pairs of identical twins who were reared together, 218 pairs of fraternal twins who were reared apart, and 208 pairs of fraternal twins who were reared together.
The identical twins had nearly identical body mass indexes, whether they had been reared apart or together. There was more variation in the body mass indexes of the fraternal twins, who, like any siblings, share some, but not all, genes.
The researchers concluded that 70 percent of the variation in people's weights may be accounted for by inheritance, a figure that means that weight is more strongly inherited than nearly any other condition, including mental illness, breast cancer or heart disease.
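As a rough illustration of how twin correlations translate into a heritability figure like the quoted 70 percent, here is a sketch using Falconer's approximation. The correlation values are made-up placeholders chosen to land on 70 percent, not the study's actual numbers:

```python
# Falconer's approximation: heritability is roughly twice the gap
# between identical-twin (MZ) and fraternal-twin (DZ) trait
# correlations, since MZ twins share ~100% of their genes and DZ
# twins ~50%. The correlations below are illustrative stand-ins.
def falconer_h2(r_mz, r_dz):
    """Rough narrow-sense heritability estimate from twin correlations."""
    return 2 * (r_mz - r_dz)

h2 = falconer_h2(r_mz=0.70, r_dz=0.35)  # yields 0.70, i.e. ~70% of variance
```

Heritability here is a statement about the variance within a population, which matters for the obesity-rate discussion further down.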
Comment author:Multiheaded
27 June 2013 08:56:42PM
*
6 points
[-]
And here is the kind of attitude that, in my eyes, justifies all the anger and backlash against fat-shaming. Oh damn, I feel like I understand the SJW people more and more every time I see crap like this.
The harsh truth is that the obese are in a lot of trouble. They are less attractive in the workplace because of their combination of intelligence (or lack thereof) and personality. Work performance is best predicted by IQ scores and next best by Conscientiousness. Impulsive behavior, on the other hand, predicts crime and accidents. Most employers are probably not aware of the research linking obese people to these characteristics and outcomes, but they know from experience that employing an obese person is a financial risk with no apparent reward.
They should of course look at the individual, but not everyone can afford testing every potential employee. Nor can a doctor test his patients. But he can use his experience, which tells him that the obese person is much less likely to follow his professional advice. And even if they could check every individual it wouldn’t solve the problem because the reason the group has these characteristics is because so many individuals belonging to the group have them.
So, is there any way to help this group? My guess is that the best solution would be to introduce vice taxes and similar paternalistic measures. You can’t leave someone who is out of control to their own devices. The worst solution is the one used right now – blaming negative stereotypes and discrimination, when scientific research validates those exact stereotypes as well as provides perfectly rational reasons for discrimination.
The "harsh truth" is that people suffering from obesity need to be protected from such vile treatment somehow, and that need is not recognized at the moment. Society shouldn't just let some entitled well-off jerks with a fetish for authoritarianism influence attitudes and policy that directly affect vulnerable groups.
The "harsh truth" is that people suffering from obesity need to be protected from such vile treatment somehow,
I think you're right in general, but I don't think "protected from" is a good way to frame it, as though fat people are the passive recipients of attacks, and some stronger force has to come in to save them. (I'm not sure quite what you meant, or even if you were just angry about a bad situation and used the first phrase that came to mind.)
The world would be a much better place if the attacks stopped. I'm not sure what the best strategies are for getting people to stop seeing fatness and thinness as moral issues. The long slow grind of bringing the subject up again and again, with whatever mix of facts and anger seems appropriate, finally seems to be getting some traction.
Comment author:Multiheaded
29 June 2013 06:54:05PM
*
1 point
[-]
I don't think "protected from" is a good way to frame it, as though fat people are the passive recipients of attacks, and some stronger force has to come in to save them. (I'm not sure quite what you meant, or even if you were just angry about a bad situation and used the first phrase that came to mind.)
Absolutely. I just meant to say that there's a need for intersectionality and solidarity in such struggles, i.e. even people who aren't from marginalized groups that are directly targeted by shit-stains like Mr. Staffan here should still call such shit-stains out on their shit.
I found that quite hard to read. Even if poor impulse control were the sole cause of obesity, there would be no reason to attack the obese so nastily, instead of, for instance, suggesting ways that they might improve their impulse control. I find the way he relishes attacking them incredibly unpleasant.
In fact, the internet has quite a lot to say about improving impulse control.
Comment author:satt
29 June 2013 04:19:50PM
4 points
[-]
I reckon there's special pleading going on with the obese. Way more anger & snottiness gets directed at them (at least on the parts of the Internet I see) than at, say, smokers, even though smoking is at least as bad in every relevant way I can think of.
Comment author:Multiheaded
27 June 2013 10:57:26PM
*
-3 points
[-]
Hint hint: it matters less to some people whether the group they are trying to subjugate is delineated by economic class, race, gender, sexuality or body issues... as long as they get to impose their hegemony and see the "deviants" suffer. It's scary to see such a desire to dominate, control and punish.
Comment author:Multiheaded
28 June 2013 10:39:11AM
*
3 points
[-]
Perhaps we should dominate, control and punish those evil people who use the available Bayesian evidence when dealing with individuals.
Not nearly all such people are outright sadistic and power-hungry, but those who are can spin complex ideological rationalizations that push the "Overton window" and allow the "good" bourgeois to be complicit in a cruel and unjust system.
See e.g. the "Reagan revolution" in America and the myth of the "welfare queen" that's a 3-for-1 package of racism, classism and sexism. I've read a bit about how it has been fuelling a "fuck you, got mine" attitude in poor people one step above the underclass; the system hasn't actually been kind to a white/male/lower-middle-class stratum, but it has given them someone to feel superior to. It's very similar to how the ideologues of the Confederacy explicitly advocated giving poor white men supreme rule over their household as a means of racial solidarity across the class divide.
I also predict that a lot of those evil people will be white, male, and wealthy, so we should focus on members of those groups.
False equivalence. Of course, any movement can degrade into an authoritarian-populist, four-legs-good-two-legs-bad version, given a vicious political atmosphere and polarized underlying worldviews, but... it happens to dominant/conservative ideologies, too! The dominant group just doesn't notice the resulting violence and victimization because from its privileged position it can afford an illusion of social peace.
If we agree that this is a danger of political processes in general rather than of specific movements, could we stop sneaking in implicit arguments that a particular ideology is safe from viciousness and indiscriminate aggression?
Various processes of hierarchical discrimination are driven by legitimizing myths (Sidanius, 1992), which are beliefs justifying social dominance, such as paternalistic myths (hegemony serves society, looks after incapable minorities), reciprocal myths (suggestions that hegemonic groups and outgroups are actually equal), and sacred myths (the divine right of kings, as a religion-approved mandate for hegemony to govern). Pratto et al. (1994) suggest the Western idea of meritocracy and individual achievement as an example of a legitimizing myth, and argue that meritocracy produces only an illusion of fairness. SDT draws on social identity theory, suggesting that social-comparison processes drive individual discrimination (ingroup favouritism). Discriminatory acts (such as insulting remarks about minorities) are performed because they increase the actors' self-esteem.
Comment author:Viliam_Bur
28 June 2013 01:12:08PM
*
3 points
[-]
People of all kinds of political opinions are able to use myths to support their opinions. People of all kinds of political opinions can be power-hungry. People of all kinds of political opinions can declare other people evil and use hate against them for their own political advantage.
Can we agree on this, or can you tell me an example of a major political movement that does not do that? (Because you provided some specific examples, and I am too lazy to counter that with specific examples in the other direction, unless that really is necessary. I suppose we could just skip this part and agree that it is not necessary.)
Comment author:Viliam_Bur
29 June 2013 09:48:54PM
2 points
[-]
My assumption is that SJs are good at finding faults of everyone else, and completely blind to their own. (Which is actually my assumption for all political movements.) I don't consider SJs more untrustworthy than any other group of mindkilled people explaining why they are the good guys and their enemies are the bad guys.
Their amateur psychoanalysis lacks self-reflection. Those other people, they want to dominate, control and punish. That would obviously never happen to us! Now let me explain again why everyone who disagrees with us is evil and must be stopped...
Moderately surprising corollary: so society IS treating fat people in a horribly unjust manner after all. Those boring SJW types who have been going on and on about "fat-shaming" and "thin privilege"... are yet again more morally correct on average than the general public.
Am now mildly ashamed of some previous thoughts and/or attitudes.
Comment author:JQuinton
18 June 2013 02:13:02PM
7 points
[-]
What are we to make of the supposedly increasing obesity rate across Western nations? Is this a failure of measurement (e.g. standards for what counts as "obesity" are dropping), has the Western diet changed our genetics, or is it something else altogether?
If it was mainly genetics, then I would think that the obesity rate would remain constant throughout time.
Comment author:satt
19 June 2013 02:13:12AM
7 points
[-]
What are we to make of the supposedly increasing obesity rate across Western nations? [...]
If it was mainly genetics, then I would think that the obesity rate would remain constant throughout time.
Environmental changes over time may have shifted the entire distribution of people's weights upwards without affecting the distribution's variance. This would reconcile an environmentally-driven obesity rate increase with the NYT's report that 70% of the variance is genetic.
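To make this concrete, here's a toy calculation (the mean, SD, and BMI-30 cutoff below are illustrative numbers I picked, not figures from the NYT report): shifting the whole distribution upward raises the obesity rate sharply while leaving the variance, and hence whatever share of it is genetic, untouched.

```python
from math import erf, sqrt

def obesity_rate(mean_bmi, sd_bmi, cutoff=30.0):
    """P(BMI > cutoff) for a normally distributed population."""
    z = (cutoff - mean_bmi) / sd_bmi
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

sd = 4.0                        # same spread in both eras
old = obesity_rate(25.0, sd)    # ~0.106
new = obesity_rate(28.0, sd)    # ~0.309

# The whole distribution moved up; its variance (and so a 70%
# genetic share of that variance) is unchanged, yet the obesity
# rate roughly tripled.
print(f"old: {old:.3f}, new: {new:.3f}")
```

So "mostly genetic variance" and "rapidly rising rates" are entirely compatible: genes determine where you sit within the distribution, environment determines where the distribution sits.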
Comment author:FiftyTwo
26 June 2013 11:18:30PM
2 points
[-]
The obvious cross comparison would be to look at population distributions of weight and see if they share the same pattern shifted left or right based on the primary food source.
Comment author:zslastman
13 September 2013 02:57:29PM
0 points
[-]
Hypothesis possibly reconciling link between impulse control and weight, strong heritability of both, resistance to experimental intervention, and society scale shifts in weight:
Body weight is largely determined by the 'set point' to which the body's metabolism returns, hence resistance to intervention. This set point can be influenced through lifestyle, hence link to impulse control and changes across time/cultures. However this influence can only be exerted either a) during development and/or b) over longer time scales than are generally used in experiments.
This should be easy enough to test. Are there any relevant data on e.g. people raised in non-obesity ridden cultures and then introduced to one? Or on interventions with obese adolescents?
I dunno, ask the OP. I was merely pointing out that in the event that obesity has a more or less significant hereditary/genetic component, the social stigma against it must be an even more horrible and cruel thing than most enlightened people would admit today.
(Consider, for example, just the fact that our attractiveness criteria appear to be almost entirely a "social construct" - otherwise it'd be hard to explain the enormous variance; AFAIK the only human universal is a preference for facial symmetry in either gender. If society could just make certain traits that people are stuck with regardless of their will, and cannot really affect, fall within the norms of "beauty" in a generation or two... then all the "social justice"/"body positivity"/etc campaigns to do so might have big potential leverage on many people's mental health and happiness. So it must in fact be reasonable and ethical of activists to "police" everyday language for fat-shaming/body-negativity, devote resources and effort to press for better representation in media, etc.
Yet again I'm struck by just how rational - in intention and planning, at least - some odd-seeming "activist" stuff comes across as on close examination.)
Comment author:[deleted]
18 June 2013 03:12:13PM
*
1 point
[-]
A possible hypothesis is that the genes encode your set point weight given optimal nutrition, but if you don't get adequate nutrition during childhood you don't attain it. IIRC something similar is believed to apply to intelligence and height and explain the Flynn effect and the fact that young generations are taller than older ones.
I've moved away slightly from SJW attitudes on various matters since starting to read LW, Yvain's blog and various other things; however, I've actually moved closer to SJW attitudes to weight since researching the issue. The fact that weight loss attempts hardly ever work in the long run is what has changed my views the most.
Comment author:Multiheaded
18 June 2013 04:37:43PM
*
3 points
[-]
I've moved away slightly from SJW attitudes on various matters, since starting to read LW and Yvain's blog
[OT: just noting that one could be "away from SJW attitudes" in different directions, some of them mutually exclusive. For example, on some particular things (racial discrimination, etc) I take the Marxist view that activism can't help the roots of the problem which are endemic to the functioning of capitalism - except that I don't believe it's possible or sane to try and replace global capitalism with something better anytime soon, either... so there might be no hope of reaching "endgame" for some struggles until post-scarcity. Although activists should probably at least try and defend the progress made on them to-date from being optimized away by hostile political agendas.]
The fact that weight loss attempts hardly ever work in the long run is what has changed my views the most.
Actually, I still suspect that the benefits in increased happiness and mental health would outweigh the marginal gains from pressuring lots of people to try and lose weight, even if weight depended in large part on personal behaviour. And social pressure is notoriously indiscriminate, so any undesirable messages would still hit people who can't or don't really need to change.
Plus there are still all the socioeconomic factors outside people's control, etc.
Comment author:[deleted]
18 June 2013 03:15:36PM
*
2 points
[-]
Whether or not this result is correct, society is definitely shaming the wrong people: some perfectly healthy people (e.g. young women) are shamed for not being as skinny as the models on TV, and not much is being done to prevent morbid obesity in certain people (esp. middle-aged and older) who don't even try to lose weight.
(Edited to replace “adult men” with “middle-aged and older” and “eat less” with “lose weight”.)
Comment author:Multiheaded
18 June 2013 03:28:38PM
*
1 point
[-]
Yeah, and so it looks more and more like (as terribly impolite as it might be to suggest in some circles on the Internet) we need much higher standards of "political correctness" and a way stronger "call-out culture" in some areas.
Most activists are neither saints nor superhumanly rational, of course - but at least in certain matters the general public might need to get out of their way and comply with "cultural engineering" projects, where those genuinely appear to be vital low-hanging fruit obscured by public denial and conformism.
I'm pretty sure that call out culture needs some work. It's sort of feasible when there's agreement about what's privileged and what isn't, but I'd respect it more if there were peace between transgendered people and feminists.
Comment author:TimS
19 June 2013 01:18:22AM
4 points
[-]
From a place of general agreement with you, looking for thoughts on how to go forward:
Are second-wave feminists more transphobic than a random member of the population? Or do you think second-wave hypocrisy is evidence that the whole second-wave argument is flawed?
Because as skeptical as I often am of third-wave as actually practiced, they are particularly good (compared to society as a whole) on transgendered folks, right?
I don't think the problem is especially about transphobia, I think it's about a harsh style of enforcing whatever changes people from that subculture want to make. They want to believe-- and try to enforce-- that the harshness shouldn't matter, but it does.
Comment author:Houshalter
19 August 2015 01:19:30AM
*
2 points
[-]
As were laboratory macaques, chimpanzees, vervet monkeys and mice, as well as domestic dogs, domestic cats, and domestic and feral rats from both rural and urban areas. In fact, the researchers examined records on those eight species and found that average weight for every one had increased. The marmosets gained an average of nine per cent per decade. Lab mice gained about 11 per cent per decade. Chimps, for some reason, are doing especially badly: their average body weight had risen 35 per cent per decade.
In fact, lab animals’ lives are so precisely watched and measured that the researchers can rule out accidental human influence: records show those creatures gained weight over decades without any significant change in their diet or activities.
I sometimes explain this to people with the following metaphor: severe weight gain is a common side effect of psychiatric drug Clozaril. The average Clozaril user gains fifteen pounds, and on high doses fifty or a hundred pounds is not unheard of. Clozaril is otherwise very effective, so there have been a lot of efforts to cut down on this weight gain with clever diet programs. The journal articles about these all find that they fail, or “succeed” in the special social science way where if you dig deep enough you can invent a new endpoint that appears to have gotten 1% better if you squint. This Clozaril-related weight gain isn’t magic – it still happens because people eat more calories – but it’s not something you can just wish away either.
Imagine that some weird conspiracy is secretly dumping whole bottles of Clozaril into orange soda. Since most Americans drink orange soda, we find that overnight most Americans gain fifty pounds and become very obese.
Looking for any relevant research or articles on the causes of obesity, or effectiveness of interventions.
Comment author:gwern
18 June 2013 08:42:42PM
1 point
[-]
I'm not too clear on how to interpret hierarchical model coefficients, but they do give at least one description of effect size, on pg6:
These associations revealed clinically meaningful differences in weight. For example, participants who scored in the top 10% of Neuroticism's Impulsiveness weighed, on average, over 11 Kg more than those who scored in the lowest 10% of this trait. Likewise, participants who scored high on Conscientiousness's Order weighed about 4.5 Kg less than those who scored low on Order.
and pg8:
In addition, the emotional aspects of impulsivity (N5: Impulsiveness and E5: Excitement-Seeking) were also associated with greater increases in adiposity. For example, on average, at age 30, those who scored one standard deviation above the mean on impulsivity had a BMI that was approximately 2.30 points higher than those who scored one standard deviation below the mean on this trait. By age 90, this gap increased to a 5.22 BMI point difference (see Figure 3).
It is unclear to what extent weight is genetic rather than environmentally set at a later stage in development.
Given that in adulthood adipocyte number stays constant, and weight changes are predominantly accompanied by changes in adipocyte volume, one may conclude that at some critical point in development the final fat cell number is attained and after this point no fat cell turnover occurs. Analysis of adipocyte turnover using carbon-14 dating (for a detailed methodological description, see Ref. [5]), however, has recently shown that this is not the case, but rather that adipocytes are a dynamic and highly regulated population of cells. New adipocytes form constantly to replace lost adipocytes, such that approximately every 8 years 50% of adipocytes (...) are replaced (emphasis added).
I am unable to find whether fat cell count can be changed over this 8 year time scale, though my biochemistry professor was inclined to that hypothesis.
Obesity can be characterised into two main types, hyperplastic (increase in adipocyte number) and hypertrophic (increase in adipocyte volume). Obese and overweight individuals may exist anywhere along the cellularity scale, however on average certain trends appear. Hypertrophy, to a degree, is characteristic of all overweight and obese individuals. Hyperplasia, however, is correlated more strongly with obesity severity.
Heredity and weight:
at present, it is impossible to conclude whether the average increase in adipocyte number seen in obese and severely obese individuals is the result of adult adipocyte recruitment or rather a reflection of a population of people predisposed (by their pre-adulthood fat cell number) to be become obese/severely obese.
The long-term weight loss cited in this review used a 1-2 year followup, during which time only <16% of adipocytes could have turned over.
it is clear that fat cell number does not decrease in adulthood, even following long-term weight loss. (emphasis added) In line with this, hyperplastic obese individuals have a poorer treatment outcome following diet-induced weight loss than hypertrophic individuals (when controlled for fat mass). Often for hyperplastic obese individuals, treatments other than diet and exercise are necessary if substantial and permanent weight loss is to be achieved. Successful, but invasive therapies include surgery to reduce the amount of calories ingested (e.g. gastric bypass) and/or surgical removal of fat tissue (e.g. reconstructive surgery or liposuction). The recent discovery of a high turnover of adipocytes in adult human white adipose tissue (approximately 10% annually) now establishes an additional therapeutic target for the pharmacological intervention of obesity [1].
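For what it's worth, the "<16% in 1-2 years" figure above follows directly from the quoted turnover rate. A quick back-of-the-envelope check, assuming (my simplification) exponential turnover at a constant per-cell rate calibrated to 50% replacement per 8 years:

```python
# 50% of adipocytes are replaced every 8 years (per the quoted review),
# so the per-year retention rate r must satisfy r**8 == 0.5.
retention_per_year = 0.5 ** (1 / 8)   # ~0.917, i.e. ~8.3% turnover/year

def fraction_replaced(years):
    """Fraction of adipocytes turned over after `years`, assuming
    exponential turnover with a half-life of 8 years."""
    return 1 - 0.5 ** (years / 8)

print(f"1 year:  {fraction_replaced(1):.1%}")   # ~8.3%
print(f"2 years: {fraction_replaced(2):.1%}")   # ~15.9%, hence "<16%"
```

The ~8.3%/year implied by this model also roughly matches the "approximately 10% annually" stated in the review itself.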
After having my old 90 x 60 whiteboard stashed down the side of my bed since I moved in, nearly two years ago, I finally got around to mounting it a couple of weeks ago. I am amazed at how well it complements the various productivity infrastructure I've built up in the interim, to the point where I'm considering getting a second 120 x 90 whiteboard and mounting them next to each other to form an enormous FrankenBoard.
A couple of whiteboard practices I've taken to:
Repeated derivation of maths content I'm having trouble remembering. If there's a proof or process I'm having trouble getting to stick, I'll go through it on the board at spaced intervals. There seems to be a kinaesthetic aspect to using the whiteboard that I don't have with pen and paper, so even if my brain is struggling to remember what comes next, my fingers will probably have a good idea.
Unlike my other to-do list mechanisms, if I have a list item with a check box on the whiteboard, and I complete the item, I can immediately draw in a "stretch goal" check box on the same line. This turns into an enormous array of multicoloured check-boxes over time, which is both gratifying to look at and helpful when deciding what to work on next.
Firstly, a hypothesis: I am highly visual and like working with my hands. This may contribute considerably to any unusual benefit I get out of whiteboards.
So, advantages:
A whiteboard is mounted on the wall, and visible all of the time. I'm going to be reminded of what's written on it more frequently than if it's on a piece of paper or in a notebook. This is advantageous both for reminder/to-do items and for material I'm trying to learn or think about.
Instant erasure of errors. Smoosh and it's gone. I find pencil erasers cumbersome and slow, and generally dislike pencil as a writing medium, so on paper my corrected errors become a mess of scribbled obliteration.
Being able to work with it like an artistic medium. If I'm working with graphs (either in the sense of plotted functions or the edge-and-node variety), I can edit it on the fly without having to resort to messy scribbles or obliterating it and starting again.
Not accumulating large piles of paper workings of varying (mostly very low) importance. I already have an unavoidably large amount of paper in my life, and reducing the overhead of processing it all is valuable.
The running themes here seem to be "I generate a lot of noisy clutter when I work, both physically and abstractly, and a whiteboard means I generate less".
Comment author:kalium
19 June 2013 06:57:10PM
1 point
[-]
The physically larger my to-do list is, the more satisfying it feels to cross something off it. Erasing also works much better on whiteboard than with pencil and paper.
It's a term made popular by Kickstarter. If you achieve your initial goal and have resource left over, your "stretch goal" is what you do with the extra.
Comment author:Dorikka
16 June 2013 06:53:37PM
1 point
[-]
I will sometimes write things on a chalkboard that I'm trying to understand. I only have access to chalkboards, but I think that I would prefer them regardless -- the chalk feels more substantial.
I no longer use whiteboards if I can help it; while I trained back my fine motor control after my stroke sufficiently well to perform most other related activities, writing legibly on a vertical surface in front of me is not something I specifically trained back and doesn't seem to have "come for free" with other trained skills.
When I used them, I mostly used them for collaborative thinking about (a) to-do lists and (b) system engineering (e.g., what nodes in the system perform/receive what actions; how do those actions combine to form end-to-end flows).
I far prefer other tools, including pencil and paper, for non-collaborative tasks along those lines. And these days I mostly work on geographically distributed teams, so even collaborative work generally requires other tools anyway.
Comment author:lukeprog
25 June 2013 05:59:21AM
*
5 points
[-]
While researching a forthcoming MIRI blog post, I came across the University of York's Safety Critical Mailing List, which hosted an interesting discussion on the use of AI in safety-critical applications in 2000. The first post in the thread, from Ken Firth, reads:
...several of [Vega's] clients seek to use varying degrees of machine intelligence - from KBS to neural nets - and have come for advice on how to implement them in safety related systems... So far we have usually resorted to the stock answer "you don't — at least not for safety critical functions", but this becomes increasingly difficult to enforce, even if your legal and moral ground is sound. Customers are increasingly pleading the need for additional functionality and for utility to have precedence over safety (!!)
The thought of having to apply formal proofs to intelligent systems leaves me cold. How do you provide satisfactory assurance for something that has the ability to change itself during a continuous learning process? I can only assume that one would resort to black box testing, with all its inherent shortcomings and uncertainties - in particular, a black-box test would only apply to the version tested, and not to subsequent evolutions...
My fear is that the longer we ignore this problem, the more likely that users will simply ignore the safety community and press on regardless (precedents from US naval combat systems and commercial operating systems??). Can anyone offer pragmatic advice to customers who are likely to use IKBS anyway? Personally I think that I prefer... [to avoid] putting unvalidatable systems in the safety-critical firing line. But for how long can we continue to achieve this?
I encountered this thread via an also-interesting technical report, Harper (2000).
Comment author:Elithrion
19 June 2013 08:04:33PM
*
5 points
[-]
[I made a request for job finding suggestions. I didn't really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]
For job searching, focus less on sending out applications and more on asking [professors | friends | friends of friends | mentors | parents | parents' friends] if they know of anyone who's hiring for [relevant field]. When they say no, ask if they know anyone else you should talk to. To generalize from one example, every job I've ever worked has come from some sort of connection. I found my current position through my mom's dance instructor's husband.
For figuring out what to do with your long-term future, there's not much I can say without knowing your goals, but might or might not be relevant. If so, they're willing to advise you one-on-one.
Comment author:JQuinton
18 June 2013 02:18:18PM
5 points
[-]
I would like to get better at telling stories in conversations. Usually when I tell a story, it's very fact-based and I can tell that it's pretty boring, even if it wasn't for me. Are there any tips/tricks/heuristics I can implement that can transform a plain fact-based story into something more exciting?
It's okay to lie a little bit. If you're telling the story primarily to entertain, people won't mind if you rearrange the order of events or leave out the boring bits.
Open with a hook. My style is to open with a deadpan delivery of the "punchline" without any context, e.g. "Quit my job today." This cultivates curiosity.
Keep the end in mind. I find that this avoids wandering. It helps if you've anchored the story by "spoiling" the punch line. We all have that friend who tells rambling stories that don't seem to have a point. That said -
Don't bogart the conversation. If you're interrupted, indulge the interruption, and bring the conversation back to your story if you can do so gracefully. It's easy to get fixated on your story, and to become irritated because everybody won't shut up. People detect this and it makes you look like an ass. Sometimes it works to get mock-irritated - "I was telling a story, dammit!" - if doing so feels right. Don't force it.
Don't get bogged down in quoting interactions verbatim. Nobody really cares what she said or what you said in what order.
Comment author:Viliam_Bur
23 June 2013 02:39:12PM
*
4 points
[-]
Don't care about getting all the details correctly. (Your first and last points.)
I know a person whose storytelling is painful to listen to, because sooner or later they run into some irrelevant detail they can't remember precisely, and then spend literally minutes trying to get that irrelevant detail right, despite the audience screaming at them that the detail is irrelevant and the story is already too long, so they should quickly move to the point.
Perhaps this could be another good piece of advice: start with short stories. Progress to longer ones only when you are good with the short ones.
Comment author:Nisan
18 June 2013 07:37:29PM
3 points
[-]
A good piece of advice lukeprog gave me is to structure your story around an emotional arc. E.g. a story about an awesome show you went to is also a story about what you felt before, during, and after the show. A story about the life-cycle of a psychoactive parasite is also a story about a conflict between the clever parasite and the tragic host; or a story about your feelings of fascination and horror when you first learned about the parasite.
Comment author:wadavis
20 June 2013 07:48:17PM
2 points
[-]
Join a pen and paper RPG group; it's the old trick that if you want to be better at something, you should spend a lot of time doing it. Easy storytelling practice sessions every week.
I have started writing a Death Note fanfiction where the characters aren't all as dumb as a bag of bricks (or one could say a rationalist fic) and... I need betas. The first chapter is available on http://www.fanfiction.net/s/9380249/1/Rationalising-Death and the second is pretty much written, but the first is confirmedly "funky" in writing and since I'm not a native English speaker I'm not sure I can actually pinpoint what exactly is wrong with it. Also I'd love the extra help.
Anyone interested? My email for contact is pedromvilar@gmail.com (and I also have a tumblr account, http://scientiststhesis.tumblr.com )
Comment author:gwern
18 June 2013 08:31:57PM
4 points
[-]
Initial observations: you are cribbing too heavily from MoR, your Light is too much like Harry, the focus on utility seems silly, and jumping straight to crypto reasoning for randomness is completely unmotivated by anything foregoing.
Comment author:Kaj_Sotala
16 June 2013 08:33:04AM
*
5 points
[-]
Since I'm used to hearing Dutch Book arguments as the primary way of defending expected utility maximization, I was intrigued to read this passage (from here):
The Dutch book argument concerns the long-term consistency of accepting bets. If probabilities are assigned to bets in a way that goes against the principles of CP [Classical Probability] theory, then this guarantees a net loss (or gain) across time. In other words, probabilistic assignment inconsistent with CP theory leads to unfair bets (de Finetti et al. 1993). [...]
These justifications are not without problems. Avoiding a Dutch book requires expected value maximization, rather than expected utility maximization, that is, the decision maker is constrained to use objective values rather than personal utilities, when choosing between bets. However, decision theorists generally reject the assumption of objective value maximization and instead allow for subjective utility functions (Savage 1954). This is essential, for example, in order to take into account the observed risk aversion in human decisions (Kahneman & Tversky 1979). When maximizing subjective expected utility, CP reasoning can fall prey to Dutch book problems (Wakker 2010).
The Wakker 2010 reference is to a book; searching it for "dutch book" gets me the footnote
In a later chapter on expected utility we will show that a Dutch book and arbitrage are possible as soon as there is risk aversion (Assignment 3.3.6).
And looking up assignment 3.3.6 gets
Assignment 3.3.6. This assignment demonstrates that risk aversion implies arbitrage. More generally, it shows that every deviation from risk neutrality implies arbitrage.
You may assume that risk neutrality for all 50–50 prospects implies complete risk neutrality. Show that arbitrage is possible whenever there is no risk neutrality, for instance as soon as there is strict risk aversion. A difficulty in this assignment is that we have defined risk neutrality for decision under risk, and arbitrage has been defined for decision under uncertainty. You, therefore, have to understand §2.1–2.3 to be able to do this assignment.
Since I don't really have the time or energy to work my way through a textbook, I thought that I'd ask people who understood decision theory better: exactly what is the issue, and how serious of a problem is this for somebody using the Dutch Book argument to argue for EU maximization?
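To make the claimed problem concrete, here is my attempt at a toy version of the arbitrage construction (my own sketch, which assumes the agent prices each bet in isolation at its certainty equivalent; please correct me if this misreads Wakker). A risk-averse EU agent with u(x) = √x values a 50-50 bet on 0-or-100 well below its expected value, and a bookie trading both complementary bets with such an agent locks in a sure profit:

```python
import math

u = math.sqrt                 # concave utility => strict risk aversion
u_inv = lambda y: y ** 2      # inverse utility

# Certainty equivalent of a 50-50 bet paying 0 or 100:
eu = 0.5 * u(0) + 0.5 * u(100)   # expected utility = 5
ce = u_inv(eu)                   # CE = 25, well below the EV of 50

# Bet A pays 100 on heads; bet B pays 100 on tails. Priced in
# isolation, the agent happily sells either one for CE + 1 = 26.
price = ce + 1
received_by_agent = 2 * price    # 52 for selling both bets
payout_by_agent = 100            # exactly one bet pays in every state
sure_loss = payout_by_agent - received_by_agent   # 48, state-independent
print(sure_loss)
```

The catch, as I understand it, is the "priced in isolation" step: an EU agent who evaluates the pair of sales jointly would refuse, so how much this damages the Dutch book defence seems to turn on whether bets must be priced separately.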
Dutch Book arguments as the primary way of defending expected utility maximization
The von Neumann-Morgenstern theorem isn't a Dutch book argument, and the primary purpose of Dutch book arguments is to defend classical probability, not expected utility maximization. von Neumann-Morgenstern also assumes classical probability. Jaynes uses Cox's theorem to defend classical probability rather than a Dutch book argument (he says something like using gambling to defend probability is uncouth).
I don't really understand what issue the first reference you cite claims exists. It doesn't seem to be what the second reference you cite is claiming.
Comment author:ciphergoth
29 June 2013 07:58:44AM
*
4 points
[-]
All students including liberal arts students at Singapore's new Yale-NUS College will take a new course in Quantitative Reasoning which John Baez had a hand in designing.
Baez writes that it will cover topics like this:
innumeracy, use of numbers in the media.
visualizing quantitative data.
cognitive biases, operationalization.
qualitative heuristics, cognitive biases, formal logic and mathematical proof.
formal logic, mathematical proofs.
probability, conditional probability (Bayes’ rule), gambling and odds.
decision trees, expected utility, optimal decisions and prospect theory.
sampling, uncertainty.
quantifying uncertainty, hypothesis testing, p-values and their limitations.
statistical power and significance levels, evaluating evidence.
Comment author:CoffeeStain
18 June 2013 05:41:51AM
*
4 points
[-]
Anybody with tips for beginning an evaluation for the purpose of choosing between future career and academic choices? As far as I can tell, my values are as commonly held as the next fellow's:
Felt Purpose - A frequent occurrence of situations that demonstrate, in unique ways, the positive effects of my past actions. I see this as being somewhere in the middle of a continuum, where on one end I'd have only rational reason to believe my actions were doing good (effective charity), and on the other only the feeling as if I were doing good, but less rational evidence (environmental volunteerism?).
Utilitarian Benefit - I do need a fair bit of the left-hand-side of the above continuum. Even if no feeling might leave me depressed, too much feeling without enough rational evidence would leave me feeling hollow and wrong. I might also expect an increase in altruism through personal development that will push me further in the direction of effectiveness.
Academic Fun - The feeling of discovery, that my work is on not-commonly-trodden path, and the realized ability to make novel contribution.
Social Fun - Being surrounded by people with widely varying backgrounds, providing direct opportunity to partake in new social situations, a fun going beyond anthropological interest. Ability to make friends of an intelligent, kind, and uplifting sort.
Artistic Outlet - The feeling that long-held and heavily inspected aspects of my psyche are finding expression, probably combined with the feeling that this expression is understood by like minds, and that those minds are being helped by it.
Financial Freedom - Money for me is less about buying objects so much as the freedom to do new things. Like travelling and inviting as many people as I want. Also, income reflects my real economic output, which is valuable in itself, with the benefit that I can put the profit toward effective charities.
Of course, this listing of values is the beginning of my self-evaluation. What I'm less keen on is where to find a listing of my options. I am a 24 year old computer science graduate, currently in the video game industry as a pipeline and graphics engineer working on AAA titles (as opposed to independent games). I have saved up for myself about 80,000 USD, and increase this savings at roughly $40,000 per year at my current job. I would have only minor qualms about relocating (within the country). I view myself as having a high aptitude for learning but a very limited working set. I tend to solve hard problems very cleverly and thoroughly, but find it difficult to maintain work on multiple hard problems at the same time.
Current options I've considered:
Switch jobs in-industry to reclaim novelty, and/or achieve a pay increase ("Got to move sideways to move upward.")
Switch jobs out-of-industry (with retained CS focus, perhaps) to broaden interests, continue learning.
Return to academia (What study? What university?)
Form a startup with already-formed acquaintances. (Make an indie game? Other solvable problems with my skillset?)
Combination of above.
??
Would love comments, although interestingly typing this out has itself been a great help.
Comment author:Vaniver
01 July 2013 07:25:19PM
1 point
[-]
The impression I get is that games programming is underpaid and overworked relative to other styles of programming, because games are fun and the resulting halo effect dramatically increases the labor supply for the games industry as a whole. You may be able to make more working on Excel than Halo, but that's a guess from the outside with only a bit of anecdotal backing. (This may not be true for your particular skillset; my impression is that the primary consumers of intense graphics are games and animation firms.)
This also would trade off felt purpose -- even if you have trouble convincing yourself games are worthwhile, you'll have a harder time convincing yourself Excel is worthwhile -- for income, which may not be the right move, and would depend on the actual numbers involved. (It might be that decreasing your felt purpose by 1 on a ten-point scale is not worth an additional $10k a year, but is worth an additional $50k a year, to use arbitrary numbers as an example.)
My understanding is that grad school in computer science is only worthwhile if you want to be a professor (which I don't think will fit your criteria as well as working in industry) or you're looking for co-founders. Another thing to consider along similar lines is software mentorship programs for undergraduate students (here's one in Austin, I imagine there's probably one in Seattle)- it's a great way for you to meet people that might be cofounder material, and see how they work, as well as getting social fun (and possibly academic fun).
Just over two years ago, I was diagnosed with ALS, also known as Lou Gehrig's Disease. In short, that means that my mind will increasingly become trapped in my body as the motor neurons continue to die, and the muscles atrophy and waste away, until my diaphragm dies, bringing me with it.
...
But yes, there is a silver lining to this all, such as it is. Kim Suozzi made a similar plea to the Internet a year ago today, and came up with the brilliant idea of freezing her body in the hopes of a distant advanced technology being able to revive her someday. Her body now rests at liquid nitrogen temperatures.
Pretty much the usual if someone looks closely at a commonly held belief about a medical issue. The usual dramatic belief that fertility drops off sharply at 35 is based on French birth records from 1670 to 1830. Human fertility is very hard to study. Women who are trying to have their first child at age forty may be less fertile than average. And so on.
Comment author:kalium
19 June 2013 07:06:22PM
3 points
[-]
I have concluded that many of the problems in my life are the result of being insufficiently impulsive. As soon as I notice a desire to do something, I more or less reflexively convince myself that it is a bad idea to do just now. How can I go about increasing my impulsivity? I want to change this as a persistent character trait, so while ethanol works in the short run it is not the solution I am looking for.
As soon as I notice a desire to do something, I more or less reflexively convince myself that it is a bad idea to do just now.
This sounds more like low expectancy than lack of impulse. Impulse can make you jump off your feet to do something you want to do, but it can just as quickly distract you from doing what you want to do. Check this out. Perhaps what you might need is to increase optimism and not impulsiveness.
As for the desire to do something, try to convince yourself that it doesn't need to be perfectly thought out before you do it. For example, if you are starting a business you could get bogged down trying to plan everything out perfectly and end up doing nothing. Instead, give yourself 24 hours to start a business. It's an unreasonable request, but you would be surprised at how far you can get.
Comment author:lsparrish
19 June 2013 05:35:56AM
3 points
[-]
reasonably easy to put together.
Confirmed. Tasty too. I got the supplies on the way home from work today, sans olive oil (which I had already) and potassium (which I ordered online). It's not the cheapest way in the world to eat -- it cost around $80 including the potassium. Most of the supplies will last 30 days, but some (oat flour, cocoa, and soy protein) will run out sooner. The potassium should last longer. A 30-day supply of everything would probably be around $100-$120.
Comment author:tgb
17 June 2013 02:27:55AM
3 points
[-]
Over the past several months, I have been practicing a new habit: whenever I have a 'good' idea, I write it down. ('Good' being used very loosely.)
This is a very simple procedure but it seems to have several benefits. First, I had begun noticing that I would remember having had a 'good' idea but not be able to recall what it was. I now notice this much more strikingly, and it causes some small amount of distress thinking about what I might have forgotten; writing the idea down relieves that worry. Second, I can refer back to it later. So far nothing significant has come out of this, but I like having the option, and I have gone back to note that some of my once-thought 'good' ideas don't hold up on second thought. This is useful information about yourself to have. Third, it encourages me to have more good ideas. For a while I tried to write down one a day (possibly using a long-term 'cached' idea I had been floating for a while if I didn't have anything good that day). Fourth, writing things down, even just for myself, helps me get a really clear idea of them.
I'm sure there are many suggestions which are similar to this or encompass it. Obviously this is similar to having a journal and probably shares some of the benefits. This has the advantage of being extremely simple and takes hardly any time at all and is only done when it has an obvious benefit.
Comment author:beoShaffer
17 June 2013 02:49:55AM
1 point
[-]
Getting Things Done suggests writing down everything you need to do as soon as you realize you need to do it, and this can include following up on good ideas.
I'm a little torn on that one-- on one hand it adds convenience most of the time, but it makes it less convenient to check on recent karma. The latter is something I feel like doing now and then, but it's possible I'm saner if it isn't convenient.
It was never an ideal way to check on recent karma, though it was better than nothing. I'd quite like something similar to Stack Overflow's Reputation view.
I am particularly aware of it right now because I've been watching my 30-days karma drop slowly and steadily for the last couple of days, but I have no idea what in particular people want less of.
That said, I suspect that's just because I'm getting individual downvotes across a wide set of comments in the 0+/-2 range, and changing the way that information is displayed won't really help me answer that question any better than the current system does.
I dislike the change, as it's harder to get an impression about a new user based on their user page now, the comments by other users are getting in the way, and it's not possible to tune them out. Also, the change has broken user RSS feeds.
I'm finding the new version usable-- the green edge might allow for faster scanning, but the new version isn't bad with a little practice.
On the other hand, if there are multiple new comments in a thread, I find that I miss the alternating white/pale blue way of distinguishing comments. It would be nice to have two pastels instead of one.
Comment author:Viliam_Bur
23 June 2013 02:20:35PM
*
2 points
[-]
The visual difference between a new comment and an old comment should be greater than the difference between two old comments.
How about using two pastel colors for the old comments... and using the white background for the new comments?
It would also be nice to have e.g. a small green "NEW" text in the corner of new comments, so I can quickly find a few new comments in a long discussion by using the "Find Next" functionality of my browser. (Because I don't have a functionality to search a comment based on its background color.)
Just after the PRISM scandal broke, Tyler Cowen offered a wonderful, wonderful tweet:
I’d heard about this for years, from “nuts,” and always assumed it was true.
There is a model of social knowledge embedded in this tweet. It implies a set of things that one believes to be true, a set of things one can admit to believing without being a “nut”, and an inconsistency between the two. Why the divergence? Oughtn’t it be true that people of integrity should simply own up to what they believe? Can a “marketplace of ideas” function without that?
It’s obvious, of course, why this divergence occurs. Will Wilkinson points to an economy of esteem, but there is also an economy of influence. There are ideas and modes of thought that are taboo in the economy of influence, assertions that discredit the asserter. Those of us who seek to matter as “thinkers” are implicitly aware of these taboos, and we navigate them mostly by avoiding or acceding to them. You can transgress a little, self-consciously and playfully, as Cowen did in his tweet. If you transgress too much, too earnestly, you are written off as a nut or worse. Conversely, there are ideas that are blessed in the economy of influence. These are markers of “seriousness”, as in Paul Krugman’s perceptive, derisive epithet “Very Serious People”. This describes “thinkers” whose positions inevitably align like iron filings to the pull of social influence, indifferent to evidence that might impinge upon their views. Most of us, with varying degrees of consciousness, are pulled this way and that, forging compromises between what we might assert in some impossible reality where we observed social facts “objectively” and the positions that our allegiances, ambitions, and taboos push us towards. Individually, there is plenty of eccentricity, plenty of noise. People go “off the reservation” all the time. But public intellectualizing is a collective enterprise. What matters is not what some asshole says, but the conventional wisdom we coalesce to. When the noise gets averaged out, the bias imposed by the economy of influence is hard to overcome. And the economy of influence pulls, always, in directions chosen by incumbent holders of wealth and power, by people with capacity to offer rewards and to mete out punishment.
Related to this, there are a couple of professional philosophers around who are starting to take conspiracy theories seriously. Not just in the manner of critically analysing them, but also in the sense of how to actually make inferences about the existence of a conspiracy, how to contrast official theories with conspiracy theories, and how to reason when disinformation is present.
You can read the rules and sign up at the link, but, essentially, you answer the questions twice: once honestly, and the second time as you think an atheist or Christian (whichever you're not) plausibly would. Then we read through the true and faux atheist answers and try to spot the fakes and see what assumptions players and judges made.
Comment author:aime15
16 June 2013 11:15:36PM
4 points
[-]
Politicians have a lot of power in society. How much good could a politician well-acquainted with x-risk do? One way such a politician could do good is by helping direct funds to MIRI. However, this is something an individual with a lot of money (successful in Silicon Valley or on Wall Street) could do as well.
Should one who wants to make a large positive impact on society go into politics over more "conventional" methods of effective altruism (becoming rich somewhere else or working for a high-impact organization)?
Should one who wants to make a large positive impact on society go into politics over more "conventional" methods of effective altruism (becoming rich somewhere else or working for a high-impact organization)?
If you think about it, this is quite a striking statement about the LW community's implicit beliefs.
The implicit assumption is that people don't go into politics because they want (like, really, effectively, goal-oriented, outcome-based want) to make large positive impacts on society. We read a statement with that assumption quite plainly baked into it, and it doesn't seem weird. The fact it doesn't seem weird does itself seem kind of weird.
Sam Nunn was a US senator who wanted to buy surplus nuclear weapons from Russia, rather than risk them wandering off. He was unable to convince the rest of the government to pay for it, but he was able to convince the government to let Buffet and Turner pay for it. He has since decided that he can do more to save the world outside of government.
Added: But, he was rumored to be Secretary of Defense under Gore. So he thought some positions of government were more useful than others.
One way such a politician could do good is by helping direct funds to MIRI.
How?
I wonder if a good way to do good as a politician would be to try to be effective and popular during your term, then work as a lobbyist afterwards and lobby for causes you support (like x-risk reduction, prediction market legalization, and whatnot).
Comment author:aime15
17 June 2013 08:30:13AM
1 point
[-]
How?
I suspect my model of the method used to allocate government funding may be oversimplified/incorrect altogether, but I am under the impression that those serving on the House Science Committee have a significant say in where funds are allocated for scientific research. Given that some members of this committee do not believe in evolution and do not believe in man-made climate change, it seems that the potential social good of becoming a successful politician could be very high.
I suspect my model of the method used to allocate government funding may be oversimplified/incorrect altogether, but I am under the impression that those serving on the House Science Committee have a significant say in where funds are allocated for scientific research.
My impression is that the House Science Committee is too high to aim for. A more plausible scenario would be MIRI convincing someone at the NSF to give them grants.
Plenty for LW here-- not just that English is a steadily declining fraction of online material, but the difficulties of finding out what proportion of the web is in what language, and the process by which more and more of the web is in more and more languages.
Comment author:cousin_it
23 June 2013 08:08:13AM
*
2 points
[-]
I just found this nice quote on The Last Conformer which is supposed to prove that betting on major events is qualitatively different from betting on coinflips:
I wouldn't even offer bets on this kind of probability because that would just invite better informed people to take my money.
It seems to me that the problem exists for coinflips as well. If I flip a coin and don't show you the result, your beliefs about the coin are probably 50/50. But if I offer you to bet at 50/50 odds that the coin came up heads, you'll probably refuse, because I know which way the coin came up and you don't.
According to the Dutch book argument for rationality, we are supposed to accept either side of any bet offered at the odds corresponding to our beliefs. In my example, that idea breaks down, because getting the offer is evidence that you shouldn't take the bet. But then how do we formulate the Dutch book argument?
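The selection effect in question can be made concrete with a small simulation (a sketch added here for illustration; the function and variable names are mine, not from the comment). The offerer sees the coin before deciding whether to offer the bet, so "always accept at your stated odds" becomes a guaranteed loser even though the acceptor's 50/50 belief is perfectly calibrated:

```python
import random

def informed_offerer(n_flips=10_000, seed=0):
    """An offerer who sees the coin only proposes 'bet on heads at 1:1'
    when the coin actually came up tails. An acceptor who treats the
    offer as uninformative and always takes it loses every single bet,
    even though heads is a priori 50/50."""
    rng = random.Random(seed)
    offers = 0
    acceptor_winnings = 0
    for _ in range(n_flips):
        heads = rng.random() < 0.5
        if not heads:  # offerer bets only when it is profitable to do so
            offers += 1
            acceptor_winnings += 1 if heads else -1  # acceptor backs heads
    return offers, acceptor_winnings

offers, winnings = informed_offerer()
# Every accepted bet is a loss: winnings equals -offers
```

The simulation isn't an argument by itself, but it illustrates why "accept either side at your odds" fails once the offer itself carries information.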
Comment author:pengvado
23 June 2013 11:28:51AM
*
2 points
[-]
The selection effect you mention only applies to offering bets, not accepting them. If Alice announces her betting odds and then Bob decides which side of the bet to take, Alice might be doing something irrational there (if she didn't have a bid-ask spread), but we can still talk about Dutch books from Bob's perspective. If you want to eliminate the effect whereby Bob updates on the existence of Alice's offer before making his decision, then replace Alice with an automated market maker (set up by someone who expects to lose money in exchange for outsourcing the probability estimate). Or assume some natural process with a naturally occurring payoff ratio that isn't determined by the payoff frequencies nor by anyone's state of knowledge.
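One standard instance of such an automated market maker (my example; the comment doesn't name one) is Hanson's logarithmic market scoring rule. The sponsor's worst-case subsidy for a two-outcome market is bounded by b*ln(2), which is exactly the "expects to lose money" role described above. A minimal sketch:

```python
import math

B = 10.0  # liquidity parameter; bounds the sponsor's worst-case loss at B*ln(2)

def cost(q_yes, q_no, b=B):
    """LMSR cost function C(q) = b * log(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=B):
    """Instantaneous YES price, i.e. the market's implied probability."""
    e = math.exp(q_yes / b)
    return e / (e + math.exp(q_no / b))

# A trader pays the difference in the cost function to move the market:
pay = cost(5.0, 0.0) - cost(0.0, 0.0)  # price of buying 5 YES shares
p = price_yes(5.0, 0.0)                # implied probability rises above 0.5
```

Because the market maker quotes both sides mechanically, a trader's decision to take one side reveals information to other traders, but there is no strategic offerer to adversely select against you.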
I am very interested in higher-order theory of mind (ToM) tests for adults, to differentiate those with a high theory-of-mind quotient, if you will. My hypothesis is that people with a strong theory of mind are better at sales -- I have an interest in both. Most tests I find online are meant to test children or to screen for Asperger's Syndrome; what I want are complex questions and problems.
I recently saw a highly upvoted comment on reddit that stated "...Mifune, destroying the top black belts..." and cited this video. However, I believe OP has misread the situation. Mifune is highly respected and is in a room full of spectators who fully expect him to come out on top when sparring, or who would at least feel embarrassed for Mifune if he didn't. The pretense is that everyone is trying their hardest to throw Mifune, but he is so good he just twirls around them, and that it is not a demonstration -- it's real sparring. OP, and those who upvoted, failed to put themselves accurately in the role of the students in that room and think "gee, I totally don't want to be that guy that throws the old man down." The students are conforming, whether they believe it or not.
Any 4-year-old can pass the false belief test, but the Mifune video is a lot more subtle and complex. There are intentions involved; there is also each student's knowledge of other students' intentions, conformity, and self-delusion. The huge man being thrown by Mifune might say that he really believed he was trying his hardest not to be thrown, but ToM isn't just about what others believe; it's also about accurately predicting other people's actions, why they did them, and what they believe they did them for.
I am trying to compile a list of such examples, and would greatly appreciate it if anyone could add to this conversation by agreeing/disagreeing with what I have said, and, especially, by providing some examples of more complex theory of mind problems.
Comment author:FiftyTwo
25 June 2013 04:26:32PM
1 point
[-]
Political thrillers as a genre and some aspects of real life politics are a lot about theory of mind. The multilevelled effect of thinking how someone will act based on how you or a third party will act, ad infinitum.
I'm not sure exactly what you mean by theory of mind, though. It seems a different skill to model how a theoretical rational agent would behave in a certain situation (as we do when discussing the prisoner's dilemma or related logic puzzles) than to model how a particular human being will behave (e.g. Alice tends to underestimate herself, Bob is overly cautious).
There's this very interesting trope being forged on TVT, and I found it very interesting from a rationalist standpoint, especially the examples involving God... What an asshole.
Comment author:wedrifid
25 June 2013 04:17:25PM
1 point
[-]
Showing Compassion To Designated Enemies; A Punishable Offense
There's this very interesting trope being forged on TVT, and I found it very interesting from a rationalist standpoint, especially the examples involving God... What an asshole.
Interesting link. Again from a rationalist standpoint it seems to be the correct move (at times, with the same conditions that apply to punishment in general as well as a few others).
Comment author:Ritalin
25 June 2013 10:09:40PM
*
1 point
[-]
As someone who's studied The Art Of War intensively, and to whom "defeat means friendship" (as long as the opposition does feel thoroughly defeated) is a matter of course, I find that incentivizing unforgiveness and wanton destruction (I mean, seriously, they even had to kill the cattle? How is that in any way practical or rational?) is not only aesthetically dissonant, but wasteful and silly as hell. Going out of one's way to ensure some don't get a proper ritual, or otherwise kicking the defeated while they are down, also strikes me as disgusting, wasteful, and, frankly, cartoonishly over the top.
They should make a piece of fiction with a villain whose actions mirror those of YHWH perfectly. Have him blow a fortress to pieces and then demand that everything that breathes within be slaughtered. Have him kill some of his followers for disobeying some arbitrary rule, then have him kill many more just because they complained about it.
With his mind.
That's not an omnibenevolent deity, that's a fucking Dungeon Keeper.
See how many people notice the references. See how many identify this overlord as a vile villain before being informed he's patterned after The God.
Comment author:wedrifid
25 June 2013 11:28:50PM
2 points
[-]
As someone who's studied The Art Of War intensively, and to whom "defeat means friendship" (as long as the opposition does feel thoroughly defeated) is a matter of course,
Do you remember the other parts too? The parts that don't feel so warm and fuzzy? Or other effective military strategies? Defeat rather seldom means friendship when it comes to pre-established enemies, whether in The Art of War or outside of it. The generals discussed in The Art of War commanded their soldiers to kill other soldiers (and their leaders) and conquer strategic resources. The Art of War gives instructions on how to do so more effectively, with more compliance from soldiers and, where possible, in such a way that the enemy does not fight back significantly.
I find that incentivizing unforgiveness and wanton destruction (I mean, seriously, they even had to kill the cattle? How is that in any way practical or rational?) is not only aesthetically dissonant, but wasteful and silly as hell. Going out of one's way to ensure some don't get a proper ritual, or otherwise kicking the defeated while they are down, also strikes me as disgusting, wasteful, and, frankly, cartoonishly over the top.
Yes, God is silly as well as a dick (and non-existent). But looking at the strategy from the perspective of instrumental rationality rather than from the perspective of indignant atheism, the cattle-killing silliness is not especially relevant. Most people agree that it is better to keep the cattle than kill them. What is more interesting is just when it is instrumentally rational to apply force (be that political, economic or physical) against those who are assisting a particularly troublesome enemy.
The details matter a lot, of course. There are cases where it is obvious that it is instrumentally rational to kill those who are assisting the enemy, while there are cases where that would be outright self-destructive. On one side of the line there is the sole provider of particularly advanced weaponry to the enemy, who does not trade with you and who has no significant social alliances; on the other side there is a welfare charity that provides medical assistance indiscriminately worldwide, is loved by all, and is protected by an alliance of powerful nations. In between, things are less simple.
Comment author:Ritalin
25 June 2013 11:56:30PM
2 points
[-]
Do you remember the other parts too? The parts that don't feel so warm and fuzzy?
I don't recall ever mentioning anything fuzzy or warm; it's simply a pragmatic matter of taking the human factor into account. You try to fight and destroy as little as possible because it's expensive, risky, and creates ill will in the long term. Napoleon and the armies of the Spanish Empire are excellent examples of how to win every battle, piss everyone off, and never win the peace.
Of course, if you do need to crush, kill, destroy, do it quickly and decisively, with no hesitation or pussyfooting. Therein lies the difference between being respectably compassionate, and being a sentimental fool begging to be abused.
The Art of War isn't just about winning battles or wars, it's about winning the peace that comes afterwards; it's not just about beating your foe, but about getting them to stay beaten, and, in fact, help you out.
And, in particular, Scorched Earth tactics are extremely costly, and they are only effective in very specific circumstances.
As for the Bible examples cited there, I do not see how they are practical in any way, shape, or form. I can see the point of some of the other examples, but most of them are about helping out or standing up for someone who has been reduced to complete harmlessness and can't be a threat anymore. This is most egregious when the defeated enemy in question is a freaking corpse.
Comment author:[deleted]
26 June 2013 12:49:39AM
*
4 points
[-]
As for the Bible examples cited there, I do not see how they are practical in any way, shape, or form.
God forbid I find myself defending the morality of the Hebrew Bible, but it seems to me you're making a claim here (i.e. by implication, that the behavior of the Israelites was impractical/silly/evil) from a very poor epistemic position. The details of warfare, religious ritual, and the politics of conquest of that period are thin, and it's not even clear that the story in Joshua (for example) represents an historical event (we have, and expect, no archeological record of this war), and so the practical purpose of the passage you cite may be entirely other than recommending or reporting a certain mode of warfare. Even if it is making such a report or recommendation, we simply lack the details to evaluate it.
Essentially, we have no idea whether or not what's being described is silly or cunning, or even moral or immoral (unless maybe we're deontologists, but I doubt that). Speaking so confidently about something where confidence is so ill warranted is often a symptom of being mind-killed on a particular subject. And we should expect that as atheists, we are very, very likely to be mind-killed about biblical historiography (maybe right, as well, but mind-killed nonetheless).
Pointing out the horrors of the bible is a worthwhile way to put the morality of theists in tension with their holy book. But once that rhetorical work is done, the epistemically cautious consequentialist has absolutely no business throwing invective at the bible without a thorough study of the period being written about. Lord knows why you'd bother with that though.
Comment author:Ritalin
26 June 2013 10:54:44AM
*
0 points
[-]
" the behavior of the Israelites was impractical/silly/evil"
I made no such claim, unless you mean the fictional Hebrews in the book rather than the collection of tribes the Romans diaspora'd many, many centuries later. Even then, what would you expect of the poor guys, when they're being terrorized and cajoled into being evil by Kira-on-steroids?
However, from what we know of the behaviour of the Judeans under Roman rule, practicality and pragmatism weren't top priorities for them, and their Scripture probably wasn't a very sane source of advice on that matter.
More relevantly, some people still (claim to) take their ethics from the Bible, the Old Testament is very popular in the USA, and rhetoric of an Angry God Striking Down Evil and Smiting The Heathen, and of Lambs not Going Astray from Flocks (what a horrifying metaphor, being compared to cattle, of the stupidest sort no less!), is still floating around, influencing the way politics is done, whispering in the subconscious and blaring in the loudspeakers.
That is, of course, not the only culture that was influenced by a book that not only advocates genocide but demands that, when it be done, it be carried out thoroughly. Consider the Massacre of St. Bartholomew, some of the actions of Cromwell, and the horrifying, nauseating, vertiginous irony of Old Testament memes having been a factor, no matter how small, in the intellectual genesis of the Final Solution.
Comment author:[deleted]
26 June 2013 11:45:59AM
*
2 points
[-]
unless you mean the fictional Hebrews in the book rather than the collection of tribes the Romans diaspora'd many, many centuries later.
I mean the Israelites! That's what these people, whose historicity is not in doubt, called themselves. They're the ones that wrote the bulk of the biblical material between around 900 and 587 BCE.
The Massacre of St. Barthelemy...
Maybe, though this strikes me as conjecture. I also don't see how it's related to claims you seem to be making about the authors of the bible and their people.
Comment author:Ritalin
26 June 2013 09:58:10PM
*
1 point
[-]
You know what, get back to me on the historicity of the Hebrews after reading this. I'm not averse to shifting my priors on that topic; please refer me to a work that does not use the Bible as a starting point for its hypotheses, if that's at all possible.
Until I have a Bible-independent framework on how to think of the ethnic conglomerate that claim to be the Descent of Israel, I prefer to assume it is all fiction as a working hypothesis, and start from there.
This is also why I am reluctant to call them Israelites, despite their calling themselves something like that, just as I wouldn't call Arabs Ishmaelites; I doubt that Israel/Jacob, Isaac, Ishmael, or Abraham existed, and I doubt either group's direct descent from them. I certainly doubt that any human gained the title of Israel after wrestling with God and winning.
As for their calling themselves Israelites, allow me to be a little pedantic here; they called themselves B'nei Yisrael; "Israelites" is a Greek term.
Comment author:[deleted]
26 June 2013 10:30:29PM
*
5 points
[-]
You know what, get back to me on the historicity of Hebrews after reading this.
I see the point that post is making, but I'm not just blowing air here. I have a degree in near-eastern history, and I studied with an archeologist who works on this period. None of us were theists, or remotely interested in defending or even discussing any modern religion. The historical books of the Hebrew bible are a relatively reliable historical record, so far as we can tell, but the fact is we just don't have that much detail about the period in which it was written, so mostly we just don't know. Too many of our sources are (as EY points out) singular. However the historicity of the first-temple (900-587 BCE) Israelites very roughly as we find them in the HB is not really subject to much doubt. There are people who argue that the whole bible was written much later, and the history of Israel was just made up, but this theory is taken about as seriously by archeologists and historians as is ID by biologists. Needless to say, pretty much everything from the Torah that's plausible (like the period of slavery) is pretty much unconfirmable. And no one takes seriously the implausible stuff, like Abraham or Noah.
I'm throwing authority at you here for two reasons. 1) the real argument consists in taking you through a bunch of archeology and historiography and I don't feel like taking the time and 2) neither do you. You don't, I suspect, actually care at all about first temple Israelite culture. You care about how modern Abrahamic religions are false and politically destructive. Granted! But that claim doesn't have anything to do with history, and thinking that arguments against modern theists constitute an understanding of an ancient culture is not justifiable.
My real point however was one of caution. You're exactly right to point out that by the standards of Christians or Jews or Muslims, the god of the Hebrew bible is savage. But you have no empirical standing to make claims about the morality or practicality of first temple Israelites, because pretty much no one does.
I doubt that Israel/Jacob, Isac, Ishmael or Abraham existed, and I doubt either group's direct descent from them.
Yeah, who knows. But I call them Israelites because they called themselves that. I see no reason to make a point of it. And 'Israelites' may happen to have been a Greek term, but today it's just the way you translate that Hebrew phrase into English.
Comment author:Metus
16 June 2013 01:15:42PM
2 points
[-]
I was thinking about writing down everything I know. After reflecting on it for a few seconds, I realized what a daunting task I have set out for myself. Has anyone tried this, or does anyone have a suggestion for how I should go about it, if at all?
An excerpt from the introduction (tldr: beware of eating yourself):
This book is about how to make a complete map of everything you think for as long as you like.

Whether that's good or not, I don't know -- keeping a map of all your thoughts has a freezing effect on the mind. It takes a lot of (albeit pleasurable) work, but produces nothing but sight.

If you do the things described in this book, you will be immobilized for the duration of your commitment. The immobilization will come on gradually, but steadily. In the end, you will be incapable of going somewhere without your cache of notes, and will always want a pen and paper w/ you. When you do not have pen and paper, you will rely on complex memory pegging devices, described in "The Memory Book". You will never be without record, and you will always record.

You may also articulate. Your thoughts will be clearer to you than they have ever been before. You will see things you have never seen before. When someone shows you one corner, you'll have the other 3 in mind. This is both good and bad. It means you will have the right information at the right time in the right place. It also means you may have trouble shutting up. Your mileage may vary.
I've settled for (1) keeping a structured list of all books from which I've learned something worthwhile, and (2) a log for current ideas (with no more than a few short entries a week). This is sufficient to locate and efficiently relearn most barely-remembered ideas when they become relevant again.
Comment author:roystgnr
17 June 2013 05:39:52PM
1 point
[-]
Writing down everything you know seems pretty pointless. Writing down everything you fear forgetting might give you a smaller but more useful list, since it lets you cull out anything you're in no danger of forgetting (e.g. all the arithmetic facts) as well as anything you wouldn't care if you forgot (e.g. the vast majority of knowledge in your head).
I actually sort of do this, in a private git repository where (alongside lists of interesting typically-paragraph-sized quotes) I keep lists of interesting (typically-sentence-fragment-sized) topic names. Sometimes the names serve as mnemonics that merely remind me of an interesting fact I once encountered but haven't thought about recently (e.g. "Rai stones"). Sometimes I'll skim through the lists and encounter a topic that I've completely forgotten about (e.g. "burying the corpse effect") and I'll quickly Google to see why I once thought it was so interesting to begin with. A little organization helps. E.g. "burying the corpse effect" was in my "economicsbits" file under the hierarchy "Financial markets, investment", "Market manipulation", "Cornering the market", so it was easy to tailor web searches to lead me to results from economists rather than morticians.
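A minimal sketch of how such a repository search might work. Everything here (the directory name, the headings, the topic strings) is illustrative, reconstructed from the comment above, not roystgnr's actual files:

```python
# Plain-text "topic files" (e.g. economicsbits), one interesting topic per
# line, indented under hierarchy headings so a hit carries its context.
from pathlib import Path

notes = Path("notes")
notes.mkdir(exist_ok=True)
(notes / "economicsbits").write_text(
    "Financial markets, investment\n"
    "  Market manipulation\n"
    "    Cornering the market\n"
    "      burying the corpse effect\n"
)

def find_topic(phrase, root=notes):
    """Return (file name, line number, hierarchy context) for each match."""
    hits = []
    for path in root.glob("*bits"):
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if phrase in line:
                hits.append((path.name, i + 1, lines[max(0, i - 3):i]))
    return hits

for name, lineno, context in find_topic("burying the corpse"):
    print(name, lineno, context)
```

The context lines are what make a tailored web search possible: a hit under "Cornering the market" points you at economists rather than morticians.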
I don't know if this is useful for anything more than entertainment. TimS had a very good question here.
Comment author:Michelle_Z
16 June 2013 05:28:28PM
1 point
[-]
I started something like this a while ago. I was trying to write papers for one of my classes and couldn't find a reference I needed. After about the third time this happened, I figured I ought to make some kind of searchable list of references, with summaries of what they contain and links to the files. I use a Google document now, with summaries of books I've read and notes from my classes, in addition to references. What I really want is something like Workflowy, where I can collapse bulleted points. Workflowy would be fine, but I'd be worrying about going over their limit and having to pay for it, since I have a lot of bullet points. In the meantime, I use Google Docs' "table of contents" feature so I have that orderly list I want.
I don't put "everything" in it. My general rule is that it has to be either useful, something I'd likely forget, or something interesting. I also link to everything so I don't have to search my history.
In last year's survey, someone likened Less-Wrong rationalism to Ayn Rand's Objectivism. Rand once summed up her philosophy in the following series of soundbites: "Metaphysics, objective reality. Epistemology, reason. Ethics, self-interest. Politics, capitalism." What would the analogous summary of the LW philosophy be?
In the end, I found that the simplest way to sum it up was to cite particular thinkers: "Metaphysics, Everett. Epistemology, Bayes. Ethics, Bentham. Politics, Vinge."
A few comments:
For metaphysics... I considered "multiverse" since it suggests not only Everett, but also the broader scheme of Tegmark, which is clearly popular here. Also, "naturalism", but that is less informative.
For epistemology... maybe "Jaynes" is an alternative. Or "overcoming cognitive bias". Or just "science"; except that the Sequences contain a post saying that Bayes trumps Science.
For ethics... "Bentham" is an anodyne choice. I was trying to suggest utilitarianism. If there was a single well-known thinker who exemplifies "effective altruism", I would have gone with them instead... Originally I said "CEV" here; but CEV is really just a guess at what the ethics of a friendly AI should be.
For politics... Originally, I had "FAI" as the answer here. That may seem odd - friendly AI is not presented as a political doctrine or opinion - but the paradigm is that AGI will determine the future of the world, and FAI is the answer to the challenge of AGI. These are political concerns, even if the ideal for FAI theory would be to arrive at conclusions about what ought to happen, that become as uncontroversial and "nonpartisan" as the equations of gravity. I chose Vernor Vinge as the iconic name to use here; I suppose one could use Kurzweil. Alternatively, one could argue that LW's cardinal metapolitical framework is "existential risk", rather than FAI vs UFAI.
I wonder whether more people will think of Julian Jaynes rather than E. T. Jaynes if you just rattle "Everett, Jaynes, Bentham, Vinge" at them. This does seem like a very nice ultra-concise description though.
Odd brain exercise I find entertaining: Using only knowledge about this universe, try to determine what kind of universe would be most likely to simulate our kind of universe.
For example, to my eye, general relativity, plus the propensity of pi to pop up in odd places, implies that something like a hierarchy-relative polar coordinate system is the standard mathematical model in our host universe, as opposed to the Cartesian coordinate system we tend to default to. So, what would a universe look like such that this is the most intuitive way of considering data? Such a view seems most likely to arise in a nonoriented universe; gravity would be unlikely, as gravity provides a natural plane of orientation, so something like universal attractive and repulsive forces probably wouldn't exist.
Comment author:shminux
24 June 2013 08:10:05PM
1 point
[-]
So with the PRISM program outed, the main thrust of discussions is about its legality and consequences. But what interests me is the rather non-political issue of general competence. One would think that the NSA, and in general any security agency, would take risk assessment and mitigation seriously. And having its cover blown ought to be somewhere close to the top of the list of critical risks. Yet the obvious weak point -- letting outsiders with the appropriate clearance deep inside the areas with compromising info -- was apparently never addressed.
Even the standard approach of having tiered access for everyone regardless of the clearance level, and automatically checking and flagging every unusual escalation was either not implemented or cleverly subverted by a low-level admin. And given the Bradley Manning security breach, one would expect even a half-decent internal security officer to be rather paranoid. And who knows what other low-ranking admins quietly did and probably are doing with what information and for what purpose and in what organizations.
I am wondering: is it reasonable to assume that the people responsible for the integrity of a spy agency are this inept? Or is what we see now somewhere low on the list of risks, being handled according to plan? Or, if you go deeper into conspiracy mode, is it all orchestrated for some non-obvious reason?
Personally, I hope it's the last possibility, because I'd take competence over ineptitude anytime, nefarious purposes or not.
Comment author:beoShaffer
24 June 2013 09:08:10PM
*
1 point
[-]
Yet the obvious weak point of letting the outsiders with appropriate clearance access deep inside the areas with compromising info was apparently never addressed.
Even the standard approach of having tiered access for everyone regardless of the clearance level, and automatically checking and flagging every unusual escalation was either not implemented or cleverly subverted by a low-level admin.
So, I'm not an expert, but going from a couple of news articles and HN discussion, I get the impression that Snowden actually did require that level of access to do his job, and that it's enough of a seller's market for people with his general class of IT skills that you can't really get technically competent people if you add too many additional constraints.
So, I just moved to Europe for two years and finally got financial independence from my (somewhat) Catholic parents, and I want to sign up for cryonics. Is there international coverage? Is there anything I should be aware of? Are there any steps I should be taking?
What is the proper font, spacing, and so forth for a LessWrong article?
The proper formatting for a LessWrong article is the formatting that you see in most LessWrong articles.
The way to achieve that is to use for formatting only the tools provided in the LessWrong editor (except for the button labelled "HTML" — use that only if you absolutely must). If you find it more convenient to write your article in an external editor, make sure that when you copy and paste it across, you copy only the text without the formatting. One way to do that is to prepare the text in an editor that does not support formatting, such as any editor that might be used to type program code.
Comment author:[deleted]
25 June 2013 10:48:08PM
1 point
[-]
I'm using a Mac and copying from Pages into TextWrangler and from TextWrangler to the LessWrong editor does not appear to have worked. What might be going wrong?
Also, information both about people's reaction to nonstandard formatting and how go about getting standard formatting if you're working from an external editor should really be included in the LessWrong FAQ.
I'm using a Mac and copying from Pages into TextWrangler and from TextWrangler to the LessWrong editor does not appear to have worked. What might be going wrong?
That's strange. One gotcha is that if you accidentally paste in some formatted text, then delete it, the LW editor remembers the font and line settings at that point and will apply them to any unformatted text that is then pasted in. To make absolutely sure of eliminating all formatting from the editor, select everything and hit backspace twice. The first will delete the text and the second will delete its memory of the formatting. Then paste in the unformatted text, making sure that you actually copied it out of TextWrangler after pasting it in there.
In the last resort, click the HTML button and see if the HTML has anything like:
Comment author:AlexSchell
25 June 2013 01:18:41PM
*
1 point
[-]
Bayes' theorem written a certain way is surprisingly effective and easy to use in Fermi estimates of population parameters and risks. Unless you are already quite well versed in intuitive Bayes, this is likely of interest.
Hastie & Dawes (2010, p. 108) describe the "Ratio Rule", a helpful way of writing out Bayes' theorem that is useful for the quick estimation of an unknown proportion:
Pr(A|B) / Pr(B|A) = Pr(A) / Pr(B)
(Ratio of conditional probabilities equals ratio of unconditional probabilities.)
To steal their example, it's often reported that most 'hard' drug users also use (or started out with) pot, and this is often taken to support the notion that pot is a gateway to hard drugs. Hastie & Dawes point out that for the purposes of evaluating the 'gateway' claim, what we really want is not the reported value of Pr(has used pot | has used hard drugs) but rather Pr(has used hard drugs | has used pot) [*]. Suppose that Pr(pot|hard) ~ 0.9. We know that Pr(pot) ~ 0.5 (fraction of Americans who've used pot at some point), and we estimate that Pr(hard) is lower by a factor of, say 2.5 - 5, so the ratio Pr(pot) / Pr(hard) is between 2.5 and 5. By the ratio rule, Pr(pot|hard)/Pr(hard|pot) = 2.5 - 5, so Pr(hard|pot) is between about 0.2 and 0.4.
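Since the Ratio Rule is just a rearrangement of Bayes' theorem, the estimate above is easy to mechanize. The sketch below simply replays the gateway-drug numbers from the text (the function name is mine, not Hastie & Dawes'):

```python
# Ratio Rule: Pr(A|B) / Pr(B|A) = Pr(A) / Pr(B)
# Rearranged: Pr(B|A) = Pr(A|B) * Pr(B) / Pr(A)

def invert_conditional(p_a_given_b, p_a, p_b):
    """Given Pr(A|B) and both base rates, return Pr(B|A)."""
    return p_a_given_b * p_b / p_a

p_pot_given_hard = 0.9  # reported share of hard-drug users who used pot
p_pot = 0.5             # base rate of pot use among Americans

# Pr(hard) is estimated at a factor of 2.5-5 below Pr(pot):
for p_hard in (p_pot / 5, p_pot / 2.5):
    print(round(invert_conditional(p_pot_given_hard, p_pot, p_hard), 2))
# prints 0.18, then 0.36 -- i.e. Pr(hard|pot) is between about 0.2 and 0.4
```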
Another example: recently I found that the annual risk of dying by suicide for a young to middle-aged male is about 0.02%, as high as the annual risk of a middle-aged male dying in a car accident (!). I figured that taking into account my not having a history of mental illness should decrease this risk. Googling revealed that Pr(mental illness|suicide) = 0.9, and Pr(mental illness) is between 0.06 and 0.25 depending on the severity criteria you use. I want x = Pr(suicide|not mental illness), so I set up the ratios (assuming that the population proportions can be interpreted as annual risk):
Comment author:DanArmak
24 June 2013 10:38:03PM
1 point
[-]
A meetup is coming up on July 4th in Tel Aviv. I want to post about it, but I've never done a meetup post before. Are there any non-obvious guidelines I should follow? A template? What about the map?
Comment author:PhilipL
25 June 2013 01:36:33PM
3 points
[-]
Click the 'Add Meetup' button in your user-box thing at the top of the sidebar, and fill in the fields with appropriate info. The map is automatically inserted based on the address you entered and Google Maps.
Comment author:cousin_it
21 June 2013 02:27:54PM
*
1 point
[-]
Is there a general way to answer questions like this, which often occur in economics and the social sciences:
"Does institution X play a part in keeping parameter Y stable? It looks like parameter Y has been really stable for awhile now. Is institution X doing a good job, or is it completely useless?"
Well, to go ahead and state the incredibly obvious: in cases where institution X is not equally well-established globally, one thing to look at is variations in X among different nations, geographic regions, populations, etc. (depending on the kind of thing X is). If Y remains equally stable across the board while X varies, that's evidence that X doesn't have much to do with Y.
Comment author:cousin_it
21 June 2013 03:40:14PM
*
3 points
[-]
Part of the problem is that changes in X might be aimed at keeping Y stable when some other factor Z varies. See Milton Friedman's Thermostat:
If the driver is doing his job right, and correctly adjusting the gas pedal to the hills, you should find zero correlation between gas pedal and speed, and zero correlation between hills and speed.
For example, it's hard to answer the question "do armies stop invasions?" by using correlations, because the rulers can adjust the strength of the army in response to the risk of getting invaded, so the resulting risk depends mostly on the risk tolerance of the rulers.
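Friedman's thermostat is easy to demonstrate with a toy simulation (all numbers below are made up for illustration): when the driver compensates perfectly, the pedal carries all the information about the hills, yet both correlations with speed vanish.

```python
# Toy version of Milton Friedman's thermostat: hills vary, the driver
# adjusts the gas pedal to cancel them exactly, so speed never moves and
# correlates with nothing -- even though the pedal is doing all the work.
import random

random.seed(0)
hills = [random.uniform(-1, 1) for _ in range(1000)]  # slope/headwind
pedal = [-h for h in hills]                           # perfect compensation
speed = [60 + h + p for h, p in zip(hills, pedal)]    # constant 60

def corr(xs, ys):
    """Pearson correlation; defined as 0 when either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

print(corr(pedal, speed), corr(hills, speed))  # both 0.0: speed is constant
print(corr(hills, pedal))                      # ~ -1.0: pedal tracks hills
```

Looking at correlations alone, you would conclude the pedal does nothing; only the pedal-vs-hills relationship reveals the control loop.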
Comment author:[deleted]
17 June 2013 03:01:26PM
-3 points
[-]
All right, since I was told on my very first, and abortive, discussion thread that I should post a larger summary or excerpt of the link I had on there if I wanted to comport with LW's norms, let me do that here instead (since my karma is now too low to make another discussion post).
So I've written a long article summarizing a life philosophy which asserts the significance of a certain kind of meditative self-expression for grasping human freedom and understanding the significance of pain and suffering in human life.
Any LessWrong readers interested in thinking about the meaning of life, meditation, psychology, philosophy, spirituality, art, or in better understanding and handling their own minds should be interested.
Here is the largest excerpt I could post without the comment being rejected as too long:
The next time you stub your toe or otherwise hurt yourself, take a moment to become curious about exactly what the pain is like. What exactly does it feel like? Is it stabbing? Does it radiate? Is it blunt or sharp? Does it come and go? Is it cold or hot? Does it remind you of someone, or something, or some place?
As soon as you suspend the pain in your mind, the pain immediately changes. It becomes interesting. Like Keanu Reeves might stop a bullet in the air in The Matrix, you stun the pain by paying it conscious attention and then examining it like a scientist or artist might. It becomes fascinating. And then, as you describe it, its character changes more and more. It becomes sharp, specific, and beautiful. It might still be pain, but still, even as pain, it is no longer painful in the same way. Now it is a jewel. You see within it organization, ideas, intelligence.
Through the process of reflection and then expression, we can transform pain into beauty. This is true not just of physical pain, but of all pain, and indeed, of any experience. This is the essence of human freedom and power.
The most interesting and fundamental question in the world is what we’re doing here in this life. What’s the point? I spent years thinking about this question — going through psychology and western and eastern philosophy, and asking this question over and over.
I think I have an answer, at least a certain kind of partial answer. It’s certainly not totally original. Yet it is not often seen, not often heard.
My problem is how to explain it in words. I have tried many formulations on paper and in my head and none of them seem quite right. So I’ve decided to share several of them with you here, and hope you get the point. I’m trying to indicate a sensibility about the world — a way of relating to it, of seeing it, of dealing with it. What I’m trying to say cannot be wholly communicated in words (though can anything?). I need you to get the feel of it, to have the shift in perspective without which none of this will really make sense.
There’s a zen story about the monk who points to the moon. And the disciple keeps looking at his finger. ‘No, no, up there!’ the monk tries to say, but the disciple cannot understand the concept of pointing. That’s the kind of barrier I feel I’m up against.
Let me give you another example: to someone new to wine, wine tastes like wine. Maybe red wine tastes like red wine, and white wine tastes like white wine. To someone who drinks a little more, and thinks about what they drink, perhaps they start to identify sour and bitter, dryness and acidity. But to the wine connoisseur and critic, the vocabulary and the experience expand. They start to be able to detect and name notes of musk, florality, and minerality. They distinguish the taste of the wine at the front of the palate from the taste at the back. They comprehend the history and the heritage of the wine, its lineage in the soil, the effect of the sun and the rain on the grapes that made it. They taste and appreciate the various nuances of fermentation.
For the connoisseur, the wine unfolds into a much more complex, in-depth experience. It happens not just because the person drinks a lot of wine, but because they pay attention, and because they analyze the wine, and come up with labels, and break down and express their experiences.
The same way the experience of the wine reveals layer after layer with increased attention and thought, so too can the same general idea apply to life. Any particular experience you’ve had without thinking about it you’ve barely even lived. It passed by and vanished, and you missed a lot in it, a lot like a rookie misses almost everything interesting in the wine she’s tasting. If you take an experience of yours, pay it attention, and express what it is like, you will find that the experience starts to refine itself. It becomes complex, multi-layered, rich, fascinating, interesting, beautiful. It ceases to be one big blob and starts to become a multitude.
This revelation of layers of intelligence, of pattern but also of chaos—interesting chaos—is the reward for this expression.
Expression is the key.
Mere observation is not enough. Simply remembering an experience is not enough. You just remember the same pale, shallow memory you had before. But if you remember an experience and then 1) think deeply about it, 2) try to honestly and originally express exactly what it was like for you, and 3) put this expression in some form (music, poetry, film, or even just a few sentences in your journal) then 4) that will allow you to see the experience in a new light. It will force you to choose the important aspects of the experience. Those aspects of the experience will come into focus. Like a near-sighted man putting on glasses for the first time, the experience will become dramatically sharper.
Of course, expression inevitably distorts. Even a good map is partly wrong. It is still illuminating. A map has to distort and simplify to be useful. Similarly, every expression breaks down experience in a way that is partly wrong. One kind of expression will highlight certain facets of an experience; another expression will highlight other facets. Experiences can be expressed in an endless variety of ways.
This sensibility I’m trying to communicate results in the appreciation of “subtlety.” To a casual listener of music, when someone plays a key on a piano, they hear it as a single note. To a musician or a music critic or an audiophile, though, the note has at least three parts. The first is the attack, when the key is struck and a tiny hammer literally pings against a tight string inside the piano. The second is a middle portion of the note. The third is the decay, as the note fades. Each of these is different. And in fact even within each of these parts the note changes. Expression is like an instrument that allows you to see the worlds within every world.
Observing and expressing any experience streams down beautiful ideas that allow you to see it in a new way. The experience discloses connections to other experiences, patterns within it, intelligence.
To appreciate the finer and finer details of these changes — to see distinctions and discern refinement where once there was only sameness — is the spirit of subtlety. It is to see not just a thing but the presence of the space surrounding that thing. It is the spirit of the Japanese tea ceremony.
It is the spirit of not trying to overwhelm with a simple rush of pleasure, but to see deeper and deeper and quieter and quieter parts of something. It is why John Cage created an entirely silent piece — he wanted to make a statement about this spirit of subtlety, that looks for the shyest and most reluctant details. It is the spirit of, when you’re hungry, not just gobbling up food, but making food that tastes good, and then, looks good. Taking your time to do that prolongs the hunger, but then allows you to explore that hunger in a more and more elegant, artistic way.
The Magic Equation: Desire = Pain
And if you want to see these subtleties, desire is crucial. You do not fully control your mind any more than you fully control the weather. You are at all times in a mental landscape, and the most important feature of this landscape is desire. We always want something — or perhaps to avoid something — and this focuses and defines our attention. We can use this desire as the starting point of our attention and expression.
Desire, which unfulfilled is the same thing as pain, is what allows you to appreciate anything. So the connoisseur realizes that desire is a precious thing. It should not be used up too early. It is what allows you to be interested in something. As soon as you’ve had an orgasm, interest in sex decreases. It is the desire for sex, the ache, the hunger, that can motivate you to explore subtler and subtler realms of sexuality: to be interested in those realms. And that is why celibacy shows up so often in the world’s mystical traditions. It provides the motivation to seek sexuality not in physical bodies, but in knowledge and contemplation. The subtlest sexual objects are ideas.
The artistic mindset I am trying to communicate sees emotional or physical pain — unfulfilled desire — as a precious, specific energy that we can capture like a firefly in a jar, to follow its spirals and whirls. We can use it by investigating the desire itself. The desire is an experience. We can attend to it, note its intricacies.
Comment author:[deleted]
17 June 2013 04:50:54PM
*
4 points
[-]
Absolutely. To start with, I give a simple concrete suggestion in the first paragraph above about how to deal with physical pain.
Another concrete suggestion might be: any time you feel annoyed or angry, express in words exactly what the annoyance or anger is like, using metaphors, and going back and forth between your words and your experience to make sure you've captured the experience in as accurate and original -- or non-cliched -- a way as you possibly can.
A broader way to say the same thing might be: focus on those experiences that cause you emotional disturbance and express them, as accurately and as originally as possible, into an artistic medium of your choice (words, music, painting, whatever), using metaphors appropriate to that medium to convey what your experience is like.
If you do that, my contention is that you will find that your negative experiences bear within them a wealth of beauty.
There's more to it than that, but those are a couple of concrete suggestions.
Comment author:TimS
17 June 2013 04:57:56PM
*
7 points
[-]
Constructive suggestion: Write more like this, less like what you posted about.
Substantively, I think one could substitute any emotion or sensation and get the same advice. Thus:
Had good sex? Write a poem exploring the feelings you experienced - it will enhance the positive experience of the sex.
Which I expect is true. But pain is generally no fun, and it isn't clear that you think avoiding pain is worth the effort.
When I stub my toe, I'm not doing something wrong by first choosing to figure out why I stubbed my toe and what to change to avoid that in the future. And once I've done that, I'm not sure I have time to do what you suggested.
Comment author:[deleted]
18 June 2013 03:52:01PM
1 point
[-]
Beware signing any written agreement you haven't read or understood.
I have flouted this advice almost every time I installed software or signed up to a website over the last couple decades, and AFAICT I have never had much trouble as a result.
Comment author:[deleted]
17 June 2013 05:09:29PM
*
7 points
[-]
This may be overly harsh, but:
This essay is nonsense. There's an easy trick for analyzing writing like this: As you read, mentally remove all of the emotionally charged words and connotations and see if the argument still makes sense. When we get rid of all the flowery language here, we end up with (admittedly uncharitable) things like, "Humans can think about pain and other experiences and use these thoughts to create art that others find pleasurable" and "By paying close attention, you can gain more understanding of complex things (e.g. wine tasting)." None of this analysis even mentions the actual, causal reasons human beings suffer, or established theories about coping with suffering and creativity. As a result, I don't see anything particularly insightful or useful.
This is reminding me of the Enneagram. The idea is that people have basic habitual ways of relating to the universe-- all the standard ways (the Enneagram has nine of them) are useful but incomplete, and all of them can go bad or be refined into something very valuable.
Accurate perception is important, but so is action.
Comment author:Kawoomba
27 June 2013 11:15:13PM
0 points
[-]
Christopher Gardner, a nutrition scientist at Stanford University who was not involved in the new study, said that after decades of research but little success in fighting obesity, “it has been disappointing that the message being communicated to the American public has been boiled down to ‘eat less and exercise more.’”
"An underlying assumption of the ‘eat less’ portion of that message has been ‘a calorie is a calorie,’" he said. But the new research "sheds light on the strong plausibility that it isn’t just the amount of food we are eating, but also the type"”
(If you ever have trouble accessing a NYTimes article (there was a script which doesn't work anymore), having exceeded your monthly allotment, remember you can just google the title then follow a link from e.g. Google News, which won't count against your quota.)
There's been more recent work suggesting that planets are extremely common. Most recently, evidence for planets in unexpected orbits around red dwarfs have been found. See e.g. here. This is in addition to other work suggesting that even when restricted to sun-like stars, planets are not just common, but planets are frequently in the habitable zone. Source(pdf). It seems at this point that any aspect of the Great Filter that is from planet formation must be declared to be completely negligible. Is this analysis accurate?
Is there a wiki or website that keeps track of things related to the Great Filter?
I guess I'm looking for something that enumerates all the possible major filters, and keeps track of data and arguments pertaining to various aspects of these filters.
I'm not aware of any such thing. It would be nice to have. There was an earlier Boston meetup a few years ago where a few of us tried to brainstorm future filters but we didn't really get anything that wasn't already known (I think jimrandomh mentioned that there's been similar attempts at other meetups and the like). The set of proposed filters in the past though is large. I've seen almost every major step in the evolution of life being labeled as a filter, and there's sometimes reference class tennis issues with them, especially when connected to developments that aren't as obviously necessary for intelligent life.
I have a notion that the proportion of sociopaths is a filter as the tech level goes up-- spam is a problem, though more of a dead-weight loss than a disaster. If we get to the point of home build-a-virus kits, it might be a civilization-stopper. Was this on the list?
I second this question. Are we now completely certain of this rarity?
I am not an astrophysicist, so not an authoritative voice here, but yes, almost every star is likely to host a bunch of planets, some probably in a habitable zone. Even our close neighbors, the three Centauri stars and Vega, have planets around them, or at least asteroid belts hinting at planets. So at least a couple of terms in the Drake equation are very close to unity.
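To see why near-unity planet terms push the filter elsewhere, here is the Drake equation spelled out (every parameter value below is an illustrative guess, not a measurement):

```python
# Drake equation: N = R* * f_p * n_e * f_l * f_i * f_c * L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# With the planet-formation terms (f_p, n_e) set to ~1, as the planet
# surveys suggest, N is governed entirely by the biological and
# civilizational factors -- the remaining candidate filters:
n = drake(r_star=7, f_p=1.0, n_e=1.0, f_l=0.1, f_i=0.01, f_c=0.1, lifetime=1000)
print(round(n, 3))  # 0.7
```

Doubling either planet term can at most double N; the orders-of-magnitude uncertainty all lives in the later factors.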
I just found out that there exists an earlier term for semantic stopsigns: a thought-terminating cliché.
I was an intern at MIRI recently and I would like to start a new LW meetup in my city but as I am still new on LW, I do not have enough karma points. Could you please upvote this comment so that I can get enough karma to post about a meetup? lukeprog suggested I do this. I only need 2 points to post in the discussion part. Thanks to you all
Confirmed.
I just realized that willingness to update seems very cultish from outside. Literally.
I mean -- if someone joins a cult, what is the most obvious thing that happens to them? They update heavily, towards the group's teachings. This is how you can tell that something wrong is happening.
We try to update on reasonable evidence. For example, we would update on a scientific article more than on a random website. However, from outside it seems similar to willingness to update on your favorite (in-group) sources and unwillingness to update on other (out-group) sources, just like a Jehovah's Witness would update on the Watch Tower but remain skeptical towards Mormon literature. As if science itself were your cult... except that it's not really science as we know it, because most scientists behave outside the laboratory just like everyone else; and you are trying to do something else.
Okay, I guess this is nothing new for a LW reader. I just realized now, on the emotional level, how willingness to update, considered a virtue on LW, may look horrifying to an average person. And how willingness to update on trustworthy evidence more than on untrustworthy evidence, probably seems like hypocrisy, like a rationalization for preferring your in-group ideas to out-group ideas.
So does that make stubbornness a kind of epistemic self defence?
Almost surely, yes. If other people keep telling you crazy things, not updating is a smart choice. Not the smartest one, but it is a simple strategy that anyone can use, cheaply (because we can't always afford verification).
For what it's worth, the complaints I've heard about LW center around arrogance, not excessive compliance.
You could restate the arrogance as an expectation that others update when you say things.
Likewise, especially of people talking about fields they are not experts in.
On the other hand, once you're in one it's the not-updating that gives it away.
I worry that this is a case of finding a 'secret virtue' in one's vices: I think we're often tempted to pick some outstandingly bad feature of ourselves or an organization we belong to and explain it as the necessary consequence of a necessary and good feature.
My reason for thinking that this is going on here is that another explanation seems much more plausible. For one thing, you'd think the effect of seeing someone heavily update would depend on knowing them before and after. But how many people who think of LW this way think so because they knew someone before and after they became swayed by LW's ideas?
As with Nancy, I think that the PR problem LW has isn't the impression people have that LWers are converts of a certain kind. Rather, I think what negative impression there is results from an extremely fixable problem of presentation: some of the most prominent and popular ways of expressing core LW ideas come 1) in the form of 'litanies', or pseudo-Asian mysticism. These are good ideas being given a completely unnecessary and, for many, off-putting gilding. No one here takes the religious overtone seriously, but outsiders don't know that. Or 2) they come in the form of explicit expressions of contempt for outsiders, such as 'raising the sanity waterline', etc.
I admit that I honestly do consider many people insane; and I always did. Even the smarter ones seem paralyzed by some harmful memes. I mean, people argue about words that have no connection with reality, while in other parts of the world children are dying from hunger. Hoaxes of every kind circulate by e-mail, and it's hard to find someone I know personally who hasn't sent me them repeatedly (even after having it repeatedly explained that it was a hoax, and being given a pointer to some sites collecting hoaxes). Smart people start speaking in slogans when difficult problems need to be solved, and seem unable to understand what the problem with this kind of communication is. People do bullshit that obviously doesn't and can't work, and insist that you have to do it harder and spend more money, instead of just trying something else for a while and observing what happens. So much stupidity, so much waste. -- And the few people who know better, or at least are able to know better, are often afraid to admit it even to themselves, because the idea that we live in an insane society is scary. So even they don't resist the madness; at best they don't join it, but they pretend they don't see it. This is how I saw the world decades before I found LW.
And yes, it is bad PR. It is impolite towards the insane people, who may feel offended, and then try to punish us. But even worse, it is a bad strategy towards the sane people, who are not yet emotionally ready to admit that the rest of the world is not sane. Because it goes against our tribal instincts. We must agree with the tribe, whether it is right or wrong; especially when it is wrong. If you are able to resist this pressure, it's probably not caused by higher rationality, but by lower social skills.
So how exactly should we communicate the inconvenient truths? Because we are trying to communicate truthfully, aren't we? Should we post the information openly, and have bad PR? Should we have a secret forum for forbidden thoughts, and appear cultish, and risk that someone exposes the information? Should we communicate certain thoughts only in person, never online? Seems to me that "bad PR" is the least wrong option.
Is there a way to disagree with the majority, be open to new members, and not seem dangerous? Perhaps we could downplay our ambitions: stop talking about improving the world, and pretend that we are just some kind of Mensa, a few geeks solving their harmless Bayesian equations, unconnected with the real world. Or we could make a semi-secret discussion forum; it would be open to anyone after overcoming a trivial inconvenience, and it would not be indexed by Google. Then the best articles (judged by quality and PR impact) would be published on a public forum. Perhaps the articles should not all appear in the same place: everyone (including Eliezer) would have their own blog with their own articles, and LW would just contain links to them (like Digg). This would be an inconvenience for publishing; but we could provide some technical help for people who have problems starting their own website. Perhaps we should split LW into multiple websites, concerned with different topics: artificial intelligence, effective philanthropy, rationality, community forum, etc. -- All these ideas are about being less open, less direct. Which is dishonest per se, but perhaps this is what good PR means: lying in socially accepted ways; pretending what other people want you to pretend.
This could probably be a separate topic. And first we would have to decide what we want to achieve, and only then discuss how.
I don't think you do; I think you consider most people to be (in some sense rightly) wrong or ignorant. Just the fact that you hold people to some standard (which you must do, if you say that they fail) means you don't think of them as insane. If you've ever known someone with depression or bipolar disorder, you know that you can't tell them to snap out of it, or learn this or that, or just think it through. Even calling people insane, as an expression of contempt, is a way of holding them to a standard. But we don't hold actually insane people to standards, and we don't (unless we're jerks) hold them in contempt. You don't communicate the inconvenient truth to the insane. You don't disagree or agree with the insane. The wrong, the ignorant, the evil, yes. But not the insane.
No one here (and I mean no one) actually thinks the world is full of insane people. That's a bit of metaphor and hyperbole. If anyone seriously thought that, their behavior would be so radically strange (think 'I am Legend' or something), you'd probably find them locked up somewhere.
The claim that everyone else is insane doesn't sound dangerous, it sounds resentful. Dangerous is not a problem. I don't think we need to implement any of your ideas, because the issue is purely one of rhetoric. None of the ideas themselves are a problem, because there's no problem with saying everyone else is wrong so long as you have either 1) results, or 2) good, persuasive, arguments. And if all you've got is (2), tone matters, because you can only persuade people who listen to you. There's no reason at all to hide anything, or lie, or pretend or anything like that.
Speaking about typical individuals, ignorant is a good word, insane is not. As you say, it makes sense to try to explain things to an ignorant person, not to an insane person. Things can be explained to individuals with some degree of success. I agree with you on this.
The difference becomes less clear when dealing with groups of people, societies. Explaining things to a group of people is more often (as an anthropomorphism) like dealing with an insane person. Literally, the kind of person who hears you and understands your words, but then also hears "voices in their head" telling them it's bad to think that way, that they should keep doing the stupid stuff they were doing regardless of the problems it brought them, etc. Except that these "voices" are the other people. -- But this probably just proves that societies are not individuals.
Yeah, having results would be good. The Friendly AI would be the best, but until then, we need some other kind of results.
So, an interesting task would be to make a list of results of the LW community that would impress outsiders. Put that into a flyer, and we have a nice PR tool.
That's fair enough. I'd stay away from groups of people. Back in the day, they used to write without vowels, so that you could only really read something if you were either exceptionally literate or were being told what it said by a teacher. I say never communicate with more than a handful of people at once, but I suppose that's not possible a lot of the time.
Perhaps it would be less confusing to treat a society as if it were a single organism, of which the people within it are analogous to cells rather than agents with minds of their own. I'm not sure how far such an approach would get but it might be interesting.
CFAR might be able to demonstrate such after a few more years of their workshops. I'm not sure how they're measuring results, but I would be surprised if they were not doing so.
CFAR planned to do some statistics about how the minicamp attendees' lives have changed after a year, using a control group of people who applied to minicamps but were not admitted. Not perfect, but pretty good. And the year from the first minicamps is approximately now (for me it will be in one month). But the samples are very small.
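The "samples are very small" worry can be made concrete with a toy version of the planned attendee-vs-control comparison. All the numbers below are invented for illustration (CFAR's actual metrics and data are not public in this thread); the point is that with a handful of people per group, the uncertainty interval around the group difference is wide relative to any plausible effect.

```python
import math
import statistics

# Hypothetical year-on-year life-satisfaction changes (illustrative only).
attendees = [1.2, 0.8, 1.5, 0.3, 1.1, 0.9]   # admitted to minicamp
controls  = [0.4, 0.6, 0.2, 0.8, 0.5, 0.3]   # applied, not admitted

diff = statistics.mean(attendees) - statistics.mean(controls)

# Standard error of the difference in means (Welch-style, unpooled).
se = math.sqrt(statistics.variance(attendees) / len(attendees)
               + statistics.variance(controls) / len(controls))

# A rough 95% interval, diff +/- 2*se, spans almost +/- 0.4 around the
# estimate: with n = 6 per group, even a real effect is measured coarsely.
interval = (diff - 2 * se, diff + 2 * se)
```

Doubling the interval width back down requires roughly quadrupling the sample size, which is why small minicamp cohorts make the statistics weak.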
With regards to PR, I am not sure if this will work. I mean, even if the results are good, only the people who care about statistical results will be impressed by them. It's a circular problem: you need to already have some rationality to be able to be impressed by rational arguments. -- Because you may also say: yeah, those guys are trying so hard, and I will just pray or think positively and the same results will come to me, too. And if they don't, that just means I have to pray or think positively more. Or even: statistics doesn't prove anything, I feel it in my heart that rationality is cold and can't make anyone happy.
I think that people who don't care about statistics are still likely to be impressed by vivid stories, not that I have any numbers to prove this.
I agree. But optimizing for good storytelling is different from optimizing for good science. A good scientific result would be like: "minicamp attendees are 12% more efficient in their lives, plus or minus 3.5%". A good story would be "this awesome thing happened to a minicamp attendee" (ignoring the fact that an equivalent thing happened to a person in the control group).
Maybe the best would be to publish both, and let readers pick their favourite part.
I'm sure they'll be publishing both stories and statistics.
One more possibility: spin off instrumental rationality. Develop gradual introductions on how to think more clearly to improve your life.
Has anyone written a worthwhile utilitarian argument against transhumanism? I'm interested in criticism, but most of it is infested with metaphysical and metaethical claims I can't countenance.
What proposition are you looking for an argument against?
Transhumanism can mean a lot of things: the transcending of various heretofore human limits, conditions, or behaviors - which are many and different from one another.
And for those things, you might refer to the proposition that they are possible, or likely, or inevitable; (un)desirable or neutral; ethically (in)permissible or obligatory; and so on.
I'm looking for utilitarian arguments against the desirability of changing human nature by direct engineering. Basically, I'm wondering if there's any utilitarian case for the "it's fundamentally wrong to play God" position in bioethics. (I'm being vague in order to maximize my chance of encountering something.)
A while back, I made the argument that the ability to remove fundamental human limits will eventually lead to the loss of everything we value.
How long have you been this pessimistic about the erasure of human value?
Not sure. I've been pessimistic about the Singularity for several years, but the general argument for human value being doomed-with-a-very-high-probability only really clicked sometime late last year.
This seems to assume a Hansonesque competitive future, rather than an FAI singleton, is that right?
Pretty much.
Please be more specific and define "changes to human nature".
We already make many deliberate changes to people. We raise them in a culture, educate them, train them, fit them into jobs and social roles, make social norms and expectations into second nature for most people, make them strongly believe many things without evidence, indoctrinate them into cults and religions and causes, make them do almost anything we like.
We also make medical interventions that change human nature, part of which is to die of diseases easily treated today. We restore sight to the myopic and hearing to the hard of hearing, and lately even to the blind and deaf. We even transplant complex organs.
We have changed the experience of human life out of all recognition with the ancestral state, and we have grown used to it.
Where does the line between human and transhuman lie? We can talk about any specific proposed change, and some will be bad and some will be good. But any argument that says all changes are inherently bad might also say that all the changes that already occurred have been bad as well.
Thanks for bringing back the bright-colored edges for new comments.
The additional thing I'd like to see along those lines is bright color for "continue this thread" and "expand comments" if they include new comments. I'd also like to see it for "comment score below threshold", but I can understand if that isn't included for social engineering reasons.
Risks of vegetarianism and veganism
Personal account of physical and emotional problems encountered by the author which were reversed when he went back to eating animal products. Much discussion of vitamins and dietary fats, not to mention genetic variation. Leaves the possibility open that some people thrive on a vegetarian diet, and possibly on a vegan diet.
So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?
Mindfulness in Plain English is a good introduction to (one kind of) meditation practice.
It seems like most interested people end up practicing concentration or insight meditation by default (as indeed you will, if you read and follow the book). I would also recommend eventually looking into loving-kindness meditation. I've been trying it for a couple of weeks and I think it might be much more effective for someone who just wants a tool to improve quality of life (rather than wanting to be enlightened or something).
Loving-kindness meditation was one of the most easily accessible effective techniques for subverting intrusive anxiety I experimented with during my recovery. (There were more effective techniques, but I couldn't always do them reliably.)
Have you seen the previous LW posts on the subject?
Genes take charge and diets fall by the wayside.
You need a New York Times account to read it, but setting one up only takes a couple of minutes. Here are some excerpts in any case.
Obese people almost always regain weight after weight loss:
Thin people who are forced to gain weight find it easy to lose it again:
The body's metabolism changes with weight loss and weight gain:
Genes and weight:
And here is the kind of attitude that, in my eyes, justifies all the anger and backlash against fat-shaming. Oh damn, I feel like I understand the SJW people more and more every time I see crap like this.
http://staffanspersonalityblog.wordpress.com/2013/05/30/the-ugly-truth-about-obesity/
The "harsh truth" is that people suffering from obesity need to be protected from such vile treatment somehow, and that need is not recognized at the moment. Society shouldn't just let some entitled well-off jerks with a fetish for authoritarianism influence attitudes and policy that directly affect vulnerable groups.
...
Goddamn reactionaries everywhere.
I think you're right in general, but I don't think "protected from" is a good way to frame it, as though fat people are the passive recipients of attacks, and some stronger force has to come in to save them. (I'm not sure quite what you meant, or even if you were just angry about a bad situation and used the first phrase that came to mind.)
The world would be a much better place if the attacks stopped. I'm not sure what the best strategies are to get people to stop seeing fatness and thinness as moral issues. The long slow grind of bringing the subject up again and again, with whatever mix of facts and anger seems appropriate, seems to be finally getting some traction.
Absolutely. I just meant to say that there's a need for intersectionality and solidarity in such struggles, i.e. even people who aren't from marginalized groups that are directly targeted by shit-stains like Mr. Staffan here should still call such shit-stains out on their shit.
I found that quite hard to read. Even if poor impulse control were the sole cause of obesity, there would be no reason to attack the obese so nastily, instead of, for instance, suggesting ways that they might improve their impulse control. I find the way he relishes attacking them incredibly unpleasant.
In fact, the internet has quite a lot to say about improving impulse control.
I reckon there's special pleading going on with the obese. Way more anger & snottiness gets directed at them (at least on the parts of the Internet I see) than at, say, smokers, even though smoking is at least as bad in every relevant way I can think of.
(Here're some obvious examples. At an individual level, smoking is associated with shorter life at least as much as obesity. At a global level, smoking kills more and reduces DALYs far more than high BMI. Like obesity, smoking is associated with lower IQ & lower conscientiousness. And so on.)
Hint hint: it matters less to some people whether the group they are trying to subjugate is delineated by economic class, race, gender, sexuality or body issues... as long as they get to impose their hegemony and see the "deviants" suffer. It's scary to see such a desire to dominate, control and punish.
(Related: check out the pingback on that post.)
Perhaps we should dominate, control and punish those evil people who use the available Bayesian evidence when dealing with individuals.
I also predict that a lot of those evil people will be white, male, and wealthy, so we should focus on members of those groups.
It's not scary if the good people are doing it, right? And, of course, by "good" I mean members of our tribe.
Not nearly all such people are outright sadistic and power-hungry, but those who are can spin complex ideological rationalizations that push the "overton window" and allow the "good" bourgeois to be complicit with a cruel and unjust system.
See e.g. the "Reagan revolution" in America and the myth of the "welfare queen" that's a 3-for-1 package of racism, classism and sexism. I've read a bit about how it has been fuelling a "fuck you, got mine" attitude in poor people one step above the underclass; the system hasn't actually been kind to a white/male/lower-middle-class stratum, but it has given them someone to feel superior to. It's very similar to how the ideologues of the Confederacy explicitly advocated giving poor white men supreme rule over their household as a means of racial solidarity across the class divide.
False equivalence. Of course, any movement can degrade into an authoritarian-populist, four-legs-good-two-legs-bad version, given a vicious political atmosphere and polarized underlying worldviews, but... it happens to dominant/conservative ideologies, too! The dominant group just doesn't notice the resulting violence and victimization because from its privileged position it can afford an illusion of social peace.
If we agree that it's a danger of political processes in general rather than of specific movements, could we stop sneaking in implicit arguments that a particular ideology is safe from viciousness and indiscriminatory aggression?
See also: social dominance theory.
(More on SDT)
People of all kinds of political opinions are able to use myths to support their opinions. People of all kinds of political opinions can be power-hungry. People of all kinds of political opinions can declare other people evil and use hate against them for their own political advantage.
Can we agree on this, or can you tell me an example of a major political movement that does not do that? (Because you provided some specific examples, and I am too lazy to counter that with specific examples in the other direction, unless that really is necessary. I suppose we could just skip this part and agree that it is not necessary.)
???
Is your assumption that any effort to limit cruelty will necessarily be cruel? Or that SJs in particular are especially untrustworthy? Something else?
My assumption is that SJs are good at finding faults of everyone else, and completely blind to their own. (Which is actually my assumption for all political movements.) I don't consider SJs more untrustworthy than any other group of mindkilled people explaining why they are the good guys and their enemies are the bad guys.
Their amateur psychoanalysis lacks self-reflection. Those other people, they want to dominate, control and punish. That would obviously never happen to us! Now let me explain again why everyone who disagrees with us is evil and must be stopped...
Am I missing a connection between your post and coffespoons' that makes yours a response to his?
Moderately surprising corollary: so society IS treating fat people in a horribly unjust manner after all. Those boring SJW types who have been going on and on about "fat-shaming" and "thin privilege"... are yet again more morally correct on average than the general public.
Am now mildly ashamed of some previous thoughts and/or attitudes.
What are we to make of the supposedly increasing obesity rate across Western nations? Is this a failure of measurement (e.g. standards for what count as "obesity" are dropping), has the Western diet changed our genetics, or something else altogether?
If it was mainly genetics, then I would think that the obesity rate would remain constant throughout time.
Environmental changes over time may have shifted the entire distribution of people's weights upwards without affecting the distribution's variance. This would reconcile an environmentally-driven obesity rate increase with the NYT's report that 70% of the variance is genetic.
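That reconciliation can be checked directly: adding a constant environmental shift to every individual's weight moves the mean but leaves the variance (and therefore the fraction of variance attributable to genetics) untouched. A minimal sketch with made-up numbers:

```python
import random
import statistics

random.seed(0)

# Hypothetical baseline weights (kg); the spread here stands in for
# the genetically driven variation.
baseline = [random.gauss(70, 10) for _ in range(10_000)]

# A uniform environmental shift: everyone gains 15 kg.
shifted = [w + 15 for w in baseline]

mean_change = statistics.mean(shifted) - statistics.mean(baseline)
var_before = statistics.pvariance(baseline)
var_after = statistics.pvariance(shifted)
# The mean moves by exactly 15 kg, but the variance is unchanged, so a
# claim like "70% of the variance is genetic" survives a population-wide
# rise in obesity.
```

(A real environmental change would not be perfectly uniform, but as long as it is roughly additive and uncorrelated with genotype, the same logic holds approximately.)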
The obvious cross-comparison would be to look at population distributions of weight and see if they share the same pattern, shifted left or right based on the primary food source.
Hypothesis possibly reconciling link between impulse control and weight, strong heritability of both, resistance to experimental intervention, and society scale shifts in weight:
Body weight is largely determined by the 'set point' to which the body's metabolism returns, hence resistance to intervention. This set point can be influenced through lifestyle, hence link to impulse control and changes across time/cultures. However this influence can only be exerted either a) during development and/or b) over longer time scales than are generally used in experiments.
This should be easy enough to test. Are there any relevant data on e.g. people raised in non-obesity-ridden cultures and then introduced to one? Or on interventions with obese adolescents?
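The set-point hypothesis above can be sketched as a toy dynamical model: weight relaxes exponentially toward a set point, so a short diet is undone while the set point itself (per the hypothesis) only moves over long timescales. All parameter values here are invented for illustration.

```python
def relax_toward_set_point(weight, set_point, rate=0.05, weeks=104):
    """Each week, weight closes a fixed fraction of the gap to the set point."""
    for _ in range(weeks):
        weight += rate * (set_point - weight)
    return weight

# A dieter who drops from 85 kg to 70 kg, with an unchanged 85 kg set
# point, drifts back to within a fraction of a kilogram over two years,
# which is what "resistance to experimental intervention" would look like.
final = relax_toward_set_point(weight=70.0, set_point=85.0)
```

Under this model, an experiment shorter than the relaxation timescale measures the regression to the set point, not the intervention's effect on the set point itself.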
I dunno, ask the OP. I was merely pointing out that in the event that obesity has a more or less significant hereditary/genetic component, the social stigma against it must be an even more horrible and cruel thing than most enlightened people would admit today.
(Consider, for example, just the fact that our attractiveness criteria appear to be almost entirely a "social construct" - otherwise it'd be hard to explain the enormous variance; AFAIK the only human universal is a preference for facial symmetry in either gender. If society could, within a generation or two, make certain traits that people are stuck with regardless of their will, and cannot really affect, fall within the norms of "beauty"... then all the "social justice"/"body positivity"/etc campaigns to do so might have big potential leverage on many people's mental health and happiness. So it must in fact be reasonable and ethical of activists to "police" everyday language for fat-shaming/body-negativity, devote resources and effort to press for better representation in media, etc.
Yet again I'm struck by just how rational - in intention and planning, at least - some odd-seeming "activist" stuff comes across as on close examination.)
A possible hypothesis is that the genes encode your set point weight given optimal nutrition, but if you don't get adequate nutrition during childhood you don't attain it. IIRC something similar is believed to apply to intelligence and height and explain the Flynn effect and the fact that young generations are taller than older ones.
Flynn effect?
Sure. Fixed. Thanks.
I've moved away slightly from SJW attitudes on various matters, since starting to read LW, Yvain's blog and various other things, however, I've actually moved closer to SJW attitudes to weight, since researching the issue. The fact that weight loss attempts hardly ever work in the long run, is what has changed my views the most.
[OT: just noting that one could be "away from SJW attitudes" in different directions, some of them mutually exclusive. For example, on some particular things (racial discrimination, etc) I take the Marxist view that activism can't help the roots of the problem which are endemic to the functioning of capitalism - except that I don't believe it's possible or sane to try and replace global capitalism with something better anytime soon, either... so there might be no hope of reaching "endgame" for some struggles until post-scarcity. Although activists should probably at least try and defend the progress made on them to-date from being optimized away by hostile political agendas.]
Actually, I still suspect that the benefits in increased happiness and mental health would be better than the marginal efficiency of pressuring lots of people to try to lose weight, even if weight depended in large part on personal behaviour. And social pressure is notoriously indiscriminate, so any undesirable messages would still hit people who can't change or don't really need to.
Plus there are still all the socioeconomic factors outside people's control, etc.
Whether or not this result is correct, society is definitely shaming the wrong people: some perfectly healthy people (e.g. young women) are shamed for not being as skinny as the models on TV, and not much is being done to prevent morbid obesity in certain people (esp. middle-aged and older) who don't even try to lose weight.
(Edited to replace “adult men” with “middle-aged and older” and “eat less” with “lose weight”.)
Yeah, and so it looks more and more that (as terribly impolite it might be to suggest in some circles on the Internet) we need much higher standards of "political correctness" and a way stronger "call-out culture" in some areas.
Most activists are neither saints nor superhumanly rational, of course - but at least in certain matters the general public might need to get out of their way and comply with "cultural engineering" projects, where those genuinely appear to be vital low-hanging fruit obscured by public denial and conformism.
A social justice style which includes recruiting imperfect allies rather than attacking them.
I'm pretty sure that call out culture needs some work. It's sort of feasible when there's agreement about what's privileged and what isn't, but I'd respect it more if there were peace between transgendered people and feminists.
From a place of general agreement with you, looking for thoughts on how to go forward:
Are second-wave feminists more transphobic than a random member of the population? Or do you think second-wave hypocrisy is evidence that the whole second-wave argument is flawed?
Because as skeptical as I often am of third-wave as actually practiced, they are particularly good (compared to society as a whole) on transgendered folks, right?
I don't think the problem is especially about transphobia, I think it's about a harsh style of enforcing whatever changes people from that subculture want to make. They want to believe-- and try to enforce-- that the harshness shouldn't matter, but it does.
This may offer some clues about a way forward.
IME "call out culture" feminists are very anti-transphobia. Second wave feminists aren't so interested in getting people to check their privilege.
http://aeon.co/magazine/health/david-berreby-obesity-era/
The study referenced appears to be from here: http://rspb.royalsocietypublishing.org/content/278/1712/1626.short
Here is one theory on an environmental cause of obesity: https://en.wikipedia.org/wiki/Obesogen
Here is a study that suggests jet fuel causes obesity, and that the effect is epigenetic: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3587983/
Another interesting link I'd like to save here: http://slatestarcodex.com/2015/08/04/contra-hallquist-on-scientific-rationality/
Looking for any relevant research or articles on the causes of obesity, or effectiveness of interventions.
On the other hand, here's a study that shows a very strong link between impulse control and weight. I'm not really sure what to believe anymore.
The impulse control they use is a facet of Conscientiousness; and we already know Conscientiousness is highly heritable...
Yes, but it is still potentially useful to know how much of the heritability is metabolically vs. behaviorally manifested.
Also more generally, we should be careful about mixing different levels of causation.
Unless I'm missing something, they don't describe the size of the effects of personality that they found, just the strength of the correlations.
I'm not too clear on how to interpret hierarchical model coefficients, but they do give at least one description of effect size, on pg6:
and pg8:
Adipocyte count is essential to maintaining weight.
It is unclear to what extent weight is genetic rather than environmentally set at a later stage in development.
I am unable to find whether fat cell count can be changed over this 8 year time scale, though my biochemistry professor was inclined to that hypothesis.
Heredity and weight:
The long-term weight loss cited in this review used a 1-2 year followup, during which time only <16% of adipocytes could have turned over.
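That "<16%" bound is consistent with simple compounding from the published adipocyte turnover rate. The rate below (~8.4% per year) is a commonly cited estimate and is treated here as an assumption, not a figure taken from the review itself:

```python
# Assumed annual adipocyte turnover rate (~8.4%/yr is a commonly cited
# estimate from adipocyte turnover studies).
annual_turnover = 0.084
followup_years = 2

# Fraction of fat cells replaced over the followup, compounding yearly.
replaced = 1 - (1 - annual_turnover) ** followup_years
# replaced comes out around 0.16, matching the "<16% of adipocytes" bound
# for a 1-2 year followup.
```

So any weight-loss mechanism that works by changing adipocyte count would be nearly invisible at these followup lengths.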
More grist for the hypothetical Journal of Negative Results
Scientist wants to publish replication failure. Nature won't accept the article (even as a letter). So scientist retracts previous letter written in support of the non-replicated study.
Hypothetical?
How do other people use their whiteboards?
After having my old 90 x 60 whiteboard stashed down the side of my bed since I moved in, nearly two years ago, I finally got around to mounting it a couple of weeks ago. I am amazed at how well it complements the various productivity infrastructure I've built up in the interim, to the point where I'm considering getting a second 120 x 90 whiteboard and mounting them next to each other to form an enormous FrankenBoard.
A couple of whiteboard practices I've taken to:
Repeated derivation of maths content I'm having trouble remembering. If there's a proof or process I'm having trouble getting to stick, I'll go through it on the board at spaced intervals. There seems to be a kinaesthetic aspect to using the whiteboard that I don't have with pen and paper, so even if my brain is struggling to remember what comes next, my fingers will probably have a good idea.
Unlike my other to-do list mechanisms, if I have a list item with a check box on the whiteboard, and I complete the item, I can immediately draw in a "stretch goal" check box on the same line. This turns into an enormous array of multicoloured check-boxes over time, which is both gratifying to look at and helpful when deciding what to work on next.
What are the advantages over pencil-and-paper? I can think of a couple, but would like to hear what a more frequent user says.
Firstly, a hypothesis: I am highly visual and like working with my hands. This may contribute considerably to any unusual benefit I get out of whiteboards.
So, advantages:
A whiteboard is mounted on the wall, and visible all of the time. I'm going to be reminded of what's written on it more frequently than if it's on a piece of paper or in a notebook. This is advantageous both for reminder/to-do items and for material I'm trying to learn or think about.
Instant erasure of errors. Smoosh and it's gone. I find pencil erasers cumbersome and slow, and generally dislike pencil as a writing medium, so on paper my corrected errors become a mess of scribbled obliteration.
Being able to work with it like an artistic medium. If I'm working with graphs (either in the sense of plotted functions or the edge-and-node variety), I can edit it on the fly without having to resort to messy scribbles or obliterating it and starting again.
Not accumulating large piles of paper workings of varying (mostly very low) importance. I already have an unavoidably large amount of paper in my life, and reducing the overhead of processing it all is valuable.
The running themes here seem to be "I generate a lot of noisy clutter when I work, both physically and abstractly, and a whiteboard means I generate less".
The physically larger my to-do list is, the more satisfying it feels to cross something off it. Erasing also works much better on whiteboard than with pencil and paper.
Aid in demonstrating things to others, social aesthetic value as a decoration, and personal aesthetic value. Also, erasing is way faster.
Sounds like a good system! What's a "stretch goal", if you don't mind sharing?
It's a term made popular by Kickstarter. If you achieve your initial goal and have resource left over, your "stretch goal" is what you do with the extra.
I will sometimes write things on a chalkboard that I'm trying to understand. I only have access to chalkboards, but I think that I would prefer them regardless -- the chalk feels more substantial.
I no longer use whiteboards if I can help it; while I trained back my fine motor control after my stroke sufficiently well to perform most other related activities, writing legibly on a vertical surface in front of me is not something I specifically trained back and doesn't seem to have "come for free" with other trained skills.
When I used them, I mostly used them for collaborative thinking about (a) to-do lists and (b) system engineering (e.g., what nodes in the system perform/receive what actions; how do those actions combine to form end-to-end flows).
I far prefer other tools, including pencil and paper, for non-collaborative tasks along those lines. And these days I mostly work on geographically distributed teams, so even collaborative work generally requires other tools anyway.
While researching a forthcoming MIRI blog post, I came across the University of York's Safety Critical Mailing List, which hosted an interesting discussion on the use of AI in safety-critical applications in 2000. The first post in the thread, from Ken Firth, reads:
I encountered this thread via an also-interesting technical report, Harper (2000).
[I made a request for job finding suggestions. I didn't really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]
For job searching, focus less on sending out applications and more on asking [professors | friends | friends of friends | mentors | parents | parents' friends] if they know of anyone who's hiring for [relevant field]. When they say no, ask if they know anyone else you should talk to. To generalize from one example, every job I've ever worked has come from some sort of connection. I found my current position through my mom's dance instructor's husband.
For figuring out what to do with your long-term future, there's not much I can say without knowing your goals, but might or might not be relevant. If so, they're willing to advise you one-on-one.
I would like to get better at telling stories in conversations. Usually when I tell a story, it's very fact-based and I can tell that it's pretty boring, even if it wasn't for me. Are there any tips/tricks/heuristics I can implement that can transform a plain fact-based story into something more exciting?
It's okay to lie a little bit. If you're telling the story primarily to entertain, people won't mind if you rearrange the order of events or leave out the boring bits.
Open with a hook. My style is to open with a deadpan delivery of the "punchline" without any context, e.g. "Quit my job today." This cultivates curiosity.
Keep the end in mind. I find that this avoids wandering. It helps if you've anchored the story by "spoiling" the punch line. We all have that friend who tells rambling stories that don't seem to have a point. That said -
Don't bogart the conversation. If you're interrupted, indulge the interruption, and bring the conversation back to your story if you can do so gracefully. It's easy to get fixated on your story, and to become irritated because everybody won't shut up. People detect this and it makes you look like an ass. Sometimes it works to get mock-irritated - "I was telling a story, dammit!" - if doing so feels right. Don't force it.
Don't get bogged down in quoting interactions verbatim. Nobody really cares what she said or what you said in what order.
Don't worry about getting all the details correct. (Your first and last points.)
I know a person whose storytelling is painful to listen to, because sooner or later they run into some irrelevant detail they can't remember precisely, and then spend literally minutes trying to get that irrelevant detail right, despite the audience screaming at them that the detail is irrelevant and the story is already too long, so they should quickly move to the point.
Perhaps this could be another piece of good advice: start with short stories. Progress to longer ones only when you are good at the short ones.
Watch stand-up comedy. There's lots of it on YouTube.
Just listening to and imitating the cadence of how professional comics speak is enough to boost one's funniness by 2.3 Hickses.
A good piece of advice lukeprog gave me is to structure your story around an emotional arc. E.g. a story about an awesome show you went to is also a story about what you felt before, during, and after the show. A story about the life-cycle of a psychoactive parasite is also a story about a conflict between the clever parasite and the tragic host; or a story about your feelings of fascination and horror when you first learned about the parasite.
Join a pen-and-paper RPG group; it's the old trick that if you want to get better at something, you should spend a lot of time doing it. Easy storytelling practice sessions every week.
I have started writing a Death Note fanfiction where the characters aren't all as dumb as a bag of bricks (or one could say a rationalist fic) and... I need betas. The first chapter is available on http://www.fanfiction.net/s/9380249/1/Rationalising-Death and the second is pretty much written, but the first is confirmedly "funky" in writing and since I'm not a native English speaker I'm not sure I can actually pinpoint what exactly is wrong with it. Also I'd love the extra help.
Anyone interested? My email for contact is pedromvilar@gmail.com (and I also have a tumblr account, http://scientiststhesis.tumblr.com )
Initial observations: you are cribbing too heavily from MoR, your Light is too much like Harry, the focus on utility seems silly, and jumping straight to crypto reasoning for randomness is completely unmotivated by anything foregoing.
Definitely interested. I'll send you an email.
Since I'm used to hearing Dutch Book arguments as the primary way of defending expected utility maximization, I was intrigued to read this passage (from here):
The Wakker 2010 reference is to a book; searching it for "dutch book" gets me the footnote
And looking up assignment 3.3.6 gets
Since I don't really have the time or energy to work my way through a textbook, I thought that I'd ask people who understood decision theory better: exactly what is the issue, and how serious of a problem is this for somebody using the Dutch Book argument to argue for EU maximization?
The von Neumann-Morgenstern theorem isn't a Dutch book argument, and the primary purpose of Dutch book arguments is to defend classical probability, not expected utility maximization. von Neumann-Morgenstern also assumes classical probability. Jaynes uses Cox's theorem to defend classical probability rather than a Dutch book argument (he says something like using gambling to defend probability is uncouth).
I don't really understand what issue the first reference you cite claims exists. It doesn't seem to be what the second reference you cite is claiming.
Wouldn't this trivially go away by redenominating outcomes in utilons instead of dollars with diminishing marginal returns?
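The classic Dutch book argument mentioned above can be sketched numerically. A toy example (my own illustration, not from the thread): an agent whose credences in "A" and "not A" sum to more than 1 will happily buy both bets at prices it considers fair, and lose money no matter what happens.

```python
# Minimal Dutch book sketch: incoherent credences imply a guaranteed loss.
def bet_payoff(stake_price, wins):
    """Buy a $1 ticket for `stake_price`; it pays $1 if the event occurs."""
    return (1.0 if wins else 0.0) - stake_price

p_A, p_notA = 0.6, 0.6  # incoherent: credences sum to 1.2

payoffs = []
for A_occurs in (True, False):
    # The agent buys a ticket on A and a ticket on not-A, each at its own "fair" price.
    total = bet_payoff(p_A, A_occurs) + bet_payoff(p_notA, not A_occurs)
    payoffs.append(total)
    print(f"A occurs: {A_occurs}, agent's net payoff: {total:+.2f}")
# The agent loses $0.20 in both cases -- a guaranteed loss, i.e. a Dutch book.
```

Note this defends the probability axioms (credences must sum to 1), not expected utility maximization, which matches the point made above about what Dutch book arguments actually establish.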
All students including liberal arts students at Singapore's new Yale-NUS College will take a new course in Quantitative Reasoning which John Baez had a hand in designing.
Baez writes that it will cover topics like this:
John Baez, Quantitative Reasoning at Yale-NUS College
Anybody with tips for beginning an evaluation for the purpose of choosing between future career and academic choices? As far as I can tell, my values are as commonly held as the next fellow:
Felt Purpose - A frequent occurrence of situations that demonstrate, in unique ways, the positive effects of my past actions. I see this as being somewhere in the middle of a continuum, where on one end I'd have only rational reason to believe my actions were doing good (effective charity), and on the other only the feeling as if I were doing good, but less rational evidence (environmental volunteerism?).
Utilitarian Benefit - I do need a fair bit of the left-hand-side of the above continuum. Even if no feeling might leave me depressed, too much feeling without enough rational evidence would leave me feeling hollow and wrong. I might also expect an increase in altruism through personal development that will push me further in the direction of effectiveness.
Academic Fun - The feeling of discovery, that my work is on not-commonly-trodden path, and the realized ability to make novel contribution.
Social Fun - Being surrounded by people with widely varying backgrounds, providing direct opportunity to partake in new social situations, a fun going beyond anthropological interest. Ability to make friends of an intelligent, kind, and uplifting sort.
Artistic Outlet - The feeling that long-held and heavily inspected aspects of my psyche are finding expression, probably combined with the feeling that this expression is understood by like minds, and that those minds are being helped by it.
Financial Freedom - Money for me is less about buying objects so much as the freedom to do new things. Like travelling and inviting as many people as I want. Also, income reflects my real economic output, which is valuable in itself, with the benefit that I can put the profit toward effective charities.
Of course, this listing of values is the beginning of my self-evaluation. What I'm less keen on is where to find a listing of my options. I am a 24 year old computer science graduate, currently in the video game industry as a pipeline and graphics engineer working on AAA titles (as opposed to independent games). I have saved up for myself about 80,000 USD, and increase this savings at roughly $40,000 per year at my current job. I would have only minor qualms about relocating (within the country). I view myself as having a high aptitude for learning but a very limited working set. I tend to solve hard problems very cleverly and thoroughly, but find it difficult to maintain work on multiple hard problems at the same time.
Current options I've considered:
Would love comments, although interestingly typing this out has itself been a great help.
The impression I get is that games programming is underpaid and overworked relative to other styles of programming, because games are fun and the resulting halo effect dramatically increases the labor supply for the games industry as a whole. You may be able to make more working on Excel than Halo, but that's a guess from the outside with only a bit of anecdotal backing. (This may not be true for your particular skillset; my impression is that the primary consumers of intense graphics are games and animation firms.)
This also would trade off felt purpose -- even if you have trouble convincing yourself games are worthwhile, you'll have a harder time convincing yourself Excel is worthwhile -- for income, which may not be the right move, and would depend on the actual numbers involved. (It might be that decreasing your felt purpose by 1 on a ten point scale is not worth an additional $10k a year, but is worth an additional $50k a year, to use arbitrary numbers as an example.)
My understanding is that grad school in computer science is only worthwhile if you want to be a professor (which I don't think will fit your criteria as well as working in industry) or you're looking for co-founders. Another thing to consider along similar lines is software mentorship programs for undergraduate students (here's one in Austin, I imagine there's probably one in Seattle)- it's a great way for you to meet people that might be cofounder material, and see how they work, as well as getting social fun (and possibly academic fun).
Aaron Winborn: Monday was my 46th birthday and likely my last. Anything awesome I should try after I die?
What's actually known about women's biological clocks?
Pretty much the usual if someone looks closely at a commonly held belief about a medical issue. The usual dramatic belief that fertility drops off sharply at 35 is based on French birth records from 1670 to 1830. Human fertility is very hard to study. Women who are trying to have their first child at age forty may be less fertile than average. And so on.
I have concluded that many of the problems in my life are the result of being insufficiently impulsive. As soon as I notice a desire to do something, I more or less reflexively convince myself that it is a bad idea to do just now. How can I go about increasing my impulsivity? I want to change this as a persistent character trait, so while ethanol works in the short run it is not the solution I am looking for.
This sounds more like low expectancy than lack of impulse. Impulse can make you jump off your feet to do something you want to do, but it can just as quickly distract you from doing what you want to do. Check this out. Perhaps what you might need is to increase optimism and not impulsiveness.
As for the desire of doing something, try to convince yourself that it doesn't need to be perfectly thought out before doing it. For example, if you are starting a business you could get bogged down trying to plan everything out perfectly and end up doing nothing. Instead, give yourself 24 hours to start a business. It's an unreasonable request, but you would be surprised at how far you can get.
Beeminder it. Seriously.
Zach Alexander just posted a reverse engineering of the Soylent recipe. It looks pretty legit, and reasonably easy to put together.
Confirmed. Tasty too. I got the supplies on the way home from work today, sans olive oil (which I had already) and potassium (which I ordered online). It's not the cheapest way in the world to eat -- it cost around $80 including the potassium. Most of the supplies will last 30 days, but some (oat flour, cocoa, and soy protein) will run out sooner. The potassium should last longer. A 30-day supply of everything would probably be around $100-$120.
Over the past several months, I have been practicing a new habit: whenever I have a 'good' idea, I write it down. ('Good' being used very loosely.)
This is a very simple procedure but it seems to have several benefits. First, I had begun noticing that I would remember having had a 'good' idea but not be able to remember what it was. I now notice this behavior much more strikingly, and it causes some small amount of distress thinking about what I might have forgotten. Writing ideas down relieves that worry. Second, I can refer back to them later. So far nothing significant has come out of this, but I like having the option, and I have gone back to note that some of my once-thought 'good' ideas don't hold up on second thought. This is useful information about yourself to have. Third, it encourages me to have more good ideas. For a while I tried to write down one a day (possibly using a long-term 'cached' idea I had been floating for a while if I didn't have anything good that day). Fourth, writing things down, even just for myself, helps me to really get a clear idea of them.
I'm sure there are many suggestions which are similar to this or encompass it. Obviously this is similar to having a journal and probably shares some of the benefits. This has the advantage of being extremely simple and takes hardly any time at all and is only done when it has an obvious benefit.
Getting Things Done suggests writing down everything you need to do as soon as you realize you need to do it, and this can include following up on good ideas.
To whoever fixed it so that we can see the parents of comments when looking at a user's comments, major props to you for being awesome.
Your props go to Lucas Sloan. Hail Lucas!
I'm a little torn on that one-- on one hand it adds convenience most of the time, but it makes it less convenient to check on recent karma. The latter is something I feel like doing now and then, but it's possible I'm saner if it isn't convenient.
It was never an ideal way to check on recent karma, though it was better than nothing. I'd quite like something similar to Stack Overflow's Reputation view.
Yeah, this has been requested before.
I am particularly aware of it right now because I've been watching my 30-days karma drop slowly and steadily for the last couple of days, but I have no idea what in particular people want less of.
That said, I suspect that's just because I'm getting individual downvotes across a wide set of comments in the 0+/-2 range, and changing the way that information is displayed won't really help me answer that question any better than the current system does.
30-day karma falling is probably old posts aging out. Unless your total karma is also falling?
I dislike the change, as it's harder to get an impression about a new user based on their user page now, the comments by other users are getting in the way, and it's not possible to tune them out. Also, the change has broken user RSS feeds.
Would it be a good solution to change the color of the parent comments to gray, so they would be easier to ignore?
I second Nancy and Vladimir in disliking it.
Maybe it could be made a user preference, the way it is for the Recent Comments page.
I would definitely like to see it as a user preference.
Also, how do people feel about the relatively subtle color change for new comments which has replaced the bright green edge?
I liked the green edge more.
I'm finding the new version usable-- the green edge might allow for faster scanning, but the new version isn't bad with a little practice.
On the other hand, if there are multiple new comments in a thread, I find that I miss the alternating white/pale blue way of distinguishing comments. It would be nice to have two pastels instead of one.
The visual difference between a new comment and an old comment should be greater than the difference between two old comments.
How about using two pastel colors for the old comments... and using the white background for the new comments?
It would also be nice to have e.g. a small green "NEW" text in the corner of new comments, so I can quickly find a few new comments in a long discussion by using the "Find Next" functionality of my browser. (Because I don't have a functionality to search a comment based on its background color.)
If you're going to do this at all, it should be a more-likely-unique string (eg "!new!" rather than "NEW").
I much prefer the new version. It's far easier to spot new comments.
http://www.interfluidity.com/v2/4435.html
Related to this, there are a couple of professional philosophers around that are starting to take conspiracy theories seriously. Not just in the manner of critically analysing them, but also in the sense of how to actually make inferences about the existence of a conspiracy, how to contrast official theories and conspiracy theories, and how to reason with disinformation present.
One of these individuals is Matthew Dentith, who did his PhD In defence of conspiracy theories on these topics (and is in the process of writing a full book on the matter). The other is David Coady.
I'm running an Ideological Turing Test at my blog, and I'm looking for players. This year's theme is sex and death, so the questions are about polyamory and euthanasia.
You can read the rules and sign up at the link, but, essentially, you answer the questions twice: once honestly, and the second time as you think an atheist or Christian (whichever you're not) plausibly would. Then we read through the true and faux atheist answers and try to spot the fakes and see what assumptions players and judges made.
Politicians have a lot of power in society. How much good could a politician well-acquainted with x-risk do? One way such a politician could do good is by helping direct funds to MIRI. However, this is something an individual with a lot of money (successful in Silicon Valley or on Wall Street) could do as well.
Should one who wants to make a large positive impact on society go into politics over more "conventional" methods of effective altruism (becoming rich somewhere else or working for a high-impact organization)?
If you think about it, this is quite a striking statement about the LW community's implicit beliefs.
I agree with something; I am just not sure enough whether I agree with you. Could you please make those implicit beliefs a bit more explicit?
The implicit assumption is that people don't go into politics because they want (like, really, effectively, goal-oriented, outcome-based want) to make large positive impacts on society. We read a statement with that assumption quite plainly baked into it, and it doesn't seem weird. The fact it doesn't seem weird does itself seem kind of weird.
Sam Nunn was a US senator who wanted to buy surplus nuclear weapons from Russia, rather than risk them wandering off. He was unable to convince the rest of the government to pay for it, but he was able to convince the government to let Buffett and Turner pay for it. He has since decided that he can do more to save the world outside of government.
Added: But, he was rumored to be Secretary of Defense under Gore. So he thought some positions of government were more useful than others.
How?
I wonder if a good way to do good as a politician would be to try to be effective and popular during your term, then work as a lobbyist afterwards and lobby for causes you support (like x-risk reduction, prediction market legalization, and whatnot).
I suspect my model of the method used to allocate government funding may be oversimplified/incorrect altogether, but I am under the impression that those serving on the House Science Committee have a significant say in where funds are allocated for scientific research. Given that some members of this committee do not believe in evolution and do not believe in man-made climate change, it seems that the potential social good of becoming a successful politician could be very high.
My impression is that the House Science Committee is too high to aim for. A more plausible scenario would be MIRI convincing someone at the NSF to give them grants.
http://qz.com/96054/english-is-no-longer-the-language-of-the-web/
Plenty for LW here-- not just that English is a steadily declining fraction of online material, but the difficulties of finding out what proportion of the web is in what language, and the process by which more and more of the web is in more and more languages.
I just found this nice quote on The Last Conformer which is supposed to prove that betting on major events is qualitatively different from betting on coinflips:
It seems to me that the problem exists for coinflips as well. If I flip a coin and don't show you the result, your beliefs about the coin are probably 50/50. But if I offer you to bet at 50/50 odds that the coin came up heads, you'll probably refuse, because I know which way the coin came up and you don't.
According to the Dutch book argument for rationality, we are supposed to accept either side of any bet offered at the odds corresponding to our beliefs. In my example, that idea breaks down, because getting the offer is evidence that you shouldn't take the bet. But then how do we formulate the Dutch book argument?
The selection effect you mention only applies to offering bets, not accepting them. If Alice announces her betting odds and then Bob decides which side of the bet to take, Alice might be doing something irrational there (if she didn't have a bid-ask spread), but we can still talk about dutch books from Bob's perspective. If you want to eliminate the effect whereby Bob updates on the existence of Alice's offer before making his decision, then replace Alice with an automated market maker (setup by someone who expects to lose money in exchange for outsourcing the probability estimate). Or assume some natural process with a naturally occurring payoff ratio that isn't determined by the payoff frequencies nor by anyone's state of knowledge.
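The automated market maker alluded to above is usually Hanson's logarithmic market scoring rule (LMSR). A minimal sketch of how it removes the selection effect -- the maker quotes prices mechanically from a cost function, so an offer carries no private information (function names and the liquidity parameter b are my own choices for illustration):

```python
import math

# Hanson's LMSR: cost function C(q) = b * log(sum_i exp(q_i / b)),
# where q_i is the number of outstanding shares of outcome i.
def lmsr_cost(quantities, b=100.0):
    """Total amount traders have paid in to reach this share vector."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i -- its implied probability."""
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

q = [0.0, 0.0]                 # "heads" and "tails", no trades yet
print(lmsr_price(q, 0))        # 0.5 -- symmetric prior

# Buying 30 "heads" shares costs the difference in the cost function
# and pushes the quoted probability of heads above 0.5.
cost = lmsr_cost([30.0, 0.0]) - lmsr_cost(q)
print(round(cost, 2))
print(round(lmsr_price([30.0, 0.0], 0), 3))
```

The maker's worst-case loss is bounded by b * log(number of outcomes), which is the sense in which its operator "expects to lose money in exchange for outsourcing the probability estimate."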
I am very interested in higher-order theory of mind (ToM) tests for adults, to differentiate those with a high theory-of-mind quotient, if you will. My hypothesis is that people with strong theory of mind are better at sales -- I have an interest in both. Most tests I find online are meant to test children and for Asperger's Syndrome; what I want are complex questions and problems.
I recently saw a highly upvoted comment on reddit that stated "...Mifune, destroying the top black belts..." and cited this video. However, I believe OP has misread the situation. Mifune is highly respected and is in a room full of spectators that fully expect him to come out on top when sparring, or at least would feel embarrassed for Mifune if he didn't. The pretense is that everyone is trying their hardest to throw Mifune but he is so good he just twirls around them, and that it is not a demonstration -- it's real sparring. OP, and those who upvoted, failed to put themselves accurately in the role of the students in that room and think "gee, I totally don't want to be that guy that throws the old man down." The students are conforming, whether they believe it or not.
Any 4-year-old can pass the false belief test, but the Mifune video is a lot more subtle and complex. There are intentions involved; there is also each student's knowledge of other students' intentions, conformity, and self-delusion. The huge man being thrown by Mifune might say that he really believed he was trying his hardest not to be thrown, but ToM isn't just about what others believe; it's also about accurately predicting other people's actions, why they did them, and what they think they believe they did them for.
I am trying to compile a list of such examples, and would greatly appreciate it if anyone could add to this conversation by agree/disagreeing with what I have said, and especially, provide some examples of more complex theory of mind problems.
Political thrillers as a genre and some aspects of real life politics are a lot about theory of mind. The multilevelled effect of thinking how someone will act based on how you or a third party will act, ad infinitum.
I'm not sure exactly what you mean by theory of mind though. It seems a different skill to model how a theoretical rational agent would behave in a certain situation (as we do when discussing the prisoners dilemma or related logic puzzles) or modeling how a particular human being will behave (e.g. Alice tends to underestimate herself, Bob is overly cautious).
Does anyone want to make a small study group to read one of these books at a relatively slow pace?
I've been meaning to read these (which I learned about from LW) for a long time and just now have the time.
Causality looks like the best option: the entire first edition is freely available on Pearl's site here. There is an overview of 2nd ed. chapters here.
I've been meaning to read Causality for a long time now: I'd be interested.
Showing Compassion To Designated Enemies; A Punishable Offense
There's this very interesting trope being forged on TVT, and I found it very interesting from a rationalist standpoint, especially the examples involving God... What an asshole.
Interesting link. Again from a rationalist standpoint it seems to be the correct move (at times, with the same conditions that apply to punishment in general as well as a few others).
As someone who's studied The Art Of War intensively, and to whom "defeat means friendship" (as long as the opposition does feel thoroughly defeated) is a matter of course, I find that incentivizing unforgiveness and wanton destruction (I mean, seriously, they even had to kill the cattle? How is that in any way practical or rational?) is not only aesthetically dissonant, but wasteful and silly as hell. Going out of one's way to ensure some don't get a proper ritual, or otherwise kicking the defeated while they are down, also strikes me as disgusting, wasteful, and, frankly, cartoonishly over the top.
They should make a piece of fiction with a villain whose actions mirror those of YHWH perfectly. Have him blow a fortress to pieces and then demand that everything that breathes within be slaughtered. Have him kill some of his followers for disobeying some arbitrary rule, then have him kill many more just because they complained about it.
With his mind.
That's not an omnibenevolent deity, that's a fucking Dungeon Keeper.
See how many people notice the references. See how many identify this overlord as a vile villain before being informed he's patterned after The God.
Do you remember the other parts too? The parts that don't feel so warm and fuzzy? Or other effective military strategies? Defeat rather seldom means friendship when it comes to pre-established enemies, whether in The Art of War or outside of it. The generals discussed in The Art of War commanded their soldiers to kill other soldiers (and their leaders) and conquer strategic resources. The Art of War gives instructions on how to do so more effectively, with more compliance from soldiers and, where possible, in such a way that the enemy does not fight back significantly.
Yes, God is silly as well as a dick (and non-existent). But looking at the strategy from the perspective of instrumental rationality rather than from the perspective of indignant atheism, the cattle-killing silliness is not especially relevant. Most people agree that it is better to keep the cattle than kill them. What is more interesting is just when it is instrumentally rational to apply force (be that political, economic or physical) against those that are assisting a particularly troublesome enemy.
The details matter a lot, of course. There are cases where it is obvious that it is instrumentally rational to kill those who are assisting the enemy, while there are cases where that would be outright self-destructive. On one side of the line there is the sole provider of particularly advanced weaponry to the enemy, who does not trade with you and who has no significant social alliances, and on the other side there is a welfare charity who provides medical assistance indiscriminately worldwide, is loved by all and protected by an alliance of powerful nations. In between, things are less simple.
I don't recall ever mentioning anything fuzzy or warm; it's simply a pragmatic matter of taking the human factor into account. You simply try to fight and destroy as little as possible because it's expensive, risky, and creates ill will in the long term. Napoleon and the armies of the Spanish Empire are excellent examples of how to win every battle, piss everyone off, and never win the peace.
Of course, if you do need to crush, kill, destroy, do it quickly and decisively, with no hesitation or pussyfooting. Therein lies the difference between being respectably compassionate, and being a sentimental fool begging to be abused.
The Art of War isn't just about winning battles or wars, it's about winning the peace that comes afterwards; it's not just about beating your foe, but about getting them to stay beaten, and, in fact, help you out.
And, in particular, Scorched Earth tactics are extremely costly, and they are only effective in very specific circumstances.
As for the Bible examples cited there, I do not see how they are practical in any way, shape, or form. I can see the point of some of the other examples, but most of them are about helping out or standing up for someone who has been reduced to complete harmlessness and can't be a threat anymore. This is most egregious when the defeated enemy in question is a freaking corpse.
God forbid I find myself defending the morality of the Hebrew Bible, but it seems to me you're making a claim here (i.e. by implication, that the behavior of the Israelites was impractical/silly/evil) from a very poor epistemic position. The details of warfare, religious ritual, and the politics of conquest of that period are thin, and it's not even clear that the story in Joshua (for example) represents an historical event (we have, and expect, no archeological record of this war), and so the practical purpose of the passage you cite may be entirely other than recommending or reporting a certain mode of warfare. Even if it is making such a report or recommendation, we simply lack the details to evaluate it.
Essentially, we have no idea whether or not what's being described is silly or cunning or even moral or immoral (unless maybe we're deontologists, but I doubt that). Speaking so confidently about something where confidence is so ill warranted is often a symptom of being mind-killed on a particular subject. And we should expect that as atheists, we are very, very likely to be mind-killed about biblical historiography (maybe right, as well, but mind-killed nonetheless).
Pointing out the horrors of the bible is a worthwhile way to put the morality of theists in tension with their holy book. But once that rhetorical work is done, the epistemically cautious consequentialist has absolutely no business throwing invective at the bible without a thorough study of the period being written about. Lord knows why you'd bother with that though.
" the behavior of the Israelites was impractical/silly/evil"
I made no such claim, unless you mean the fictional Hebrews in the book rather than the collection of tribes the Romans diaspora'd many, many centuries later. Even then, what would you expect of the poor guys, when they're being terrorized and cajoled into being evil by Kira-on-steroids?
However, from what we know of the behaviour of the Judeans under Roman rule, practicality and pragmatism weren't top priorities for them, and their Scripture probably wasn't a very sane source of advice on that matter.
More relevantly, some people still (claim to) take their ethics from the Bible, the Old Testament is very popular in the USA, and the rhetoric of an Angry God Striking Down Evil and Smiting The Heathen, and of Lambs not Going Astray from Flocks (what a horrifying metaphor, being compared to cattle, of the stupidest sort no less!), is still floating around, influencing the way politics is done, whispering in the subconscious and blaring in the loudspeakers.
That is, of course, not the only culture that was influenced by a book that not only advocates genocide but demands that, when it be done, it be carried out thoroughly. Consider the St. Bartholomew's Day Massacre, some of the actions of Cromwell, and the horrifying, nauseating, vertiginous irony of Old Testament memes having been a factor, no matter how small, in the intellectual genesis of the Final Solution.
I mean the Israelites! That's what these people, whose historicity is not in doubt, called themselves. They're the ones that wrote the bulk of the biblical material between around 900 and 587 BCE.
Maybe, though this strikes me as conjecture. I also don't see how it's related to claims you seem to be making about the authors of the bible and their people.
You know what, get back to me on the historicity of Hebrews after reading this. I'm not averse to shifting my priors on that topic; please refer me to a work that does not use the Bible as a starting point for its hypotheses, if that's at all possible.
Until I have a Bible-independent framework for thinking about the ethnic conglomerate that claims to be the Descent of Israel, I prefer to assume it is all fiction as a working hypothesis, and start from there.
This is also why I am reticent to call them Israelites, despite them calling themselves something like that, just the same way I wouldn't call Arabs Ishmaelites; I doubt that Israel/Jacob, Isaac, Ishmael or Abraham existed, and I doubt either group's direct descent from them. I certainly doubt that any human gained the title of Israel after wrestling with God and winning.
As for them calling themselves Israelites, allow me to be a little pedantic here; they called themselves B'nei Yisrael; Israelites is a Greek term.
I see the point that post is making, but I'm not just blowing air here. I have a degree in near-eastern history, and I studied with an archeologist who works on this period. None of us were theists, or remotely interested in defending or even discussing any modern religion. The historical books of the Hebrew bible are a relatively reliable historical record, so far as we can tell, but the fact is we just don't have that much detail about the period in which it was written, so mostly we just don't know. Too many of our sources are (as EY points out) singular. However the historicity of the first-temple (900-587 BCE) Israelites very roughly as we find them in the HB is not really subject to much doubt. There are people who argue that the whole bible was written much later, and the history of Israel was just made up, but this theory is taken about as seriously by archeologists and historians as is ID by biologists. Needless to say, pretty much everything from the Torah that's plausible (like the period of slavery) is pretty much unconfirmable. And no one takes seriously the implausible stuff, like Abraham or Noah.
I'm throwing authority at you here for two reasons. 1) the real argument consists in taking you through a bunch of archeology and historiography and I don't feel like taking the time and 2) neither do you. You don't, I suspect, actually care at all about first temple Israelite culture. You care about how modern Abrahamic religions are false and politically destructive. Granted! But that claim doesn't have anything to do with history, and thinking that arguments against modern theists constitute an understanding of an ancient culture is not justifiable.
My real point however was one of caution. You're exactly right to point out that by the standards of Christians or Jews or Muslims, the god of the Hebrew bible is savage. But you have no empirical standing to make claims about the morality or practicality of first temple Israelites, because pretty much no one does.
Yeah, who knows. But I call them Israelites because they called themselves that. I see no reason to make a point of it. And 'Israelites' may happen to have been a Greek term, but today it's just the way you translate that Hebrew phrase into English.
A nice feature on the Bitcoin-accepting pub in London.
I was thinking about writing down everything I know. After reflecting on that for a few seconds, I realized what a daunting task I have set out for myself. Has anyone tried this, or does anyone have a suggestion for how I should go about it, if at all?
See: How to Make a Complete Map of Every Thought you Think [pdf]
An excerpt from the introduction (tldr: beware of eating yourself):
I think you'll get more concrete suggestions if you explained what you hope to accomplish with this proposed task.
I've settled for (1) keeping a structured list of all books from which I've learned something worthwhile, and (2) a log for current ideas (with no more than a few short entries a week). This is sufficient to locate and efficiently relearn most barely-remembered ideas when they become relevant again.
Writing down everything you know seems pretty pointless. Writing down everything you fear forgetting might give you a smaller but more useful list, since it lets you cull out anything you're in no danger of forgetting (e.g. all the arithmetic facts) as well as anything you wouldn't care if you forgot (e.g. the vast majority of knowledge in your head).
I actually sort of do this, in a private git repository where (alongside lists of interesting typically-paragraph-sized quotes) I keep lists of interesting (typically-sentence-fragment-sized) topic names. Sometimes the names serve as mnemonics that merely remind me of an interesting fact I once encountered but haven't thought about recently (e.g. "Rai stones"). Sometimes I'll skim through the lists and encounter a topic that I've completely forgotten about (e.g. "burying the corpse effect") and I'll quickly Google to see why I once thought it was so interesting to begin with. A little organization helps. E.g. "burying the corpse effect" was in my "economicsbits" file under the hierarchy "Financial markets, investment", "Market manipulation", "Cornering the market", so it was easy to tailor web searches to lead me to results from economists rather than morticians.
I don't know if this is useful for anything more than entertainment. TimS had a very good question here.
I started something like this a while ago. I was trying to write papers for one of my classes and couldn't find a reference I needed. After about the third time this happened, I figured I ought to make some kind of searchable list of references with summaries about what they contain, and links to the file. I use a google document now, with summaries of books I've read and notes from my classes, in addition to references. What I really want is something like workflowy where I can collapse bulleted points. Workflowy would be fine, but I'd be worrying about going over their limit and having to pay for it, since I have a lot of bullet points. In the meantime, I use google docs' "table of contents" feature so I have that orderly list I want.
I don't put "everything" in it. My general rule is that it has to be either useful, something I'd likely forget, or something interesting. I also link to everything so I don't have to search my history.
In last year's survey, someone likened Less-Wrong rationalism to Ayn Rand's Objectivism. Rand once summed up her philosophy in the following series of soundbites: "Metaphysics, objective reality. Epistemology, reason. Ethics, self-interest. Politics, capitalism." What would the analogous summary of the LW philosophy be?
In the end, I found that the simplest way to sum it up was to cite particular thinkers: "Metaphysics, Everett. Epistemology, Bayes. Ethics, Bentham. Politics, Vinge."
A few comments:
For metaphysics... I considered "multiverse" since it suggests not only Everett, but also the broader scheme of Tegmark, which is clearly popular here. Also, "naturalism", but that is less informative.
For epistemology... maybe "Jaynes" is an alternative. Or "overcoming cognitive bias". Or just "science"; except that the Sequences contain a post saying that Bayes trumps Science.
For ethics... "Bentham" is an anodyne choice. I was trying to suggest utilitarianism. If there was a single well-known thinker who exemplifies "effective altruism", I would have gone with them instead... Originally I said "CEV" here; but CEV is really just a guess at what the ethics of a friendly AI should be.
For politics... Originally, I had "FAI" as the answer here. That may seem odd - friendly AI is not presented as a political doctrine or opinion - but the paradigm is that AGI will determine the future of the world, and FAI is the answer to the challenge of AGI. These are political concerns, even if the ideal for FAI theory would be to arrive at conclusions about what ought to happen, that become as uncontroversial and "nonpartisan" as the equations of gravity. I chose Vernor Vinge as the iconic name to use here; I suppose one could use Kurzweil. Alternatively, one could argue that LW's cardinal metapolitical framework is "existential risk", rather than FAI vs UFAI.
I wonder whether more people will think of Julian Jaynes rather than E. T. Jaynes if you just rattle "Everett, Jaynes, Bentham, Vinge" at them. This does seem like a very nice ultra-concise description though.
Odd brain exercise I find entertaining: Using only knowledge about this universe, try to determine what kind of universe would be most likely to simulate our kind of universe.
For example, to my eye, general relativity, plus the propensity of pi to pop up in odd places, implies something like a hierarchy-relative polar coordinate system is the standard mathematical model in our host universe, as opposed to the Cartesian coordinate system we tend to default to. So, what would a universe look like such that this is the most intuitive way of considering data? Seems most likely that such a view is most likely to arise in a nonoriented universe; gravity would be unlikely, as gravity provides a natural plane of orientation, so something like universal attractive and repulsive forces probably wouldn't exist.
So with the PRISM program outed, the main thrust of discussions is about its legality and consequences. But what interests me is a rather non-political issue of general competence. One would think that the NSA, and in general any security agency, would take risk assessment and mitigation seriously. And having its cover blown ought to be somewhere close to the top of the list of critical risks. Yet the obvious weak point, letting outsiders with the appropriate clearance deep into the areas with compromising info, was apparently never addressed.
Even the standard approach of having tiered access for everyone regardless of the clearance level, and automatically checking and flagging every unusual escalation was either not implemented or cleverly subverted by a low-level admin. And given the Bradley Manning security breach, one would expect even a half-decent internal security officer to be rather paranoid. And who knows what other low-ranking admins quietly did and probably are doing with what information and for what purpose and in what organizations.
I am wondering: is it reasonable to assume that the people responsible for the integrity of a spy agency are this inept? Or is what we see now somewhere low on the list of risks and being handled according to plan? Or, if you go deeper into conspiracy mode, is it orchestrated for some non-obvious reason?
Personally, I hope it's the last possibility, because I'd take competency over ineptitude anytime, nefarious purposes or not.
So, I'm not an expert, but going from a couple of news articles and HN discussion, I get the impression that Snowden actually did require that level of access to do his job, and that it's enough of a sellers' market for people with his general class of IT skills that you can't really get technically competent people if you add too many additional constraints.
So, I just moved to Europe for two years and finally got financial independence from my (somewhat) Catholic parents, and I want to sign up for cryonics. Is there international coverage? Is there anything I should be aware of? Are there any steps I should be taking?
Doctors say "If I'm going to die, please don't freaking try to keep me alive". Normal people and doctors agree that they want a peaceful death, but only doctors are truly aware that a peaceful death is often a willing one.
Useful chrome-plugin for learning to do fermi calculations:
http://blog.xkcd.com/2013/05/15/dictionary-of-numbers/
Looks interesting (but unfortunately crashes when attempting to install it on my chrome.)
What is the proper font, spacing, and so forth for a LessWrong article?
The proper formatting for a LessWrong article is the formatting that you see in most LessWrong articles.
The way to achieve that is to use for formatting only the tools provided in the LessWrong editor (except for the button labelled "HTML" — use that only if you absolutely must). If you find it more convenient to write your article in an external editor, make sure that when you copy and paste it across, you copy only the text without the formatting. One way to do that is to prepare the text in an editor that does not support formatting, such as any editor that might be used to type program code.
I'm using a Mac and copying from Pages into TextWrangler and from TextWrangler to the LessWrong editor does not appear to have worked. What might be going wrong?
Also, information both about people's reactions to nonstandard formatting and about how to go about getting standard formatting if you're working from an external editor should really be included in the LessWrong FAQ.
That's strange. One gotcha is that if you accidentally paste in some formatted text, then delete it, the LW editor remembers the font and line settings at that point and will apply them to any unformatted text that is then pasted in. To make absolutely sure of eliminating all formatting from the editor, select everything and hit backspace twice. The first will delete the text and the second will delete its memory of the formatting. Then paste in the unformatted text, making sure that you actually copied it out of TextWrangler after pasting it in there.
In the last resort, click the HTML button and see if the HTML has anything like:
at the top, and delete it (together with the corresponding </p> at the end of the text).
And of course, save the article as a draft so you can see exactly how it is being formatted before publishing.
Bayes' theorem written a certain way is surprisingly effective and easy to use in Fermi estimates of population parameters and risks. Unless you are already quite well versed in intuitive Bayes, this is likely of interest.
Hastie & Dawes (2010, p. 108) describe the "Ratio Rule", a helpful way of writing out Bayes' theorem that is useful for the quick estimation of an unknown proportion:
Pr(A|B) / Pr(B|A) = Pr(A) / Pr(B)
(Ratio of conditional probabilities equals ratio of unconditional probabilities.)
To steal their example, it's often reported that most 'hard' drug users also use (or started out with) pot, and this is often taken to support the notion that pot is a gateway to hard drugs. Hastie & Dawes point out that for the purposes of evaluating the 'gateway' claim, what we really want is not the reported value of Pr(has used pot | has used hard drugs) but rather Pr(has used hard drugs | has used pot) [*]. Suppose that Pr(pot|hard) ~ 0.9. We know that Pr(pot) ~ 0.5 (the fraction of Americans who've used pot at some point), and we estimate that Pr(hard) is lower by a factor of, say, 2.5-5, so the ratio Pr(pot) / Pr(hard) is between 2.5 and 5. By the ratio rule, Pr(pot|hard)/Pr(hard|pot) is also between 2.5 and 5, so Pr(hard|pot) is between about 0.2 and 0.4.
Another example: recently I found that the annual risk of dying by suicide for a young to middle-aged male is about 0.02%, as high as the annual risk of a middle-aged male dying in a car accident (!). I figured that taking into account my not having a history of mental illness should decrease this risk. Googling revealed that Pr(mental illness|suicide) = 0.9, and Pr(mental illness) is between 0.06 and 0.25 depending on the severity criteria you use. I want x = Pr(suicide|not mental illness), so I set up the ratios (assuming that the population proportions can be interpreted as annual risk):
Pr(suicide|not mental illness) / Pr(not mental illness|suicide) = Pr(suicide) / Pr(not mental illness)
x/0.1 = 0.0002/0.75 ~ 0.00027
x ~ 0.00003 (0.003%)
This seems much less worth worrying about. In the plausible range, the precise value of Pr(not mental illness) doesn't matter much.
[*] What we would really like is Pr(has used hard drugs | started out with pot) but we can assume that the two are close.
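The two estimates above are just one application of Bayes' theorem each, so they can be sketched in a few lines of Python. All the numbers below are the rough guesses from the comment, not real data:

```python
# Minimal sketch of using Bayes' theorem (equivalently, the Ratio Rule)
# to invert a conditional probability: Pr(A|B) = Pr(B|A) * Pr(A) / Pr(B).

def invert_conditional(p_b_given_a, p_a, p_b):
    """Return Pr(A|B) given Pr(B|A), Pr(A), and Pr(B)."""
    return p_b_given_a * p_a / p_b

# Gateway example: Pr(hard|pot) from Pr(pot|hard) ~ 0.9, Pr(pot) ~ 0.5,
# and Pr(hard) between 0.1 and 0.2 (so Pr(pot)/Pr(hard) is 2.5-5).
print(invert_conditional(0.9, 0.2, 0.5))  # 0.36 (upper end of the range)
print(invert_conditional(0.9, 0.1, 0.5))  # 0.18 (lower end of the range)

# Suicide example: Pr(suicide|no mental illness) from
# Pr(no mental illness|suicide) ~ 0.1, Pr(suicide) ~ 0.0002,
# and Pr(no mental illness) ~ 0.75.
print(invert_conditional(0.1, 0.0002, 0.75))  # about 2.7e-5
```

The function is the whole trick: once you can state the base rates and one conditional, the other conditional falls out mechanically.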
A meetup is coming up on July 4th in Tel Aviv. I want to post about it, but I've never done a meetup post before. Are there any non-obvious guidelines I should follow? A template? What about the map?
Click the 'Add Meetup' button in your user-box thing at the top of the sidebar, and fill in the fields with appropriate info. The map is automatically inserted based on the address you entered and Google Maps.
Is there a general way to answer questions like this, which often occur in economics and the social sciences:
"Does institution X play a part in keeping parameter Y stable? It looks like parameter Y has been really stable for a while now. Is institution X doing a good job, or is it completely useless?"
Well, to go ahead and state the incredibly obvious: in cases where institution X is not equally well-established globally, one thing to look at is variations in X among different nations, geographic regions, populations, etc. (depending on the kind of thing X is). If Y remains equally stable across the board while X varies, that's evidence that X doesn't have much to do with Y.
Part of the problem is that changes in X might be aimed at keeping Y stable when some other factor Z varies. See Milton Friedman's Thermostat:
For example, it's hard to answer the question "do armies stop invasions?" by using correlations, because the rulers can adjust the strength of the army in response to the risk of getting invaded, so the resulting risk depends mostly on the risk tolerance of the rulers.
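A toy simulation makes the thermostat point concrete. Assume an unobserved threat Z, an institution X that adjusts itself to match the threat, and an outcome Y that depends on both; the model and numbers are invented purely for illustration:

```python
import random

random.seed(0)
xs, ys = [], []
for _ in range(10000):
    z = random.gauss(0, 1)             # unobserved threat (e.g. invasion risk)
    x = z                              # institution adjusts to match the threat
    y = z - x + random.gauss(0, 0.1)   # outcome: threat minus response, plus noise
    xs.append(x)
    ys.append(y)

def corr(a, b):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    var_a = sum((u - ma) ** 2 for u in a) / n
    var_b = sum((v - mb) ** 2 for v in b) / n
    return cov / (var_a * var_b) ** 0.5

# X fully controls Y here, yet the observed correlation is near zero:
print(round(corr(xs, ys), 3))
```

In the data, X looks useless precisely because it is doing its job perfectly; a naive correlational study of armies and invasions would face the same problem.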
All right, since I was told on my very first, and abortive, discussion thread that I should post a larger summary or excerpt of the link I had on there if I wanted to comport with LW's norms, let me do that here instead (since my karma is now too low to make another discussion post).
So I've written a long article summarizing a life philosophy which asserts the significance of a certain kind of meditative self-expression for grasping human freedom and understanding the significance of pain and suffering in human life.
Any LessWrong readers interested in thinking about the meaning of life, meditation, psychology, philosophy, spirituality, art, or in better understanding and handling their own minds should be interested.
Here is the largest excerpt I could post without the comment being rejected as too long:
The next time you stub your toe or otherwise hurt yourself, take a moment to become curious about exactly what the pain is like. What exactly does it feel like? Is it stabbing? Does it radiate? Is it blunt or sharp? Does it come and go? Is it cold or hot? Does it remind you of someone, or something, or some place?
As soon as you suspend the pain in your mind, the pain immediately changes. It becomes interesting. Like Keanu Reeves might stop a bullet in the air in The Matrix, you stun the pain by paying it conscious attention and then examining it like a scientist or artist might. It becomes fascinating. And then, as you describe it, its character changes more and more. It becomes sharp, specific, and beautiful. It might still be pain, but still, even as pain, it is no longer painful in the same way. Now it is a jewel. You see within it organization, ideas, intelligence.
Through the process of reflection and then expression, we can transform pain into beauty. This is true not just of physical pain, but of all pain, and indeed, of any experience. This is the essence of human freedom and power.
The most interesting and fundamental question in the world is what we’re doing here in this life. What’s the point? I spent years thinking about this question — going through psychology and western and eastern philosophy, and asking this question over and over.
I think I have an answer, at least a certain kind of partial answer. It’s certainly not totally original. Yet it is not often seen, not often heard.
My problem is how to explain it in words. I have tried many formulations on paper and in my head and none of them seem quite right. So I’ve decided to share several of them with you here, and hope you get the point. I’m trying to indicate a sensibility about the world — a way of relating to it, of seeing it, of dealing with it. What I’m trying to say cannot be wholly communicated in words (though can anything?). I need you to get the feel of it, to have the shift in perspective without which none of this will really make sense.
There’s a zen story about the monk who points to the moon. And the disciple keeps looking at his finger. ‘No, no, up there!’ the monk tries to say, but the disciple cannot understand the concept of pointing. That’s the kind of barrier I feel I’m up against.
Let me give you another example: to someone new to wine, wine tastes like wine. Maybe red wine tastes like red wine, and white wine tastes like white wine. To someone who drinks a little more, and thinks about what they drink, perhaps they start to identify sour and bitter, dryness and acidity. But to the wine connoisseur and critic, the vocabulary and the experience expand. They start to be able to detect and name notes of musk, florality, and minerality. They distinguish the taste of the wine at the front of the palate from the taste at the back. They comprehend the history and the heritage of the wine, its lineage in the soil, the effect of the sun and the rain on the grapes that made it. They taste and appreciate the various nuances of fermentation.
For the connoisseur, the wine unfolds into a much more complex, in-depth experience. It happens not just because the person drinks a lot of wine, but because they pay attention, and because they analyze the wine, and come up with labels, and break down and express their experiences.
The same way the experience of the wine reveals layer after layer with increased attention and thought, so too can the same general idea apply to life. Any particular experience you’ve had without thinking about it you’ve barely even lived. It passed by and vanished, and you missed a lot in it, a lot like a rookie misses almost everything interesting in the wine she’s tasting. If you take an experience of yours, pay it attention, and express what it is like, you will find that the experience starts to refine itself. It becomes complex, multi-layered, rich, fascinating, interesting, beautiful. It ceases to be one big blob and starts to become a multitude.
This revelation of layers of intelligence, of pattern but also of chaos—interesting chaos—is the reward for this expression.
Expression is the key.
Mere observation is not enough. Simply remembering an experience is not enough. You just remember the same pale, shallow memory you had before. But if you remember an experience and then 1) think deeply about it, 2) try to honestly and originally express exactly what it was like for you, and 3) put this expression in some form (music, poetry, film, or even just a few sentences in your journal) then 4) that will allow you to see the experience in a new light. It will force you to choose the important aspects of the experience. Those aspects of the experience will come into focus. Like a near-sighted man putting on glasses for the first time, the experience will become dramatically sharper.
Of course, expression inevitably distorts. Even a good map is partly wrong. It is still illuminating. A map has to distort and simplify to be useful. Similarly, every expression breaks down experience in a way that is partly wrong. One kind of expression will highlight certain facets of an experience; another expression will highlight other facets. Experiences can be expressed in an endless variety of ways.
This sensibility I’m trying to communicate results in the appreciation of “subtlety.” To a casual listener of music, when someone plays a key on a piano, they hear it as a single note. To a musician or a music critic or an audiophile, though, the note has at least three parts. The first is the attack, when the key is struck and a tiny hammer literally pings against a tight string inside the piano. The second is a middle portion of the note. The third is the decay, as the note fades. Each of these is different. And in fact even within each of these parts the note changes. Expression is like an instrument that allows you to see the worlds within every world.
Observing and expressing any experience streams down beautiful ideas that allow you to see it in a new way. The experience discloses connections to other experiences, patterns within it, intelligence.
To appreciate the finer and finer details of these changes — to see distinctions and discern refinement where once there was only sameness — is the spirit of subtlety. It is to see not just a thing but the presence of the space surrounding that thing. It is the spirit of the Japanese tea ceremony.
It is the spirit of not trying to overwhelm with a simple rush of pleasure, but to see deeper and deeper and quieter and quieter parts of something. It is why John Cage created an entirely silent piece — he wanted to make a statement about this spirit of subtlety, that looks for the shyest and most reluctant details. It is the spirit of, when you’re hungry, not just gobbling up food, but making food that tastes good, and then, looks good. Taking your time to do that prolongs the hunger, but then allows you to explore that hunger in a more and more elegant, artistic way.
The Magic Equation: Desire = Pain
And if you want to see these subtleties, desire is crucial. You do not fully control your mind any more than you fully control the weather. You are at all times in a mental landscape, and the most important feature of this landscape is desire. We always want something, or perhaps to avoid something, and this focuses and defines our attention. We can use this desire as the starting point of our attention and expression.
Desire, which unfulfilled is the same thing as pain, is what allows you to appreciate anything. So the connoisseur realizes that desire is a precious thing. It should not be used up too early. It is what allows you to be interested in something. As soon as you’ve had an orgasm, interest in sex decreases. It is the desire for sex, the ache, the hunger, that can motivate you to explore subtler and subtler realms of sexuality: to be interested in those realms. And that is why celibacy shows up so often in the world’s mystical traditions. It provides the motivation to seek sexuality not in physical bodies, but in knowledge and contemplation. The subtlest sexual objects are ideas.
The artistic mindset I am trying to communicate sees emotional or physical pain — unfulfilled desire — as a precious, specific energy that we can capture like a firefly in a jar, to follow its spirals and whirls. We can use it by investigating the desire itself. The desire is an experience. We can attend to it, note its intricacies.
Read the rest of it...
Assuming what you said is true, can you give a concrete example in one sentence what I should choose differently than I do now?
For example, I would draw from my experience as a lawyer to say:
Absolutely. To start with, I give a simple concrete suggestion in the first paragraph above about how to deal with physical pain.
Another concrete suggestion might be: any time you feel annoyed or angry, express in words exactly what the annoyance or anger is like, using metaphors, and going back and forth between your words and your experience to make sure you've captured the experience in as accurate and original -- or non-cliched -- a way as you possibly can.
A broader way to say the same thing might be: focus on those experiences that cause you emotional disturbance and express them, as accurately and as originally as possible, into an artistic medium of your choice (words, music, painting, whatever), using metaphors appropriate to that medium to convey what your experience is like.
If you do that, my contention is that you will find that your negative experiences bear within them a wealth of beauty.
There's more to it than that, but those are a couple of concrete suggestions.
Constructive suggestion: Write more like this, less like what you posted about.
Substantively, I think one could substitute any emotion or sensation and get the same advice. Thus:
Which I expect is true. But pain is generally no fun, and it isn't clear that you think avoiding pain is worth the effort.
When I stub my toe, I'm not doing something wrong by first figuring out why I stubbed it and what to change to avoid that in the future. And once I've done that, I'm not sure I have time left for what you suggested.
Have you heard of Focusing? It's a psychological system based on that premise.
I have flouted this advice almost every time I installed software or signed up to a website over the last couple decades, and AFAICT I have never had much trouble as a result.
This may be overly harsh, but:
This essay is nonsense. There's an easy trick for analyzing writing like this: As you read, mentally remove all of the emotionally charged words and connotations and see if the argument still makes sense. When we get rid of all the flowery language here, we end up with (admittedly uncharitable) things like, "Humans can think about pain and other experiences and use these thoughts to create art that others find pleasurable" and "By paying close attention, you can gain more understanding of complex things (e.g. wine tasting)." None of this analysis even mentions the actual, causal reasons human beings suffer, or established theories about coping with suffering and creativity. As a result, I don't see anything particularly insightful or useful.
This is reminding me of the Enneagram. The idea is that people have basic habitual ways of relating to the universe-- all the standard ways (the Enneagram has nine of them) are useful but incomplete, and all of them can go bad or be refined into something very valuable.
Accurate perception is important, but so is action.
Upvoted for politeness. Still didn't want to read more than a couple paragraphs due to craziness. Sorry.
NYTimes blog article: How Carbs Can Trigger Food Cravings.
(If you ever have trouble accessing a NYTimes article because you've exceeded your monthly allotment (there was a script for this, but it doesn't work anymore), remember you can just google the title and then follow a link from e.g. Google News, which won't count against your quota.)