I want to thank this community for existing: the people who founded it, the people contributing to it, and all the stuff linked here. I may not like all the topics or agree with all the opinions posted here, nor find use for most of the stuff I read around here. But at least I don't feel so alone anymore.
That is all, thank you.
When I hit discussion, it keeps automatically redirecting me to the 'top posts' even when I click back onto 'new'. Is anyone else getting this?
How well do medical doctors fare in terms of health outcomes compared to people of similar socioeconomic status and family history? Is there a difference between research doctors and practising doctors? What about nurses; is there a notable difference there too?
This question is posted within the context of "how big is the effect of medical knowledge on personal health?" and the assumption that medical doctors should represent the upper end of the spectrum. Other medical professionals should represent data points in between. Taken together, these data points should give some rough quantitative hint of how much personal use medical knowledge has.
This study seems to go quite a ways towards answering your question:
Among both U.S. white and black men, physicians were, on average, older when they died (73.0 years for white and 68.7 for black) than were lawyers (72.3 and 62.0), all examined professionals (70.9 and 65.3), and all men (70.3 and 63.6). The top ten causes of death for white male physicians were essentially the same as those of the general population, although they were more likely to die from cerebrovascular disease, accidents, and suicide, and less likely to die from chronic obstructive pulmonary disease, pneumonia/influenza, or liver disease than were other professional white men...
These findings should help to erase the myth of the unhealthy doctor. At least for men, mortality outcomes suggest that physicians make healthy personal choices.
-- Frank, Erica, Holly Biola, and Carol A. Burnett. "Mortality rates and causes among US physicians." American Journal of Preventive Medicine 19.3 (2000): 155-159.
You may also find this worth checking into:
...The doctors had a lower mortality rate than the general population for all causes of death except suicide. The mortality rate ratios for other graduates and...
Is there a name for the situation where the same piece of evidence is seen as obviously supporting their side by both sides of an argument?
e.g.: New statistics are published showing that ethnic group X commits crimes at 10 times the rate of ethnic group Y.
To one side, this is obvious evidence that ethnic group X are criminals.
To another side, this is obvious evidence that the justice system is biased.
Both sides are totally opposed, yet see the same fact as proving they are right.
If redheads are 10 times more likely to be in jail for violent crimes, it is evidence for both "redheads are violent" and "judges hate redheads" - and both might be true!
And "redheads are violent" and "judges hate redheads" are not totally opposed, they only look that way in a context where they are taken as arguments in support of broader ideologies who, them, are totally opposed (or rather, compete with each other so oppose each other).
More generally, many facts can be interpreted in different ways, and if one interpretation is more favorable to one ideological side, that side will use that interpretation as an argument. Seen that way, it seems almost inevitable that facts will look like they support "opposite sides".
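To put the "both might be true" point in Bayesian terms: evidence E supports a hypothesis H whenever P(E|H) > P(E|not-H), and nothing stops the same E from satisfying that condition for two different, non-exclusive hypotheses at once. A minimal sketch in Python, with entirely made-up numbers:

```python
# Toy Bayesian update (all numbers hypothetical): the same evidence E
# can raise the probability of two different hypotheses at once.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# E  = "redheads are 10x overrepresented in jail for violent crime"
# H1 = "redheads are violent"; H2 = "judges hate redheads"
for name, prior, p_e_h, p_e_not_h in [
    ("H1: redheads are violent", 0.10, 0.50, 0.05),
    ("H2: judges hate redheads", 0.10, 0.40, 0.05),
]:
    print(f"{name}: prior {prior:.2f} -> posterior "
          f"{posterior(prior, p_e_h, p_e_not_h):.2f}")
```

Both posteriors go up: the evidence genuinely supports each hypothesis, and the two sides only disagree about which interpretation to emphasize.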
Everyone knows utilitarians are more likely to break rules.
(This is mostly a joke — read "ethnic group" as "ethic group" and utilitarians qualify. I know a sophisticated utilitarianism would consider the effect of widespread lawbreaking, and would not necessarily break laws so much as to be overrepresented in prison.)
I have sometimes seen arguments that fit this pattern, including on Less Wrong —
Your disagreement with me on a point of meta-level theory or ideology implies that you intend harm to me personally, or can't be trusted not to harm me if the whim strikes you to do so.
It seems to me that something is deficient or abusive about many arguments of this form in the general case, but I'm not sure that it's always wrong. What are some examples of legitimate arguments of this form?
(A point of clarification: The "meta-level theory or ideology" part is important. That should match propositions such as "consequentialism is true and deontology is false" or "natural-rights theory doesn't usefully explain why we shouldn't hurt others". It should not match propositions such as "other people don't really suffer when I punch them in the head" or "injury to wiggins has no moral significance".)
One mistake is overestimating the probability that the other person will act on their ideology.
People compartmentalize. For example, in theory, religious people should kill me for being an unbeliever, but in real life I don't expect this from my neighbors. They will find an excuse not to act on the logical consequences of their faith; and most likely they will not even realize they did this.
(And it's probably safest if I stop trying to teach them to decompartmentalize. Ethics first, effectiveness can wait. I don't really need semi-rational Bible maximizers in my universe.)
The missing airplane story seems like an opportunity for prediction on par with the Amanda Knox trial.
What happened to slatestarcodex and does anyone know if it's just temporary or something to be concerned about?
My hosting company got annoyed because something was taking up too many resources. I did what the nice person on the telephone suggested (installed some WordPress plugins, uninstalled others) and it's back online now. If the problem recurs I might have to restrict commenting for a while until I can figure out a more permanent solution, but for now everything's fine.
Any tips on bailing out of an argument if you want to very nearly concede the whole thing without quite saying your opponent is right?
e.g. if you realise the whole conversation was a terrible mistake and you're totally unequipped to have it, but you still think you're right.
Should you just admit they're right for simplicity even if you're not quite convinced?
I state the truth: "I tend to get too attached to my opinion in live debates and want to think about your arguments in peace."
The people who get offended by this tend not to be the kind of people I want to associate with anyway.
The concept of heroic responsibility seems to be off-putting for some people, mostly because it looks like it puts the blame for every single bad thing at the feet of an individual. Generally, I've answered this objection by telling them that they don't need to look that broadly and that they can apply the concept at a smaller, everyday scale. So instead of worrying about solving depression forever, you can worry about making sure a friend gets the psychological help they need, and not telling yourself things like: "It's their parents'/partner's/doctor's responsibility that they get proper help."
Is this a correct way to explain the concept or am I strongly misrepresenting it?
Maybe it's not a problem with explaining the concept per se; it's just that its consequences are unpleasant. It feels like you are telling people that heroic responsibility is one of the possible choices, one that they didn't make, but could have made, and perhaps even should have made. -- There probably are good reasons why most people don't take heroic responsibility, but these are difficult to explain. So it's easier to pretend that the whole concept doesn't make sense to you.
Also, it's not my responsibility to understand the concept of heroic responsibility. :D
EDIT: It may be related to the status-regulation emotion, which apparently some people feel very strongly and others don't even notice. The problem with "heroic responsibility" might simply be the emotional reaction of: "Who do you think you are, that you would even consider taking more responsibility than other people around you?! That is a task worthy of a king; and you obviously aren't one. And you try to explain it to me, but I am also not a king; I don't even pretend to be one, so... none of this makes any sense. You must be insane."
Several people on LessWrong have recommended laser eye surgery (LASIK), presenting it as a costly procedure with a high likelihood of improving your life. I do not think this is a good trade-off across a lifetime, because of presbyopia.
Almost all humans experience presbyopia. This is age-related deterioration in the eye's ability to adjust focus. Historically, its biggest effect for most people was reduced ability to read, but now it also affects the ability to use computers.
If you have myopia (short sight), you can not see distant objects witho...
[LINK] Sleep loss can cause brain damage (permanently lost neurons, at least in mice). Although the study itself is about mice, it provides references to more general results:
http://www.uphs.upenn.edu/news/News_Releases/2014/03/veasey/
Scott Aaronson reviews Max Tegmark's book on the Mathematical Universe hypothesis. Tegmark responds in the comments, with an interesting and still ongoing back-and-forth.
I have a friend with Crohn's Disease who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.
As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem the sort of thing that he and his doctor would have even discussed. ...
Has anyone ever tried writing rationalist fiction set in The Sandman? It's a world that explicitly runs on storytelling patterns, but surely something can be done that illustrates the merits of rational thought even in such a setting. Rationalists should win and adapt to the circumstances, even if those circumstances are a dreamscape.
Priming can nudge one's thoughts in certain directions; fashion can nudge others'.
It's easy enough to try priming abstract, rational, far thinking with cool blue colours, Mozart, and by surrounding oneself with books... but is there any data on scents that nudge people's modes of thinking in similar directions? Failing that, is there anecdata?
Crapshoot: Say I have some kind of per-country data and I want to use Python or other FOSS tools to plot it on a good-looking map at the country level. Is there a good tutorial for this? I ask because I can do virtually anything else with Python, like data manipulation, analysis, or plotting, so it'd be nice to do this with Python too.
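Not a tutorial, but here is one possible starting point: a minimal choropleth sketch using geopandas. This assumes geopandas and matplotlib are installed, and that your geopandas version still bundles the Natural Earth sample dataset (newer releases dropped it); the data values below are hypothetical.

```python
# Minimal choropleth sketch with geopandas (hypothetical data values).
import geopandas as gpd
import matplotlib.pyplot as plt

# Low-resolution world boundaries bundled with older geopandas releases.
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))

# Per-country values keyed by ISO 3166-1 alpha-3 code (made up here).
my_data = {"USA": 3.2, "DEU": 1.7, "FRA": 2.4, "BRA": 0.9}
world["value"] = world["iso_a3"].map(my_data)

# Countries with no data are drawn in grey.
world.plot(column="value", cmap="Blues", legend=True,
           missing_kwds={"color": "lightgrey"})
plt.title("Per-country data (hypothetical)")
plt.show()
```

cartopy plus matplotlib is another common FOSS route if you need finer control over map projections.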
They neither know of night or day,
They night and day pour out their thunder.
As every Ingot rolls away,
A dozen more are split asunder.
There is a sign above the gate:
Eleven days since a man lay dying,
Now every shift brings fear and hate,
And shaken men in terror crying.
*
The molten rivers boil away
A fiery brew Hell never equalled,
To their profits the bosses pray,
And Mammon sings in his grim cathedral:
His attendants join the choir,
And Heaven help us if we're shirking!
Stoke the furnace's altar fire
And just be thankful that we're...
... Would it be possible for a comment to have anchors that are karma-scored separately, so that someone making several points in the same comment can see which ones are gaining/losing karma?
Amusing final sentence from Clarke & Primo (2005):
...While much ink has been spilled arguing for this approach to the study of political science, little attention has been paid to justifying and rationalizing the method. On the rare occasions that justification has been attempted, the results have been maddeningly vague. Why test predictions from a deductive, and thus truth-preserving, system? What can be learned from such a test? If a prediction is not confirmed, are assumptions already known to be false to blame? What precisely is the connection betwe...
I'm having trouble knowing how well I understand a concept while I'm learning it. I tend to be good at making up consistent verbalizations of why something is true, or how something works. However, these verbalizations aren't always accurate.
The first strategy against this tendency is simply to do more problem sets with better feedback. I'm wondering if we can come up with a supplementary strategy for checking whether I really understand a concept or not.
I'm contemplating going to grad school for psychology.
I'd really like to focus on the psychology of religion, but there are other areas of psychology that I find interesting too (e.g. evolutionary psychology).
I don't have a background in psychology; I took one intro course in undergrad to fulfill a requirement for my bachelor's in IT. I do read a lot of pop-sci about psychology.
Anyone have any advice for me going forward?
What's the process for selecting what 'rationality blogs' are featured in the sidebar? Is it selected by the administrators of the site?
I'm surprised that the blogs of some users with lots of promoted posts here aren't featured as rationality blogs.
Any LW NYCers have a room available for <$1,000 per month that I (a friendly self-employed 23-year-old male) might be able to move into within a week or two? Or leads on a 1br/studio for <$1400? I could also go a bit above those prices if necessary.
PM me if so and I'll send more details about myself. I'm also staying with some friends in NYC right now so we could meet up anytime.
What is your irrational reading guilty pleasure? Whenever I need a cheap laugh, I browse Conservapedia. Where do you go to indulge the occasional craving for high-octane idiocy?
Facebook announced Graph Search with great fanfare, but if I want to know something simple, like a list of my recently added friends, I can't just type it into the search bar; I have to search on Google, only to find that I have to go through the Recent Activity tab.
Similarly, I have told Facebook through its menus that I speak English and German. It still shows me French and Romanian posts from my friends that I can't read. It doesn't offer to translate them. A simple idea like showing me English posts that my French friends post but not showing the Fre...
You have to remember that you are not the customer for Facebook... you are the product.
Giving you more control over your timeline and the posts you see is good for you, but it substantially undermines Facebook's ability to charge for access to you through "promoted posts".
On the other hand, something like graph search allows the opportunity to compete with Google and LinkedIn.
Based on discussion at the South Bay Area meetup tonight.
The five pillars of Islam are the declaration of faith (shahada), prayer (salat), charity (zakat), fasting (sawm), and pilgrimage (hajj).
By analogy, I propose five pillars of LessWrongIsm: ...
I get that he is. But Poe's Law works both ways: there's no self-parody that some clueless outsider won't mistake for real lunacy.
What does Everett Immortality look like in the long term?
The general idea of EI is that there is always some small chance you will survive in any given situation, so there will be some multiverse timelines whose present is the same as your present, but in which you keep on living indefinitely. However, some forms of survival are a lot more likely than others; e.g., it's a lot more likely that my cryonically-preserved brain will be scanned and turned into an AI than that a copy of my brain will spontaneously appear out of nothingness. Thus, it makes sense to plan around the most likely sorts of scenarios, and not to bother doing much planning for the least likely ones (there's a toy sketch of this below).
But thinking /very/ long term, to the heat death of the universe... every form of negentropy will end up exhausted, with no more energy gradients for life and intelligence to draw on; meaning that however extended a life might be, there will be some point at which all of a person's futures eventually fade away...
... or maybe not. Thermodynamic miracles - events violating ordinary statistics - will, in the long term, happen every so often... so might it be possible for some form of life in that era to rely on them as the last available source of negentropy? Which forms of thermodynamic miracle occur most often and could most reliably be 'fed' from? How often do they occur, compared with the potential stability of matter-energy patterns at this time-scale?
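As a toy numerical sketch of the planning point above (all numbers hypothetical): with a per-year survival probability p, the measure of branches in which you are still alive after n years is p^n, which shrinks fast but never reaches exactly zero.

```python
# Toy sketch (hypothetical numbers): surviving measure p**n is positive
# for every n, which is the Everett-immortality intuition, but planning
# effort should track which survival routes carry the most measure.

p_ordinary = 0.99   # e.g. surviving another year by ordinary means
p_freak = 1e-12     # e.g. a brain spontaneously appearing from nothing

for years in (10, 100, 10000):
    print(f"{years:>6} years: surviving measure ~ {p_ordinary ** years:.3e}")

# The ordinary route outweighs the freak route by many orders of
# magnitude, so it dominates any measure-weighted plan.
print(f"per-year ratio, ordinary vs. freak: {p_ordinary / p_freak:.1e}")
```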
You're assuming some sort of pattern theory of identity when you consider uploads a potential form of survival. If you go all-out pattern theory of identity and assume we're in a big world, is there a reason why the subjectively subsequent moments of awareness need to actually take place at increasing time points on the universe's timeline? A state of matter that corresponds to your pattern's subjective t + 1 might have occurred at the universe's t - 10000 at some distant light cone. If your mind stays at any finite size, it'll eventually just end up going...