Are there any known interventions for increasing one's lifespan which are commonly overlooked by transhumanist folks on LW?
- Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
- Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
- Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...) Just eat entire plates of the stuff often.
- Write a script that takes a screenshot and webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the drive, along with some of your DNA and possibly brain scans, somewhere it'll stay safe for a couple hundred years or longer. This is a pretty long shot, but there's a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics since it costs so much less.
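A minimal sketch of such a capture script, assuming the third-party `mss` (screenshots) and `opencv-python` (webcam) packages; any equivalent libraries would do:

```python
import time
from datetime import datetime
from pathlib import Path

def snapshot_paths(base_dir, now):
    """Timestamped output paths for one capture cycle."""
    stamp = now.strftime("%Y%m%d-%H%M%S")
    base = Path(base_dir)
    return base / f"screen-{stamp}.png", base / f"webcam-{stamp}.jpg"

def capture_loop(base_dir, interval=30):
    """Every `interval` seconds, save a screenshot and a webcam frame."""
    import cv2   # opencv-python (assumed installed)
    import mss   # screenshot library (assumed installed)
    Path(base_dir).mkdir(parents=True, exist_ok=True)
    cam = cv2.VideoCapture(0)
    with mss.mss() as screen:
        while True:
            screen_path, cam_path = snapshot_paths(base_dir, datetime.now())
            screen.shot(output=str(screen_path))  # full-screen grab
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(str(cam_path), frame)
            time.sleep(interval)
```

Point it at a mount on the external drive and leave it running; the timestamped filenames keep the decades of captures sortable.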
Which country should software engineers emigrate to?
I'm going to research everything, build a big spreadsheet, weight the various factors, etc. over the next while, so any advice that saves me time or improves the accuracy of my analysis is much appreciated. Are there any non-obvious considerations here?
There are some lists of best countries for software developers, and for expats in general. These consider things like software dev pay, cost of living, taxes, crime, happiness index, etc. Those generally recommend Western Europe, the US, Canada, Israel, Australia, New Zealand, Singapore, Hong Kong, Mexico, India. Other factors I'll have to consider are emigration difficulty and language barriers.
The easiest way to emigrate is to marry a local. Otherwise, emigrating to the US requires either paying $50k USD, or working in the US for several years (under a salary reduction and risk that are about as bad as paying $50k), and other countries are roughly as difficult. I'll have to research this separately for each country.
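The spreadsheet's core is just a weighted sum over factor scores. A sketch with hypothetical weights and made-up 0-10 scores (placeholders, not research results):

```python
def score(factors, weights):
    """Weighted sum of 0-10 factor scores for one country."""
    return sum(weights[f] * factors[f] for f in weights)

# Hypothetical weights and scores -- illustrative only.
weights = {"pay": 0.3, "cost_of_living": 0.2,
           "immigration_ease": 0.3, "language": 0.2}
countries = {
    "Canada":    {"pay": 7, "cost_of_living": 6,
                  "immigration_ease": 7, "language": 10},
    "Singapore": {"pay": 8, "cost_of_living": 4,
                  "immigration_ease": 6, "language": 9},
}
ranked = sorted(countries, key=lambda c: score(countries[c], weights),
                reverse=True)
```

The interesting work is in choosing the weights and scoring honestly; the arithmetic itself is trivial, which is why a spreadsheet works fine too.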
This article is interesting to me because I have this belief that weight loss is basically about eating less (and exercising more). And some extremely high percentage of everything said about dieting, etc. beyond that is just irrational noise. And that the diets that work don't work because of the reasons their proponents say they work, but only because they end up restricting calories as a byproduct.
this study is NOT a blow to low-carb dieting, which can be quite effective due to factors such as typically higher protein and more limited junk food options.
This line is the funniest to me. This is why I think low-carb diets work: if you eliminate the primary source of calories in a person's diet (carbs, which can be 50%+ of many people's diets), they will eat significantly fewer calories overall by restricting themselves to only protein and fat. But people have, instead, made up all sorts of fancy, science-y sounding reasons why carbs were evil.
Yes, the effect of diets on weight-loss is roughly mediated by their effect on caloric intake and expenditure. But this does not mean that "eat fewer calories and expend more" is good advice. If you doubt this, note that the effect of diets on weight-loss is also mediated by their effects on mass, but naively basing our advice on conservation of mass causes us to generate terrible advice like "pee a lot, don't drink any water, and stay away from heavy food like vegetables".
The causal graph to think about is "advice → behavior → caloric balance → long-term weight loss", where only the advice node is modifiable when we're deciding what advice to give. Behavior is a function of advice, not a modifiable variable. Empirically, the advice "eat fewer calories" doesn't do a good job of making people eat fewer calories. Empirically, advice like "eat more protein and vegetables" or "drink olive oil between meals" does do a good job of making people eat fewer calories. The fact that low-carb diets "only" work by reducing caloric intake does not mean that low-carb diets aren't valuable.
Apparently since the Enlightenment this idea has gotten about that all the previous generations didn't know how to live properly, even our parents' generation; but that we somehow mysteriously know how to do it right, or at least better. But then if we have offspring, many of them might develop the same attitude towards us.
This really doesn't make sense, because incompetent people generally don't leave descendants. Our ancestors must have gotten a lot of important things right on average for us to have come into existence in the first place. Yet we think we can just reverse the wisdom of the ages in all kinds of areas and not screw things up. It looks like a kind of evolution-denialism, in fact.
How can people who say they believe in evolution also hold the conflicting idea that they know better than the principles derived from the collective evolutionary experiences of human survival?
Here's an SSC post and ~700 comments on cultural evolution: http://slatestarcodex.com/2015/07/07/the-argument-from-cultural-evolution/
I am probably misunderstanding something here, but doesn't this
Then the correct guess, if you don't know whether a given question is "easy" or "hard"...
basically say, "if you have no calibration whatsoever"? If there are distinct categories of questions (easy and hard) and you can't tell which questions belong to which category, then simply guessing according to your overall base rate will make your calibration look terrible - because it is.
Replace "if you don't know" with "if you aren't told". If you believe 80% of them are easy, then you're perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.
There has been far less writing on improving rationality here on LW during the last few years. Has everything important been said about the subject, or have you just given up on trying to improve your rationality? Are there diminishing returns on improving rationality? Is it related to the fact that it's very hard to get rid of most cognitive biases, no matter how hard you try to focus on them? Or have people moved to talking about these on different forums, or in real life?
Or is it like Yvain said in the 2014 Survey results:
It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
About that survey... Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct guess, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence, for every question. Then you're underconfident on the "easy" questions, because you guessed heads with 80% confidence but heads came up 100% of the time. And you're overconfident on the "hard" questions, because you guessed heads with 80% confidence but got heads 0% of the time.
So you can get apparent under/overconfidence on easy/hard questions respectively, even if you're perfectly calibrated, if you aren't told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
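A quick simulation of the biased-coin setup makes the effect concrete: a perfectly calibrated guesser, 80% confident in heads on every flip, looks wildly miscalibrated once the flips are bucketed by how they turned out:

```python
import random

def apparent_calibration(p_heads=0.8, n=100_000, seed=0):
    """Guess heads with 80% confidence on every flip of an 80% coin,
    then bucket flips by whether the answer turned out "easy" (heads,
    matching the confident guess) or "hard" (tails)."""
    rng = random.Random(seed)
    flips = [rng.random() < p_heads for _ in range(n)]
    easy = [f for f in flips if f]       # heads: the guess was right
    hard = [f for f in flips if not f]   # tails: the same guess was wrong
    # Accuracy of the "heads" guess within each bucket:
    return sum(easy) / len(easy), sum(hard) / len(hard)
```

The easy bucket shows 100% accuracy against 80% stated confidence (apparent underconfidence), the hard bucket 0% against the same 80% (apparent overconfidence), even though the guesser's calibration is perfect by construction.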
Surely you don't think daughters are more reproductively successful than sons on average?
Surely I do - isn't it common knowledge today that only about 40% of men but 80% of women managed to reproduce?
Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman's principle.
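A toy population (made-up numbers) shows how the means stay equal while the childless fractions diverge under polygyny:

```python
# Toy population: 5 men, 5 women, polygynous mating.
# Each child is a (father, mother) pair; two men father all five children.
children = [("M1", "W1"), ("M1", "W2"), ("M1", "W3"),
            ("M2", "W4"), ("M2", "W5")]
men = ["M1", "M2", "M3", "M4", "M5"]
women = ["W1", "W2", "W3", "W4", "W5"]

fathers = {f for f, _ in children}
mothers = {m for _, m in children}

mean_kids_men = len(children) / len(men)      # equal for both sexes,
mean_kids_women = len(children) / len(women)  # since every child has one of each
childless_men = sum(1 for m in men if m not in fathers)      # 3 of 5
childless_women = sum(1 for w in women if w not in mothers)  # 0 of 5
```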
Maybe machine learning can give us recommendations for gardening without hurting our backs.
"When changing directions, turn with the feet, not at the waist, to avoid a twisting motion."
“Push” rather than “pull” objects.
Depends on your feature extractor. If you have a feature that measures similarity to previously-seen films, then yes. Otherwise, no. If you only have features measuring what each film's about, and people like novel films, then you'll get conservative predictions, but that's not really the same as learning that novelty is good.
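A toy illustration of the point, with made-up numbers and ordinary least squares standing in for the learner:

```python
import numpy as np

# Made-up toy data: each film has a "content" feature and a "novelty"
# feature, and ratings reward novelty (rating = 2 + 2 * novelty here).
content = np.array([[1.0], [0.8], [0.2], [0.1]])
novelty = np.array([[0.0], [0.2], [0.9], [1.0]])
ratings = np.array([2.0, 2.4, 3.8, 4.0])

# With a novelty feature, least squares assigns it a large positive
# weight -- the model can learn that novelty is good:
X_full = np.hstack([content, novelty])
w_full, *_ = np.linalg.lstsq(X_full, ratings, rcond=None)

# Without it, the model can only regress on content, so its predictions
# for novel films stay conservative:
w_content, *_ = np.linalg.lstsq(content, ratings, rcond=None)
```

Whether "likes novelty" is learnable is entirely a property of what the feature extractor exposes, not of the learner.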
Since quantum mechanics is true, Deutsch self-consistency has pretty big advantages over Novikov self-consistency.
Good point. I may be thinking about this wrong, but I think Deutsch self-consistent time travel would still vastly concentrate measure in universes where time travel isn't invented, because unless the measures are exactly correct then the universe is inconsistent. Whereas Novikov self-consistent time travel makes all universes with paradoxes inconsistent, Deutsch self-consistent time travel merely makes the vast majority of them inconsistent. It's a bit like quantum suicide: creating temporal paradoxes seems to work because it concentrates your measure in universes where it does work, but it also vastly reduces your total measure.
What do the webcam and screenshots help with?
Allow the AI to reconstruct your mind and memories more accurately and with less computational cost, hopefully; the brain scan and DNA alone probably won't give much fidelity. They're also fun from a self-tracking data analysis perspective, and they let you remember your past better.