Outlive: A Critical Review
Outlive: The Science & Art of Longevity by Peter Attia (with Bill Gifford[1]) gives Attia's prescription for how to live longer and stay healthy into old age. In this post, I critically review some of the book's scientific claims that stood out to me. This is not a comprehensive review. I didn't review assertions that I was pretty sure were true (ex: VO2 max improves longevity), or that were hard for me to evaluate (ex: the mechanics of how LDL cholesterol functions in the body), or that I didn't care about (ex: sleep deprivation impairs one's ability to identify facial expressions).

First, some general notes:

* I have no expertise on any of the subjects in this post. I evaluated claims by doing shallow readings of the relevant scientific literature, especially meta-analyses.
* There is a spectrum between two ways of being wrong, ranging from "pop science book pushes a flashy attention-grabbing thesis with little regard for truth" to "careful truth-seeking author isn't infallible". Outlive makes it 75% of the way to the latter.
* If I wrote a book that covered this many entirely different scientific fields, I would get a lot more things wrong than Outlive did. (I probably get a lot of things wrong in this post.)
* When making my assessments, I give numeric credences and also use terms such as "true" and "likely true". The numbers give my all-things-considered subjective credences, and the qualitative terms give my interpretation of the strength of the empirical evidence. For example, if the scientific evidence suggests that a claim is 75% likely and I understand the evidence well, then I rate the claim as "likely true". If I only read the abstract of a single meta-analysis, and the abstract unequivocally supports the claim but I'm only 75% sure that the meta-analysis can be trusted, then I rate it as "true". Both claims receive a 75% credence.

Now let's have a look at some claims from Outlive, broken down into four categories: disease, exercise, nutrition, and sleep.

Disease