wilkox100

Katelyn Jetelina has been providing some useful information on this. Her conclusion at this point seems to be 'more data needed'.

wilkox85

The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate but it is hard to write a NYT article (there are limits on how many receipts you can put in an article and some of the sources may have been off the record).

I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard.

Cade correctly informed the readers that Scott is aligned with Murray on race and IQ.

He really didn’t. Firstly, in the literal sense that Metz carefully avoided making this claim (he stated that Scott aligned himself with Murray, and that Murray holds views on race and IQ, but not that Scott aligns himself with Murray on these views). Secondly, and more importantly, even if I accept the implied claim I still don’t know what Scott supposedly believes about race and IQ. I don’t know what ‘is aligned with Murray on race and IQ’ actually means beyond connotatively ‘is racist’. If this paragraph of Metz’s article was intended to be informative (it was not), I am not informed.

wilkox2338

It seems like you think what Metz wrote was acceptable because it all adds up to presenting the truth in the end, even if the way it was presented was 'unconvincing' and the evidence 'embarassing[ly]' weak. I don't buy the principle that 'bad epistemology is fine if the outcome is true knowledge', and I also don't buy that this happened in this particular case, nor that this is what Metz intended.

If Metz's goal was to inform his readers about Scott's position, he failed. He didn't give any facts other than that Scott 'aligned himself with' and quoted somebody who holds a politically unacceptable view. The majority of readers will glean from this nothing but a vague association between Scott and racism, as the author intended. More sophisticated readers will notice what Metz is doing, and assume that if there was substantial evidence that Scott held an unpalatable view Metz would have gladly published that instead of resorting to an oblique smear by association. Nobody ends up better informed about what Scott actually believes.

I think trevor is right to invoke the quokka analogy. We rationalists are tying ourselves in knots in a long comment thread debating whether actually, technically, strictly, Metz was misleading. Meanwhile, Metz never cared about this in the first place, and continues to enjoy a successful career employing tabloid rhetorical tricks.

wilkox10

The section on ‘How do you do it?’ looks like a generalised version of John Platt's Strong Inference, a method of doing science that he believed ‘makes for rapid and powerful progress’. The essence of Strong Inference is to think carefully about a scientific question (the goal) to identify the main competing hypotheses that have yet to be discriminated between (the blockers), and devise and perform experiment(s) that rapidly discriminate between them (taking responsibility to remove the blockers and actually perform the next step).

Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly:
1) Devising alternative hypotheses;
2) Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
3) Carrying out the experiment so as to get a clean result;
1') Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain; and so on.

He also favoured Actually Thinking, as opposed to scientific busywork:

We speak piously of taking measurements and making small studies that will “add another brick to the temple of science.” Most such bricks just lie around the brickyard. Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.

wilkox32

And, part of the point here is "it is very hard to talk about this kind of thing". And I think that if the response to this post is a bunch of "gotcha! You said this comment was bad in one particular way, but it's actually bad in an interestingly different way", that kinda feels like it proves Elizabeth right?

This seems like a self-fulfilling prophecy. If I wrote a post that said:

It's common for people on LessWrong to accuse others of misquoting them. For example, just the other day, Elizabeth said:

wilkox is always misquoting me! He claimed that I said the moon is made of rubber, when of course I actually believe it is made of cheese.

and philh said:

I wish wilkox would stop attributing made-up positions to me. He quoted me as saying that the sky is blue. I'm a very well-documented theskyisgreenist.

The responses to that post would quite likely provide evidence in favour of my central claim. But this doesn't mean that the evidence I provided was sound, or that it shouldn't be open to criticism.

wilkox2012

For several of the examples you give, including my own comments, your description of what was said seems to misrepresent the source text.

Active suppression of inconvenient questions: Martín Soto

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that's a made-up problem.

This is not a charitable or even plausible description of what Martín wrote, and Martín has described this as a 'hyperbolic' misrepresentation of their position. There is nowhere in the source comment thread that Martín claims or implies anything resembling the position that naïve veganism is 'made-up'. The closest they come is to express that naïve transitions to veganism are not common in their personal experience ('I was very surprised to hear those anecdotal stories of naive transitions, because in my anecdotal experience across many different vegan and animalist spaces, serious talk about nutrition, and a constant reminder to put health first, has been an ever-present norm.') Otherwise, they seem to take the idea of naïve transitions seriously while considering the effects of 'signaling [sic] out veganism' for discussion: 'To the extent the naive transition accounts are representative of what's going on...', 'seems to me to be one of the central causes of these naive transitions...', '...the single symptom of naive vegan transitions...'.

(Martín also objected to other ways in which they believe you misrepresented their position, and Slapstick agreed. I found it harder to evaluate whether they were misrepresented in these other ways, because like Stephen Bennett I found it hard to understand Martín's position in detail.)

Active suppression of inconvenient questions: 'midlist EA'

But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim- which is exactly what you'd expect if the health claims were true but inconvenient. I don't know what this author actually believes, but I do know focusing on the consequences when the facts are in question is not truthseeking.

The author is clear about what they actually believe. They say that the claims that plant foods are poisonous or responsible for Western diseases are 'based on dubious evidence' and are 'dubious health claims'. They then make an argument proceeding from this: that these dubious claims increase the consumption of animal-based foods, which they believe to be unethical, with the evidence for animal suffering being much stronger than the evidence for the 'dubious health claims'.

You may disagree with their assessment of the health claims about plant foods, and indeed they didn't examine any evidence for or against these claims in the quoted post. This doesn't change the fact that the source quotation doesn't fit the pattern you describe. The author clearly does not believe that the health claims about plant foods are 'true but inconvenient', but that they are 'dubious'. Their focus on the consequences of these health claims is not an attempt to 'actively suppress inconvenient questions', but to express what they believe to be true.

Active suppression of inconvenient questions: Rockwell

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient.

Rockwell expresses, in passing, a broad concern with your posts ('...why I find Elizabeth's posts so troubling...'), although as they don't go into any further detail it's not clear if they think your 'beliefs are wrong' or that they find your posts troubling for some other reason. It's reasonable to criticise this as vague negativity without any argument or details to support it. However, it cannot serve as an example of 'active suppression of an inconvenient question' because it does not seem to engage with any question at all, and there's certainly nowhere in the few words Rockwell wrote on 'Elizabeth's posts' where they express or emphasise 'the claim that [your] beliefs are wrong, not just inconvenient'. (This source could work as an example of 'strong implications not defended').

Active suppression of inconvenient questions: wilkox[1]

...the top comment says that vegan advocacy is fine because it's no worse than fast food or breakfast cereal ads...If I heard an ally described our shared movement as no worse than McDonalds, I would injure myself in my haste to repudiate them.

My comment does not claim 'that vegan advocacy is fine because it's no worse than fast food or breakfast cereal ads', and does not describe veganism or vegan advocacy as 'no worse than McDonalds'. It sets up a hypothetical scenario ('Let's suppose...') in which vegan advocates do the extreme opposite of what you recommend in the conclusions of the 'My cruxes' section of the Change My Mind post, then claims that even this hypothetical, extreme version of vegan advocacy would be no worse than the current discourse around diet and health in general. This was to illustrate my claim that health harms from misinformation are 'not a problem specific to veganism', nor one where 'veganism in particular is likely to be causing significant health harms'.

Had I actually compared McDonalds to real-world vegan advocacy rather than this hypothetical worst-case vegan advocacy, I would have said McDonalds is much worse. You know this, because you asked me and I told you. (This also doesn't seem to be an example of 'active suppression of inconvenient questions'.)

Frame control, etc.: wilkox

Over a very long exchange I attempt to nail down his position:

  • Does he think micronutrient deficiencies don't exist? No, he agrees they do.
  • Does he think that they can't cause health issues? No, he agrees they do.

This did not happen. You did not ask or otherwise attempt to nail down whether I believe micronutrient deficiencies exist, and I gave my position on that in the opening comment ('Veganism is a known risk factor for some nutrient deficiencies...'). Likewise, you did not ask or attempt to nail down whether I believe micronutrient deficiencies can cause health issues, and I gave my position on that in the opening comment ('Nutrient deficiencies are common and can cause anything ranging from no symptoms to vague symptoms to life-threatening diseases').

  • Does he think this just doesn't happen very often, or is always caught? No, if anything he thinks the Faunalytics underestimates the veg*n attrition due to medical issues.

You did ask me what I thought about the Faunalytics data ('Do you disagree with their data...or not consider that important...?').

So what exactly does he disagree with me on?

This is answered by the opening sentences of my first comment: 'I feel like I disagree with this post, despite broadly agreeing with your cruxes', because I interpreted your post as making 'an implicit claim' that there are 'significant health harms' of veganism beyond the well-known nutritional deficiencies. I went on to ask whether you actually were making this claim: 'Beyond these well-known issues, is there any reason to expect veganism in particular to cause any health harms worth spending time worrying about?' Over two exchanges on the 'importance' of nutrient deficiencies in veganism, I asked again and then again whether you believe that there are health harms of veganism that are more serious and/or less well-known than nutrient deficiencies, and you clarified that you do not, and provided some useful context that helped me to understand why you wrote the post the way you did.

My account of the conversation is that I misread an implicit claim into your post, and you clarified what you were actually claiming and provided context that helped me to understand why the post had been written in the way it was. We did identify a disagreement over the 'importance' of nutrient deficiencies in veganism, but this also seemed explicit and legible. It's hard to construe this as an example where the nature of the disagreement was unclear, or otherwise of 'nailing jello to the wall'.

Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it's fine because if people get tired they can go to a doctor

I did not claim that fatigue due to B12 or iron deficiencies, or any other health issue secondary to veganism, is 'fine because if people get tired they can go to a doctor'. I claimed that to the extent that people don't see a doctor because of these symptoms, the health harms of veganism are unlikely to be their most important medical problem, because the symptoms are 'minor enough that they can't be bothered', they 'generally don't seek medical help when they are seriously unwell, in which case the risk from something like B12 deficiency is negligible compared to e.g. the risk of an untreated heart attack', or they 'don't have good access to medical care...[in which case] veganism is unlikely to be their most important health concern'. I did not say that every vegan who has symptoms due to nutritional deficiencies can or will go to a doctor (I explicitly said the opposite), nor that this situation is 'fine'.

But it's irrelevant when the conversation is "can we count on veganism-induced fatigue being caught?"

'Can we count on veganism-induced fatigue being caught?' is not a question raised in my original comment, nor in Lukas Finnveden's reply. I claimed that it would not always be caught, and gave some reasons why it might not be caught (symptoms too minor to bother seeing a doctor, generally avoid seeking medical care for major issues, poor access to medical care). Lukas Finnveden's comment added reasons that people with significant symptoms may not seek medical care: they might not notice issues that are nonetheless significant to them, or they might have executive function problems that create a barrier to accessing medical care. There's nowhere in our brief discussion where 'can we count on veganism-induced fatigue being caught?' is under debate.

Bad sources, badly handled: wilkox

Wilkox's comment on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn't happened at the time of linking).

I did not 'eventually agree' to these points, and we did not discuss them at all in the exchange. In my first comment, I said 'Many vegans, including myself, will routinely get blood tests to monitor for these deficiencies. If detected, they can be treated with diet changes, fortified foods, oral supplementation, or intramuscular/intravenous supplementation.'

  1. ^

    I am not an EA, have only passing familiarity with the EA movement, and have never knowingly met an EA in real life. I don't think anything I have written can stand as an example of 'EA vegan advocacy', and actual EAs might reasonably object to being tarred with the same brush. ↩︎

wilkox20

This sounds like you're saying "I won't prescribe B12 until my patient gives up oreos" or even "I won't prescribe B12 until everyone gives up oreos", which would be an awful way to treat people.[1]

[...]

You probably mean "I don't think Elizabeth/anyone should spend time on veganism's problems, when metabolic issues are doing so much more aggregate harm."

 

I wouldn’t say either of these things. A quick and easy treatment like B12 replacement is not mutually exclusive with a long-term and difficult treatment like diet modification. (This is not an abstract question for me; prescribing a statin and counselling on lifestyle changes are both things I do several times a week, and of the two, the script is orders of magnitude easier for both me and the patient, but we’ll usually do both in parallel when treating dyslipidaemia.)

As I said earlier in the thread, I’m all in favour of you or anybody else spending time on making people aware of the risk of nutrient deficiencies associated with veganism and what to do about them. (Again, this is not an abstract issue to me; I routinely discuss, screen for, and treat nutrient deficiencies with vegan and vegetarian patients.) I do recognise that you’ve had some bad experiences doing this, which is unfair.

  1. ^

    I’m not sure if you chose this example intentionally, but for what it’s worth: Oreos are vegan.

wilkox20

I absolutely agree. McDonalds and the other demons of the Western Diet cause much more harm, both in absolute terms and per capita. That was really my point; within the class of 'health misinformation and disinformation that causes harm', furphies about vegan nutrition are a comparatively minor problem.

wilkox53

You don't just have a level of access, you have a type of access. Your access to your own mind isn't like looking at a brain scan.

From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn't like my indirect access to other people's minds; to understand another person's mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn't like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it's almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I'm not well-equipped to model them.

Whether you call that a ‘level’ or ‘type’ of access, I'm still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness.

The Mary's Room thought experiment brings it out. Mary has complete access to someone else's mental state, from the outside, but still doesn't experience it from the inside.

Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain's visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visualise something like a red square, a person with the Simulation Centre mutation could examine sufficiently detailed blueprints for a building and have a vivid photorealistic visual experience of looking at it, indistinguishable from reality.

Poor Mary, locked in her black-and-white room, doesn't have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary's sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don't reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.

In other words: the Mary's Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains (‘we don't have Simulation Centres’) rather than pointing to any fundamental conceptual mystery.

  1. ^

    This might just be a matter of degree. Some people apparently can do things like visualise a red square, and it seems reasonable that a person who had seen shapes of almost every colour before but had never happened to see a red square could nevertheless visualise one if given the concept.

wilkox10

Apologies for the repetition, but I'm going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:

  1. The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don't currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael's post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
  2. You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I've never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don't believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I'm talking about the scope of the physical system to be explained. When you talk about it, you're talking about the location(s) of the conceptual mystery.

What you call "model" here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn't provide a difference between the two. Explaining the neural correlate is of course just as "easy" as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn't explain the belief/experience in question in terms of this correlate.

As a Camp 1 person, I don't think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don't think there is a Hard Problem.

It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.

I take Dennett's view on p-zombies, i.e. they are not conceivable.

So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn't explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.

In the Camp 1 view, once you've explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.
