because the cortex is incapable of learning conditioned responses, it's an uncontested fiefdom of the cerebellum
What? This isn't my understanding at all, and a quick check with an LLM also disputes this.
I don't think it is obvious that you have to multiply by some other number.
I don't know how conscious experience works. Some views (such as Eliezer's) hold that it's binary: either a brain has the machinery to generate conscious experience or it doesn't. That there aren't gradations of consciousness where some brains are "more sentient" than others. This is not intuitive to me, and it's not my main guess. But it's on the table, given my state of knowledge.
Most moral theories, and moral folk theories, hold to the common-sense claim that "pain is bad, and extreme pain is extremely bad." There might be other things that are valuable or meaningful or bad. We don't need to buy into hedonistic utilitarianism wholesale to think that pain is bad.
Insofar as we care about reducing pain and it might be that brains are either conscious or not, it might totally be the case that we should be "adding up the experience hours", when attempting to minimize pain.
And in particular, after we understand the details of the information processing involved in producing consciousness, we might think that weighting by neuron count is as dumb as weighting by "the thickness of the copper wires in the computer running an AGI." (Though I sure don't know, and neuron count seems like one reasonable guess amongst several.)
Suppose that two dozen bees sting a human, and the human dies of anaphylaxis. Is the majority of the tragedy in this scenario the deaths of the bees?
FYI, this isn't a good characterization of the view that I'm sympathetic to here.
The moral relevance of pain and the moral relevance of death are importantly different. The badness of pain is very simple, and doesn't have to have much relationship to higher-order functions relating to planning, goal-tracking, or narrativizing, or to relationships with others. The badness of death is tied up in all that.
I could totally believe that, at reflective equilibrium, I'll think that if I were to amputate the limb of a bee without anesthetic, the resulting pain is morally equivalent to that of amputating a human limb without anesthetic. But I would be surprised if I come to think that it's equally bad for a human to die and a bee to die.
then somehow thinking that, conditional on that, the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons)
Do you have a preferred writeup for the critique of these methods and how they ignore our evidence about brain mechanisms?
[Edit: though to clarify, it's not particularly cruxy to me. I hadn't heard of this report and it's not causal in my views here.]
This comment is a longer and more articulate statement of the comment that I might have written. It gets my endorsement and agreement.
Namely, I don't think that high levels of confidence in any particular view about the "level of consciousness" or moral weight of particular animals are justified, and it especially seems incorrect to state that any particular view is obvious.
Further, it seems plausible to me that at reflective equilibrium, I would regard a pain-moment of an individual bee as approximately morally equivalent to a pain-moment of an individual human.
Exhibiting a pessimism bias (thinking, if they've been exposed to new positive and negative stimuli at an equal rate, probably the next stimuli will be beneficial).
Is this supposed to be "harmful"? As worded, this sentence is confusing.
This obviously isn't how this works.
I don't think it's obvious at all?
Why do you dismiss the obvious hypothesis that “almost everyone” basically just doesn’t really think that factory farming is morally bad in any substantive way?
I confirm that I do dismiss this hypothesis on the basis of various pieces of evidence, from answers on surveys to the results of the Milgram experiment (though most people's views about who counts as a moral patient are definitely not a crux for my overall model here), but I would prefer not to get into it.
But this story isn't canon to AI-2027, which is what makes it fan fiction.