LESSWRONG

Eli Tyre

Comments

My pitch for the AI Village
Eli Tyre · 4d · 40

But this story isn't canon to AI-2027, which is what makes it fan fiction.

Foom & Doom 1: “Brain in a box in a basement”
Eli Tyre · 5d · 20

because the cortex is incapable of learning conditioned responses; it's an uncontested fiefdom of the cerebellum

What? This isn't my understanding at all, and a quick check with an LLM also disputes this.

Don't Eat Honey
Eli Tyre · 5d* · 20

I don't think it is obvious that you have to multiply by some other number. 

I don't know how conscious experience works. Some views (such as Eliezer's) hold that it's binary: either a brain has the machinery to generate conscious experience or it doesn't. That there aren't gradations of consciousness where some brains are "more sentient" than others. This is not intuitive to me, and it's not my main guess. But it's on the table, given my state of knowledge.

Most moral theories, and moral folk theories, hold to the common-sense claim that "pain is bad, and extreme pain is extremely bad." There might be other things that are valuable or meaningful or bad. We don't need to buy into hedonistic utilitarianism wholesale to think that pain is bad.

Insofar as we care about reducing pain, and it might be that brains are either conscious or not, it might totally be the case that we should be "adding up the experience hours" when attempting to minimize pain.

And in particular, after we understand the details of the information processing involved in producing consciousness, we might think that weighting by neuron count is as dumb as weighting by "the thickness of the copper wires in the computer running an AGI." (Though I sure don't know, and neuron count seems like one reasonable guess amongst several.)

Don't Eat Honey
Eli Tyre · 5d · 125

Suppose that two dozen bees sting a human, and the human dies of anaphylaxis.  Is the majority of the tragedy in this scenario the deaths of the bees?

FYI, this isn't a good characterization of the view that I'm sympathetic to here.

The moral relevance of pain and the moral relevance of death are importantly different. The badness of pain is very simple, and doesn't have to have much relationship to higher-order functions relating to planning, goal-tracking, narrativizing, or relationships with others. The badness of death is tied up in all that.

I could totally believe that, at reflective equilibrium, I'll think that if I were to amputate the limb of a bee without anesthetic, the resulting pain is morally equivalent to that of amputating a human limb without anesthetic. But I would be surprised if I come to think that it's equally bad for a human to die and a bee to die. 

Don't Eat Honey
Eli Tyre · 5d* · 20

then somehow thinking that conditional on that the methodology in the RP welfare ranges is a reasonable choice of methodology (one that mostly ignores all mechanistic evidence about how brains actually work, for what seem to me extremely bad reasons)

Do you have a preferred writeup for the critique of these methods and how they ignore our evidence about brain mechanisms?

[Edit: though to clarify, it's not particularly cruxy to me. I hadn't heard of this report and it's not causal in my views here.]

Don't Eat Honey
Eli Tyre · 5d · 4-2

This comment is a longer and more articulate statement of the comment that I might have written. It gets my endorsement and agreement.

Namely, I don't think that high levels of confidence in any particular view about the "level of consciousness" or moral weight of particular animals are justified, and it especially seems incorrect to state that any particular view is obvious.

Further, it seems plausible to me that at reflective equilibrium, I would regard a pain-moment of an individual bee as approximately morally equivalent to that of an individual human.

Don't Eat Honey
Eli Tyre · 5d · 42

Exhibiting a pessimism bias (thinking, if they’re been exposed to new positive and negative stimuli at an equal rate, probably the next stimuli will be beneficial).

Is this supposed to be "harmful"? As worded, this sentence is confusing.

Don't Eat Honey
Eli Tyre · 5d · 30

This obviously isn't how this works.

I don't think it's obvious at all?

Something to Protect
Eli Tyre · 8d · 20

Not good, I'm afraid.

“Sharp Left Turn” discourse: An opinionated review
Eli Tyre · 11d · 64

Why do you dismiss the obvious hypothesis that “almost everyone” basically just doesn’t really think that factory farming is morally bad in any substantive way?

I confirm that I do dismiss this hypothesis on the basis of various pieces of evidence, from answers on surveys to the results of the Milgram experiment (though most people's views about who counts as a moral patient are definitely not a crux for my overall model here), but I would prefer not to get into it.

29 · Eli's shortform feed · 6y · 322
Center For AI Policy · 2y
Blame Avoidance · 2y
Hyperbolic Discounting · 2y
23 · Evolution did a surprising good job at aligning humans...to social status · 1y · 37
48 · On the lethality of biased human reward ratings · 2y · 10
14 · Smart Sessions - Finally a (kinda) window-centric session manager · 2y · 3
63 · Unpacking the dynamics of AGI conflict that suggest the necessity of a premptive pivotal act · 2y · 2
20 · Briefly thinking through some analogs of debate · 3y · 3
146 · Public beliefs vs. Private beliefs · 3y · 30
143 · Twitter thread on postrationalists · 3y · 32
22 · What are some good pieces on civilizational decay / civilizational collapse / weakening of societal fabric? · Q · 4y · 8
38 · What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · Q · 4y · 16
42 · I'm no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 4y · 29