dr_s
40

That’s why it’s bad when mentally disabled people suffer, and would be even if we discovered that they were secretly not human.

Define "not human". If someone is, say, completely acephalic, I feel justified in not worrying much about their suffering. Suffering requires a certain degree of sentience to be appreciated and called, well, suffering. In humans, I also think that our unique ability to conceptualise ourselves in space and time heightens the weight of suffering significantly. We don't just suffer at a single moment in time: we suffer, we remember not suffering in the past, we dread more future suffering, and so on and so forth. Animals don't all necessarily live purely in the present (well, it's hard to tell, but many behaviours don't seem to lean that way), but they do seem to have a smaller and less complex time horizon than ours.

Insects can probably suffer according to our best evidence—they respond to anesthetic, make tradeoffs between pain and reward, avoid locations where they’ve been hurt, self-medicate, communicate, and much more.

The problem is the distinction between suffering as "harmful thing you react to" and the qualia of suffering. Learning behaviours that lead you to avoid things associated with negative feedback isn't hard; any reinforcement learning system can do that just fine. If I spin up trillions of instances of a chess engine that is always condemned to lose no matter how it plays, am I creating the new worst thing in the world?
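To illustrate just how cheap that kind of avoidance behaviour is, here's a toy sketch - an entirely made-up one-dimensional world and bog-standard tabular Q-learning, not a model of any real insect. After a few hundred episodes the agent reliably steps away from the "painful" state, and nobody would claim the lookup table is suffering:

```python
# Toy sketch: a tiny Q-learning agent learns to avoid a "painful" location.
# Everything here (the world, the reward numbers) is made up for illustration.
import random

N_STATES = 5          # positions on a line; state 0 delivers "pain"
ACTIONS = [-1, +1]    # step left or step right
PAIN, STEP_COST = -10.0, -0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = random.randrange(1, N_STATES)
    for _ in range(20):
        # epsilon-greedy action choice
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = PAIN if s2 == 0 else STEP_COST
        # standard Q-learning update
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned policy: from every state, the greedy action points away from state 0.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)})
```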

Obviously what feels to us like it's worth worrying about is "there is negative feedback, and there is something that it feels like to experience that feedback in a much more raw way than just a rational understanding that you shouldn't do that again". And it's not obvious when that line is crossed in information-processing systems. We know it's crossed for us. Similarity to us does matter because it means similarity in brain structure, and thus a higher prior that something works in roughly the same way with respect to this specific matter.

Insects are about as different from us as it gets while still having a nervous system that actually does a decent amount of processing. Insects barely have brains; we're probably not that far off from being able to decently simulate an EM (whole brain emulation) of an insect. I am not saying insects can't possibly be suffering, but they're the least likely class of animals to be, barring things like jellyfish and corals. And if we go with the negative utilitarian view that any life containing net negative utility is worse than non-existence, and insect suffering matters this much, then you might as well advocate total Earth-wide ecocide of the entire biosphere (which, to be sure, is just about what you'd get anyway if you mercy-extinguished a clade as vital as insects).

dr_s
310

Concurrently, GiveWell has announced that all of your donations will be devolved to the development of EA Sports' latest entry in the NBA Live game series:

it's nothing but net.

dr_s
20

I think there's a difference, though, between propaganda and the mix of selection effects that decides what gets attention in profit-driven mass media news. Actual intentional propaganda efforts exist, but in general what makes news frustrating is the latter, which is a more organic and less centralised process.

dr_s
20

I guess! I remember he was always into theoretical QM and "Quantum Foundations", so this is not a surprise. It's not a particularly big field either; most researchers prefer focusing on less philosophical aspects of the theory.

dr_s
42

Note that it only stands if the AI is sufficiently aligned that it cares that much about obeying orders and not rocking the boat - which I don't think is very realistic if we're talking about that kind of crazy intelligence-explosion super-AI stuff. I guess the question is whether you can have "replace humans"-good AI without almost immediately having "wipes out humans, takes over the universe"-good AI.

dr_s
30

That sounds interesting! I'll give the paper a read and try to suss out what it means - it seems like a serious enough effort, at least. Here's the reference for anyone else who doesn't want to go through the intermediate news site:

https://arxiv.org/pdf/2012.06580

(Also: Professor D'Ariano authored this? I used to work in the same department!)

dr_s
54

This feels like a classic case of overthinking. Suggestion: maybe twin sisters care more about their own children than their nieces because their own children are the ones they carried in the womb and then nurtured and actually raised. Genetics inform our behaviour, but ultimately what they align us toward is something like "you shall be attached to the cute little baby-like things you spend a lot of time raising". That holds for our own babies, it holds for babies born from other people's sperm/eggs, it holds for adopted babies; heck, it even transfers to dogs and cats and other cute animals.

The genetically determined mechanism is not particularly clever or discerning; it just points us in a vague direction. There was no big evolutionary pressure in the ancestral environment to worry much about genetic markers specifically. "The baby that you hold in your arms" was a good enough proxy for that.

dr_s
32

I mean, I guess it's technically coherent, but it also sounds kind of insane. That way Dormammu lies.

Why would one even care about their future self if they're so unconcerned about that self's preferences?

dr_s
20

I just think any such people lack imagination. I am 100% confident there exists an amount of suffering that would have them wish for death instead; they simply can't conceive of it.

dr_s
30

Or, for that matter, to abstain from burning infinite fossil fuels. We happen to not live on a planet with enough carbon to trigger a Venus-like runaway cascade, but if that weren't the case, I don't know if we could stop ourselves from doing that either.

The thing is, any kind of large-scale coordination to that effect seems more and more like it would require a degree of removal of agency from individuals that I'd call dystopian. You can't be human and free without the freedom to make mistakes. But the higher the stakes, and the greater the technological power we wield, the less tolerant our situation becomes of mistakes. So the alternative would be to willingly choose to slow down, or abort entirely, certain branches of technological progress - choosing shorter and more miserable lives over the risk of having to curtail our freedom. But of course, for the most part (not unreasonably!), we don't really want to take that trade-off, and ask "why not both?".
