Bakkot comments on Welcome to Less Wrong! (2012) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Upvoted.
Huh. I admit, this was not the response I was expecting.
What about fish? I'm pretty sure many fish are significantly more functional than one-month-old humans, possibly up to two or three months. (Younger than that I don't think babies exhibit the ability to anticipate things. Haven't actually looked this up anywhere reputable, though.) Also, separately, would you say that babies are around the lowest level of functioning that you can possess and still qualify as a person?
Trying to narrow down where we differ here: what signs of being-a-person does a one-month-old infant display that, say, Cleverbot does not?
Frequently. It's scary. But if I were in a body in which intelligence was not easy to express, and I was killed by someone who didn't think I was sufficiently functional to be a person, that would be a tragic accident, not a moral wrong.
About age four, possibly a year or two earlier. I'm reasonably confident I had introspection at age four; I don't think I did much before that. I find myself completely unable to empathize with a 'me' lacking introspection.
I am afraid that this might come off as condescending; know that none is intended.
I really like that in this community, and in this discussion in particular, this question can be asked and answered honestly and seriously. Thank you.
(Data point: I would not have asked if I had known you consider wolves to be people.)
OK. So the point of this analogy is that newborns seem a lot like the script described, on the compilation step. Yes, they're going to develop advanced, functioning behaviors eventually, but no, they don't have them yet. They're just developing the infrastructure which will eventually support those behaviors.
Yes. (Holds outside of Japan as well.) It is, arguably, maladaptive. But it's certainly not immoral, no?
Admittedly the analogy is poor. You're right to point that out, and I'm not going to try to support it. However, thanks to the ensuing discussion, I know the question I actually want to ask: do you think behaviors are immoral if and only if they're maladaptive?
I don't know enough about them. Given that they're so different from us in terms of gross biology, I imagine it's often going to be quite difficult to distinguish functioning from instinct. That said, this:
http://news.bbc.co.uk/1/hi/england/west_yorkshire/3189941.stm
says that scientists observed some of them using tools, and that does seem like a distinctly person-like thing to do.
Yes.
Shared attention, recognition, prediction, bonding -
The legal definition of an accident is an unforeseeable event. I don't agree with that entirely because, well, everything's foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents from which they foresee a high probability-adjusted harm? (I.e., the potential harm weighted by the probability they foresee of the event occurring.)
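The "probability-adjusted harm" above is just an expected-value calculation. A minimal sketch, with made-up illustrative numbers (none of these figures come from the discussion):

```python
def probability_adjusted_harm(probability, harm):
    """Expected harm from a foreseen accident: probability of the event
    times the harm if it occurs. Both inputs are hypothetical values
    chosen purely for illustration."""
    return probability * harm

# A 1-in-1000 chance of a harm rated 500 carries more expected harm
# than a 1-in-10 chance of a harm rated 2.
rare_but_severe = probability_adjusted_harm(0.001, 500)
common_but_mild = probability_adjusted_harm(0.1, 2)
print(rare_but_severe, common_but_mild)
```

On this framing, the duty question becomes: above what expected-harm threshold does failing to avoid a foreseeable accident become a moral wrong?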
The thought here being that, if there's much room for doubt, there's so much suffering involved in killing and eating animals that we shouldn't do it even if we only argue ourselves to some low probability of their being people.
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
I agree, if it doesn't have the capabilities that will make it a person there's no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven't killed a human.
No, fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we're not rational fitness maximisers; we just tend that way on the more readily apparent issues.