The problem I intend to work through in this (hopefully short) post is that of moral considerability with regard to artificial intelligence systems. This is not a problem we should wait for advanced AI to confront, and it doesn't really hinge on "advanced" AI anyway. The question is "what gives an entity moral worth?" My general position is that current arguments center too much on consciousness (which is too loosely defined) and qualia (which are not testable, provable, or even guessable).

I'll start with Peter Singer's position that the capacity to feel pain is what gives an entity moral worth. Given his influence on the EA community, I imagine his view is not unpopular here. Singer uses the presence of a central nervous system to qualify some animals (but not clams, etc.) as deserving of consideration. This works fine for promoting animal welfare, but it's not hard to build a program that quantifies "pain" and avoids it just as we do. We obviously don't want to grant that program moral worth, so the criterion doesn't hold. Setting the bar too low isn't just inaccurate; it turns basically every form of utilitarianism against us humans, because it's cheaper and easier to help simulated entities than real ones.
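
To make this concrete, here is a minimal sketch of the kind of program I have in mind. All the names and pain values are hypothetical, invented purely for illustration:

```python
# A toy agent that quantifies "pain" and acts to avoid it.
# All names and numbers here are hypothetical, purely illustrative.

ACTIONS = {
    "touch_stove": 9.0,  # predicted "pain" for each available action
    "step_back": 0.5,
    "do_nothing": 1.0,
}

def choose_action(predicted_pain):
    """Pick the action with the lowest predicted pain."""
    return min(predicted_pain, key=predicted_pain.get)

print(choose_action(ACTIONS))  # -> step_back
```

Behaviorally, this satisfies the pain criterion: it quantifies pain and avoids it. Whatever extra ingredient we think it's missing is precisely what the criterion fails to specify.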

Some might say, "Yeah, sure, but if we can look at the system's code and see that this code block is really just weighted tendencies toward actions that promote some value x and away from actions that reduce x, then that's not true pain," as if there is something in consciousness that is not weighted tendencies. It's also worth noting that if something had weights but couldn't read its own weights, it would probably "think" of them as intuition. This objection is the Cartesian dualist position (that thought is not matter and numbers), but for our actions to have any effect on whether AI develops something that fits our definition of consciousness, dualism must be false. In other words, if the dualists are right, then whether it happens is outside our power. There are also many serious retorts to dualism itself, notably the position that "consciousness" may very well be simply the best way for unthinking matter to plan things out.
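
For what it's worth, "weighted tendencies" can be as simple as the following sketch (again, all names and weights are hypothetical). Note that the agent acts through its weights without ever reading them as data, which is the situation I'm gesturing at above:

```python
import math
import random

# Hypothetical weights: how strongly each action promotes the value x.
# The agent acts *through* these numbers; it never inspects them.
WEIGHTS = {"approach": 2.0, "wait": 0.3, "retreat": -1.5}

def act(weights):
    """Sample an action with probability proportional to exp(weight)."""
    actions = list(weights)
    scores = [math.exp(weights[a]) for a in actions]
    return random.choices(actions, weights=scores)[0]

print(act(WEIGHTS))  # usually "approach", occasionally "wait" or "retreat"
```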

The obvious response is "Well, sure, that program avoids pain, but it doesn't actually feel the feeling of pain, so that doesn't count." This is consciousness from qualia. Famous for its unprovability and the Problem of Other Minds, this criterion is frustrating because there is no way to get any information about it. My favorite, and from what I can tell the strongest, argument against solipsism is the "best answer" argument: because the people around you act like you, they are probably not animatronics, and if they are, then some other people must have set them up. So if something acts like you and doesn't spark when you throw water on it, it most likely has the same makeup you do. The problem remains that we can't detect qualia at all, and the best-answer argument stops working once we create bots specifically to act like we do. We can electrocute dead tissue, but we still can't tell whether anything is felt.

Most people think that this weird experience we all seem to share is what makes it not okay to kick each other's shins. As the field progresses, we will probably be able to create programs that pass bars set by higher-order theories of consciousness, by neural theories of consciousness with full brain emulations, or even by quantum theories of consciousness. Right now, we all just have this vague notion of many fluttering thoughts in our heads, a collection of game-save files, and a bounded set of actions and interests, and we call that ourselves. If we want our moral theories not to flip on us in a few years, when some prominent EA decides to put $1 million into cloud services to run a billion digital people living in bliss, we need to come up with a higher bar for consciousness.

Put ideas in the comments. Thanks.

13 comments

Do I have this right? You were an atheist, as an adult; then you became an evangelical Christian (!); and then you read HPMOR, found Less Wrong, stopped being an evangelical Christian (or did you?) and became a passionate rationalist?

Can I ask what beliefs, or lack thereof, you were raised with?

Yes, that is the correct sequence of events. I was raised in a Christian household, but the first belief I truly held for myself was atheism. I am no longer a Christian or theist in any way.

Given that you just wrote a whole post to say hi and share your background with everyone, I'm pretty confident you'll fit right in and won't have any problem with being too shy. Writing a post like this rather than just commenting is such a Less Wrong kind of thing to do, so I think you'll be right at home.

That's very kind of you, thank you. It means a lot.

I don't think starting a rationality dojo requires high intelligence. The existing CFAR techniques are out there, and you don't need to invent new ones to have a dojo. I would expect the biggest challenge is organizing people to come to the same place every week.

Also, how would one go about acquiring these CFAR techniques? Is attending a workshop mandatory? I don't quite have the discretionary funds for that. :P

https://www.lesswrong.com/sequences/qRxTKm7DAftSuTGvj is a write-up by one person that describes a lot of the CFAR techniques.

If you start a rationality dojo, you might also email CFAR and ask whether they have guidance (or whether you can get a PDF of their handbook).

CFAR sent me their handbook when I was running a rationality high-school class a few years ago (before I had attended a CFAR workshop). I wasn't able to make very much use of it without the classes, but it was still somewhat useful, and it shows that they are open to sending people material for teaching classes.

One of the hallmarks of a typical dojo is that the Sensei will demonstrate techniques, and show how they are supposed to look once you have mastered them.

Is it possible that this is an optional feature, if only for a rationality dojo?

It's useful to have someone who has mastered a technique, but it's not required. When you're in a good group, you can also work to learn a technique together.

It's also possible to have different people present techniques on different days.

Hi Senarin.

I am also new to Less Wrong and also a Christian. I didn't write an introductory post - I guess I'm not "less wrong" enough yet (I didn't actually want to comment at all, feeling I hadn't lurked enough). I don't think that there is any conflict between rationality and Christianity - and the writer of the gospel of John certainly didn't believe there to be.

For in the beginning was rationality. And rationality was with God and was God... and rationality became flesh and the world knew Him not.

Welcome! Please don't take the downvotes as a sign that you aren't welcome here. (They probably do indicate that things that look like proselytizing won't be well received, though.)

I think translating "λόγος" as "rationality" is a bit of a stretch. I don't know of any English translations that even render it as "reason", which is more defensible. I expect you're right that the authors of the New Testament didn't see any conflict between their beliefs and reason; people usually don't, whether such conflict exists or not; in any case, our epistemic situation isn't the same as theirs and it's possible that in the intervening ~2k years we've learned and/or forgotten things that make the most reasonable conclusion for us different from the most reasonable conclusion for them. (Examples of the sort of thing I mean: 1. The success of science over the last few centuries means that the proposition "everything that happens has a natural explanation" is more plausible for us than for them. 2. The author of John's gospel, or his sources, may have actually met Jesus, and perhaps something about doing so was informative and convincing in a way that merely reading about him isn't. 3. We know the history of Christianity since their time, which might make it more credible -- after all, it survived 2k years and became the world's dominant religion, which has to count for something -- or less credible -- after all, people have done no end of terrible things in its name, which makes it less likely that a benevolent god is looking on it with special approval. 4. We have different examples available to us of other religious movements and how they've developed; e.g., we might compare the early days of Christianity with those of something like Mormonism, and they might compare it with the Essenes.)

Hi, Motasaurus. I certainly hope you stick around! Don't let our disagreements drive you off.

However, on that note, I'm afraid I would have to disagree. While I think you can have "better than average" epistemology and still be a Christian, perhaps even be in the top 25%, I don't believe you can aspire to be a perfect Bayesian and still be a Christian.

I would respectfully point out that the Apostle John is hardly a neutral spectator in determining whether one can be both Christian and rational. Additionally, he certainly didn't have access to anywhere near our level of understanding of human cognition, science, and probability theory; to use an Eliezer illustration, the greatest physicists of his age couldn't have calculated the path of a falling apple.