
Cross-posted on By Way of Contradiction

In my morals, at least up until recently, one of the most obvious universal rights was freedom of thought. Agents should be allowed to think whatever they want, and should not be discouraged from doing so. This feels like a terminal value to me, but it is also instrumentally useful. Freedom of thought encourages agents to be rational and search for the truth. If you are punished for believing something true, you might not want to search for truth. This could slow science and hurt everyone. On the other hand, religions often discourage freedom of thought, and this is a major reason for my moral problems with religions. It is not just that religions are wrong; everyone is wrong about lots of stuff. It is that many religious beliefs restrict freedom of thought by punishing doubters with ostracism or eternal suffering. I recognize that there are some "religions" which do not exhibit this flaw (as much).

Recently, my tune has changed. There are two things which have caused me to question the universality of the virtue of freedom of thought:

1) Some truths can hurt society

Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If we as a modern society were to choose any topic about which restricting thought might be very useful, UFAI seems like a good choice. Maybe freedom of thought on this issue is a necessary casualty to avoid a much worse conclusion.

2) Simulations

This is the main point I want to talk about. If we get to the point where minds can simulate other minds, then we run into major issues. Should one mind be allowed to simulate another mind and torture it? It seems like the answer should be no, but this rule seems very hard to enforce without sacrificing not only free thought, but what would seem like the most basic right to privacy. Even today, people can have preferences over the thoughts of other people, but our intuition tells us that the one who is doing the thinking should get the final say. If the mind is simulating another mind, shouldn't the simulated mind also have rights? What makes advanced minds simulating torture so much worse than a human today thinking about torture? (Or even worse, thinking about 3^^^^3 people with dust specks in their eyes. (That was a joke; I know we can't actually think about 3^^^^3 people.))

The first thing seems like a possible practical concern, but it does not bother me nearly as much as the second one. The first seems like it is just an example of the basic right of freedom of thought contradicting another basic right of safety. However, the second thing confuses me. It makes me wonder whether or not I should treat freedom of thought as a virtue as much as I currently do. I am also genuinely not sure whether or not I believe that advanced minds should not be free to do whatever they want to simulations in their own minds. I think they should not, but I am not sure about this, and I do not know if this restriction should be extended to humans.

What do you think? What is your view on the morality of drawing the line between the rights of a simulator and the rights of a simulatee? Do simulations within human minds have any rights at all? What conditions (if any) would make you think rights should be given to simulations within human minds?

Some truths can hurt society

That's a very common refrain through history. Especially if you're a member of the local elite benefiting from the status quo and the truth is likely to upset it.

Note that human moral intuitions did not evolve to work well in transhumanist fantasies and dystopias commonly discussed here (that's one reason why an AGI with human morality built-in would almost necessarily turn unfriendly). Thus before you can pronounce that "torture is bad", you have to carefully define the terms "torture", "is" and "bad" to make sense in your imagined setting. An earnest attempt to do that is likely to lead you deeper into metaethics, epistemology and AI research. Until then, your question is meaningless.

1) Some truths can hurt society

Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If we as a modern society were to choose any topic about which restricting thought might be very useful, UFAI seems like a good choice. Maybe freedom of thought on this issue is a necessary casualty to avoid a much worse conclusion.

I'm not sure why freedom of thought is in principle a bad thing in this case. I can think about whatever horrible thing I want, but unless I act upon my thoughts, I see no danger in it. I'm pretty sure that some people get off on rape fantasies, or seriously consider murdering someone, but if they don't act upon their thoughts I see no problem with it. Then of course thinking usually does have consequences: depressed people are often encouraged not to daydream and to focus on concrete tasks, for example. But this kind of distinction applies in many kinds of situations: while in principle having the fridge stocked with wine wouldn't hurt a former alcoholic, in practice it's much better if they stay as far away as possible from temptation.


Yes, an FAI would put limits on what you could think and do with your own computing power. No, you as a human do not have the moral authority to shut down your rival's thought processes.

What makes advanced minds simulating torture so much worse than a human today thinking about torture?

Do you think someone is suffering due to people thinking about torture?

No, but there is a very high risk if I am wrong.

Aren't there innumerable possibilities that are very high risk if you are wrong? How do you differentiate between those you dismiss out of hand and those you ponder to the point of bringing it up for discussion?

I'm not of the intellect of the average LWer, but I'm completely baffled as to how thinking about torture can directly affect someone.

I have more questions, if you are willing:

  • Do you have to think of a specific person?
  • Do you have to think very specific thoughts?
  • Living or dead? A future person?

Most of us probably think that an AI simulating a mind and torturing it is bad. We are not going to draw a hard line between that and what humans can do. Instead, it seems to me that a human simulating a (very simple) mind and torturing it is also bad, just very, very much less bad. The question is how much less bad is it? The answer is we don't know. It is probably a lot less bad than actually torturing an insect, but how do we know that? It seems we need some sort of metric that measures how much we care about minds. That metric probably measures not just how complex the minds are, but also how similar they are to humans.
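To make this concrete, here is a minimal sketch of what such a metric might look like, assuming (purely for illustration) that "complexity" and "human-similarity" can each be scored between 0 and 1 and that moral weight is simply their product; the function name, the scales, and the functional form are all hypothetical:

```python
def moral_weight(complexity: float, human_similarity: float) -> float:
    """Toy metric for how much we care about a (possibly simulated) mind.

    Both inputs are assumed to be normalized to [0, 1]. Multiplying them is
    one arbitrary way of requiring that both complexity and similarity matter.
    """
    if not (0.0 <= complexity <= 1.0 and 0.0 <= human_similarity <= 1.0):
        raise ValueError("inputs must be normalized to [0, 1]")
    return complexity * human_similarity

# Illustrative numbers only: a vivid human-level simulation vs. a vague,
# fleeting imagining of a very simple mind.
print(moral_weight(0.9, 0.95))  # ~0.855: close to a full person
print(moral_weight(0.01, 0.5))  # 0.005: much less bad, but not zero
```

The point of the sketch is only that the answer comes out on a continuum rather than as a hard line between AIs and humans.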

I think the probability is small that after sufficient reflection I will think simulating torture in my mind is a huge evil, but maybe it is not negligible.

Again, I don't actually believe that humans simulating torture is bad, but I can play devil's advocate and answer your questions.

You don't have to think of a specific person who exists outside your mind. You just have part of your brain (not necessarily a conscious part) put itself in the shoes of the tortured person. Some part of your brain has to feel the pain of the torture. Maybe this is not possible if you have not gone through the pain of torture in the past. I do not think there is harm in thinking about someone being tortured, if you do not simulate the tortured person's feelings. However, it might be that our brains automatically think about our approximation of what the tortured person is feeling.

Thanks for the additional information. I'm going to trust that you and everyone else who considers this a remote possibility is just way smarter than me. The whole scenario is so abstract and odd that it makes me dizzy...

It depends on how broadly you define 'someone' or a person. If you say that only things that have souls are persons, then the dream characters that appear in your dreams aren't persons.

On LW the consensus is that there are no souls, and therefore that's not a criterion by which you can decide whether an entity is a person. Another straightforward way to define a person is as an entity with ownership of a physical brain. That definition works for a lot of cases but breaks down if you want to see uploaded brains as persons.

This parallels the discussion of whether animals should be considered persons. We agree that it's bad to torture animals, but that it's not as bad as torturing humans.

Dream characters could have a similar status to animals. The kind of entity you conjure up when you think about torturing another person is probably a bit less of a person than a dream character, but it's not necessarily 0% a person.

1) In the first case, it may raise the general risks to think about AI, but there's a perfectly good Schelling point of 'don't implement the AI'; if you could detect the thoughts, you ought to be able to detect the implementation. In the second case, you don't need a rule specifically about thinking. You just need a rule against torture. If someone's torture method involves only thinking, then, well, that could be an illegal line of thought without having to make laws about thinking.

2) In general, one reason thought crimes are bad is because we don't have strong control over what we think of. If good enough mind-reading is implemented, I suspect that people will have a greater degree of control over what they think.

3) Another reason thought crimes are bad is because we would like a degree of privacy, and enforcement of thought laws would necessarily infringe on that a lot. If your thoughts are computed, it will be possible to make the laws a function to be called on your mental state. That function could be arranged to output only an 'OK/Not OK' result, or an 'OK/Not OK/You are getting uncomfortably close on topic X' result, with no side-effects. That would seem to me to be much less privacy-invading.
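As a rough sketch of what such a side-effect-free check might look like (the "topic tag" representation of a mental state, the forbidden/adjacent lists, and every name below are hypothetical, chosen only to illustrate that nothing but the verdict leaves the function):

```python
from enum import Enum

class Verdict(Enum):
    OK = "OK"
    NOT_OK = "Not OK"
    CLOSE = "Uncomfortably close to a restricted topic"

# Toy representation: a mental state is reduced to a set of topic tags.
FORBIDDEN_TOPICS = {"simulated_torture"}
ADJACENT_TOPICS = {"detailed_mind_simulation"}

def check_mental_state(topics):
    """Pure function over a mental-state snapshot: returns only a verdict.

    No logging, no storage, no transmission -- the thoughts themselves never
    leave the checker; only the OK / Not OK / Close result does.
    """
    if topics & FORBIDDEN_TOPICS:
        return Verdict.NOT_OK
    if topics & ADJACENT_TOPICS:
        return Verdict.CLOSE
    return Verdict.OK

# Only the verdict is observable from outside the checker.
print(check_mental_state({"dinner_plans"}).value)              # OK
print(check_mental_state({"detailed_mind_simulation"}).value)  # Uncomfortably close...
```

The enforcement question is whether anything beyond those one or two bits ever needs to leave your head.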

If we get to the point where minds can simulate other minds

Human minds (given the biological constraints) are unlikely to be able to simulate other human minds to the extent that we need to start worrying about the rights of the simulated mind. In cases where something similar happens (split personality, tulpas) I haven't seen serious ethical concerns raised, partially, I think, because it's all still you. People can and do torture themselves with their thoughts, but suggesting that we forbid it is... silly.

Now, if we are talking about enhanced cyborg brains or uploads (ems) running on powerful hardware, well, maybe it could become a problem at that point. Until then I don't see how this is an issue.


I think some of this is going to depend on the physical differences between simulations and people.

For instance, I started a few paragraphs about how Alice and SimAlice might react differently to torture from Bob. Then I realized that this was mostly based on the assumption "Under normal circumstances, a simulation can reload a saved state version of themselves from yesterday to recover from harm."

But there's no guarantee that's the case. Maybe simulations take up enough space that state saving yourself in that manner is impractical. Maybe simulations don't do it because they are worried that the saved states would have rights. On the other hand, maybe they do do it and having only daily backups is in fact thought of as unwise: A smarter simulation would have hourly or minutely backups.

Also, even if I work out the details of that, there are a number of other questions where I don't actually have a framework. As a further example, you can torture a person by starving them. Can you torture a simulation by reducing the amount of electrical power given to it? What if you think it isn't torture but the simulation says it is? How much electrical power does the simulation require? Is it something that can run for days on a smartphone battery, or does it require multiple megawatts of power, like a server farm?

Of course it is entirely possible that the physical structures will change over time and the laws will have to change too. If Sims in 2050 require server farms and multiple human support staff to run, and Sims in 2060 require only smartphone batteries and anyone with a power outlet can easily support a dozen, it seems unlikely that any set of laws or ethical rules I come up with for Sims is going to need no changes during that course of time.

2) Simulations

I don't get it. Can someone simplify this concern?

There's a very commonly accepted line of thought around here whereby any sufficiently good digital approximation of a human brain is that human, in a sort of metaphysical way anyhow, because its model of the brain uses the same underlying algorithms which describe how that brain works.

(It doesn't make much sense to me, since it seems to conflate the mathematical model with the physical reality, but as it's usually expressed as an ethical principle it isn't really under any obligation to make sense.)

The important thing is that once you identify sufficiently good simulations as moral agents you end up twisting yourself into ethical knots about things like how powerful beings in the far future treat the NPCs in their equivalent to video games. For that, and other reasons I'm not going to get into here, it seems like a fairly maladaptive belief even if it were accurate.

I'm pretty sure (but don't know how to test) that I am not capable of simulating another mind to the point where it has moral value above epsilon. But if I was, then it would be wrong of me to do that and torture it.

I think I hold freedom of thought as a terminal value, but torture also violates at least one of my terminal values, and it feels like torture is worse than some versions of forbidding people from torture-by-thought. But it might be that there's no practical way to implement this forbidding. If the only way to make sure that nobody commits thoughture is to have an external body watching everybody's thoughts, then that might be worse than having some people commit thoughture because there's no way to catch them.

(But one person is probably capable of committing an awful lot of thoughture. In Egan's Permutation City, bar crefba perngrf n fvzhyngrq pybar bs uvzfrys va beqre gb unir vg gbegherq sbe cbffvoyl-rgreavgl.)

I have no idea how to even start thinking about how to test if I am capable of simulating another mind with moral value.

Sam Harris has said that there are some beliefs so dangerous that we could have to kill someone for believing them.

Imagine an agent with an (incorrect) belief that only by killing everyone would the world be the best place possible, and a prior against anything realistically causing it to update away. This agent would have to be stopped somehow, because of what it thinks (and what that causes it to do).

Y'know the intentional stance?

Belief + Desire --> Intentional Action

(In fact, the agent sounds similar to a religious person who believes that killing everyone ensures believers an eternity in heaven, and evil people an eternity in hell or something similar - and who knows that doubts are only offered by the devil. Sam Harris talks about this idea in the context of a discussion of people who believe in Islam and act on the beliefs by blowing themselves up in crowded places.)

I'm not sure what practical advice this gives; I'm just making the general point that what you think becomes what you do, and there are a lot of bad things you can do.

Imagine an agent with an (incorrect) belief that only by killing everyone would the world be the best place possible, and a prior against anything realistically causing it to update away. This agent would have to be stopped somehow, because of what it thinks (and what that causes it to do).

That doesn't quite follow.

Thinking something does not make it so, and there are a vanishingly small number of people who could realistically act on a desire to kill everyone. The only time you have to be deeply concerned about someone with those beliefs is if they managed to end up in a position of power, and even that just means "stricter controls on who gets access to world-ending power" rather than searching for thoughtcriminals specifically.

Sam Harris has said that there are some beliefs so dangerous that we could have to kill someone for believing them.

The Catholic Church has been practicing this for quite a few centuries.

I think you are confusing freedom of thought with freedom of expression (of thought), a.k.a. freedom of speech. People have always been free to think whatever they want, because rules about thoughts are unenforceable (until we invent mind-reading).

You can simulate another entity through nothing but thought, with no expression.

And you are free to do that, because mind-reading still isn't invented, and any rules about thoughts are still unenforceable. And I hope it stays that way! You can't make yourself not think of something. At least, I can't. I suffer from intrusive thoughts, and they just pop up from nowhere. I can (and do) try not to dwell on them, think of something else, go do something else, and they pass; but I can't prevent myself from thinking them in the first place, however much I wish I could.

There's a pretty big distinction between freedom of thought and freedom of expression. There are lots of things I think that I believe would cause harm if said or acted on. Your concerns seem to be about action (altering others by spreading information or torturing a simulation) rather than thought (internally considering the consequences of torturing a sim).

If you really want to explore whether other agents' freedom of thought is important to you, a better example is whether it's permitted to create an AI or SIM from which you've intentionally removed the ability to think or want certain things.

My answer is that there's no such thing as a universal right, but I value diversity of thought and expression pretty highly. There's quite a wide variance in how others answer.


The point is that a mind is potentially a simulator of other minds, and when it comes to sims, there isn't really a line between thought and action.

Your concerns seem to be about action (altering others by spreading information or torturing a simulation) rather than thought (internally considering the consequences of torturing a sim).

This assumes mind-body dualism. It's a very flawed model of the world. Your body reacts to your thoughts. Not enough to read your thoughts word for word, but enough that the thoughts matter. If you are with a person and think about causing that person pain, that changes subtle things about your body language towards that person.

I suppose you're right: thinking is doing. It's a lot more quiet than the actions given in the examples, and has correspondingly less capacity for harm.

The way I can see it in sci-fi terms:

If a human mind is the first copy of a brain that has been uploaded to a computer, then it deserves the same rights as any human. There is a rule against running more than one instance of the same person at the same time.

A human mind created on my own computer from first principles, so to speak, does not have any rights, but there is also a law in place to prevent such agents from being created, as human minds are dangerous toys.

Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being would allow such a crude ingerence of third parties into their own thought process. I mean, it starts with NO THINKING ABOUT NANOTECHNOLOGY and in time changes to NO THINKING ABOUT RESISTANCE.

EDIT:

Also, assuming that there is really a need to extract some information from an individual, I would reluctantly grant the government the right to create a temporary copy of an individual to be interrogated, interrogate (i.e. torture) the copy, and then delete it shortly afterwards. It is squicky, but in my head superior to leaving the original target with memories of interrogation.

Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being would allow such a crude ingerence of third parties into their own thought process.

I don't think that's the case. If I presented a technique by which everyone on LessWrong could install in themselves Ugh-fields that prevent them from engaging in akrasia, I think there would be plenty of people who would welcome the technique.

Would you consider the following to be "Ugh-fields that prevent them from engaging in akrasia"?

A drug such that, when a person who has the drug in their system drinks alcohol, the interaction is very unpleasant.

A surgery altering the tongue so that consuming food is painful.

As far as my default mental model goes, ugh-field is a term that's specific enough to filter out your examples. But I have no problem accepting a mental model that defines the term more broadly. How narrowly you want to define terms always depends on the purpose for which you want to use them.

A surgery altering the tongue so that consuming food is painful.

Even if you want to lose weight, you probably don't want all eating to hurt. There is, however, the real-world treatment of bariatric surgery, which works for most people who want to lose weight. It's not without its issues, though.

If I presented a technique by which everyone on LessWrong could install in themselves Ugh-fields that prevent them from engaging in akrasia, I think there would be plenty of people who would welcome the technique.

I don't know. That would depend on your credibility, reversibility of the procedure, etc.

So, say, a startup says "Here is a mind-control device, implant it into your head and it will make you unable to engage in akrasia. That's all it does, honest! Trust us! Oh, and if there are bugs we'll fix it by firmware updates." -- how many people would be willing to do it?

Of course you need a trustworthy source for the technology. But as far as spreading new technology goes, there will always be a bunch of people who trust certain people.

Of course you need a trustworthy source for the technology.

So, who will you trust to rearrange your mind?

There are quite a few people, all of whom I have seen face to face.

Unfortunately I'm still quite bad at switching on the real trust that you need to do things like that mentally, without implanting chips. However, the farthest I went in that direction was a hypnosis episode where I allowed someone to switch off my ability to pee accidentally for a short while.

*Just to be clear: I'm not claiming that you can eliminate akrasia completely through hypnosis.

Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in your given context. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards "why the hell should I allow other people to mess with my brain?"

I cannot even think about the variety of interesting ways in which thought-blocking technology can be applied.

no self-respecting human being would allow such a crude ingerence of third parties

I have learned a new word today. Was that the French "ingérence", meaning "interference, intervention"?

OED says: "Bearing in upon; intrusion; interference." and also "Compare French ingérence."

I wouldn't say they are doomed to fail because it is a slippery slope to NO THINKING ABOUT RESISTANCE, but I would say that is a good reason to object to thought-taboo devices.

I think a law stopping you from creating a second copy of a human or creating a new human counts as a thought crime, if the copy or new human is being run in your mind.

I guess it is kind of a slippery slope, indeed. There are probably ways in which it could work only as intended (hardwired chip or whatever), but allowing other people to block your thoughts is only a couple of steps from turning you into their puppet.

As for simulation as thought crime, I am not sure. If they need to peek inside your brain to check whether you are running illegally constructed internal simulations, the government can just simulate a copy of you (with a warrant, I guess), either torture it or explicitly read its mind (either way terrible) to find out what is going on, and then erase it (I mean murder, but the government does it so it is kind of better, except not really).

Your approval of such measures probably depends on the relative values that you assign to freedom and privacy.

no self-respecting human being would allow such a crude ingerence of third parties into their own thought process

Of course, provided the alternative is not to just be killed here and now.

Men with weapons have been successfully persuading other people to do something they don't want to do for ages.

So you're saying that the government (or whoever runs the show) is going to force everyone - at gunpoint - to insert such devices into their brains?

Not "is going to" but "may".

That was a simple objection to the statement "but no one will agree to that!".

It seems to me that if the government can run a simulation of an individual, it can also get the information in a better way.

I am not sure though. That is an interesting question.

In most practical moral questions I think it's useful to focus on acting based on love instead of acting based on trying not to violate others' rights.

In cases where simulating the torture of others is not a loving act, it might be a good idea to avoid doing so.

Not only because of the effect on others but also because of the effects on yourself. You will live a much happier life if you act from love instead of acting based on avoidance. A Buddhist might use the fancy word "karma" to name the principle, but you don't need to believe in the metaphysical stuff to understand that a brain that is frequently filled with thoughts of love is happier than a brain filled with fear of breaking rules.

For many people thoughts of violence are a useful way of dealing with unpleasant emotions or desires, freeing them to act more positively. Do you think that's a bad thing?

There are two levels on which to answer that question.

First, the societal level. I think there's a good chance that a society with 2200 AD levels of technology will collapse when it has a significant number of those people around.

Secondly, the personal level. I would rather spend 20 minutes meditating and let an unpleasant emotion process itself than act it out in violence.

I think that there are cases where it might be preferable for an individual to act out his anger when the alternative is apathy. Allowing anger to manifest itself might be a step upwards from suppressing all of your emotions. On the other hand, a person who resonates at a level of love and kindness is happier than a person who's driven by anger.

I do think that it's useful to do the mental practice that allows you to stay at a level of love and kindness regardless of what life throws at you.

As far as my own level of functioning goes, there are cases where I have automatic reactions towards pressure that put me in a state of apathy that produces akrasia. Those revolve mostly around situations that are unclear. If anger comes up in such a situation I might go with it, because it causes me to take action, which might break akrasia.

If I do have concrete emotions that I don't consider useful, and which are not behind some ugh-field that prevents me from accessing them, I don't have any trouble with just releasing those emotions. I don't need to act them out to free myself from them.

To go back to the societal level, it would be good if we could teach sensible strategies for emotional management at the school level that don't require people to act out in violence when they want to free themselves from some emotion that they find unpleasant.

On that note, I like the CFAR post by Julia Galef where she makes the point that dealing with emotions is an important topic for CFAR.

I'm not talking about acting violently, except in a very rarefied sense. I'm talking about thinking about it, playing games or the like. It seemed like you were objecting not just to violent actions but also to violent thoughts.

If you spend your time playing games because of your anger, you aren't even using it as a resource to move in a productive direction. Playing games is often a form of escapism. You could spend that time elsewhere more efficiently.

On the other hand, if you can't motivate yourself to do sport but find that if you go into your anger you can funnel it into playing American football, that might be a step in the right direction.