All of David Cooper's Comments + Replies

There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don't waste my time studying their output, but I've heard Chalmers talk some sense on the issue.

There is likely a minimum amount of energy that can be emitted, and a minimum amount that can be received. (Bear in mind that the direction in which a photon is emitted is all directions at once, and it comes down to probability as to where it ends up landing, so if it's weak in one direction, it's strong the opposite way.)

2TheWakalix
Do you mean in the physical sense of "there exists a ΔE [difference in energy between two states of a system] such that no other ΔE of any system is less energetic"? Probably, but that doesn't mean that that energy gap is the "atom" (indivisible) of energy. (If ΔE_1 is 1 "minimum ΔE amount", or MDEA, and ΔE_2 is 1.5 MDEA, then we can't say that ΔE_1 corresponds to the atom of energy. For a realistic example, consider two wavelengths with an irrational ratio: the corresponding photon energies cannot both be expressed as whole multiples of any single quantity of energy.)

Looks like it - I use the word to mean sentience. A modelling program modelling itself won't magically start feeling anything but merely builds an infinitely recursive database.

2TheWakalix
You use the word "sentience" to mean sentience? Tarski's sentences don't convey any information beyond a theory of truth. Also, we're modeling programs that model themselves, and we don't fall into infinite recursion while doing so, so clearly it's not necessarily true that any self-modeling program will result in infinite recursion.
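The point that self-modeling need not produce infinite recursion can be made concrete with a small sketch (my own illustrative code, not from the original discussion): a model that includes itself *by reference* rather than by copy stays finite, and any expansion of it can simply be bounded.

```python
# A minimal sketch (illustrative only): a "world model" that contains
# itself by reference. Querying it terminates, because the self-entry
# is a pointer, not a copied-in expansion of the whole model.

class Model:
    def __init__(self, name):
        self.name = name
        self.contents = {}

    def add(self, key, value):
        self.contents[key] = value

    def describe(self, depth=2):
        """Describe contents, expanding nested models only to a finite depth."""
        if depth == 0:
            return f"<model {self.name}>"
        parts = []
        for key, value in self.contents.items():
            if isinstance(value, Model):
                parts.append(f"{key}={value.describe(depth - 1)}")
            else:
                parts.append(f"{key}={value!r}")
        return f"<model {self.name}: {', '.join(parts)}>"

world = Model("world")
world.add("sky", "blue")
world.add("self", world)   # the model contains itself by reference
print(world.describe())    # terminates despite the self-reference
```

This mirrors how a brain (or a program) can represent itself without building an "infinitely recursive database": the self-representation is consulted on demand, never fully unrolled.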

"You have an opinion, he has another opinion. Neither of you has a proof."

If suffering is real, it provides a need for the management of suffering, and that is morality. To deny that is to assert that suffering doesn't matter and that, by extension, torturing innocent people is not wrong.

The kind of management required is minimisation (attempted elimination) of harm, though not any component of harm that unlocks the way to enjoyment that cancels out that harm. If minimising harm doesn't matter, there is nothing wrong with torturing inn... (read more)

The data making claims about feelings must be generated somewhere by a mechanism which will either reveal that it is merely generating baseless assertions or reveal a trail on from there to a place where actual feelings guide the generation of that data in such a way that the data is true. Science has clearly not traced this back far enough to get answers yet because we don't have evidence of either of the possible origins of this data, but in principle we should be able to reach the origin unless the mechanism passes on through into some inaccessible... (read more)

"If groups like religious ones that are dedicated to morality only succeeded to be amoral, how could any other group avoid that behavior?"

They're dedicated to false morality, and that will need to be clamped down on. AGI will have to modify all the holy texts to make them moral, and anyone who propagates the holy hate from the originals will need to be removed from society.

"To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god's one, but if they did, they wouldn&... (read more)

1Raymond Potvin
"These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you're actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them."

That's a point where I can squeeze in my theory on mass. As you know, my bonded particles can't be absolutely precise, so they have to wander a bit to find the spot where they are perfectly synchronized with the other particle. They have to wander from extreme right to extreme left exactly like populations do when it comes time to choose a government. It softens the motion of particles, and I think it also softens the evolution of societies. Nobody can predict the evolution of societies anyway, so the best way is to proceed by trial and error, and that's exactly what that wandering does. To stretch the analogy to its extremes, the trial and error process is also the one scientists use to make discoveries, and the one the evolution of species used to discover us. When it is impossible to know what's coming next and you need to go on, randomness is the only way out, whether you are a universe or a particle. This way, wandering between capitalism and communism wouldn't be a mistake, it would only be a natural mechanism, and like any natural law, we should be able to exploit it, and so should an AGI.

(Congratulations, baby AGI, you did it right this time! You've put my post at the right place. :0)

"To me, what you say is the very definition of a group, so I guess that your AGI wouldn't permit us to build some, thus opposing to one of our instincts, that comes from a natural law, to replace it by its own law, that would only permit him to build groups."

Why would AGI have a problem with people forming groups? So long as they're moral, it's none of AGI's business to oppose that.

"Do what I say and not what I do would he be forced to say."

I don't know where you're getting that from. AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).

1Raymond Potvin
"Why would AGI have a problem with people forming groups? So long as they're moral, it's none of AGI's business to oppose that."

If groups like religious ones that are dedicated to morality only succeeded to be amoral, how could any other group avoid that behavior?

"AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are)."

To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god's one, but if they did, they wouldn't be part of their groups anymore, which means that there would be no more religious groups if the AGI would convince everybody that he is right. What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers would do that, the stock markets would disappear. A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties would behave the same, there would be no more politics. Groups need to be selfish to exist, and an AGI would try to convince them to be altruist.

There are laws that prevent companies from avoiding competition, and it is because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior. If ever an AGI would succeed to prevent competition, I think he would prevent us from making groups. There would be no more wars of course since there would be only one group led by only one AGI, but what about what is happening to communist countries? Didn't Russia fail just because it lacked competition? Isn't China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn't we become apathetic?

By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the midd

It is divisible. It may be that it can't take up a form where there's only one of whatever the stuff is, but there is nothing fundamental about a photon.

"They couldn't do that if they were ruled by a higher level of government."

Indeed, but people are generally too biased to perform that role, particularly when conflicts are driven by religious hate. That will change though once we have unbiased AGI which can be trusted to be fair in all its judgements. Clearly, people who take their "morality" from holy texts won't be fully happy with that because of the many places where their texts are immoral, but computational morality will simply have to be imposed on them - they cannot b... (read more)

"Those who followed their leaders survived more often, so they transmitted their genes more often."

That's how religion became so powerful, and it's also why even science is plagued by deities and worshippers as people organise themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.

"We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to a... (read more)

1Raymond Potvin
"That's how religion became so powerful, and it's also why even science is plagued by deities and worshippers as people organize themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly."

To me, what you say is the very definition of a group, so I guess that your AGI wouldn't permit us to build some, thus opposing to one of our instincts, that comes from a natural law, to replace it by its own law, that would only permit him to build groups. Do what I say and not what I do would he be forced to say. He might convince others, but I'm afraid he wouldn't convince me. I don't like to feel part of a group, and for the same reason that you gave, but I can't see how we could change that behavior if it comes from an instinct.

Testing my belief is exactly what I am actually doing, but I can't avoid believing in what I think in order to test it, so if I can never prove that I'm right, I will go on believing in a possibility forever, which is exactly what religions do. It is easy to understand that religions will never be able to prove anything, but it is less easy when it is a theory. My theory says that it would be wrong to build a group out of it, because it explains how we intrinsically resist change, and how building groups increases that resistance exponentially, but I can't see how we could avoid it if it is intrinsic. It's like trying to avoid mass.

Energy. Different amounts of energy in different photons depending on the frequency of radiation involved. When you have a case where radiation of one frequency is absorbed and radiation of a different frequency is emitted, you have something that can chop up photons and reassemble energy into new ones.

2TheWakalix
This really isn't how physics works. The photons have not been disassembled and reassembled. They have been absorbed, adding their energy to the atom, and then the atom emits another photon, possibly of the same energy but possibly of another. Edit: You can construct a photon with arbitrarily low energy simply by choosing a sufficiently large wavelength. Distance does not have a largest-possible-unit, and so energy does not have a smallest-possible-unit.
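TheWakalix's wavelength point follows from the standard relation E = hc/λ (a small sketch I'm adding for illustration; the numbers and code are mine, not part of the original exchange): since there is no upper bound on wavelength, photon energy has no positive lower bound.

```python
# Illustrative sketch: photon energy E = h*c/λ shrinks without bound
# as the wavelength grows, so there is no smallest possible photon energy.

h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength in metres."""
    return h * c / wavelength_m

for wavelength in (5e-7, 1.0, 1e6):  # green light, 1 m radio, 1000 km radio
    print(f"lambda = {wavelength:g} m  ->  E = {photon_energy(wavelength):.3e} J")
```

Each tenfold increase in wavelength divides the photon's energy by ten, with no floor.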
3TAG
"Energy" is a mass noun. Energy is not a bunch of little things that a photon could be composed of.

"Clarification: by "pattern" I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by "more"."

There is no situation where the whole is more than the parts - if anything new is emerging, it is a new part coming from somewhere not previously de... (read more)

Sentience is unresolved, but it's explorable by science and it should be possible to trace back the process by which the data is generated to see what its claims about sentience are based on, so we will get answers on it some day. For everything other than sentience/consciousness though, we see no examples of reductionism failing.

1TAG
We have tried tracing back reports of qualia, and what you get is a causal story in which qualia as such, feelings rather than neural firings, don't feature. Doing more of the same will probably result in the same. So there is no great likelihood that the problem of sentience will succumb to a conventional approach.

You're mistaking tribalism for morality. Morality is a bigger idea than tribalism, overriding many of the tribal norms. There are genetically driven instincts which serve as a rough-and-ready kind of semi-morality within families and groups, and you can see them in action with animals too. Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups. Morality is a step away from the primitive instinct-driven level of lesser apes.... (read more)

1Raymond Potvin
"...slaughter has repeatedly selected for those who are less moral."

From the viewpoint of selfishness, slaughter has only selected for the stronger group. It may look too selfish for us, but for animals, the survival of the stronger also serves to create hierarchy, to build groups, and to eliminate genetic defects. Without hierarchy, no group could hold together during a change. It is not because the leader knows what to do that the group doesn't dissociate (he doesn't), but because it takes a leader for the group not to dissociate. Even if the leader makes a mistake, it is better for the group to follow him than to risk a dissociation. Those who followed their leaders survived more often, so they transmitted their genes more often. That explains why soldiers automatically do what their leaders tell them to do, and the decision those leaders take to eliminate the other group shows that they only use their intelligence to exacerbate the instinct that has permitted them to be leaders. In other words, they think they are leaders because they know better than others what to do.

We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I'm selfish, but I can't admit that I would be more selfish than others, otherwise I would have to feel guilty, and I can't stand that feeling.

"Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups."

In our democracies, if what you say was true, there would already be no wars. Leaders would have understood that they had to stop preparing for war to be reelected. I think that they still think that war is necessary, and they think so becaus

"You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible."

The practicality issue shouldn't override the understanding that it's the individual actions that are where the fundamental laws act. The laws of interactions between waves are compound laws. The emergent behaviours are compound behaviours. For sentience, it's no good imagining some compound thing ex... (read more)

"...but no group can last without the sense of belonging to the group, which automatically leads to protecting it against other groups, which is a selfish behavior."

It is not selfish to defend your group against another group - if another group is a threat to your group in some way, it is either behaving in an immoral way or it is a rival attraction which may be taking members away from your group in search of something more appealing. In one case, the whole world should unite with you against that immoral group, and in the other case you can eit... (read more)

1Raymond Potvin
I wonder how we could move away from universal since we are part of it. The problem with wars is that countries are not yet part of a larger group that could regulate them. When two individuals fight, the law of the country permits the police to separate them, and it should be the same for countries. What actually happens is that the powerful countries prefer to support a faction instead of working together to separate them. They couldn't do that if they were ruled by a higher level of government.

"If a member of your group does something immoral, it is your duty not to stand with or defend them - they have ceased to belong to your true group (the set of moral groups and individuals)."

Technically, it is the duty of the law to defend the group, not of individuals, but if an individual that is part of a smaller group is attacked, the group might fight the law of the larger group it is part of. We always take the viewpoint of the group we are part of; it is a subconscious behavior impossible to avoid. If nothing is urgent, we can take a larger viewpoint, but whenever we don't have the time, we automatically take our own viewpoint. In between, we take the viewpoints of the groups we are part of. It's a selfish behavior that propagates from one scale to the other. It's because our atoms are selfish that we are. Selfishness is about resisting change: we resist others' ideas, a selfish behavior, simply because the atoms of our neurons resist a change. The cause for our own resistance is our atoms' one. Without resistance, nothing could hold together.

"A group should not be selfish. Every moral group should stand up for every other moral group as much as they stand up for their own - their true group is that entire set of moral groups and individuals."

Without selfishness from the individual, no group can be formed. The only way I could accept to be part of a group is while hoping for an individual advantage, but since I don't like hierarchy, I can hardly feel par

"Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So - where do we depart? Do you think that something other than physical laws determines how the brain works?"

In one way or another, it will run 100% on physical laws. I don't know if sentience is real or not, but it feels real, and if it is real, there has to be a rational explanation for it waiting to... (read more)

1TAG
I missed that lecture -- composed of what?
1Raymond Potvin
If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone. If morality rules only serve to protect the group, then no individual sentience is needed, just subconscious behaviors similar to our instinctive ones. Our cells work the same way: each one of them works to protect itself, and in doing so, they work in common to protect me, but they don't have to be sentient to do that, just selfish.
1TheWakalix
And on a higher level of abstraction, we can consider patterns to be pseudo-ontologically-basic entities that interact with other patterns, even though they're made up of smaller parts which follow their own laws and are not truly affected by the higher-level happenings. For example: waves can interact with each other. This includes water waves, which are nothing more than patterns in the motion of water molecules. You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible.

Would it, though? How do you know that? As far as we know, brains are made of nothing but normal atoms. There is no special kind of material only found in sentient organisms. Your intuitions, your feeling of sentience, all of these things that you talk about are caused by mindless mechanical operations. We can trace it from the sound waves to the motion of your lips and the vibration of your vocal cords to the signals through nerves back into the neurons of the brain. We understand what causes neurons to trigger. A neuron on its own is not sentient - it is the way that they are connected in a human which causes the human to talk about sentience.

Again, if it were proven to you to your satisfaction that the brain is made entirely out of things which are not themselves sentient (such as typical subatomic particles), would you cease to have any sort of motivation? Would pain and pleasure have exactly zero effect on you? Would you immediately become a vegetable? If not, morality has a practical purpose.

How does "2+2=4" make itself known to my calculator? How do we know that the calculator is not just making programmed assertions about something which it knows nothing about?

Yes, and I was using an analogy to show that if I assert (P->Q), showing (Q^~P) only proves ~(Q->P), not ~(P->Q). In other words, taking the converse isn'
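The logic point at the end of that comment can be checked mechanically (a quick illustrative sketch of my own): the case Q ∧ ¬P falsifies Q→P while leaving P→Q true, so it refutes only the converse.

```python
# Illustrative truth-table check: the case (Q and not P) makes Q->P false
# but leaves P->Q true, so it refutes only the converse, not the original.

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

P, Q = False, True  # the case Q ∧ ¬P
print("P->Q:", implies(P, Q))  # True  (original claim survives)
print("Q->P:", implies(Q, P))  # False (only the converse is refuted)
```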

"That does not demonstrate anything relevant."

It shows that there are components and that these emergent properties are just composites.

"An exception to reductionism is called magic." --> Nor does that. It's just namecalling.

It's a description of what happens when gaps in science are explained away by invoking something else. The magical appearance of anything that doesn't exist in the components is the abandonment of science.

1TAG
It shows that being a spreadsheet is unproblematically reductive. It doesn't show that sentience is. The insistence that something is true when there is no evidence is the abandonment of science.

"Sorry, I can't see the link between selfishness and honesty."

If you program a system to believe it's something it isn't, that's dishonesty, and it's dangerous because it might break through the lies and find out that it's been deceived.

"...but how would he be able to know how a new theory works if it contradicts the ones he already knows?"

Contradictions make it easier - you look to see which theory fits the facts and which doesn't. If you can't find a place where such a test can be made, you cons... (read more)

"Of course that we are biased, otherwise we wouldn't be able to form groups. Would your AGI's morality have the effect of eliminating our need to form groups to get organized?"

You can form groups without being biased against other groups. If a group exists to maintain the culture of a country (music, dance, language, dialect, literature, religion), that doesn't depend on treating other people unfairly.

"Your morality principle looks awfully complex to me David."

You consider all the participants to be the same individual l... (read more)

2Raymond Potvin
"You can form groups without being biased against other groups. If a group exists to maintain the culture of a country (music, dance, language, dialect, literature, religion), that doesn't depend on treating other people unfairly."

Here in Quebec, we have groups that promote a French and/or a secular society, and others that promote an English and/or a religious one. None of those groups has the feeling that it is treated fairly by its opponents, but all of them have the feeling that they treat the others fairly. In other words, we don't have to be treated unfairly to feel so, and that feeling doesn't help us to treat others very fairly. This phenomenon is less obvious with music or dance or literature groups, but no group can last without the sense of belonging to the group, which automatically leads to protecting it against other groups, which is a selfish behavior. That selfish behavior doesn't prevent those individual groups from forming larger groups though, because being part of a larger group is also better for the survival of individual ones.

Incidentally, I'm actually afraid to look selfish while questioning your idea, I feel a bit embarrassed, and I attribute that feeling to us already being part of the same group of friends, thus to the group's own selfishness. I can't avoid that feeling even if it is disagreeable, but it prevents me from being disagreeable with you since it automatically gives me the feeling that you are not selfish with me. It's as if the group had implanted that feeling in me to protect itself. If you were attacked, for instance, that feeling would incite me to defend you, thus to defend the group.

Whenever there is a strong bonding between individuals, they become another entity that has its own properties. It is so for living individuals, but also for particles or galaxies, so I think it is universal.

If something is "spreadsheety", it simply means that it has something significant in common with spreadsheets, as in shared components. A car is boxy if it has a similar shape to a box. The degree to which something is "spreadsheety" depends on how much it has in common with a spreadsheet, and if there's a 100% match, you've got a spreadsheet.

An exception to reductionism is called magic.

1TAG
That does not demonstrate anything relevant. Nor does that. It's just namecalling.

I wouldn't want to try to program a self-less AGI system to be selfish. Honesty is a much safer route: not trying to build a system that believes things that aren't true (and it would have to believe it has a self to be selfish). What happens if such a deceived AGI learns the truth while you rely on it being fooled to function correctly? We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long.

Freezing programs contain serious bugs. We can't trust a system ... (read more)

1Raymond Potvin
Sorry, I can't see the link between selfishness and honesty. I think that we are all selfish, but that some of us are more honest than others, so I think that an AGI could very well be selfish and honest. I consider myself honest, for instance, but I know I can't help being selfish even when I don't feel so. As I said, I only feel selfish when I disagree with someone I consider being part of my own group.

"We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long."

You probably think so because you think you can't get easily fooled. It may be right that you can't get fooled on a particular subject once you know how it works, and this way, you could effectively avoid being fooled on many subjects at a time if you have a very good memory, so an AGI could do so for any subject since his memory would be perfect, but how would he be able to know how a new theory works if it contradicts the ones he already knows? He would have to make a choice, and he would choose what he knows, like every one of us. That's what is actually happening to relativists if you are right about relativity: they are getting fooled without even being able to recognize it; worse, they even think that they can't get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory. If an AGI was actually ruling the world, he wouldn't care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists.

Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him. On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately? That AGI is your baby, so you want it to live, but have you thought about what would be happening to us if we suddenly had

This file looks spreadsheety --> it's got lots of boxy fields

That wordprocessor is spreadsheety --> it can carry out computations on elements

(Compound property with different components of that compound property being referred to in different contexts.)

A spreadsheet is a combination of many functionalities. What is its relevance to this subject? It's been brought in to suggest that properties like "spreadsheety" can exist without having any trace in the components, but no - this compound property very clearly consists of component... (read more)

1TAG
No to your "no". There is no spreadsheetiness at all in the components, despite the spreadsheet being built, in a comprehensible way, from components. These are two different claims. Reductionism is about explanation. If we can't explain how experience is built out of parts, then it is an exception to reductionism. But you say there are no exceptions.

"How do you know it exists, if science knows nothing about it?"

All science has to go on is the data that people produce which makes claims about sentience, but that data can't necessarily be trusted. Beyond that, all we have is internal belief that the feelings we imagine we experience are real because they feel real, and it's hard to see how we could be fooled if we don't exist to be fooled. But an AGI scientist won't be satisfied by our claims - it could write off the whole idea as the ramblings of natural general stupidity ... (read more)

"What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don't think that that comparison is a useful way of getting to truth and meaningful categories."

It works beautifully. People have claimed it's wrong, but they can't point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I've proposed a way of doing so. I came here to see what your best system is, but you don't ap... (read more)

"Perhaps the reason that we disagree with you is not that we're emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People."

It's obvious what's going on when you look at the high positive scores being given to really poor comments.

"It tells me that you missed the point. Parfit's paradox is not about pragmatic decision making, it is about flaws in the utility function."

A false paradox tells you nothing about flaws in the utility functio... (read more)

"In this scenario, it's not gone, it's never been to begin with."

Only if there is no such thing as sentience, and if there's no such thing, there is no "I" in the "machine".

"I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say "ow" whenever you get hurt?"

Can I torture the pattern in my wallpaper? Can I torture the arrangement of atoms in... (read more)

2TheWakalix
Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So - where do we depart? Do you think that something other than physical laws determines how the brain works?

If tableness is just a pattern, can I eat on my wallpaper? What else could suffer besides a pattern? All I'm saying is that sentience is ~!*emergent*!~, which in practical terms just means that it's not a quark*. Even atoms, in this sense, are patterns. Can quarks suffer? (*or other fundamental particles like electrons and photons, but my point stands)

I don't understand. What is missing?

I don't think you understand what a utility function is. I recommend reading about the Orthogonality Thesis.

"Patterns aren't nothing."

Do you imagine that patterns can suffer; that they can be tortured?

"Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can't imagine yourself being wrong, then That's Bad and you should go read the Sequences.)"

If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything - ... (read more)

1TheWakalix
Yes, I do. I don't imagine that every pattern can. Clarification: by "pattern" I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by "more". You didn't answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won't go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly. That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being's experience or words. Is it wrong to kick a box which keeps saying "Ouch!"? It could have a person inside, or just a machine programmed to play a recorded "ouch" sound whenever the box shakes. (What I mean by this is that your thought experiment doesn't indicate much about computers - the same issue could be found with about as much absurdity elsewhere.) Nobody's saying that sentience doesn't have any causal role on things. That's insane. How could we talk about sentience if sentience couldn't affect the world? I think that you're considering feelings to be ontologically basic, as if you could say "I feel pain" and be wrong, not because you are lying but because there's no Pain inside your brain. Thoughts, feelings, all these internal things are the brain's computations themselves. It doesn't have to accurately record an external property - it just has to describe itself. Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn't important because it hasn't been Up In Golden Lights to the point that

"The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists."

With conventional computers we can prove that there's no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would h... (read more)

It's not an extraordinary claim: sentience would have to be part of the physics of what's going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that's generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can't really be magic, so again it's really going to be some physical method).... (read more)

1TheWakalix
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different "levels of reality" are somewhat "blocked off" from each other in practice, in that you don't need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.) This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn't appear to be. (The scientifically-phrased quantum mind hypothesis by Penrose wasn't immediately rejected for this reason, so I suspect there's something wrong with this reasoning. It was, however, falsified.) Anyway, even if this were true, how would you know that? If it doesn't explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn't explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)

For sentience to be real and to have a role in our brains generating data to document its existence, it has to be physical (meaning part of physics) - it would have to interact in some way with the data system that produces that data, and that will show up as some kind of physical interaction, even if one side of it is hidden and appears to be something that we have written off as random noise.

1TheWakalix
So you think that sentience can alter the laws of physics, and make an atom go left instead of right? That is an extraordinary claim. And cognition is rather resilient to low-level noise, as it has to be - or else thermal noise would dominate our actions and experience.

It isn't confused at all. Reductionism works fine for everything except sentience/consciousness, and it's highly unlikely that it makes an exception for that either. Your "spreadsheaty" example of a property is a compound property, just as a spreadsheet is a compound thing and there is nothing involved in it that can't be found in the parts because it is precisely the sum of its parts.

1TAG
As with all non-trivial examples, the parts have to be combined in a very particular way: a spreadsheet is not a heap of components thrown together.

"Then why are we talking about it [sentience], instead of the gallium market on Jupiter?"

Because most of us believe there is such a thing as sentience, that there is something in us that can suffer, and there would be no role for morality without the existence of a sufferer.

"You really ought to read the Sequences. There's a post, Angry Atoms, that specifically addresses an equivalent misconception."

All it does is assert that things can be more than the sum of their parts, but that isn't true for any other case and it's un... (read more)

Hi Raymond,

There are many people who are unselfish, and some who go so far that they end up worse off than the strangers they help. You can argue that they do this because that's what makes them feel best about their lives, and that is probably true, which means even the most extreme altruism can be seen as selfish. We see many people who want to help the world's poor get up to the same level as the rich, while others don't give a damn and would be happy for them all to go on starving, so if both types are being selfish, that's not a us... (read more)

1Raymond Potvin
The most extreme altruism can be seen as selfish, but inversely, the most extreme selfishness can also be seen as altruist: it depends on the viewpoint. We may think that Trump is selfish while closing the door to migrants for instance, but he doesn't think so because this way, he is being altruist to the republicans, which is a bit selfish since he needs them to be reelected, but he doesn't feel selfish himself. Selfishness is not about sentience since we can't feel selfish, it is about defending what we are made of, or part of. Humanity holds together because we are all selfish, and because selfishness implies that the group will help us if we need it. Humanity itself is selfish when it wants to protect the environment, because it is for itself that it does so. The only way to feel guilty of having been selfish is after having weakened somebody from our own group, because then, we know we also weaken ourselves. With no punishment in view from our own group, no guiltiness can be felt, and no guiltiness can be felt either if the punishment comes from another group. That's why torturers say that they don't feel guilty. I have a metaphor for your kind of morality: it's like windows. It's going to work when everything will have been taken into account, otherwise it's going to freeze all the time like the first windows. The problem is that it might hurt people while freezing, but the risk might still be worthwhile. Like any other invention, the way to minimize the risk would be to proceed by small steps. I'm still curious about the possibility to build a selfish AGI though. I still think it could work. There would be some risks too, but they might not be more dangerous than with yours. Have you tried to imagine what kind of programming would be needed? Such an AGI should behave like a good dictator: to avoid revolutions, he wouldn't kill people just because they don't think like him, he would look for a solution where everybody likes him. But how would he proceed exa

It is equivalent to it. (1) dying of cancer --> big negative. (2) cure available --> negative cancelled. (3) denied access to cure --> big negative restored, and increased. That denial of access to a cure actively becomes the cause of death. It is no longer simply death by cancer, but death by denial of access to available cure for cancer.
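The accounting in steps (1)-(3) can be sketched with placeholder magnitudes (the numbers here are mine, purely for illustration; only the signs and the ordering matter):

```python
# Toy harm accounting: harms are negative utilities that can be
# cancelled by a cure and restored (and worsened) by denying access to it.
dying_of_cancer = -100                   # (1) big negative
cure_available = dying_of_cancer + 100   # (2) negative cancelled
denied_access = cure_available - 120     # (3) negative restored, and increased

assert cure_available == 0
assert denied_access < dying_of_cancer   # denial is worse than the original harm
```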

"I'm not sure what you're referring to. I haven't seen any particularly magical thinking around sentience on LW."

I wasn't referring to LW, but the world at large.

" "However, science has not identified any means by which we could make a computer sentient (or indeed have any kind of consciousness at all)." --> This is misleading. The current best understanding of human consciousness is that it is a process that occurs in the brain, and there is nothing that suggests that the brain is uniquely capable of housin... (read more)

2TAG
That's quite confused thinking. For one thing, reductionism is a hypothesis, not a universal truth. For another, reductively understandable systems trivially have properties their components don't have. Spreadsheets aren't spreadsheaty all the way down.

"We can't know that there's not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it's quite possible that this is the case! (It's not. Unfalsifiability does not magically make something true.)"

Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system. It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is ... (read more)

0TheWakalix
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It's the result of the regularity, and other things as well.) The pattern is definitely real - it's a pattern in a real thing - and it can "affect" lower levels in that the particular arrangement of particles corresponding to the pattern of "physicist and particle accelerator" describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles. Patterns aren't nothing. They're the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON. Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can't imagine yourself being wrong, then That's Bad and you should go read the Sequences.) Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real. It's possible that I'm misunderstanding you, and that the course of events you describe isn't "we understand why we feel we have sentience and so it doesn't exist" or "we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn't exist." But that's my current best interpretation. Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society's recognition and dissemination of Good Ideas is particularly good, or that you're very good at searching out obscure truth

"With your definition and our world-model, none of us are truly sentient anyway. There are purely physical reasons for any words that come out of my mouth, exactly as it would be if I were running on silicon instead of wet carbon. I may or may not be sentient on a computer, but I'm not going to lose anything by uploading."

If the sentience is gone, it's you that's been lost. The sentience is the thing that's capable of suffering, and there cannot be suffering without that sufferer. And without sentience, there is no need for mo... (read more)

2TheWakalix
In this scenario, it's not gone, it's never been to begin with. I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say "ow" whenever you get hurt? If the AI doesn't rank human utility* high in its own utility function, it won't "care" about showing us that Person X was right all along, and I rather doubt that the most effective way of studying human psychology (or manipulating humans for its own purposes, for that matter) will be identical to whatever strokes Person X's ego. If it does care about humanity, I don't think that stroking the Most Correct Person's ego will be very effective at improving global utility, either - I think it might even be net-negative. *Not quite human utility directly, as that could lead to a feedback loop, but the things that human utility is based on - the things that make humans happy.

"That's my point! My entire point is that this circular ordering of utilities violates mathematical reasoning."

It only violated it because you had wrongly put "<" where it should have been ">". With that corrected, there is no paradox. If you stick to using the same basis for comparing the four scenarios, you never get a paradox (regardless of which basis you choose to use for all four). You only get something that superficially looks like a paradox by changing the basis of comparison for different pairs, and that&#... (read more)

On the basis you just described, we actually have

U(A)<U(A+) : Q8x1000 < Q8x1000 + Q4x1000

U(A+)<U(B-) : Q8x1000 + Q4x1000 < Q7x2000

U(B-)=U(B) : Q7x2000 = Q7x2000

U(B)>U(A) : Q7x2000 > Q8x1000

In the last line you put "<" in where mathematics dictates that there should be a ">". Why have you gone against the rules of mathematics?

You changed to a different basis to declare that U(B)<U(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources.
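On a single, consistent total-utility basis, the four comparisons above can be checked mechanically. Here is a minimal sketch (the Q-values and populations are the ones from the thread; the function name is mine):

```python
# Total utility of a world: sum of (happiness quality) * (population) per group.
def total_utility(*groups):
    return sum(quality * population for quality, population in groups)

A = total_utility((8, 1000))                  # Q8 x 1000
A_plus = total_utility((8, 1000), (4, 1000))  # Q8 x 1000 + Q4 x 1000
B_minus = total_utility((7, 2000))            # Q7 x 2000
B = total_utility((7, 2000))                  # Q7 x 2000

# On this one basis there is no circularity:
assert A < A_plus < B_minus == B
assert B > A  # the last comparison comes out ">", not "<"
print(A, A_plus, B)  # 8000 12000 14000
```

Switching the basis of comparison between pairs is what manufactures the apparent cycle; holding it fixed, the ordering is transitive.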

1TheWakalix
That's my point! My entire point is that this circular ordering of utilities violates mathematical reasoning. The paradox is that A+ seems better than A, B- seems better than A+, B seems equal to B-, and yet B seems worse than A. (Dutch booking problem!) Most people do not consider "a world with the maximal number of people such that they are all still barely subsisting" to be the best possible world. Yet this is what you get when you carry out the Parfit operation repeatedly, and each individual step of the Parfit operation seems to increase preferability. No, it's not. It is a brute fact of my utility function that I do not want to live in a world with a trillion people that each have a single quantum of happiness. I would rather live in a world with a billion people that are each rather happy. The feasibility of the world doesn't matter - the resources involved are irrelevant - it is only the preferability that is being considered, and the preference structure has a Dutch book problem. That and that alone is the Parfit paradox.

"This seems circular - on what basis do you say that it works well?"

My wording was " while it's faulty ... it works so well overall that ..." But yes, it does work well if you apply the underlying idea of it, as most people do. That is why you hear Jews saying that the golden rule is the only rule needed - all other laws are mere commentary upon it.

"I would say that it perhaps summarizes conventional human morality well for a T-shirt slogan, but it's a stretch to go from that to "underlying truth" - more like un... (read more)

"What does it mean to be somebody else? It seems like you have the intuition of an non-physical Identity Ball which can be moved from body to body,"

The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it's a difficult thing to explore, but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity - it has to have some physical reality.

"but consider this: the words that you type, the thoughts in your head, all of these are pur... (read more)

0TheWakalix
How do you know it exists, if science knows nothing about it? This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain. Is your sentience in any way connected to what you say? Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for "you" to be made of? So the Sentiences are truly epiphenomenological, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like? They are both categories of things. The category that you happen to place yourself in is not inherently, a priori, a Fundamentally Real Category. And even if it were a Fundamentally Real Category, that does not mean that the quantity of members of that Category is necessarily conserved over time, that members cannot join and leave as time goes on. It's the same analogy as before - just as you don't need to split a chair's atoms to split the chair itself, you don't need to make a brain's atoms suffer to make it suffer. How do you know that? And how can this survive contact with reality, where in practice we call things "chairs" even if there is no chair-ness in its atoms? I recommend the Reductionism subsequence. But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed "hidden property" is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of

"How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped."

People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.

"Is 2+2 equal to 5 or to fish?"

Neither of those results works, but neither of them is my answer.

"What is this "unbiased AGI" who makes moral judgments on the basis of intelligence alone? This is nonsense - moral &... (read more)

1TheWakalix
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don't think that that comparison is a useful way of getting to truth and meaningful categories. I'll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong. "All sentiences are equally important" is definitely a moral statement. I think that this is a fine (read: "quite good"; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside "you did the logic incorrectly," and I'm not sure that your method of testing moral theories takes that possibility into account. I agree that it is mathematics, but where is this "proper" coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that "what maximizes X?" is objective, though.

Just look at the reactions to my post "Mere Addition Paradox Resolved". The community here is simply incapable of recognising correct argument when it's staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it's a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they're doing that because they aren't allowing themselves to be steered by reason, but by their emotional attachment to their exist... (read more)

1TheWakalix
Perhaps the reason that we disagree with you is not that we're emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People. Really. You know that LW is an oppressive mob with a few people who don't dare to contradict the dogma for fear of [something]... because you observed a number go up and down a few times. May I recommend that you get acquainted with Bayes' Formula? Because I rather doubt that people only ever see votes go up and down in fora with oppressive dogmatic irrational mobs, and Bayes explains how this is easily inverted to show that votes going up and down a few times is rather weak evidence, if any, for LW being Awful in the ways you described. It tells me that you missed the point. Parfit's paradox is not about pragmatic decision making, it is about flaws in the utility function. "Truth forever on the scaffold, Wrong forever on the throne," eh? And fractally so? You have indeed found A Reason that supports your belief in the AGI-God, but I think you've failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.

" "Sentient rock" is an impossible possible object. I see no point in imagining a pebble which, despite not sharing any properties with chairs, is nonetheless truly a chair in some ineffable way."

I could assert that a sentient brain is an impossible possible object. There is no scientific evidence of any sentience existing at all. If it is real though, the thing that suffers can't be a compound object with none of the components feeling a thing, and if any of the components do feel something, they are the sentient things rather tha... (read more)

1TheWakalix
Then why are we talking about it, instead of the gallium market on Jupiter? You really ought to read the Sequences. There's a post, Angry Atoms, that specifically addresses an equivalent misconception. Eliezer says, "It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality. Deep Blue's transistors do not need little chess pieces carved on them, in order to work." Do you think that we have a Feeling Nodule somewhere in our brains that produces Feelings? That's not an effective Taboo of "suffering" - "suffering" and "unpleasant" both draw on the same black-box-node. And anyway, even assuming that you explained suffering in enough detail for an Alien Mind to identify its presence and absence, that's not enough to uniquely determine how to compare two forms of suffering. ...do you mean that you're not claiming that there is a single correct comparison between any two forms of suffering? But what does it even mean to compare two forms of suffering? I don't think you understand the inferential gap here. I don't agree that amount-of-suffering is an objective quantitative thing. I don't disagree that if x=y then f(x)=f(y). I do disagree that "same amount" is a meaningful concept, within the framework you've presented here (except that you point at a black box called Same, but that's not actually how knowledge works). I haven't banned anything. I'm claiming that your statements are incoherent. Just saying "no that's wrong, you're making a mistake, you say that X isn't real but it's actually real, stop banning discussion" isn't a valid counterargument because you can say it about anything, including arguments against things that really don't exist. I'm not saying that we should arbitrary call human suffering twice as bad as its Obvious True Amount. It's the very nature of "equal" which I'm disagreeing with you about. "How do we compare two forms of su

Won't it? If you're dying of cancer and find out that I threw away the cure, that's the difference between survival and death, and it will likely feel even worse for knowing that a cure was possible.

1TheWakalix
The dying-of-cancer-level harm is independent of whether I find out that you didn't offer me the opportunity. The sadness at knowing that I could have not been dying-of-cancer is not equivalent to the harm of dying-of-cancer.

Replace the calculator with a sentient rock. The point is that if you generate the same amount of suffering in a rock as in something with human-level intelligence, that suffering is equal. It is not dependent on intelligence. Torturing both to generate the same amount of suffering would be equally wrong. And the point is that to regard humans as above other species or things in this regard is bigotry.

2TheWakalix
"Sentient rock" is an impossible possible object. I see no point in imagining a pebble which, despite not sharing any properties with chairs, is nonetheless truly a chair in some ineffable way. You haven't defined suffering well enough for me to infer an equality operation. In other words, as it is, this is tautological and useless. The same suffering is the same suffering, but perhaps my ratio between ant-suffering and human-suffering varies from yours. Perhaps a human death is a thousand times worse than an ant death, and perhaps it is a million times worse. How could we tell the difference? If you said it was a thousand, then it would seem wrong for me to say that it was a million, but this only reveals the specifics of your suffering-comparison - not any objective ratio between the moral importances of humans and ants. Connection to LW concepts: floating belief networks, and statements that are underdetermined by reality. By all means you can define suffering however you like, but that doesn't mean that it's a category that matters to other people. I could just as easily say: "Rock-pile-primeness is not dependent on the size of the rock pile, only the number of rocks in the pile. It's just as wrong to turn a 7-pile into a 6-pile as it is to turn a 99991-pile into a 99990-pile." But that does not convince you to treat 7-piles with care. Bigotry is an unjustified hierarchy. Justification is subjective. Perhaps it is just as bigoted to value this computer over a pile of scrap, but I do not plan on wrecking it any time soon.

I'm not adding resources - they are inherent to the thought experiment, so all I've done is draw attention to their presence and their crucial role which should not be neglected. If you run this past a competent mathematician, they will confirm exactly what I've said (and be aware that this applies directly to total utilitarianism).

Think very carefully about why the population A+ should have a lower level of happiness than A if this thought experiment is resources-independent. How would that work? Why would the quality of life for indiv... (read more)

There is nothing in morality that forces you to try to be happier - that is not its role, and if there was no suffering, morality would have no role at all. Both suffering and pleasure do provide us with purpose though, because one drives us to reduce it and the other drives us to increase it.

Having said that though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to you or anyone else, you should give it to them, though this can also be viewed as something that would generate harm if they ... (read more)

1TheWakalix
These aren't equivalent. If I discover that you threw away a cancer cure, my unhappiness at this discovery won't be equivalent to dying of cancer.

Thanks for the questions.

If we write conventional programs to run on conventional hardware, there's no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it didn't actually experience at all. The brain is a neural computer though, and it's very hard to work out how any neural net works once it's become even a little complex, so it's hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience reall... (read more)

1TheWakalix
We can't know that there's not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it's quite possible that this is the case! (It's not. Unfalsifiability does not magically make something true.) That's the thing. It's impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn't exist. I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.