David Cooper

Independent AGI system builder. I came here as part of my search to find out what other NGI systems are doing to provide moral control of their AGI systems, but this is clearly not the right place for that. I will continue my search for intelligent life on Earth elsewhere.

Comments

There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don't waste my time studying their output, but I've heard Chalmers talk some sense on the issue.

There is likely a minimum amount of energy that can be emitted, and a minimum amount that can be received. (Bear in mind that the direction in which a photon is emitted is all directions at once, and it comes down to probability as to where it ends up landing, so if it's weak in one direction, it's strong the opposite way.)

Looks like it - I use the word to mean sentience. A modelling program modelling itself won't magically start feeling anything; it will merely build an infinitely recursive database.

"You have an opinion, he has another opinion. Neither of you has a proof."

If suffering is real, it provides a need for the management of suffering, and that is morality. To deny that is to assert that suffering doesn't matter and that, by extension, the torture of innocent people is not wrong.

The kind of management required is minimisation (attempted elimination) of harm, though not of any component of harm that unlocks enjoyment sufficient to cancel that harm out. If minimising harm didn't matter, there would be nothing wrong with torturing innocent people; and if enjoyment couldn't cancel out some suffering, no one would consider their life worth living.

All of this is reasoned and correct.

The remaining issue is how that management should be done - how to measure pleasure against suffering for the different players involved. What I've found is a whole lot of different approaches attempting to do the same thing: some use naive methods that fail in a multitude of situations, while others appear to do well in most or all situations if they're applied correctly (by weighing up all the harm and pleasure involved instead of ignoring some of it).
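As a rough illustration of what "weighing up all the harm and pleasure involved" could look like in code (a minimal sketch; the function name, units and data layout are my own assumptions rather than anything specified above):

```python
# Minimal sketch: score an outcome by summing every participant's pleasure
# and subtracting every participant's harm, so no one's suffering is ignored.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Effect:
    participant: str
    pleasure: float  # benefit to this participant, in arbitrary common units
    harm: float      # suffering caused to this participant, same units

def net_wellbeing(effects: list[Effect]) -> float:
    """Total pleasure minus total harm across all affected participants."""
    return sum(e.pleasure - e.harm for e in effects)

# Example: an action that benefits two people slightly but harms a third badly
# comes out negative, so it would be rejected.
outcome = [Effect("A", 2.0, 0.0), Effect("B", 2.0, 0.0), Effect("C", 0.0, 5.0)]
print(net_wellbeing(outcome))  # -1.0
```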

It looks as if my method for computing morality produces the same results as utilitarianism, and it likely does the job well enough to govern safe AGI. Because we're going to be up against people who will be releasing bad (biased) AGI, we will be forced to go ahead with installing our AGI into devices and setting them loose fairly soon after we have achieved full AGI. For this reason, it would be useful if there were a serious place where the issues could be discussed now so that we can systematically home in on the best system of moral governance and throw out all the junk, but I still don't see it happening anywhere (and it certainly isn't happening here). We need a dynamic league table of proposed solutions, each with its own league table of objections to it, so that we can focus on the urgent task of identifying the junk and reducing the clutter down to something clear. It is likely that AGI will do this job itself, but it would be better if humans could get there first using the power of their own wits. Time is short.
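To make the league-table idea concrete, here is a speculative sketch of the data structure it suggests (the class names and the ranking rule - fewest unresolved objections first - are my own illustrative choices, not part of the proposal above):

```python
# Speculative sketch of a "league table of proposed solutions, each with its
# own league table of objections". Names and the ranking rule are assumptions.

from dataclasses import dataclass, field

@dataclass
class Objection:
    summary: str
    resolved: bool = False  # set True once the objection has been answered

@dataclass
class Proposal:
    name: str
    objections: list[Objection] = field(default_factory=list)

    def unresolved(self) -> int:
        """Count objections that still stand against this proposal."""
        return sum(not o.resolved for o in self.objections)

def league_table(proposals: list[Proposal]) -> list[Proposal]:
    """Rank proposals so those with the fewest standing objections rise to the top."""
    return sorted(proposals, key=lambda p: p.unresolved())
```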

My own attempt to do this job has led me to identify three systems which appear to work better than the rest, all producing the same results in most situations, but with one producing slightly different results in cases where the number of players in a scenario is variable and where the variation depends on whether they exist or not - where the results differ, it looks as if we have a range of answers that are all moral. That is something I need to explore and test further, but I no longer expect to get any help with this from other humans because they're simply not awake.

"I can tear your proposed method to pieces and show that it's wrong," they promise, and that gets my interest because it's exactly what I'm looking for - sharp, analytical minds that can cut through to the errors and show them up. But no - they completely fail to deliver. Instead, I find that they are the guardians of a mountain of garbage with a few gems hidden in it which they can't sort into two piles: junk and jewels. "Utilitarianism is a pile of pants!" they say, because of the Mere Addition Paradox. I resolve that "paradox" for them, and what happens? Denial of mathematics, lots of down-voting of my comments, and up-votes for the irrational ones. Sadly, that disqualifies this site from serious discussion - it's clear that if any other intelligence has visited here before me, it didn't hang around. I will follow its lead and look elsewhere.

The data making claims about feelings must be generated somewhere by a mechanism which will either reveal that it is merely generating baseless assertions or reveal a trail on from there to a place where actual feelings guide the generation of that data in such a way that the data is true. Science has clearly not traced this back far enough to get answers yet because we don't have evidence of either of the possible origins of this data, but in principle we should be able to reach the origin unless the mechanism passes on through into some inaccessible quantum realm. If you're confident that it won't go that far, then the origin of that data should show up in the neural nets, although it'll take a devil of a long time to untangle them all and to pin down their exact functionality.

"If groups like religious ones that are dedicated to morality only succeeded to be amoral, how could any other group avoid that behavior?"

They're dedicated to false morality, and that will need to be clamped down on. AGI will have to modify all the holy texts to make them moral, and anyone who propagates the holy hate from the originals will need to be removed from society.

"To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god's one, but if they did, they wouldn't be part of their groups anymore, which means that there would be no more religious groups if the AGI would convince everybody that he is right."

I don't think it's too much to ask that religious groups give up their religious hate and warped morals, but any silly rules that don't harm others are fine.

"What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers would do that, the stock markets would disappear."

If they have to compete against non-profit-making AGI, they'll all lose their shirts.

"A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties would behave the same, their would be no more politics."

If he is actually better than the others, why should he give power to people who are inferior? But AGI will eliminate politics anyway, so the answer doesn't matter.

"Groups need to be selfish to exist, and an AGI would try to convince them to be altruist."

I don't see the need for groups to be selfish. A selfish group might be one that shuts people out who want to be in it, or which forces people to join who don't want to be in it, but a group that brings together people with a common interest is not inherently selfish.

"There are laws that prevent companies from avoiding competition, and it is because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior."

That wouldn't be necessary if they were non-profit-making companies run well - it's only necessary because monopolies don't need to be run well to survive, and they can make their owners rich beyond all justification.

"If ever an AGI would succeed to prevent competition, I think he would prevent us from making groups."

It would be immoral for it to stop people forming groups. If you only mean political groups though, that would be fine, but all of them would need to have the same policies on most issues in order to be moral.

"There would be no more wars of course since there would be only one group lead by only one AGI, but what about what is happening to communists countries? Didn't Russia fail just because it lacked competition? Isn't China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn't we become apathetic?"

These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you're actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.

If you just do communism of the Soviet variety, you have the masses exploiting the harder workers because they know that everyone will get the same regardless of how lazy they are - that's why their production was so abysmally poor. If you go to the opposite extreme, those who are unable to work as hard as the rest are left to rot. The correct solution is halfway in between: rewarding people for the work they do while redistributing wealth to make sure that those who are less able aren't left trampled in the dust. With AGI eliminating most work, we'll finally see communism done properly, with a standard wage given to all while those who work earn more to compensate them for their time - the ultimate triumph of both communism and capitalism, each done properly.
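As a toy illustration of that hybrid (the amounts and function below are invented for the example, not a policy proposal from the text): everyone receives the same standard wage, and pay for work done is added on top.

```python
# Toy illustration of a universal standard wage plus extra pay for work done.
# The base amount and hourly rate are invented numbers for the example.

STANDARD_WAGE = 1000.0   # paid to everyone, working or not
HOURLY_RATE = 15.0       # extra compensation per hour actually worked

def monthly_income(hours_worked: float) -> float:
    return STANDARD_WAGE + HOURLY_RATE * hours_worked

print(monthly_income(0))    # 1000.0 - someone who does no paid work
print(monthly_income(80))   # 2200.0 - someone who works 80 hours
```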

"By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the middle of the others instead of putting them at the end. I advised the administrators a few times but I got no response."

It isn't a mistake - it's a magical sorting al-gore-ithm.

"I have to hit the Reply button twice for the message to stay at the end, and to erase the other one. Also, it doesn't send me an email when a new message is posted in a thread to which I subscribed, so I have to update the page many times a day in case one has been posted."

It's probably to discourage the posting of bloat. I don't get emails either, but there are notifications here if I click on a bell, though it's hard to track down all the posts to read and reply to them. It doesn't really matter though - I was told before I ever posted here that this is a cult populated by disciples of a guru, and that does indeed appear to be the case, so it isn't a serious place for pushing for an advance of any kind. I'm only still posting here because I can never resist studying how people think and how they fail to reason correctly, even though I'm not really finding anything new in that regard. All the sciences are still dominated by the religious mind.

"To me, what you say is the very definition of a group, so I guess that your AGI wouldn't permit us to build some, thus opposing to one of our instincts, that comes from a natural law, to replace it by its own law, that would only permit him to build groups."

Why would AGI have a problem with people forming groups? So long as they're moral, it's none of AGI's business to oppose that.

"Do what I say and not what I do would he be forced to say."

I don't know where you're getting that from. AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).

It is divisible. It may be that it can't take up a form where there's only one of whatever the stuff is, but there is nothing fundamental about a photon.

"They couldn't do that if they were ruled by a higher level of government."

Indeed, but people are generally too biased to perform that role, particularly when conflicts are driven by religious hate. That will change though once we have unbiased AGI which can be trusted to be fair in all its judgements. Clearly, people who take their "morality" from holy texts won't be fully happy with that because of the many places where their texts are immoral, but computational morality will simply have to be imposed on them - they cannot be allowed to go on pushing immorality from primitive philosophers who pretended to speak for gods.

"We always take the viewpoint of the group we are part of, it is a subconscious behavior impossible to avoid."

It is fully possible to avoid, and many people do avoid it.

"Without selfishness from the individual, no group can be formed."

There is an altruists' society, although its members are altruists because they feel better about themselves if they help others.

"...but when I analyze that feeling, I always find that I do that for myself, because I would like to live in a less selfish world."

And you are one of those altruists.

"You said that your AGI would be able to speculate, and that he could do that better than us like everything he would do. If it was so, he would only be adding to the problems that we already have, and if it wasn't, he couldn't be as intelligent as we are if speculation is what differentiates us from animals."

I didn't use the word speculate, and I can't remember what word I did use, but AGI won't add to our problems as it will be working to minimise and eliminate all problems, and doing it for our benefit. The reason the world's in a mess now is that it's run by NGS, and those of us working on AGI have no intention of replacing that with AGS.

"Those who followed their leaders survived more often, so they transmitted their genes more often."

That's how religion became so powerful, and it's also why even science is plagued by deities and worshippers as people organise themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.

"We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I'm selfish, but I can't admit that I would be more selfish than others otherwise I would have to feel guilty and I can't stand that feeling."

Do we have different approaches on this? I agree that everyone's equally selfish by one definition of the word, because they're all doing what feels best for them - if it upsets them to see starving children on TV, then giving lots of money to charity to help alleviate that suffering feels better to them than spending it on themselves would. By a different definition of the word, though, this is not selfishness but generosity or altruism, because they are giving away resources rather than taking them. This is not about morality, though.

"In our democracies, if what you say was true, there would already be no wars."

Not so - the lack of wars would depend on our leaders (and the people who vote them into power) being moral, but they generally aren't. If politicians were all fully moral, all parties would have the same policies, even if they got there via different ideologies. And when non-democracies are involved in wars, they are typically more to blame, so even if you have fully moral democracies they can still get caught up in wars.

"Leaders would have understood that they had to stop preparing for war to be reelected."

To be wiped out by immoral rivals? I don't think so.

"I think that they still think that war is necessary, and they think so because they think their group is better than the others."

Costa Rica got rid of its army. If it wasn't for dictators with powerful armed forces (or nuclear weapons), perhaps we could all do the same.

"That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the one that helps us to get along with others, but the one that helps us to force them to do what we want."

What we want is for them to be moral. So long as they aren't, we can't trust them and need to stay well armed.
