All of J_Thomas2's Comments + Replies

I thought of a simpler way to say it.

If Hillary Clinton were a man, she wouldn't be Bill Clinton's wife. She'd be his husband.

Similarly, if PA proved that 6 was prime, it wouldn't be PA. It would be Bill Clinton's husband. And so ZF would not imply that 6 is actually prime.

Larry, you have not proven that 6 would be a prime number if PA proved 6 was a prime number, because PA does not prove that 6 is a prime number.

The theorem is only true for the phi that it's true for.

The claim that phi must be true (because if it's true then it's true, and if it's false then "if PA |- phi then phi" is still officially true whenever PA does not prove phi) is bogus.

It's simply and obviously bogus, and I don't understand why there was any difficulty about seeing it.
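
For reference, the result being argued over is Löb's theorem, which says that for any sentence phi,

$$\text{If } \mathrm{PA} \vdash (\mathrm{Prov}(\ulcorner\varphi\urcorner) \to \varphi), \text{ then } \mathrm{PA} \vdash \varphi,$$

where Prov is PA's provability predicate; the "if PA |- phi then phi" above is the material conditional Prov(⌜phi⌝) → phi, not a counterfactual.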

Caledonian, it's possible to care deeply about choices that were made in a seemingly-arbitrary way. For example, a college graduate who takes a job in one of eight cities where he got job offers, might within the year care deeply about that city's baseball team. But if he had taken a different job it would be a completely different baseball team.

You might care about the result of arbitrary choices. I don't say you necessarily will.

It sounds like you're saying it's wrong to care about morals unless they're somehow provably correct? I'm not sure I get your o... (read more)

I haven't read Roko's blog, but from the reflection in Eliezer's opposition I find I somewhat agree.

To the extent that morality is about what you do, the more you can do the higher the stakes.

If you can drive a car, your driving amplifies your ability to do good. And it amplifies your ability to do bad. If you have a morality that leaves you doing more good than bad, and driving increases the good and the bad you do proportionately, then your driving is a good thing.

True human beings have an insatiable curiosity, and they naturally want to find out about ... (read more)

"You should care about the moral code you have arbitrarily chosen."

No, I shouldn't. Which seems to be the focal point of this endless 'debate'.

Well, you might choose to care about a moral code you have arbitrarily chosen. And it could be argued that if you don't care about it then you haven't "really" chosen it.

I agree with you that there needn't be any platonic absolute morality that says you ought choose a moral code arbitrarily and care about it, or that if you do happen to choose a moral code arbitrarily that you should then care about it.

We are born with some theorems of right (in analogy to PA).

Kenny, I'd be fascinated to learn more about that. I didn't notice it in my children, but then I wouldn't necessarily notice.

When I was a small child people claimed that babies are born with only a fear of falling and a startle reflex for loud noises. I was pretty sure that was wrong, but it wasn't clear to me what we're born with. It takes time to learn to see. I remember when I invented the inverse square law for vision, and understood why things get smaller when they go farther away. It takes time to notice that parents have their own desires that need to be taken into account.

What is it that we're born with? Do you have a quick link maybe?

Larry, one of them is counterfactual.

If you draw implications from a false assumption, the result is useful only to show that the assumption is false.

So if PA |- 1=2 then PA |- 1<>2. How is that useful?

If PA |- "6 is prime" then PA also |- "6 is not prime".

Once you assume that PA proves something whose negation PA actually proves, you get a logical contradiction. Either PA is inconsistent or PA does not prove the false thing.

How can it be useful to reason about what we could prove from false premises? What good is it to pretend that PA is inconsistent?
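
For what it's worth, the contradiction point can be made precise: in classical logic a contradiction proves anything (the principle of explosion). For any sentences A and B:

$$\begin{aligned}
&1.\ A && \text{(premise)}\\
&2.\ \neg A && \text{(premise)}\\
&3.\ A \lor B && \text{(from 1, disjunction introduction)}\\
&4.\ B && \text{(from 2 and 3, disjunctive syllogism)}
\end{aligned}$$

So if PA proved both "6 is prime" and "6 is not prime", it would prove every sentence whatsoever.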

Honestly I do not understand how you can continue calling Eliezer a relativist when he has persistently claimed that what is right doesn't depend on who's asking and doesn't depend on what anyone thinks is right.

Before I say anything else I want you to know that I am not a Communist.

Marx was right about everything he wrote about, but he didn't know everything; I wouldn't say that Marx had all the answers. When the time is ripe the proletariat will inevitably rise up and create a government that will organize the people, it will put everybody to work accord... (read more)

VAuroch:
Eliezer's moral theory is Aristotelian, not Platonic. Plato believed that Forms and The Good existed in a separate realm and not in the real world; any triangle you drew was an approximation of The Triangle. Aristotle believed that Forms were generalizations of things that exist in the real world, and had no independent existence. The Triangle is that which is shared among all drawings of triangles; The Dog is that which is shared among all dogs. Eliezer's moral theory, it seems to me, is that there is Rightness, but it is generalized from the internal sense of rightness that every human has. People may deviate from The Right, and could take murderpills to make everyone believe something which is Wrong is right, but The Right doesn't change; people would just go further out of correspondence with it.

But Larry, PA does not actually say that 6 is prime, and 6 is not prime.

You could say that if PA proved that every theorem is false then every theorem would be false.

Or what would it mean if PA proved that Löb's theorem was false?

It's customary to say that any conditional with a false premise is true. If 6 is prime then God's in his heaven, everything's right with the world and we are all muppets. Also God's in hell, everything's wrong with the world, and we are all mutant ninja turtles. It doesn't really matter what conclusions you draw from a false premis... (read more)

Let me try to say that more clearly.

Suppose that A is false.

How the hell are you going to show that if PA proves A true then A will be true, when A is actually false?

If you can't prove what would happen if PA proved A when A is actually false, then being able to prove "if PA proves A then A has to be true" must mean that A is true in the first place.

If this reasoning is correct then there isn't much mystery involved here.

One more time. If PA proves you are a werewolf, then you're really-and-truly a werewolf. PA never proves anything that isn't ac... (read more)

I went to the carnival and I met a fortune-teller. Everything she says comes true. Not only that, she told me that everything she says always comes true.

I said, "George Washington is my mother" and she said it wasn't true.

I said, "Well, if George Washington was my mother would you tell me so?" and she refused to say she would. She said she won't talk about what would be true if George Washingto was my mother, because George Washington is not my mother.

She says that everything she says comes true. She looked outside her little tent, and ... (read more)

TobyBartels:
You can tell that J_Thomas2's "if" (which is the counterfactual "if") is not the "if" of material implication (which is what appears in Löb's theorem) from the grammar: "if George Washington was my mother" rather than "if George Washington is my mother".
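
For reference, the material conditional is truth-functional: p → q is false only when p is true and q is false, so it is automatically true whenever p is false:

$$\begin{array}{cc|c}
p & q & p \to q\\
\hline
T & T & T\\
T & F & F\\
F & T & T\\
F & F & T
\end{array}$$

That is why "if PA proves A then A" holds trivially whenever PA does not prove A, while the counterfactual reading does not.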

The same boy who rationalized his way into believing there was a chocolate cake in the asteroid belt should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

Obviously, he does know. So the next question is, why does he present material that he knows is wrong?

Professional mathematicians and scientists try not to do that because it makes them look bad. If you present a proof that's wrong then other mathematicians might embarrass you at parties. But maybe Eliezer is immune to that kind of embarrassment. Socrates pres... (read more)

When you try to predict what will happen it works pretty well to assume that it's all deterministic and get what results you can. When you want to negotiate with somebody it works best to suppose they have free will and they might do whatever-the-hell they want.

When you can predict what inanimate objects will do with fair precision, that's a sign they don't have free will. And if you don't know how to negotiate with them, you haven't got a lot of incentive to assume they have free will, particularly when they're actually predictable.

The more predictabl... (read more)

People keep using the term "moral relativism". I did a Google search of the site and got a variety of topics with the term dating from 2007 and 2008. Here's what it means to me.

Relative moral relativism means you affirm that to the best of your knowledge nobody has demonstrated any sort of absolute morality. That people differ in moralities, and if there's anything objective to say one is right and another is wrong that you haven't seen it. That very likely these different moralities are good for different purposes and different circumstances, an... (read more)

We have quick muscles, so we do computation to decide how to organise those muscles.

Trees do not have quick muscles, so they don't need that kind of computation.

Trees need to decide which directions to grow, and which directions to send their roots. Pee on the ground near a tree and it will grow rootlets in your direction, to collect the minerals you give it.

Trees need to decide which poisons to produce and where to pump them. When they get chewed on by bugs that tend to stay on the same leaf the trees tend to send their poisons to that leaf. When it's bug... (read more)

If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.

My experience with math classes was much different from yours. When we had a disagreement, the teacher said, "How would we tell who's right? Do you have a proof? Do you have a counter-example?"... (read more)

Nominull, don't the primalists have a morality about heaps of stones?

They believe there are right ways and wrong ways to do it. They sometimes disagree about the details of which ways are right and they punish each other for doing it wrong.

How is that different from morality?

I think there is an important distinction between "kill or die" and "kill or be killed." The wolf's life may be at stake, but the rabbit clearly isn't attacking the wolf. If I need a heart transplant, I would still not be justified in killing someone to obtain the organ.

Mario, you are making a subtler distinction than I was. There is no end to the number of subtle distinctions that can be made.

In warfare we can distinguish between infantrymen who are shooting directly at each other, versus infantry and artillery or airstrikes that dump ... (read more)

This series of Potemkin essays makes me increasingly suspicious that someone's trying to pull a fast one on the Empress.

Agreed. I've suspected for some time that -- after laying out descriptions of how bias works -- Eliezer is now presenting us with a series of arguments that are all bias, all the time, and noticing how we buy into it.

It's not only the most charitable explanation, it's also the most consistent explanation.

If you were to stipulate that the rabbit is the only source of nourishment available to the fox, this still in no way justifies murder. The fox would have a moral obligation to starve to death.

How different is it when soldiers are at war? They must kill or be killed. If the fact that enemy soldiers will kill them if they don't kill the enemy first isn't enough justification, what is?

Should the soldiers on each side sit down and argue out the moral justification for the war first, and the side that is unjustified should surrender?

But somehow it seems like they hardly ever do that....

Konrad Lorenz claimed that dogs and wolves have morality. When a puppy does something wrong, a parent pushes on the back of its neck with its mouth and pins it to the ground, and lets it up when it whines appropriately.

Lorenz gave an example of an animal that mated at the wrong time. The pack leader found the male still helplessly coupled with the female, and pinned his head to the ground just like a puppy.

It doesn't have to take language. It starts out with moral beliefs that some individuals break. I can't think of any moral taboos that haven't bee... (read more)

Chuang-Tzu had a story: Two philosophers were walking home from the bar after a long evening drinking. They stopped to piss off a bridge. One of them said, "Look at the fish playing in the moonlight! How happy they are!"

The other said, "You're not a fish so you can't know whether the fish are happy."

The first said, "You're not me so you can't know whether I know whether the fish are happy."

It seems implausible to me that rabbits or foxes think about morality at all. But I don't know that with any certainty, I'm not sure how th... (read more)

Caledonian, thank you. I didn't notice that there might be people who disagree with that, since it seemed to me so clearly true and unarguable.

I guess in the extreme case somebody could believe that fairness has nothing to do with agreement. He might find a bunch of people who have a deal that each of them believes is fair, and he might argue that each of them is wrong, that their deal is actually unfair to every one of them. That each of them is deforming his own soul by agreeing to this horrible deal.

My thought about that is that there might be some deal... (read more)

Lakshmi, Eliezer does have a point, though.

While there are many competing moral justifications for different ways to divide the pie, and while a moral relativist can say that no one of them is objectively correct, still many human beings will choose one. Not even a moral relativist is obligated to refrain from choosing moral standards. Indeed, someone who is intensely aware that he has chosen his standards may feel much more intensely that they are his than someone who believes they are a moral absolute that all honest and intelligent people are obligated ... (read more)

"Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"

Because they have a specific argument which leads them to believe that?

Sure, but have you ever seen such an argument that wasn't obviously fallacious? I have not seen one yet. It's been utterly obvious every time.

Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about what is fair, versus what is not fair, if you don't explicitly define "fair" in the f... (read more)

Hendrick, it could be argued that each person deserves to own 1/N of the pie because they are there. So if Doreen isn't hungry, she still owns 1/N of the pie which she can sell to anyone who is hungry.

Similarly it could be argued that the whole forest should be divided up and each person should own 1/N of it, and if the pie is found in the part of the forest that I own then I own that whole pie. But I have no rights to pies found in the rest of the forest.

Now suppose that all but one of the group is busy looking up into the trees at beautiful birds, which ... (read more)

But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all?

I don't think you ought to try to optimise fitness. Your opinion about fitness might be quite wrong, even if you accept the goal of optimising fitness. Say you make sacrifices trying to optimise fitness and then it turns out you failed. Like, you try to optimise for intelligence just before a plague hits that kills 3/4 of the public. You should have optimised for... (read more)

Eliezer, you claim that there is no necessity we should accept Dennis's claim he should get the whole pie as fair. I agree.

There is also no necessity he should accept our alternative claim as fair.

There is no abstract notion that is inherently fair. What there is, is that when people do reach agreement that something is fair, then they have a little bit more of a society. And when they can't agree about what's fair they have a little less of a society. There is nothing that says ahead of time that they must have that society. There is nothing that says ahe... (read more)

One very funny consequence of defining "fair" as "that which everyone agrees to be 'fair'" is that if you indeed could convince everyone of the correctness of that definition, nobody could ever know what IS "fair", since they would look at their definition of "fair", which is "that which everyone agrees to be 'fair'", then they would look at what everyone does agree to be fair, and conclude that "that which everyone agrees to be 'fair'" is "that which everyone agrees t... (read more)

If fairness is about something other than human agreement, what is it?

Suppose you have a rule that you say is always the fair one. And suppose that you apply it to a situation involving N people, and all N of them object, none of them think it's fair. Are you going to claim that the fair thing for them to do is something that none of them agrees to? What's fair about that?

When everybody involved in a deal agrees it's fair, who are you -- an outside kibitzer -- to tell them they're wrong?

Suppose a group all agrees, they think a deal is fair. And then you co... (read more)

It's fair when the participants all sincerely agree that it's fair.

If you think you're being unfair to somebody and he disagrees, who's right?

There isn't any guarantee that a fair solution is possible. If people can't agree, then we can't be fair. I say, fairness is a goal that we can sometimes achieve. There's no guarantee that we could always achieve all of our goals if only we did the right things. There's no guarantee that fairness is possible. Just, it's a good goal to try for sometimes, and sometimes we can actually be fair or mostly fair.

People ofte... (read more)

Glyn, I did something similar, but with mine, after the granular tasks are estimated, a random delay is added to each according to a Pareto distribution. The more subtasks, the more certain it is that a few of them will be very much behind schedule.

I chose a Pareto distribution because it had the minimal number of parameters to estimate and it had a fat tail. Also I had a maximum entropy justification. If you use an exponential distribution, you're assuming a constant chance of completion at any time while the task is incomplete. But other things equal, the more you ge... (read more)
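
A minimal sketch of such a padding scheme, assuming the delay works as a multiplicative Pareto overrun (the alpha value and task numbers here are illustrative, not the original tool's):

```python
import random

def simulate_schedule(estimates, alpha=1.5, trials=10_000):
    """Monte Carlo sketch: each subtask's actual duration is its
    estimate times a Pareto-distributed multiplier (>= 1, fat tail),
    so with many subtasks a few almost surely run badly over.
    alpha is the tail index, to be estimated from past projects;
    smaller alpha means a fatter tail."""
    totals = sorted(
        sum(e * random.paretovariate(alpha) for e in estimates)
        for _ in range(trials)
    )
    return {
        "naive_sum": sum(estimates),       # what the raw estimates add up to
        "median": totals[trials // 2],     # typical simulated total
        "p90": totals[int(trials * 0.9)],  # a safer figure to plan around
    }

# Hypothetical granular estimates, in days.
print(simulate_schedule([3, 5, 2, 8, 1]))
```

With alpha = 1.5 each multiplier averages 3x, so the simulated totals sit well above the naive sum, and the p90 higher still; that gap is the point of the exercise.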

Steve, I think your comment was meant for a different thread, not this one.

This seems to imply that the relativists are right. Of course there's no right way to sort pebbles, but if there really is an absolute morality that AIs are smart enough to find, then they'll find it and rule us with it.

Of course, there could be an absolute morality that AIs aren't smart enough to find either. Then we'd take pot luck. That might not be so good. Many humans believe that there is an absolute morality that governs their treatment of other human beings, but that no morality is required when dealing with lower animals, who lack souls and full i... (read more)

[anonymous]:
Seeing as the universe itself, on its most fundamental level, seems to lack any absolutes, i.e. that it is purely a locality question, and that the only constants seem to be the ones embedded in the laws of physics, I am having trouble believing in absolute morality. Like, of the "I am confused by this" variety. To paraphrase, "there is no term for fairness in the equations of general relativity." You cannot derive morality from the absolute laws of the universe. You probably cannot even do it from mathematical truth. You might want to read Least Convenient Possible World.
Crabfishram:
I don't think that they would tell the AIs not to think things, when to them piling pebbles is all one should ever want to do. It's life to them, so if you were super smart you would want to work toward the only point in life.

Given that the morality we want to impose on a FAI is kind of incoherent, maybe we should get an AI to make sense of it first?

Funky, you might be right.

Consider Tacitus:
"To ravage, to slaughter, to usurp under false titles, they call empire; and where they make a desert, they call it peace."

How better to make a desert than with nukes?

As a general rule, real WMDs do not help nations achieve the goals they think of as victory. Imagine for example that we had created plentiful nukes two years earlier, and we had then bombed 20 German cities while the Germans surrendered to us. We would then have to deal with Russia, and our German ally would have 20 fewer cities to assi... (read more)

Caledonian, agreed. Whatever we claim the inevitable results of that slaughter were, whether it's that we prevented a later nuclear war or that we poisoned the chance for peace, it's all bogus.

We don't know what would have happened instead if only things were different. We can only guess by making metaphors from other situations.

Here's a metaphor--

Pre-nuke: You have a neighbor who annoys you. He plays his stereo too loud. He throws garbage over the fence into your yard. He doesn't mow his grass, you get bugs and another neighbor has trouble selling his house. ... (read more)

Funny how the meaning changes if it's desire for gold atoms compared to desire for iron atoms.

I'm real unclear about the concept here, though. Is an FAI going to go inside people's heads and change what we want? Like, it figures out how to do effective advertising?

Or is it just deciding what goals it should follow, to get us what we want? Like, if what the citizens of two countries each with population about 60 million most want is to win a war with the other country, should the FAI pick a side and give them what they want, or should it choose some other w... (read more)

"It's not the weapons that kill people, but the people who use them."

There's a level where that's kind of true.

But consider the chicken. In the usual way of things, when two cocks meet they do some threat displays and likely one of them runs away. If not they fight each other a little and then likely one of them runs away.

If you strap razor blades to their feet and put them into a pen where they can't run away then you have something you can sell tickets to. Except it's illegal in this country. You could say "Razor blades don't kill fighting cocks, oth... (read more)

Mark, no one has used biological weapons even though we have developed them. (There may be some unpublicised exceptions; maybe South Africa used some against Africans, etc.) No one has ever used genetic weapons. The idea that every weapon gets used except for MAD is wrong.

You say that we cannot have disarmament. As long as the USA prevents disarmament, you are right. But after the next nuclear war, we will have nuclear disarmament, provided the world economy still exists. You say it can't happen, on no evidence. I say, wait and see. When it comes, you won't b... (read more)

Mark, you have the right to your untestable opinions. No one can ever show whether we would have used nukes other times if we hadn't that time, or that somebody else would have used nukes if they had them, or that if you were in Truman's place you'd do the same thing he did.

There's no way for anybody to know about any of these things, so you have the perfect right to believe whatever you want just as you do about how many Santa Clauses there are in Heaven and whether the Yankees would have won the series in 1947 if they had Joe DiMaggio, and whether the ge... (read more)

Frelkins, let's consider MAD in action.

In 1973 Israel lost a war. Egypt wasn't ready to take half of the Sinai, much less the whole thing, but still it was clear that Israel had lost and would have to negotiate. Instead, Israel threatened to nuke Egypt.

The USA detected nuclear material crossing the Dardanelles, and "we" initiated DefCon 1 and announced it as DefCon 3. "We" told the Russians that unless they backed down and let Israel threaten Egypt with nukes when there was no countermeasure available to Egypt, we would kill everybody in... (read more)

It's absurd to argue that "we" did the right thing because the results happened to turn out well. You can make that argument about anything. For example, if Hitler had not started WWII when he did, there would inevitably have been a world war after both sides had nuclear weapons and it would have been far, far worse. Hitler might have done it for the wrong reasons but we owe him our lives for doing it.

All it takes is to look at what happened, and make up a worse alternative, and then say that what happened was better than the alternative. You can... (read more)

Suppose we break the problem down into multiple parts.

1. Understand how the problem works, what currently happens.
2. Find a better way that things can work, that would not generate the same problems.
3. Find a way to get from here to there.
4. Do it.

Then part 1 might easily be aided by a guy on a blog. Maybe part 2. Possibly part 3.

A blog is better than a newsgroup because the threads don't scroll off, they're all sitting on the site's computer if anybody cares. Also, as old posts are replaced by new posts people stop responding to old posts. So there isn... (read more)

'It seems to me like the simplest way to solve friendliness is: "Ok AI, I'm friendly so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse 'friendliness' into the AI.'

As was pointed out, this might not have the consequences one wants. However, even if that wasn't true, I'd still be leery of this option - this'd effectively be giving one human unlimited power.

Would you expect all the AIs to work together under one person'... (read more)

"Sorry, you're not allowed to suggest ideas using that method" is not something you hear, under Traditional Rationality.

But it is a fact of life, ....

It is a fact of life that ....

I disagree. You list a whole collection of mistakes people make after they have a bad hypothesis that they're attached to. I say, using your prior experience when you come up with hypotheses is not the mistake. The mistakes are, first, getting too attached to one hypothesis, followed by the list of "facts of life" mistakes you then described.

People will ... (read more)

There's always a nonzero chance that any action will cause an infinite bad. Also an infinite good.

Then how can you put error bounds on your estimate of your utility function?

If you say "I want to do the bestest for the mostest, so that's what I'll try to do" then that's a fine goal. When you say "The reason I killed 500 million people was that according to my calculations it will do more good than harm, but I have absolutely no way to tell how correct my calculations are" then maybe something is wrong?
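
The arithmetic behind that complaint: once an infinite good and an infinite bad each get nonzero probability (p and q), the expected utility of every action is undefined, so no error bound on it means anything:

$$E[U \mid a] \;=\; p \cdot (+\infty) \;+\; q \cdot (-\infty) \;+\; \sum_i p_i u_i \;=\; \infty - \infty \quad \text{(undefined)}.$$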

"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity.

The assumption is that when you lay waste to a million human species the bad that is done is finite.

Is there solid evidence for tha... (read more)

My point was that in an adversarial situation, you should assume your opponent will always make perfect choices. Then their mistakes are to your advantage. If you're ready for optimal thrashings, random thrashings will be easier.

It isn't that simple. When their perfect choice means you lose, then you might as well hope they make mistakes. Don't plan for the worst that can happen, plan for the worst that can happen which you can still overcome.

One possible mistake they can make is to just be slow. If you can hit them hard before they can react, you might hur... (read more)

Humans faced with resource constraints did find the other approach.

Traditionally, rather than restrict our own breeding, our response has been to enslave our neighbors. Force them to work for us, to provide resources for our own children. But don't let them have children. Maybe castrate the males, if necessary kill the females' children. (It was customary to expose deformed or surplus children. If a slave does get pregnant, whose child is surplus?)

China tried the "everybody limit their children" approach. Urban couples were allowed one child, far... (read more)
