All of Unknowns's Comments + Replies

The main problem with this is that it says that human beings are extremely unlike all nearby alien races. But if you are willing to admit that humanity is that unique, you might as well say that intelligence evolved only on Earth, which is a much simpler and more likely hypothesis.

If "being rational" means choosing the best option, you never have to choose between "being reasonable" and "being rational," because you should always choose the best option. And sometimes the best option is influenced by what other people think of what you are doing; sometimes it's not.

0Satoshi_Nakamoto
I agree that rationality and reasonableness can be similar, but they can also be different. See this post for what I mean by rationality. The idea of it being choosing the best option is too vague. Some factors that may lead to what others think is reasonable being different from what is most rational are: the continued use of old paradigms that are known to be faulty, pushing your views as being what is reasonable as a method of control, and status quo bias. Here are two more examples of the predicament:

* Imagine that you are in a family that is heavily religious and you decide that you are an atheist. If you tell anyone in your family, you are likely to get chastised for this, making it an example of the just-be-reasonable predicament.
* Imagine that you are a jury member and you are the cause of a hung jury. They tell you: "the guy obviously did it. He is a bad man anyway. How much evidence do you need? Just be reasonable about this so that we can go home". Now, you may actually be being irrationally underconfident, or perhaps you are not. The post was about what you should do in this situation.

I consider it a predicament because people find it hard to do what they think is the right thing when they are uncertain and when it will cause them social disapproval. Also, I have updated the below to try to more clearly express what I meant:
Unknowns330

It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone that is sex-determining in that way. Having two could in fact have strange effects of its own.

6fubarobfusco
Sex determination in placental mammals turns out to be really complicated, which is probably why there are so many intersex conditions. It's much simpler in marsupials, which is why male kangaroos don't have nipples. (Where would they keep them?)

I think what you need to realize is that it is not a question of proving that all of those things are false, but rather that it makes no difference whether they are or not. For example, when you go to sleep and wake up, it feels just the same whether it is still you or a different person, so it doesn't matter at all.

2Sabiola
Also, you're changing all the time anyway, even when you're awake. You have experiences, you learn things, you accumulate memories; all things that change you.

Excellent post. Basically, simpler hypotheses are on average more probable than more complex ones, no matter how complexity is defined, as long as there is a minimum complexity and no maximum complexity. But some measures of simplicity are more useful than others, and this is determined by the world we live in; thus we learn by experience that mathematical simplicity is a better measure than "number of words it takes to describe the hypothesis," even though both would work to some extent.
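To spell out the counting argument behind this (my own formalization, not from the post; it assumes complexity takes positive integer values, with finitely many hypotheses at each level and no maximum level):

```latex
% If p is any prior over hypotheses and C(h) any complexity measure with
% finitely many hypotheses at each level n and no maximum level, then
\[
\sum_{n=1}^{\infty} \;\; \sum_{h:\,C(h)=n} p(h) \;=\; 1
\quad\Longrightarrow\quad
\sum_{h:\,C(h)=n} p(h) \;\longrightarrow\; 0 \;\text{ as } n \to \infty .
\]
% So however complexity is measured, the total probability assigned to a
% complexity level must eventually shrink: sufficiently complex hypotheses
% are, on average, less probable than simpler ones. One concrete prior with
% this shape is the Solomonoff-style choice p(h) proportional to 2^{-C(h)}.
```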

0MrMind
Ah, I see now that we said almost the same thing. Which is the basic recipe of Solomonoff induction! I would only say here that simplicity is determined by the agent evaluating it, and while most agents are determined by the world that they live in, some might not be.

I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it is perfectly possible that the situation where you know which gene you have is simply impossible. But in any case this is all hypothetical, because the situation as posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you're arg... (read more)

0philh
Sorry, tapping out now. EDIT: but brief reply to your edit: I'm well aware that you think they're the same, and telling me that I'm not getting the point is super unhelpful.

What if we take the original Newcomb, then Omega puts the million in the box, and then tells you "I have predicted with 100% certainty that you are only going to take one box, so I put the million there"?

Could you two-box in that situation, or would that take away your freedom?

If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.

If you say you could not, why would that still count as you, when the genetic case would not?

-1philh
Unless something happens out of the blue to force my decision - in which case it's not my decision - this situation doesn't happen. There might be people for whom Omega can predict with 100% certainty that they're going to one-box even after Omega has told them his prediction, but I'm not one of them. (I'm assuming here that people get offered the game regardless of their decision algorithm. If Omega only makes the offer to people whom he can predict certainly, we're closer to a counterfactual mugging. At any rate, it changes the game significantly.)

"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.

As you note, if an AI could read its source code and sees that it says "one-box", then it will still one-box, because it simply does what it is programmed to do. This first of all violates the conditions as proposed (I said the AIs cannot look at t... (read more)

-1philh
I was referring to "in principle", not to reality. Yes. I think that if I couldn't do that, it wouldn't be me. If we don't permit people without the two-boxing gene to two-box (the question as originally written did, but we don't have to), then this isn't a game I can possibly be offered. You can't take me, and add a spooky influence which forces me to make a certain decision one way or the other, even when I know it's the wrong way, and say that I'm still making the decision. So again, we're at the point where I don't know why we're asking the question. If not-me has the gene, he'll do one thing; if not, he'll do the other; and it doesn't make a difference what he should do. We're not talking about agents with free action, here. Again, I'm not sure exactly how this extends to the case where an agent doesn't know whether they have the gene.

In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather, in the original Newcomb, Omega looks at your disposition, one that exists at the very beginning. If he sees that you are disposed to one-box, he puts the million. This is just the same as someone looking at the source code of an AI and seeing whether it will one-box, or someone looking for the one-boxing gene.

Then, when you make the choice, in the original Newcomb you... (read more)

-1OrphanWilde
Hypotheticals are not required to follow the laws of reality, and Omega is, in the original problem, definitionally prescient - he knows what is going to happen. You can invent whatever reason you would like for this, but causality flows not from your current state of being to Omega's decision, but from your current state of being to your future decision, and from that future decision to Omega's decision right now. Because Omega's decision on what to put in the boxes is predicated, not on your current state of being, but on your future decision.

Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.

-1OrphanWilde
In the original Newcomb, causality genuinely flowed in the reverse. Your decision -did- change whether or not there is a million dollars in the box. The original problem had information flowing backwards in time (either through a simulation which, for practical purposes, plays time forward, then goes back to the origin, or through an omniscient being seeing into the future, however one wishes to interpret it). In the medical Newcomb, causality -doesn't- flow in the reverse, so behaving as though causality -is- flowing in the reverse is incorrect.

No, it is not an evil decision problem, because I did that not because of the particular reasoning, but because of the outcome (taking both boxes).

The original does not specify how Omega makes his prediction, so it may well be by investigating source code.

You cannot assume that any of those things are irrelevant or that they are overridden just because you have a gene. Presumably the gene is arranged in coordination with those things.

Unknowns-10

Yes, as you can see from the comments on this post, there seems to be some consensus that the smoking lesion refutes EDT.

The problem is that the smoking lesion, in decision theoretic terms, is entirely the same as Newcomb's problem, and there is also a consensus that EDT gets the right answer in the case of Newcomb.

Your post reveals that the smoking lesion is the same as Newcomb's problem and thus shows the contradiction in that consensus. Basically there is a consensus but it is mistaken.

Personally I haven't seen any real refutation of EDT.

This is not an "evil decision problem" for the same reason the original Newcomb is not, namely that whoever chooses only one box gets the reward, no matter what process he uses.

Hypothetical-me can use the same decision-making process as real-me in genetic Newcomb too, just as in the original. This simply means that the real you stands in for a hypothetical you who has the gene that makes you choose the thing that the real you chooses, using the same decision process that the real you uses. Since you say you would two-box, that means the hypothetical you has the two-boxing gene.

I would one-box, and hypothetical me has the one-boxing gene.

This is like saying "if my brain determines my decision, then I am not making the decision at all."

0Kindly
Not quite. I outlined the things that have to be going on for me to be making a decision.

It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.

-1Jiro
Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem. Genetic Newcomb requires that Omega look for the gene, something which he can always do. The "regular equivalent" of genetic Newcomb is that Omega looks at the decision maker's source code, but it so happens that most decision makers work in ways which are easy to analyze.

This is like saying a 100% deterministic chess-playing computer shouldn't look ahead, since it cannot affect its actions. That will result in a bad move. And likewise, just doing what you feel like here will result in smoking, since you (by stipulation) feel like doing that. So it is better to deliberate about it, like the chess computer, and choose both to one-box and not to smoke.

You are right that 100% correlation requires an unrealistic situation. This is true also in the original Newcomb, i.e. we don't actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.

The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100... (read more)

0ike
We could, but I'm not going to think about those unless the problem is stated a bit more precisely, so we don't get caught up in arguing over the exact parameters again. The details on how exactly Omega determines what to do are very important. I've actually said elsewhere that if you didn't know how Omega did it, you should try to put probabilities on different possible methods, and do an EV calculation based on that; is there any way that can fail badly? (Also, if there was any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.) I think people may have been trying to solve the case mentioned in OP, which is less than 100%, and does have a difference.
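Here is a minimal sketch of the kind of EV calculation being described (my own illustration: the $1M/$1k payoffs, the evidential reading where your choice is treated as evidence about the prediction, and the example credences over Omega's accuracy are all assumptions):

```python
# Illustrative expected-value calculation, treating uncertainty about
# Omega's method as a single number q: the probability that Omega's
# prediction matches your actual choice.

MILLION = 1_000_000
THOUSAND = 1_000

def ev_one_box(q):
    # With probability q the prediction matches, so the opaque box holds $1M.
    return q * MILLION

def ev_two_box(q):
    # With probability q the prediction matches (opaque box empty), so you
    # get only the visible $1k; otherwise you get both.
    return (1 - q) * MILLION + THOUSAND

# Example credences over possible methods/accuracies (assumed numbers):
methods = {0.5: 0.2, 0.9: 0.5, 0.99: 0.3}  # accuracy -> credence

ev1 = sum(c * ev_one_box(q) for q, c in methods.items())
ev2 = sum(c * ev_two_box(q) for q, c in methods.items())
print(f"one-box EV: {ev1:,.0f}  two-box EV: {ev2:,.0f}")
# One-boxing comes out ahead whenever the expected accuracy exceeds
# (MILLION + THOUSAND) / (2 * MILLION), i.e. about 0.5005.
```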

I think this is addressed by my top level comment about determinism.

But if you don't see how it applies, then imagine an AI reasoning like you have above.

"My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me."

Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million.

0philh
So I think where we differ is that I don't believe in a gene that controls my decision in the same way that you do. I don't know how well I can articulate myself, but: As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn't responsible for my reasoning, it is my reasoning. If Omega looks at my source code and works out what I'll do, then there are no worlds where Omega thinks I'll one-box, but I actually two-box. But imagine that all AIs have a constant variable in their source code, unhelpfully named TMP3. AIs with TMP3=true tend to one-box in Newcomblike problems, and AIs with TMP3=false tend to two-box. Omega decides whether to put in $1M by looking at TMP3. (Does the problem still count as Newcomblike? I'm not sure that it does, so I don't know if TMP3 correlates with my actions at all. But we can say that TMP3 correlates with how AIs act in GNP, instead.) If I have access to my source code, I can find out whether I have TMP3=true or false. And regardless of which it is, I can two-box. (If I can't choose to two-box, after learning that I have TMP3=true, then this isn't me.) Since I can two-box without changing Omega's decision, I should. Whereas in the original Newcomb's problem, I can look at my source code, and... maybe I can prove whether I one- or two-box. But if I can, that doesn't constrain my decision so much as predict it, in the same way that Omega can; the prediction of "one-box" is going to take into account the fact that the arguments for one-boxing overwhelm the consideration of "I really want to two-box just to prove myself wrong". More likely, I can't prove anything. And I can one- or two-box, but Omega is going to predict me correctly, unlike in GNP, so I one-box. The case where I don't look at my source code is more complicated (maybe AIs with TMP3=true will never choose to look?), but I hope this at least illustrates why I don't find the two comparable. (That said, I might actuall
-1Creutzer
Then you're talking about an evil decision problem. But neither in the original nor in the genetic Newcomb's problem is your source code investigated.

Re: the edit. Two-boxing is strictly better from a causal decision theorist's point of view, but that is the same here and in Newcomb.

But from a sensible point of view, rather than the causal theorist's point of view, one-boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.

Even in the original Newcomb's problem there is presumably some causal pathway from your brain to your decision. Otherwise Omega wouldn't have a way to predict what you are going to do. And there is no difference here between "your brain" and the "gene" in the two versions.

In neither case does Omega cause your decision, your brain causes it in both cases.

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determ... (read more)
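A toy version of this setup in code, as a sketch only (the players here are trivially simple stand-ins, and "examining the source code" is modeled as simply running the deterministic, zero-input program):

```python
# Toy model of the setup above (an illustration only): the players are
# deterministic, zero-input programs, and Omega's examination of the
# source code is modeled as running the program to see what it chooses.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def omega_fill_opaque_box(player):
    # Because the player is deterministic and takes no input, running it
    # is a perfect prediction of its later, actual choice.
    prediction = player()
    return 1_000_000 if prediction == "one-box" else 0

def play(player):
    opaque = omega_fill_opaque_box(player)  # Omega moves first
    choice = player()                       # the game itself
    return opaque if choice == "one-box" else opaque + 1_000

print(play(one_boxer))  # -> 1000000: determinism doesn't stop it winning
print(play(two_boxer))  # -> 1000
```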

0Jiro
Omega can solve the halting problem?
3ike
You're describing regular Newcomb, not this gene version. (Also note that Omega needs to have more processing power than the programs to do what you want it to do, just like the human version.) The analogue would be defining a short program that Omega will run over the AIs code, that predicts what the AI will output correctly 99% of the time. Then it becomes a question of whether any given AI can outwit the program. If an AI thinks the program won't work on it, for whatever reason (by which I mean "conditioning on myself picking X doesn't cause my estimate of the prediction program outputting X to change, and vice-versa"), it's free to choose whatever it wants to. Getting back to humans, I submit that a certain class of people that actually think about the problem will induce a far greater failure rate in Omega, and that therefore that severs the causal link between my decision and Omega's, in the same way as an AI might be able to predict that the prediction program won't work on it. As I said elsewhere, were this incorrect, my position would change, but then you probably aren't talking about "genes" anymore. You shouldn't be able to get 100% prediction rates from only genes.

In the original Newcomb's problem, am I allowed to say "in the world with the million, I am more likely to one-box than in the world without, so I'm going to one-box"? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true...

Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way.

And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.

1philh
In the original, you would say: "in the world where I one-box, the million is more likely to be there, so I'll one-box". If there's a gene that makes you think black is white, then you're going to get killed on the next zebra crossing. If there's a gene that makes you misunderstand decision theory, you're going to make some strange decisions. If Omega is fond of people with that gene, then lucky you. But if you don't have the gene, then acting like you do won't help you. Another reframing: in this version, Omega checks to see if you have the photic sneeze reflex, then forces you to stare at a bright light and checks whether or not you sneeze. Ve gives you $1k if you don't sneeze, and independently, $1M if you have the PSR gene. If I can choose whether or not to sneeze, then I should not sneeze. Maybe the PSR gene makes it harder for me to not sneeze, in which case I can be really happy that I have to stifle the urge to sneeze, but I should still not sneeze. But if the PSR gene just makes me sneeze, then why are we even asking whether I should sneeze or not?

This is no different from responding to the original Newcomb's by saying "I would one-box if Omega put the million, and two-box if he didn't."

Both in the original Newcomb's problem and in this one you can use any decision theory you like.

-1[anonymous]
There is a difference - with the gene case, there is a causal pathway via brain chemistry or whatnot from the gene to the decision. In the original Newcomb problem, Omega's prediction does not cause the decision.

This is confusing the issue. I would guess that the OP wrote "most" because Newcomb's problem sometimes is put in such a way that the predictor is only right most of the time.

And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.

But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.

0ike
Exactly; but since a vast majority of players won't do this, Omega can still be right most of the time. Can you formulate that scenario, then, or point me to somewhere it's been formulated? It would have to be a world with very different cognition than ours, if genes can determine choice 100% of the time; arguably, genes in that world would correspond to brain states in our world in a predictive sense, in which case this collapses to regular Newcomb, and I'd one-box. The problem presented by the gene-scenario, as stated by OP, is However, as soon as you add in a 100% correlation, it becomes very different, because you have no possibility of certain outcomes. If the smoking lesion problem was also 100%, then I'd agree that you shouldn't smoke, because whatever "gene" we're talking about can be completely identified (in a sense) with my brain state that leads to my decision.

Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.
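To make the parallel explicit (my own formalization, assuming the usual $1M/$1k payoffs and the 100% correlation discussed here): conditioning on your own choice gives exactly the same expected payoffs in both versions, with only the name of the correlate changed.

```latex
% Let X = "disposition to one-box" (standard Newcomb) or
%     X = "one-boxing gene"        (genetic Newcomb),
% with P(X | one-box) = 1 and P(X | two-box) = 0 in either case. Then
\[
E[\text{payoff} \mid \text{one-box}] = \$1{,}000{,}000,
\qquad
E[\text{payoff} \mid \text{two-box}] = \$1{,}000 .
\]
% An evidential reasoner therefore one-boxes in both problems for the same
% reason: the choice is decisive evidence about X, and hence about whether
% the million was put in the box.
```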

1ike
OP here said (emphasis added) Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different. Wrong; it's perfectly possible to have the gene to one-box but two-box. (If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would have less correlation with choice-gene. If that prediction was stipulated away, my choice *might* change; it depends on exactly how that was formulated.)

Yes, all of this is basically correct. However, it is also basically the same in the original Newcomb although somewhat more intuitive. In the original problem Omega decides to put the one million or not depending on its estimate of what you will do, which likely depends on "what kind of person" you are, in some sense. And being this sort of person is also going to determine what kind of decision theory you use, just as the gene does in the genetic version. The original Newcomb is more intuitive, though, because we can more easily accept that "... (read more)

Why? They one-box because they have the gene. So no reversal. Just as in the original Newcomb problem they choose to one-box because they were the sort of person who would do that.

-1OrphanWilde
From the original post: If you one-box, you may or may not have the gene, but whether or not you have the gene is entirely irrelevant to what decision you should make. If, confronted with this problem, you say "I'll one-box", you're attempting to reverse causal flow - to determine your genetic makeup via the decisions you make, as opposed to the decision you make being determined by your genetic makeup. There is zero advantage conferred to declaring yourself a one-boxer in this arrangement.

What are you talking about? In the original Newcomb problem both boxes contain a reward whenever Omega predicts that you are going to choose only one box.

Under any normal understanding of logical influence, your decision can indeed "logically influence" whether you have the gene or not. Let's say there is a 100% correlation between having the gene and the act of choosing -- everyone who chooses one box has the one-boxing gene, and everyone who chooses both boxes has the two-boxing gene. Then if you choose to one-box, this logically implies that you have the one-boxing gene.

Or do you mean something else by "logically influence" besides logical implication?

0OrphanWilde
No, your decision merely reveals what genes you have; your decision cannot change what genes you have.

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

2Caspar Oesterheld
Interesting, thanks! I thought that it was more or less consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? Absent-minded driver, Evidential Blackmail, counterfactual mugging or something else?

If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed, but trying to evolve microbes into intelligent life is less likely by far.

0ChristianKl
The likelihood of success of establishing a human colony depends on the timeframe. If there's no major extinction event, I would be surprised if we don't have a human Mars colony in 1000 years. On the other hand, having a colony in the next 50 years is a lot less likely.

If I understand correctly how these images are constructed, it would be something like this: take some random image. The program can already make some estimate of whether it is a baseball, say 0.01% or whatever. Then you go through the image pixel by pixel and ask, "If I make this pixel slightly brighter, will your estimate go up? If not, will it go up if I make it slightly dimmer?" (This is just an example, you could change the color or whatever as well.) Thus you modify each pixel such that you increase the program's estimate that it is a baseb... (read more)
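A rough sketch of that procedure in code (my own illustration; `model_prob` is a hypothetical function returning the classifier's confidence that an image belongs to the target class, the image is assumed to be a NumPy array of pixel values, and the step size and number of sweeps are arbitrary):

```python
import numpy as np

def fool_classifier(image, model_prob, target_class="baseball",
                    step=1.0, sweeps=10):
    """Greedy, gradient-free sketch of the procedure described above:
    nudge each pixel in whichever direction raises the classifier's
    confidence in the target class, and keep the nudges that help."""
    img = image.astype(np.float64)
    for _ in range(sweeps):
        for idx in np.ndindex(img.shape):
            base = model_prob(img, target_class)
            for delta in (step, -step):
                trial = img.copy()
                trial[idx] = np.clip(trial[idx] + delta, 0.0, 255.0)
                if model_prob(trial, target_class) > base:
                    img = trial
                    break  # keep the helpful change, move to the next pixel
    return img
```

Published "fooling image" demonstrations typically use the network's gradient (or an evolutionary search) rather than a brute-force pixel sweep, but the principle is the one described above: the image is optimized against the particular trained model.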

3eternal_neophyte
But the explanation will be just as complex as the procedure used to classify the data. If I change the hue slightly or twiddle their RGB values just slightly, the "explanation" for why the data seems to contain a baseball image will be completely different. Human beings on the other hand can look at pictures of the same object in different conditions of lighting, of different particular sizes and shapes, taken from different camera angles, etc. and still come up with what would be basically the same set of justifications for matching each image to a particular classification (e.g. an image contains a roughly spherical field of white, with parallel bands of stitch-like markings bisecting it in an arc...hence it's of a baseball). The ability of human beings to come up with such compressed explanations, and our ability to arrange them into an ordering, is arguably what allows us to deal with iconic representations of and represent objects at varying levels of detail (as in http://38.media.tumblr.com/tumblr_m7z4k1rAw51rou7e0.png).

No, even if you classify these false positives as "no image", this will not prevent someone from constructing new false positives.

Basically the amount of training data is always extremely small compared to the theoretically possible number of distinct images, so it is always possible to construct such adversarial positives. These are not random images which were accidentally misidentified in this way. They have been very carefully designed based on the current data set.

Something similar is probably theoretically possible with human vision recognition as well. The only difference would be that we would be inclined to say "but it really does look like a baseball!"

6jacob_cannell
This technique exploits the fact that the CNN is completely deterministic - see my reply above. It may be very difficult for stochastic networks. CNNs are comparable to the first 150ms or so of human vision, before feedback, multiple saccades, and higher-order mental programs kick in. So the difficulty in generating these fooling images also depends on the complexity of the inference - a more complex AGI with human-like vision given larger amounts of time to solve the task would probably also be harder to fool, independent of the stochasticity issue.
3eternal_neophyte
A human being would be capable of pointing out why something looks like a baseball - to be able to point out where the curves and lines are that provoke that idea. We do this when we gaze at clouds without coming to believe there really are giant kettles floating around; we're capable of taking the abundance of contextual information in the scene into account and coming up with reasonable hypotheses for why what we're seeing looks like x, y or z. If classifier vision systems had the same ability they probably wouldn't make the egregious mistakes they do.

Yes, I get the same impression. In fact, Eliezer basically said that for a long time he didn't sign up because he had better things to spend his money on, but finally he did because he thought that not signing up gave off bad signals to others.

Of course, this means that his present attitude of "if you don't sign up for cryonics you're an idiot and if you don't sign up your children you're wicked" is total hypocrisy.

It's going to depend on your particular translation. You might try searching for "refute", "refuted", "curing", "being cured". This is what it says in my version:

And what is my sort? you will ask. I am one of those who are very willing to be refuted if I say anything which is not true, and very willing to refute any one else who says what is not true, and quite as ready to be refuted as to refute; for I hold that this is the greater gain of the two, just as the gain is greater of being cured of a very great evil than of curing another.

"Only humans are conscious" should indeed have a lower prior probability than "Only physical systems specified in some way are conscious", since the latter must be true for the former to be true but not the other way around.

However, whether or not only humans are conscious is not the issue. Most people think that many or all animals are conscious, but they do not think that "all physical systems are conscious." And this is not because of the prior probabilities, but is a conclusion drawn from evidence. The reason people think... (read more)

0eternal_neophyte
Yes, but not lower than "only physical systems specified in some way are conscious, and that specification criterion is not 'x is one of {human, dog, parakeet...}'". If your idea of a "particular configuration" is defined by a set of exemplars, then yes, "only physical systems of some particular configuration" follows. Given that, as you yourself say, whether humans are conscious or not is not the issue, we should consider "particular configurations" determined by some theoretical principle instead. And it seems to me my original argument concerning the conjunction fallacy does hold, given these caveats. That I must concede. But it's not clear what it means to say a human being is conscious either (were it clear, there would not be so many impenetrable tomes of philosophy on the topic). Of course it's even less clear in the case of rocks, but at least it admits of the possibility of the rock's inherent, latent consciousness being amplified by rearranging it into some particular configuration of matter, as opposed to flashing into awareness at once upon the reconfiguration.

This article is part of Eliezer's anti-religion series, and all of these articles have the pre-written bottom line that religion is horribly evil and cannot possibly have any good effects whatsoever.

In reality, of course, being false does not prevent a religion from doing some good. It should be clear to everyone that when you have more and stronger reasons for doing the right thing, you will be more likely to do the right thing, and when you have fewer and weaker reasons, you will be less likely to do it. This is just how motivation works, whether it is mo... (read more)

4Raemon
I think the rest of the series makes it pretty explicitly clear that Eliezer DOES think religion accomplishes the things you mention, and that there are important lessons we should learn from that.

No, you are misinterpreting the conjunction fallacy. If someone assigns a greater probability to the claim that "humans are conscious and rocks are not" than to the claim that "humans are conscious", then this will be the conjunction fallacy. But it will also be the conjunction fallacy to believe that it is more likely that "physical systems in general are conscious" than that "humans are conscious."

The conjunction fallacy is basically not relevant to comparing "humans are conscious and rocks are not" to "both humans and rocks are conscious."

0eternal_neophyte
Indeed. Thank you. Forget humans for a second. Just focus on the statement "only physical systems in a particular type of configuration will be conscious", without knowing which type you mean. You cannot assign a higher probability to any particular system without already having some deciding criterion. It's when you fix your deciding criterion on the statement "x is human" that the conjunction fallacy comes back on you. Of course it's not more likely for a human and a rock to be conscious than just for the human; you have to grant the latter just to avoid being obtuse. But who's arguing that being human is the deciding criterion for whether a system may be conscious? That's defenestrating any hope of investigating the phenomenon in other systems, which does not do much to assist an empiricist framework for it.

"Because we're going to run out relatively soon" and "Because it's causing global warming" are reasons that work against one another, since if the oil runs out it will stop contributing to global warming.

0Manfred
They don't quite work against each other - both are true reasons to go renewable. At worst, they combine to form only one argument, one applicable to a broader audience than either alone. And outside the argument, unfortunately the oil reserves seem sized so that we can both get problematic global warming and run out of oil.

This is from Socrates in Plato's Gorgias.

Also, this would be better in the Open Thread.

0Bound_up
Thank you, and I'll note the use of the Open Thread for future use. Is there a word in the quote I can ctrl-f search for in Plato's Gorgias to find it quickly?

I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it).

Even adamzerner probably doesn't value his life at much more than, say, ten million dollars, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that, your behavior will have to become pretty paranoid.
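A back-of-the-envelope version of that revealed-preference argument, with purely illustrative numbers of my own: if some activity adds roughly a one-in-a-million chance of death and you accept it for a benefit worth about ten dollars to you, the implied valuation is

```latex
\[
V \;\approx\; \frac{\text{benefit accepted}}{\text{added risk of death}}
  \;=\; \frac{\$10}{10^{-6}} \;=\; \$10{,}000{,}000 .
\]
```

Valuing your life far above that figure would mean refusing many such ordinary risks, which is the paranoid behavior mentioned above.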

0Silver_Swift
That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*. *: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
0[anonymous]
Good point.

Utilitarianism does not support anything in particular in the abstract, since it always depends on the resulting utilities, which can be different in different circumstances. So it is especially unreasonable to argue for utilitarianism on the grounds that it supports various liberties such as gay rights. Rights normally express something like a deontological claim that other people should leave me alone, and such a thing can never be supported in the abstract by utilitarianism. In particular, it would not support gay rights if too many people were offended by them, which was likely true in the past.

Yes, I would, assuming you don't mean statements like "1+1 = 2", but rather true statements spread over a variety of contexts, such that I would reasonably believe that you would be trustworthy to that degree over random situations (and thus including questions such as whether I should give you money).

(Also, the 100 billion true statements themselves would probably be much more valuable than $100,000).

1V_V
According to game theory, this opens you to exploitation by an agent that wants your money for its own gain and can generate 100 billion true statements at little cost.
1CalmCanary
So if I spouted 100 billion true statements at you, then said, "It would be good for you to give me $100,000," you'd pay up?
3jacob_cannell
I believe the orthogonality thesis is probably mostly true in a theoretical sense. I thought I made it clear in the article that a ULM can have any utility function. That being said the idea of programming in goals directly does not really apply to a ULM. You instead need to indirectly specify an initial approximate utility function and then train the ULM in just the right way. So it's potentially much more complex than "program in the goal you want". However the end result is just as general. If evolution can create humans which roughly implement the goal of "be fruitful and multiply", then we could probably create a ULM that implements the goal of "be fruitful and multiply paperclips". I agree that just because all utility functions are possible does not make them all equally likely. The danger is not in paperclip maximizers, it is in simple and yet easy to specify utility functions. For example, the basic goal of "maximize knowledge" is probably much easier to specify than a human friendly utility function. Likewise the maximization of future freedom of action proposal from Wissner-Gross is pretty simple. But both probably result in very dangerous agents. I think Ex Machina illustrated the most likely type of dangerous agent - it isn't a paperclip maximizer. It's more like a sociopath. A ULM with a too-simple initial utility function is likely to end up something like a sociopath. I hope not too simple! This topic was beyond the scope of this article. If I have time in the future I will do a follow up article that focuses on the reward system, the human utility function, and neuroscience inspired value learning, and related ideas like inverse reinforcement learning. "Be fruitful and multiply" is a subtly more complex goal than "maximize future freedom of action". Humans need to be compelled to find suitable mates and form long lasting relationships stable enough to raise children (or get someone else to do it), etc. Humans perform these functions not becau

I thought the comment was good and I don't have any idea what SanguineEmpiricist was talking about.

0SanguineEmpiricist
It's hard to explain, i'll edit it in later if I think of a good explanation. It's just the overly pedantic style complimented by a lovely personality and the passive framing. It has to do with the organizational style as well, maybe a bit too spruced up? Don't let me get you down though, I didn't mean it like that.
0gjm
Well, of course if S.E. is correct that "there are too many posts like the one I'm responding to" then we should expect that other people will like that sort of thing even though s/he doesn't. (Unsurprisingly, I think my comment was perfectly OK too. Thanks for the expression of support.)

A sleeper cell is likely to do something dangerous on a rather short time scale, such as weeks, months, or perhaps a year or two. This is imminent in a much stronger sense than AI, which will take at least decades. Scott Aaronson thinks it more likely to take centuries, and this may well be true, given e.g. the present state of neuroscience, which consists mainly in saying things like "part of the brain A is involved in performing function B", but without giving any idea at all exactly how A is involved, and exactly how function B is performed at all.

Eliezer, here is a reasonably probable just-so story: the reason you wrote this article is that you hate the idea that religion might have any good effects, and you hope to prove that this couldn't happen. However, the idea that the purpose of religion is to make tribes more cohesive does not depend on group selection, and is in no way absurd.

It is likely enough that religions came to be as an extension of telling stories. Telling stories usually has various moralistic purposes, very often including the cohesiveness of the tribe. This does not depend on g... (read more)

1mamert
If you mean that the "binds tribes closer together" and related aspects are being grossly underestimated, I agree. The "costly sacrifices", too, may have been poorly assessed - the net effect for individuals, in their true circumstances at the time, may have been frequently positive. Or - this is not to be discounted either - believed to be positive.