All of CuSithBell's Comments + Replies

I agree with your analysis, and further:

Gurer ner sbhe yvarf: gur gjb "ebgngvat" yvarf pbaarpgrq gb gur pragre qbg, naq gur gjb yvarf pbaarpgrq gb gur yrsg naq evtug qbgf. Gur pragre yvarf fgneg bhg pbaarpgrq gb gur fvqr qbgf, gura ebgngr pybpxjvfr nebhaq gur fdhner. Gur bgure yvarf NYFB ebgngr pybpxjvfr: gur yrsg bar vf pragrerq ba gur yrsg qbg, naq ebgngrf sebz gur pragre qbg qbja, yrsg, gura gb gur gbc, gura evtug, gura onpx gb gur pragre. Gur evtug yvar npgf fvzvyneyl.

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

5pnrjulius
Well yes, exactly. If it takes a certain degree of hypocrisy to get campaign contributions, advertising, etc., and it takes these things to get elected... then you're going to have to have a little hypocrisy in order to win. And we do want to win, right? We want to actually reduce existential risk, and not just feel like we are? If you can find a way to persuade people (and win elections, never forget that making policy in a democracy means winning elections) that doesn't involve hypocrisy, I'm all ears.

Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.

For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.

Disagree? What do you mean by this?

Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, then when those values turn out to be maladaptive I have no need to adhere to the process that selected them rather than to the values themselves.

0TimS
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method). At this point, some detritus of evolution starts to look like a good candidate for the source of morality.
There isn't an Evolution Fairy who commanded that humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or desire for sex). More on this point here - the source of my reference to godshatter. It might be that there is an optimal way of bringing these various drives into balance, and that the correct choices in all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior. That said, my summary of the position may be a bit thin, because I'm a moral anti-realist and don't believe the evo. psych -> morality story.

But the theory fails because this fits it but isn't wireheading, right? It wouldn't actually be pleasing to play that game.

0Will_Newsome
Meh, yeah, maybe? Still seems like other, more substantive objections could be made. Relatedly, I'm not entirely sure I buy Steve's logic. PRNGs might not be nearly as interesting as short mathematical descriptions of complex things, like Chaitin's omega. Arguably collecting as many bits of Chaitin's omega as possible, or developing similar maths, would in fact be interesting in a human sense. But at that point our models really break down for many reasons, so meh whatever.
5wedrifid
I think you are right. The two are errors that practically, with respect to hedonistic extremism, operate in opposing directions. They are similar in form in as much as they fit the abstract notion "undesirable outcomes due to lost purposes when choosing to optimize what turns out to be a poor metric for approximating actual preferences".

Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the scifi trope about cyborging being dehumanizing.

neurotypical

Are you using this to mean "non-autistic person", or something else?

a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

A GAI with the utility of burning itself? I don't think that's viable, no.

What do you mean by "viable"? You think that, due to Gödelian concerns, it is impossible for there to be an intelligence that wishes to die?

As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone said "why would an AI try to maximize its original utility function, i...

0royf
Intelligence is expensive. More intelligence costs more to obtain and maintain. But the sentiment around here (and this time I agree) seems to be that intelligence "scales", i.e. that it doesn't suffer from diminishing returns in the "middle world" like most other things; hence the singularity.
For that to be true, more intelligence also has to be more rewarding. But not just in the sense of asymptotically approaching optimality. As intelligence increases, it has to constantly find new "revenue streams" for its utility. It must not saturate its utility function; in fact, its utility must be insatiable in the "middle world". A good example is curiosity, which is probably why many biological agents are curious even when it serves no other purpose.
Suicide is not such a utility function. We can increase the degree of intelligence an agent needs to have to successfully kill itself (for example, by keeping the gun away). But in the end, it's "all or nothing".
Gödel's theorem doesn't prevent any specific thing. In this case I was referring to information-theoretic reasons. And indeed, suicide is not a typical human behavior, even without considering that some contributing factors are irrelevant for our discussion. In that sense, I completely agree with you.
I usually don't like making the technology distinction, because I believe there's more important stuff going on at higher levels of abstraction. But if that's where you're coming from then I guess we have resolved our differences :)

I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.

1witzvo
Serious question: is the cyborg part a joke? I can't tell around here.

I find this quite aesthetically pleasing :D

I tend to agree. Customizable contracts would be the best solution.

For some reason I'm picturing the Creative Commons licenses.

[anonymous]110

I had exactly that as a sort of model in my brain. :)

If polygamous people were high status, they wouldn't voice, nor perhaps even think of, these objections.

Why isn't it the other way around?

Hm. Some sort of standardized institution in place to take care of the pet in case the human dies, perhaps? Tax breaks?

I don't care what other people are convinced of.

When you said above that status was the real reason LW-associates oppose legal polygamy, you were implying that these people are not actually convinced of these issues, or only pretend to care about them for status reasons.

I'm in a happy polygamous relationship and I know I'm not the only one.

Certainly! I'd like to clarify that I don't think polyamory is intrinsically oppressive, and that I am on the whole pretty darn progressive (philosophically) regarding sexual / relationship rights etc. (That is, I th...

9[anonymous]
If polygamous people were high status, they wouldn't voice, nor perhaps even think of, these objections. I tend to agree. Customizable contracts would be the best solution. This way we wouldn't straitjacket people into one-size-fits-all marriage. Some people might like marriages where infidelity is grounds for divorce and the cheating party is penalized somehow. Some people might like marriages that have to be renewed every 10 years, to minimize any hassle with a potential divorce or to allow a time-out on the relationship, etc. This would make everyone from the traditionalists to those seeking novel arrangements happy.

Looks like there are a few PC input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.

Regarding your example, I think what Mills is saying is probably a fair point - or rather, it's probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.

I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad.

The same argument can be, and has been, applied to other kinds of marriage.

On the one hand, the argument doesn't need to be correct to be the (or a) real reason. On the other, I'd expect more people to be more convinced that polygamy is more oppressive (as currently instantiated) than vanilla marriage (and other forms, such as arranged marriages or marriage of children to adults, are probably more strongly opposed).

4[anonymous]
.

thus we tend to see forbidding that as a bad idea.

ITYM 'good'?

I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad. Would you say this is rationalization?

FWIW I'm very skeptical of the whole "status explains everything" notion in general.

8[anonymous]
Yes, thank you for the correction. The same argument can be, and has been, applied to other kinds of marriage. Yes. Because legalizing such marriages would, if anything, improve the legal standing and options available to the women in them. It would also ensure fairer distribution of resources, not to mention settle custody issues in case one of the parents dies. Also, polygamous marriages in the US and Europe are a fact on the ground, a social reality that we should deal with. Refusing to do so just perpetuates discrimination. Status doesn't explain everything, but it does explain situations like this.

Ah! Well, good to know. Generally I expect "Utahans" and "weird brown foreigners" are to be inflected similarly in both of these versions, anyway.

or that polyamory is when it's done by fashionable white people, and polygamy is when it's done by weird brown foreigners

I thought it was "polyamory is when it's done by New Yorkers (Californians?), polygamy is when it's done by Utahans," and weird brown people have harems and concubines instead.

(Though of course I also don't think this is a fair characterization)

2Emile
Oh, I had forgotten about Mormons - here in France, Muslim immigrants are the first thing that comes to mind in discussions of polygamy.

Yeah, that's certainly a fair clarification. It'd probably take a lot more space to give a really robust definition of "suffering", but that's close enough for gummint work.

Roughly, pain is a sensation typically associated with damage to the body, suffering is an experience of stimuli as intrinsically unpleasant.

I do not suffer if my room is painted a color I do not like, but I still may care about the color my room is painted.

1wedrifid
Agree, with the assumption that "stimuli" as relevant to suffering includes internal stimuli generated from one's own thoughts.

It means "being able to feel pain but not suffering from it."

3A1987dM
Where “pain” and “suffering” are defined, respectively, as... what?

Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a "face recognition" module when it determines that this would be useful for interacting with humans?

The ancient Greeks developed methods of improved memorization. It has been shown that human-trained dogs and chimps are more capable of human-face recognition than others of their kind. None of them were artificial (discounting selective breeding in dogs and Greeks).

It seems that you should be able to write a simple program that o

...
0royf
A GAI with the utility of burning itself? I don't think that's viable, no. At the moment it's little more than professional intuition. We also lack some necessary shared terminology. Let's leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.
I think I'm starting to see the disconnect, and we probably don't really disagree. You said: My thinking is very broad but, from my perspective, not unjustifiably so. In my research I'm looking for mathematical formulations of intelligence in any form - biological or mechanical.
Taking a narrower viewpoint, humans "in their current form" are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry; the latter, probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules; the latter could multiply by configuring solid-state devices. Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

Could you rephrase this somehow? I'm not understanding it. If you actually won the bet and got the extra utility, your median expected utility would be higher, but you wouldn't take the bet, because your median expected utility is lower if you do.

"Enough times" to make it >50% likely that you will win, yes? Why is this the correct cutoff point?

This all seems very sensible and plausible!

Thanks for challenging my position. This discussion is very stimulating for me!

It's a pleasure!

Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into

...
0royf
The ancient Greeks developed methods of improved memorization. It has been shown that human-trained dogs and chimps are more capable of human-face recognition than others of their kind. None of them were artificial (discounting selective breeding in dogs and Greeks).
Would you consider such a machine an artificial intelligent agent? Isn't it just a glorified printing press? I'm not saying that some configurations of memory are physically impossible. I'm saying that intelligent agency entails typicality, and therefore, for any intelligent agent, there are some things it is extremely unlikely to do, to the point of practical impossibility.
I would actually argue the opposite. Are you familiar with the claim that people are getting less intelligent because modern technology allows less intelligent people and their children to survive? (I never saw this claim discussed seriously, so I don't know how factual it is; but the logic of it is what I'm getting at.) The idea is that people today are less constrained in their required intelligence, and therefore the typical human is becoming less intelligent. Other claims are that activities such as browsing the internet and video gaming are changing the set of mental skills which humans are good at. We improve in tasks which we need to be good at, and give up skills which are less useful. You gave yet another example in your comment regarding face recognition.
The elasticity of biological agents is (quantitatively) limited, and improvement by evolution takes time. This is where artificial agents step in. They can be better than humans, but the typical agent will only actually be better if it has to. Generally, more intelligent agents are those which are forced to comply with tighter constraints, not looser ones.

knowing the value of Current Observation gives you information about Future Decision.

Here I'd just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time... Sorry, I cannot condone this model as presented.)

Well... okay, but the point I was making was milder and pretty uncontroversial. Are you familiar with Bayesian networks?

Perhaps it can observe your neurochemistry in detail and in real time.

I already mentioned

...

Ah! Sorry for the mixed-up identities. Likewise, I didn't come up with that "51% chance to lose $5, 49% chance to win $10000" example.

But, ah, are you retracting your prior claim about a variance of greater than 5? Clearly this system doesn't work on its own, though it still looks like we don't know A) how decisions are made using it or B) under what conditions it works. Or in fact C) why this is a good idea.

Certainly for some distributions of utility, if the agent knows the distribution of utility across many agents, it won't make the wrong dec...

2faul_sname
Ah, it appears that I'm mixing up identities as well. Apologies. Yes, I retract the "variance greater than 5"; I think it would have to be a variance of at least 10,000 for this method to work properly. I do suspect that this method is similar to decision-making processes real humans use (optimizing the median outcome of their lives), but when you have one or two very important decisions instead of many routine decisions, methods that work for many small decisions don't work so well. If, instead of optimizing for the median outcome, you optimized for the average of outcomes within 3 standard deviations of the median, I suspect you would come up with a decision procedure quite close to what people actually use (ignoring very small chances of very high risk or reward).
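A rough sketch of the two rules being compared, using the 51%/49% bet from earlier in the thread (the function names and the choice of Python are mine, purely for illustration):

```python
import statistics

def median_rule(outcomes):
    """Rank an option by the median of its possible outcomes."""
    return statistics.median(outcomes)

def trimmed_mean_rule(outcomes, k=3.0):
    """Average only the outcomes within k standard deviations of the
    median, ignoring very small chances of extreme risk or reward."""
    med = statistics.median(outcomes)
    sd = statistics.pstdev(outcomes)
    kept = [x for x in outcomes if abs(x - med) <= k * sd]
    return statistics.mean(kept)

# The bet from the thread: 51% chance of -5 utils, 49% chance of +10000.
bet = [-5] * 51 + [10000] * 49

print(median_rule(bet))        # -5: the median rule rejects the bet
print(trimmed_mean_rule(bet))  # ~4897: the trimmed-mean rule accepts it
                               # (nothing is trimmed here, since a 49%
                               # branch is not a "very small chance")
```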

google

Googol. Likewise, googolplex.

But the median outcome is losing 5 utils?

Edit: Oh, wait! You mean the median total utility after some other stuff happens (with a variance of more than 5 utils)?

Suppose we have 200 agents, 100 of which start with 10 utils, the rest with 0. After taking this offer, we have 51 with -5, 51 with 5, 49 with 10000, and 49 with 10010. The median outcome would be -5 for half the agents and 5 for the other half, but only the half that would lose could actually get that outcome...
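To make the numbers concrete (a throwaway sketch; reading "median outcome" per agent is my interpretation of the example above):

```python
from statistics import median

# 200 agents: 100 start with 10 utils, 100 start with 0 utils.
# The bet: 51% chance of losing 5 utils, 49% chance of gaining 10000.
# Final utilities if everyone takes the bet, as listed above:
population = [-5] * 51 + [5] * 51 + [10000] * 49 + [10010] * 49
print(median(population))   # 5.0: the population-wide median barely moves

# Each individual agent's own median outcome, though, is "start - 5",
# because the losing branch (51%) is the more probable one for everybody:
for start in (0, 10):
    outcomes = sorted([start - 5] * 51 + [start + 10000] * 49)
    print(start, outcomes[len(outcomes) // 2])   # prints "0 -5" and "10 5"
```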

And what do you mean by "the possibility of getting tortured will manifest...

0faul_sname
I don't. I didn't write that. Your formulation requires that there be a single, high probability event that contributes most of the utility an agent has the opportunity to get over its lifespan. In situations where this is not the case (e.g. real life), the decision agent in question would choose to take all opportunities like that. The closest real-world analogy I can draw to this is the decision of whether or not to start a business. If you fail (which there is a slightly more than 50% chance you will), you are likely to be in debt for quite some time. If you succeed, you will be very rich. This is not quite a perfect analogy, because you will have more than one chance in your life to start a business, and the outcomes of business ownership are not orders of magnitude larger than the outcomes in real life. However, it is much closer than the "51% chance to lose $5, 49% chance to win $10000" that your example intuitively brings to mind.

You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?

I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).

...
0royf
Thanks for challenging my position. This discussion is very stimulating for me!
I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?
It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meanwhile, I'd agree if you wanted to disagree.
I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.

Unlike my (present) traits, my future decisions don't yet exist, and hence cannot leak anything or become entangled with anyone.

Your future decisions are entangled with your present traits, and thus can leak. If you picture a Bayesian network with the nodes "Current Brain", "Future Decision", and "Current Observation", with arrows from Current Brain to the two other nodes, then knowing the value of Current Observation gives you information about Future Decision.
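A minimal sketch of that three-node network, with made-up numbers purely to show the direction of inference (the states and probabilities are illustrative, not anything from the thread):

```python
# Bayesian network: Current Brain -> Current Observation, Current Brain -> Future Decision.
# All numbers below are invented for illustration.
p_brain = {"one-boxer": 0.5, "two-boxer": 0.5}

p_obs_given_brain = {                      # P(Current Observation | Current Brain)
    "one-boxer": {"calm": 0.8, "fidgety": 0.2},
    "two-boxer": {"calm": 0.3, "fidgety": 0.7},
}
p_dec_given_brain = {                      # P(Future Decision | Current Brain)
    "one-boxer": {"one box": 0.9, "two boxes": 0.1},
    "two-boxer": {"one box": 0.2, "two boxes": 0.8},
}

def p_decision(decision, observation=None):
    """P(decision), or P(decision | observation), marginalizing over Current Brain."""
    num = den = 0.0
    for brain, pb in p_brain.items():
        po = p_obs_given_brain[brain][observation] if observation else 1.0
        num += pb * po * p_dec_given_brain[brain][decision]
        den += pb * po
    return num / den

print(p_decision("one box"))            # 0.55  (prior)
print(p_decision("one box", "calm"))    # ~0.71 (the observation shifted the estimate)
```

Note that the observation has no causal effect on the future decision; it only updates the estimate of their common parent, Current Brain, which is the sense in which the decision "leaks" through present traits.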

Obviously the alien is better than a human at running this game...

0halcyon
Here I'd just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time... Sorry, I cannot condone this model as presented.)
I already mentioned this possibility. Fallible models make the situation gameable. I'd get together with my friends, try to figure out when the model predicts correctly, calculate its accuracy, work out a plan for who picks what, and split the profits between ourselves. How's that for rationality? To get around this, the alien needs to predict our plan and - do what? Our plan treats his mission like total garbage. Should he try to make us collectively lose out? But that would hamper his initial design.
(Whether it cares about such games or not, what input the alien takes, when, how, and what exactly it does with said input - everything counts in charting an optimal solution. You can't just say it uses Method A and then replace it with Method B when convenient. THAT is the point: Predictive methods are NOT interchangeable in this context. (Reminder: Reading my brain AS I make the decision violates the original conditions.))
We're veering into uncertain territory again... (Which would be fine if it weren't for the vagueness of mechanism inherent in magical algorithms.)
Second note: An entity, alien or not, offering me a million dollars, or anything remotely analogous to this, would be a unique event in my life with no precedent whatever. My last post was written entirely under the assumption that the alien would be using simple heuristics based on similar decisions in the past. So yeah, if you're tweaking the alien's method, then disregard all that. From the alien's point of view, this is epistemologically non-trivial if my box-picking nature is more complicated than a yes-no switch. Even if the final output must take the form of a yes or a no, the decision tree that generated that result can be as endlessly co

Having asserted that your claim is, in fact, new information

I wouldn't assert that. I thought I was stating the obvious.

Yes, I think I misspoke earlier, sorry. It was only "new information" in the sense that it wasn't in that particular sentence of Eliezer's - to anyone familiar with discussions of GAI, your assertion certainly should be obvious.

0wedrifid
Ahh. That's where the "new information" thing came into it. I didn't think I'd said anything about "new", so I'd wondered.

You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?

Generally GAIs are ascribed extreme powers around here - if a GAI has low-level access to its code, then it will be able to determine how its "desires" derive from that code, and will be able to produce whatever changes it wants. Similarly, it will be able to hack human brains with equal finesse.

3royf
I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).
Let's instead make an attempt to explain. Complete control of an agent's own code, in the strict sense, is in contradiction with Gödel's incompleteness theorem. Furthermore, information-theoretic considerations significantly limit the degree to which an agent can control its own code (I'm wondering if anyone has ever done the math. I expect not. I intend to look further into this). In information-theoretic terminology, the agent will be limited to typical manipulations of its own code, which will be a strict (and presumably very small) subset of all possible manipulations.
Can an agent be made more effective than humans at manipulating its own code? I have very little doubt that it can. Can it lead to agents qualitatively more intelligent than humans? Again, I believe so. But I don't see a reason to believe that the code-rewriting ability itself can be qualitatively different from a human's, only quantitatively so (although of course the engineering details can be much different; I'm referring to the algorithmic level here).
As you've probably figured out, I'm new here. I encountered this post while reading the sequences. Although I'm somewhat learned on the subject, I haven't yet reached the part (which I trust exists) where GAI is discussed here. On my path there, I'm actively trying to avoid a certain degree of group thinking which I detect in some of the comments here. Please take no offense, but it's phrases like the above quote which worry me: is there really a consensus around here about such profound questions? Hopefully it's only the terminology which is agreed upon, in which case I will learn it in time. But please, let's make our terminology "pay rent".
0wedrifid
(Yes, and this is partly just because AIs that don't meet a certain standard are implicitly excluded from the definition of the class being described. AIs below that critical threshold are considered boring and irrelevant for most purposes.)

An advanced AI could reasonably be expected to be able to explicitly edit any part of its code however it desires. Humans are unable to do this.

0royf
I believe that is a misconception. Perhaps I'm not being reasonable, but I would expect the level at which you could describe such a creature in terms of "desires" to be conceptually distinct from the level at which it can operate on its own code. This is the same old question of "free will" again. Desires don't exist as a mechanism. They exist as an approximate model of describing the emergent behavior of intelligent agents.

Not meant as an attack. I'm saying, "to be fair it didn't actually say that in the original text, so this is new information, and the response is thus a reasonable one". Your comment could easily be read as implying that this is not new information (and that the response is therefore mistaken), so I wanted to add a clarification.

But 'value is fragile' teaches us that it can't be a 1-dimensional number like the reals.

This is not in fact what "value is fragile" teaches us, and it is false. Without intending offense, I recommend you read about utility a bit more before presenting any arguments about it here, as it is in fact a 1-dimensional value.

What you might reasonably conclude, though, is that utility is a poor way to model human values, which, most of the time, it is. Still, that does not invalidate the results of properly-formed thought experiments.

To be fair, when structured as

Sadly, we humans can't rewrite our own code, the way a properly designed AI could.

then the claim is in fact "we humans can't rewrite our own code (but a properly designed AI could)". If you remove a comma:

Sadly, we humans can't rewrite our own code the way a properly designed AI could.

only then is the sentence interpreted as you describe.

0wedrifid
To be even more fair I also explicitly structured my own claim such that it still technically applies to your reading. That allowed me to make the claim both technically correct to a pedantic reading and an expression of the straightforward point that the difference is qualitative. (The obvious alternative response was to outright declare the comment a mere equivocation.) Meaning that I didn't, in fact, describe.

Many find that sort of discounting to be contrary to intuition and desired results, e.g. the suffering of some particular person is more or less significant depending on how many other people are suffering in a similar enough way.

It would be grating if a dozen companies made posts like this every month, but that isn't the case.

I'm a little wary of this. You think it would be bad if other people acted in a way similar to you in sufficient number? What determines who "gets" to reap the benefits of being the exception?

The power, without further clarification, is not incoherent. People predict the behavior of other people all the time.

Ultimately, in practical terms the point is that the best thing to do is "be the sort of person who picks one box, then pick both boxes," but that the way to be the sort of person who picks one box is to pick one box, because your future decisions are entangled with your traits, which can leak information and thus become entangled with other people's decisions.

0halcyon
And they're proved wrong all the time.
So what you're saying is, the alien predicts my behavior using the same superficial heuristics that others use to guess at my reactions under ordinary circumstances, except he uses a more refined process? How well can that kind of thing handle indecision if my choice is a really close thing? If he's going with a best guess informed by everyday psychological traits, the inaccuracies of his method would probably be revealed before long, and I'd be at the numbers immediately. I agree, I would pick both boxes if that were the case, hoping I'd lived enough of a one box picking life before.
I beg to differ on this point. Whether or not I knew I would meet Dr. Superintelligence one day, an entire range of more or less likely behaviors is very much conceivable that violate this assertion, from "I had lived a one box picking life when comparatively little was at stake," to "I just felt like picking differently that day." You're taking your reification of selfhood WAY too far if you think Being a One Box Picker by picking one box when the judgement is already over makes sense.
I'm not even sure I understand what you're saying here, so please clarify if I've misunderstood things. Unlike my (present) traits, my future decisions don't yet exist, and hence cannot leak anything or become entangled with anyone.
But what this disagreement boils down to is, I don't believe that either quality is necessarily manifest in every personality with anything resembling steadfastness. For instance, I neither see myself as the kind of person who would pick one box, nor as the kind who would pick both boxes. If the test were administered to me a hundred times, I wouldn't be surprised to see a 50-50 split. Surely I would be exaggerating if I said you claim that I already belong to one of these two types, and that I'm merely unaware of my true inner box-picking nature? If my traits haven't specialized into either category, (and I have no rational motive t

Well! I may have to take a more in-depth look at it sometime this summer.

Well, it's a thought experiment, involving the assumption of some unlikely conditions. I think the main point of the experiment is the ability to reason about what decisions to make when your decisions have "non-causal effects" - there are conditions that will arise depending on your decisions, but that are not caused in any way by the decisions themselves. It's related to Kavka's toxin and Parfit's hitchhiker.

-2halcyon
But even thought experiments ought to make sense, and I'm not yet convinced this one does, for the reasons I've been ranting about. If the problem does not make sense to begin with, what is its "answer" worth? For me, this is like seeing the smartest minds in the world divided over whether 5 + Goldfish = Sky or 0. I'm asking what the operator "+" signifies in this context, but the problem is carefully crafted to make that very question seem like an unfair imposition.
Here, the power ascribed to the alien, without further clarification, appears incoherent to me. Which mental modules, or other aspects of reality, does it read to predict my intentions? Without that being specified, this remains a trick question. Because if it directly reads my future decision, and that decision does not yet exist, then causality runs backwards. And if causality runs backwards, then the money already being in box B or not makes no difference, because your actual decision NOW is going to determine whether it will have been placed there in the past. So if you're defying causality, and then equating reason with causality, then obviously the "irrational", i.e. acausal, decision will be rewarded, because the acausal decision is the calculating one. God I wish I could draw a chart in here.

Well, try using numbers instead of saying something like "provided luck prevails".

If p is the chance that Omega predicts you correctly, then the expected value of selecting one box is:

1,000,000(p) + 0(1-p)

and the expected value of selecting both is:

1,000(p) + 1,001,000(1-p)

So selecting both only has a higher expected value if Omega guesses wrong about half the time or more (the break-even point is p ≈ 0.5005).
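A quick sketch of that arithmetic (the payoff amounts are the standard Newcomb numbers used above; the function names are just for illustration):

```python
# Expected values as a function of p, the probability that Omega predicts correctly.
def ev_one_box(p):
    # Box B holds $1,000,000 exactly when Omega predicted one-boxing.
    return 1_000_000 * p

def ev_two_box(p):
    # Two-boxing always gets the $1,000, plus $1,000,000 when Omega guessed wrong.
    return 1_000 * p + 1_001_000 * (1 - p)

# Break-even: 1,000,000*p = 1,000*p + 1,001,000*(1-p)  =>  p = 1,001,000 / 2,000,000
print(1_001_000 / 2_000_000)               # 0.5005
print(ev_one_box(0.9), ev_two_box(0.9))    # 900000.0 vs 101000.0: one-boxing wins easily
```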

1halcyon
Well, the more I think about this, the more it seems to me that we're dealing with a classic case of the unspecified problem.
* You are standing on the observation deck of Starship Dog Poo orbiting a newly discovered planet. The captain inquires as to its color. What do you answer him?
* Uh, do I get to look at the planet?
* No.
* ... Let me look up the most common color of planets across the universe.
In the given account, the ability attributed to our alien friend is not described in terms that are meaningful in any sense, but is instead ascribed to his "superintelligence", which is totally irrelevant as far as our search for solutions is concerned. And yet, we're getting distracted from the problem's fundamentally unspecified nature by these yarn balls of superintelligence and paradoxical choice, which are automatically Clever Things to bring up in our futurist iconography. If you think I'm mistaken, then I'd really appreciate criticism. Thanks!
1halcyon
Something along those lines, but anyway, how does that NOT bring this decision into the realm of calculation?
Thinking about it soberly, the framing of this problem reveals even more of a lack of serious scrutiny of its premises. A rational thinker's first question ought to be: How is it even possible to construct a decision tree that predicts my intentions with near-perfect success before I myself am aware of them? The accuracy of such a system would depend on knowledge of human neurology, time travel, and/or who knows what else, that our civilization is nowhere near obtaining, placing the calculation of odds associated with this problem far beyond the purview of present day science. (IOW, I believe the failure to reason along lines that combine statistics with real world scientific understanding is responsible for the problem's rather mystical overtones at first sight. Pay no attention to the man behind the curtain! And really, rare events are rare, but they do happen, and are no less real on account of their rarity.) In any case, thanks for the response.
(Actually, I'm not even clear on the direction of causality under the predictor's hood. Suppose the alien gazes into a crystal ball showing a probable future and notes down my choice. If so, then he can see the course of action he'd probably go with as well! If he changes that choice, does that say anything about my fidelity to the future he saw? Depends on the mechanism of his crystal ball, among many other things. Or does he scan my brain and simply simulate the chemical reactions it will undergo in the next five minutes? How accurate is the model carrying out this simulation? How predictable is the outcome via these techniques in the first place? There are such murky depths here that no matter what method one imagines, the considerations based on which he ultimately places the million dollars is of supreme importance.)
(What, total karma doesn't reach the negatives? Why not?)

I read this as "people who aren't ( (clownsuit enjoyers) and (autistic) ) ...", but it looks like others have read it as "people who aren't (clownsuit enjoyers) and aren't (autistic)" = "people who aren't ( (clownsuit enjoyers) or (autistic) )", which might be the stricter literal reading. Would you care to clarify which you meant?
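In boolean terms the two parses differ, per De Morgan (a throwaway illustration; the variable names are mine):

```python
# c = "enjoys wearing clown suits", a = "is autistic"
reading_1 = lambda c, a: not (c and a)    # "aren't (clownsuit enjoyers AND autistic)"
reading_2 = lambda c, a: not c and not a  # "aren't clownsuit enjoyers AND aren't autistic",
                                          # i.e. not (c or a)

# They disagree for anyone with exactly one of the two properties:
print(reading_1(True, False), reading_2(True, False))   # True False
```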

It certainly could be - I read the anecdote from a book I picked idly off a shelf in a bookstore, and I retained the vague impression that it was from a book about the importance of social factors and the effects of technology on our social/psychological development, but I could have been conflating it with another such book. After reading an excerpt from "The Boy who was Raised as a Dog", the style matches, so that probably was the one I read. Would you recommend it?

3Swimmer963 (Miranda Dixon-Luinenburg)
Yes yes yes! An awesome book!

I heard a horror story (anecdote from a book, for what it's worth) of a child basically raised in front of a TV, who learned from it both language and a general rule that the world (and social interaction) is non-interactive. If you could get his attention, he'd cheerfully recite some memorized lines then zone out.

4Swimmer963 (Miranda Dixon-Luinenburg)
Was the book "The boy who was raised as a dog?" Because I remember reading the same story in that book.

My take on it is - "rationality" isn't the point. Don't try to do things "rationally" (as though it's a separate thing), try to do them right.

It's actually something we see with the nuts that occasionally show up here - they're obsessed with the notion of rationality as a concrete process or something, insisting (e.g.) that we don't need to look at the experimental evidence for a theory if it is "obviously false when subjected to rational thought", or that it's bad to be "too rational".
