All of Irgy's Comments + Replies

Irgy20

I think the standard for accuracy would be very different. If Watson gets something right you think "Wow, that was so clever"; if it's wrong you're fairly forgiving. On the other hand, I feel like if an automated fact checker got even 1/10 things wrong it would be subject to insatiable rage for doing so. I think specifically correcting others is the situation in which people would have the highest standard for accuracy.

And that's before you get into the levels of subjectivity and technicality in the subject matter which something like Watson would never be subjected to.

2ChristianKl
Given that Watson gets used to make medical decisions about how to cure cancer, I don't think people are strongly forgiving.
Irgy50

You can do it first or you can do it best; usually those are different artists, and each is well known. I think there are plenty of examples of both in all fields. Rachmaninov, for instance, is another classical (in the broad sense) composer in the "do it well" rather than "do it first" camp. He was widely criticised as behind the times in his own era, but listening now, no-one cares that his music sounds like it's ~150 years old when it was written only ~100 years ago.

Irgy10

That's the result of compulsory voting not of preference voting.

Irgy90

As an Australian I can say I'm constantly baffled by the shoddy systems used in other countries. People seem to throw around Arrow's impossibility theorem to justify hanging on to whatever terrible system they have, but there's a big difference between obvious strategic voting problems that affect everyone and a system where problems occur only in fairly extreme circumstances. The only real reason I can see why the USA system persists is that both major parties benefit from it and the system is so good at preventing third parties from having a say that ... (read more)

Irgy00

(meta) Well, I'm quite relieved, because I think we're finally converging rather than diverging.

No. Low complexity is not the same thing as symmetry.

Yes, sorry, symmetry was just how I pictured it in my head, but it's not the right word. My point was that the particles aren't acting independently; they're constrained.

Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them.

By t... (read more)

0AlexMennen
Still disagree. As I pointed out, it is possible for a short program to generate outputs with a very large number of complex components. Given only partial failure of observation or logic (where most of your observations and deductions are still correct), you still have something to go on, so you shouldn't have symmetry there. For everything to cancel so that your 1/3^^^^3-probability hypothesis dominates your decision-making, it would require a remarkably precise symmetry in everything else. I have also argued against the median utility maximization proposal already, actually.
Irgy00

there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.

I have to say, this caused me a fair bit of thought.

Firstly, I just want to confirm that you agree a universe as we know it has complexity of the order of its size. I agree that an equivalently "large" universe with low complexity could be imagined, but its laws would have to be quite different to ours. Such a universe, while large, would be locked in symmetry to preserve its low complexity.

Just an aside on randomne... (read more)

1AlexMennen
No. Low complexity is not the same thing as symmetry. For example, you can write a short program to compute the first 3^^^^3 digits of pi. But it is widely believed that the first 3^^^^3 digits of pi have almost no symmetry.

Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them. It is only when you try to run it on one particular high-complexity input without also running it on the others that it requires high complexity. Thus the lack of ability for a low-complexity program to use randomness does not prevent it from producing objects in its output that look like they were generated using randomness.

Oh, I see. This claim is correct. However, it does not seem that important to me, since p(A|E) will still be negligible. It would be quite surprising if none of the "C-like" theories could influence action, given that there are so many of them (the only requirement to be "C-like" is that it is impossible in practice to convince you that C is less likely than A, which is not a strong condition, since the prior for A is < 1/3^^^^3).

Ah, I think you're actually right that utility function boundedness is not a solution here (I actually still think that the utility function should be bounded, but that this is not relevant under certain conditions that you may be pointing at). Here's my attempt at an analysis: Assume for simplicity that there exist 3^^^^3 people (this seems okay because the ability of the mugger to affect them is much more implausible than their existence). The probability that there exists any agent which can affect on the order of 3^^^^3 people, and uses this ability to do bizarre Pascal's mugging-like threats, is small (let's say 10^-20). The probability that a random person pretends to be Pascal's mugger is also small, but not as small (let's say 10^-6). Thus if people pay Pascal's m
Irgy00

Well, you'd need a method of handling infinite values in your calculations. Some methods exist, such as taking limits of finite cases (though much care needs to be taken), using a number system like the Hyperreals or the Surreals if appropriate, or comparing infinite cardinals; it would depend a little on the details of how such an infinite threat was made plausible. I think my argument about the threat being dominated by other factors would not hold in most such cases.

While my point about specific other actions dominating may not hold in this case... (read more)

Irgy-30

Sorry, but I don't know which section of my reply this is addressing and I can't make complete sense of it.

an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3)

The OP is broken into two main sections, one assuming finite resources and one assuming infinite.

Our universe has finite resources, so why would an assumption of finite resources in an alternative universe be vanishingly unlikely? Personally I would expect finite resources with probability ~=1. I'm not including time as a &... (read more)

Irgy10

This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program.

I know that.

The point about memory is the memory required to store the program data, not the memory required to run the program. The program data is part of the program, thus part of the complexity. A mistake I maybe made though was to talk about the current state rather than the initial conditions, since the initial conditions ar... (read more)

1AlexMennen
If by program data you mean the input to the program, then that is correct, but there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.

Ok yes, unbounded utility functions have nonconvergence all over the place unless they are carefully constructed not to. This does not require anyone spouting some piece of gibberish at you, or even anything out of the ordinary happening.

I already explained why this is incorrect, and you responded by defending your separate point about action guidance while appearing to believe that you had made a rebuttal. As I said, there will be hypotheses with priors much higher than 1/3^^^^3 that can explain whatever observations you see and do reward your possible actions differently, and then the hypotheses with probability less than 1/3^^^^3 will not contribute anything non-negligible to your expected utility calculations.

If you're saying that the extent to which an individual cares about the desires of an unbounded number of agents is unbounded, then you are contradicting yourself. If you aren't saying that, then I don't see why you wouldn't accept boundedness of your utility function as a solution to Pascal's mugging.
Irgy-20

No, it isn't. It can't be used against agents with bounded utility functions.

Ok, "fully general counterargument" is probably an exaggeration, but it does have some similar undesirable properties:

  • Your argument does not actually address the argument it's countering in any way. If 1/n is the correct prior to assign to this scenario, surely that's something we want to know? Surely I'm adding value by showing this?

  • If your argument is accepted then it makes too broad a class of statements into muggings. In fact I can't see why "arglebargle 3^^^^3 banana&

... (read more)
0AlexMennen
Not much, since the correct solution doesn't rely on this anyway. Besides, your attempt to show this was incorrect; perhaps I should have addressed that in my original comment, but I'll do that now: This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program. There are short programs that use extremely large amounts of memory; hence there are extremely large objects with low complexity.

Huh? You've changed the claim you're defending without acknowledging that you've done so; earlier you were saying that you actually can receive evidence that could convince you of hypotheses with probability 1/3^^^^3, not just that you could receive evidence that would make you act as if you were convinced. Your new argument is still wrong though. It is plausibly true that the specific hypothesis that all of your observations have been random noise does not offer any action guidance at all. But it is not true for the hypothesis that your observations temporarily lapsed into random noise when you saw what looked like convincing evidence for a 1/3^^^^3-probability event. In this hypothesis, you still have a significant amount of information that you can use to compare possible actions, and the probability of that hypothesis is still a hell of a lot more than 1/3^^^^3. Hypotheses with a prior of 1/3^^^^3 will never dominate your decision-making procedure.

Your undermining attempt failed because the ridicule is warranted.

I'm confused because it sounds like you're conceding here that bounded utility is correct, but elsewhere you say otherwise.

Fine so far, but... No, that does not give you a well-defined utility function. You can see this if you try to use it to compare three or more different outcomes.
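
To illustrate the point about description length versus size: a few lines of code suffice to define numbers as large as 3^^^^3, even though no computer could ever evaluate them. The object's Kolmogorov complexity is bounded by the length of such a short definition, not by the object's size or the memory a run would need. A minimal sketch (the helper name and the small test values are illustrative choices, not anything from the original thread):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow notation: a {n arrows} b. Only tiny inputs terminate."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(2, 2, 3))  # 2^^3  = 16
print(up_arrow(2, 3, 3))  # 2^^^3 = 65536
# up_arrow(3, 4, 3) would be 3^^^^3: the same few lines of source define it,
# so its description (Kolmogorov) complexity is tiny despite its absurd size.
```
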
Irgy00

I'm not trying to ignore the problem, I'm trying to progress it. If, for example, I reduce the mugging to just a non-special example of another problem, then I've reduced the number of different problems that need solving by one. Surely that's useful?

Irgy-10

This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.

Standard counterargument it may be, but it seems pretty rubbish to me. It seems to have the form "You can't be sure you're right about... (read more)

0hairyfigment
Either I've misunderstood the OP completely, or the prior is based on an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3), though in everyday circumstances we can pretty much call it 'certainty'. So no, the counterargument is absolutely valid. Also, as you should know if you read the Muggle post, Eliezer most certainly did mean Pascal's Mugging to draw attention to the failure of expected utility to converge. So you should be clearer at the start about what you think your argument does. What you have now almost seems like a quick disclaimer added when you realized the OP had failed. (Edited to fix typo.)
1AlexMennen
No, it isn't. It can't be used against agents with bounded utility functions. Agents with utility functions that are unbounded but defined in terms of what the true probability distribution should be, so that they can be proved to converge, are also immune.

That is correct. People are in general unable to specify probability distributions over an infinite number of outcomes with asymptotics that correctly reflect their actual state of uncertainty.

You said: I was trying to point out that it actually is an impossible burden. The universe does not contain enough room for the amount of information it would take to raise hypotheses with a prior of 1/3^^^^3 to a posterior above that for "I'm seeing random noise".

It's not like there is some True Ethical Utility Function, and your utility function is some combination of that with your personal preferences. The moral component of your utility function just reflects the manner in which you care about other people, and there is no reason it should be linear.

Bounded utility functions are the correct solution to Pascal's mugging because using them is the way we came to the conclusion that paying the mugger is the wrong move in the first place. You did not conclude that you shouldn't pay the mugger because of your confidence that the probability that he tells the truth is less than 1/n (n being the scale of the consequences described by the mugger); human brains cannot process probabilities that small when making intuitive judgments, and if you came to doubt your reasoning for the probability decreasing faster than 1/n, I bet you would not say "Oh, in that case I guess I'd just pay the mugger". Instead, your actual reason for confidence that you shouldn't pay the mugger is probably just that you don't think you should let tiny probabilities throw you around, but this is the reasoning of an agent with bounded utility.
2[anonymous]
The problem with Pascal's mugging is that it IS a fully general counterargument under classical decision theory. That's why it's a paradox right now. But saying "There's a problem with this paradox - therefore, I'll just ignore the problem" is not a solution.
Irgy10

If you view morality as entirely a means for civilisation to co-ordinate, then you're already immune to Pascal's Mugging, because you don't have any reason to care in the slightest about simulated people who exist entirely outside the scope of your morality. So why bother talking about how to bound the utility of something to which you essentially assign zero utility in the first place?

Or, to be a little more polite and turn the criticism around, if you do actually care a little bit about a large number of hypothetical extra-universal simulated beings, you ... (read more)

Irgy20

This assumes you have no means of estimating how good your current secretary/partner is, other than directly comparing to past options. While it's nice to know what the optimal strategy is in that situation, don't forget that it's not an assumption which holds in practice.
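
For reference, a minimal sketch of the classic stopping rule that comment is about (the ~37% rule), under the standard assumption that only relative rankings are observable; the function name and parameters here are illustrative, not from the original post:

```python
import math
import random

def classic_secretary(n: int, rng: random.Random) -> bool:
    """One run of the classic rule: pass on the first ~n/e candidates, then
    accept the first candidate better than everyone seen so far.
    Returns True if the accepted candidate was actually the best of all n."""
    qualities = rng.sample(range(n), n)      # only relative comparisons are used
    cutoff = int(n / math.e)
    best_seen = max(qualities[:cutoff], default=-1)
    for q in qualities[cutoff:]:
        if q > best_seen:
            return q == n - 1                # accepted this one; was it the best?
    return qualities[-1] == n - 1            # never accepted anyone: stuck with the last

rng = random.Random(0)
wins = sum(classic_secretary(100, rng) for _ in range(20_000))
print(f"P(best) ≈ {wins / 20_000:.2f}")      # ≈ 1/e ≈ 0.37
```

As the parent comment notes, the optimality of this rule depends on having no absolute quality information; with even a rough prior over candidate quality, better strategies exist.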

0Elo
Definitely relevant; most of our experience comes to us through just that - experiencing things. But it is also possible to read books, or to borrow knowledge from other people (and we do that) as to good or bad relationships. Ultimately so many people still do so much of their learning by trial and error. I hope many people here have found pathways to shortcut the very long process of trial and error. There are still some errors that are very difficult for one person to trust other people about making. If we could learn what leads so many marriages to divorce (noting that divorce might not be a bad thing, and that the intent of a marriage is usually to stay together for a long and maybe infinite time) - the failure of so many humans to stay together, all the time, is an indication that humans are really bad at learning what causes marriages to end in divorce over and over and over again (hopefully a related area to this problem). I want to agree with you, but I see the assumptions here as valid in certain ways; after all - the world is a complicated beast :) I hope you actually liked the writing and also got something out of it. A hard model is not representative of the real world; I tried to start somewhere (in the secretary problem) and diverge. To me there is value in considering the similarities, although limited. I am not sure that you feel the same... (and we may disagree about how much value there is to be had - I felt: enough to mention it)
Irgy00

Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.

And one that's also been solved, certainly to my satisfaction. Logarithmic utility and/or the Kelly Criterion will both tell you not to bet if the payout is in money, and for the right reasons rather than arbitrary, value-ignoring reasons (in that they'll tell you exactly what you should pay for the bet). If the payout is dir... (read more)

0Houshalter
Well, there are two separate points to the St Petersburg paradox. One is the existence of relatively simple distributions that have no mean. It doesn't converge on any finite value. Another example of such a distribution, which actually occurs in physics, is the Cauchy distribution. Another, which the original Pascal's Mugger post was intended to address, was Solomonoff induction, the idealized prediction algorithm used in AIXI. EY demonstrated that if you use it to predict an unbounded value like utility, it doesn't converge or have a mean.

The second point is just that paying more than a few bucks to play the game is silly, even in a relatively small finite version of it. The probability of losing is very high, even though it has a positive expected utility. And this holds even if you adjust the payout tables to account for utility != dollars. You can bite the bullet and say that if the utility is really so high, you really should take that bet. And that's fine. But I'm not really comfortable betting away everything on such tiny probabilities. You are basically guaranteed to lose and end up worse than not betting.

You can do a tradeoff between median maximizing and expected utility with mean of quantiles. This basically gives you the best average outcome ignoring incredibly unlikely outcomes. Even median maximizing by itself, which seems terrible, will give you the best possible outcome >50% of the time. The median is fairly robust. Whereas expected utility could give you a shitty outcome 99% of the time or 99.999% of the time, etc., as long as the outliers are large enough.

If you are assigning 1/3^^^3 probability to something, then no amount of evidence will ever convince you. I'm not saying that unbounded computing power is likely. I'm saying you shouldn't assign infinitely small probability to it. The universe we live in runs on seemingly infinite computing power. We can't even simulate the very smallest particles because of how large the number of com
Irgy00

I do acknowledge that my comment was overly negative; certainly the ideas behind it might lead to something useful.

I think you misunderstand my resolution of the mugging (which is fair enough, since it wasn't spelled out). I'm not modifying a probability, I'm assigning different probabilities to different statements. If the mugger says he'll generate 3 units of utility difference, that's a more plausible statement than if the mugger says he'll generate 3^^^3, etc. In fact, why would you not assign a different probability to those statements? So long as the i... (read more)

0Houshalter
The whole point of the Pascal's Mugging scenario is that the probability doesn't decrease faster than the reward. If, for example, you decrease the probability by half for each additional bit it takes to describe, 3^^^3 still only takes a few bits to write down.

Do you believe it's literally impossible that there is a matrix? Or that it can't be 3^^^3 large? Because when you assign these things such a low probability, you are basically saying they are impossible. No amount of evidence could convince you otherwise.

I think EY had the best counter argument. He had a fictional scenario where a physicist proposed a new theory that was simple and fit the data perfectly. But the theory also implies a new law of physics that could be exploited for computing power, and would allow unfathomably large amounts of computing power. And that computing power could be used to create simulated humans. Therefore anyone alive today has a small probability of affecting large amounts of simulated people. Since that is impossible, the theory must be wrong. It doesn't matter if it's simple or if it fits the data perfectly.

Even in the finite case, I believe it can grow quite large as the number of iterations increases. It's one expected dollar each step, each step having half the probability of the previous step and twice the reward. Imagine the game goes for n finite steps. An expected utility maximizer would still spend $n to play the game. A median maximizer would say "You are never going to win in the lifetime of the universe and then some, so no thanks." The median maximizer seems correct to me.
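
A small simulation of one conventional finite St Petersburg variant illustrates the mean-versus-median gap being described here (the exact expected value of this version differs slightly from the $n figure above, and all the specific numbers are illustrative):

```python
import random
import statistics

def st_petersburg(max_steps: int, rng: random.Random) -> int:
    """One play: the pot starts at $1 and doubles on each head, for at most
    max_steps flips; the first tail (or running out of flips) ends the game."""
    pot = 1
    for _ in range(max_steps):
        if rng.random() < 0.5:   # tails: take the pot
            return pot
        pot *= 2
    return pot

rng = random.Random(0)
plays = [st_petersburg(10, rng) for _ in range(100_000)]
print("mean payout:  ", statistics.mean(plays))    # ≈ 6 for this 10-step version
print("median payout:", statistics.median(plays))  # stays tiny (≈ 1): most plays end almost immediately
```

An expected-utility maximizer values the game at its mean; a median maximizer looks at the typical outcome, which is why the two give such different advice even in the finite case.
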
Irgy20

This seems to be a case of trying to find easy solutions to hard abstract problems at the cost of failing to be correct on easy and ordinary ones. It's also fairly trivial to come up with abstract scenarios where this fails catastrophically, so it's not like this wins on the abstract scenarios front either. It just fails on a new and different set of problems - ones that aren't talked about because no-one's ever found a way to fail on them before.

Also, all of the problems you list it solving are problems which I would consider to be satisfactorily solved a... (read more)

0Houshalter
Median utility does fail trivially. But it opens the door to other systems which might not. He just posted a refinement on this idea, Mean of Quantiles. IMO this system is much more robust than expected utility. EU is required to trade away utility from the majority of possible outcomes to really rare outliers, like the mugger. Median utility will get you better outcomes at least 50% of the time. And tradeoffs like the one above will get you outcomes that are good in the majority of possible outcomes, ignoring rare outliers. I'm not satisfied it's the best possible system, so the subject is still worth thinking about and debating.

I don't think any of your paradoxes are solved. You can't get around Pascal's mugging by modifying your probability distribution. The probability distribution has nothing to do with your utility function or decision theory. Besides being totally inelegant and hacky, there might be practical consequences. Like you can't believe in the singularity now: the singularity could lead to vastly high utility futures, or really negative ones, therefore its probability must be extremely small.

The St Petersburg casino is silly of course, but there's no reason a real thing couldn't produce a similar distribution: some sequence of outcomes dependent on each other, each with 1/2 probability, giving increasing utility.
Irgy30

I know about both links but still find it annoying that the default behavior for main is to list what to me seems like just an arbitrary subset of the posts, and I need to then click another button to get the rest of them. Unless there's some huge proportion of the reader-base who only care about "promoted" posts and don't want to see the others, the default ought to be to show everything. I'm sure there's people who miss a lot of content and don't even know they're missing it.

Irgy10

Meta comment (I can PM my actual responses when I work out what I want them to be): I found I really struggled with this process, because of the awkward tension between answering the questions and playing a role. I just don't understand what my goal is.

Let me call my view position 1, and the other view position A. The first time, I read just this post and thought it was just a survey, where I should "give my honest opinion", but where some of the position A questions would be non-sensical for someone of position 1, so just pretend a little in ord... (read more)

Irgy00

I think this shows how the whole "language independent up to a constant" thing is basically just a massive cop-out. It's very clever for demonstrating that complexity is a real, definable thing, with properties which at least transcend representation in the infinite limit. But as you show it's useless for doing anything practical.

My personal view is that there's a true universal measure of complexity which AIXI ought to be using, and which wouldn't have these problems. It may well be unknowable, but AIXI is intractable anyway so what's the differ... (read more)

2Stuart_Armstrong
Yes. The problem is not the Hell scenarios, the problem is that we can make them artificially probable via language choice. Some results are still true. An exploring agent (if it survives) will converge on the right environment, independent of language. And episodic environments do allow AIXI to converge on optimal behaviour (as long as the discount rate is gradually raised).
Irgy00

Thanks, interesting reading.

Fundamental or not, I think my point still stands that "the prior is infinite so the whole thing's wrong" isn't quite enough of an argument, since you still seem to conclude that improper priors can be used if used carefully enough. A more satisfying argument would be to demonstrate that the 9/10 case can't be made without incorrect use of an improper prior. Though I guess it's still showing where the problem most likely is, which is helpful.

As far as being part of the foundations goes, I was just going by the fact that ... (read more)

Irgy10

In my view, the 1/36 is "obviously" the right answer; what's interesting is exactly how it all went wrong in the other case. I'm honestly not all that enlightened by the argument given here nor in the links. The important question is, how would I recognise this mistake easily in the future? The best I have for the moment is "don't blindly apply a proportion argument" and "be careful when dealing with infinite scenarios even when they're disguised as otherwise". I think the combination of the two was required here, the proporti... (read more)

1ksvanhorn
Actually, no, improper priors such as you suggest are not part of the foundations of Bayesian probability theory. It's only legitimate to use an improper prior if the result you get is the limit of the results you get from a sequence of progressively more diffuse priors that tend to the improper prior in the limit. The Marginalization Paradox is an example where just plugging in an improper prior without considering the limiting process leads to an apparent contradiction. My analysis (http://ksvanhorn.com/bayes/Papers/mp.pdf) is that the problem there ultimately stems from non-uniform convergence.

I've had some email discussions with Scott Aaronson, and my conclusion is that the Dice Room scenario really isn't an appropriate metaphor for the question of human extinction. There are no anthropic considerations in the Dice Room, and the existence of a larger population from which the kidnap victims are taken introduces complications that have no counterpart when discussing the human extinction scenario.

You could formalize the human extinction scenario with unrealistic parameters for growth and generational risk as follows:

  • Let n be the number of generations for which humanity survives.
  • The population in each generation is 10 times as large as the previous generation.
  • There is a risk 1/36 of extinction in each generation. Hence, P(n = N | n >= N) = 1/36.
  • You are a randomly chosen individual from the entirety of all humans who will ever exist. Specifically, P(you belong to generation g) = 10^g / Z, where Z is the sum of 10^t for 1 <= t <= n.

Analyzing this problem, I get

  P(extinction occurs in generation t | extinction no earlier than generation t) = 1/36
  P(extinction occurs in generation t | you are in generation t) = about 9/10

That's a vast difference depending on whether or not we take into account anthropic considerations. The Dice Room analogy would be if the madman first rolled the dice until he got snake-eyes, then went out and kidnapped a
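
A quick Monte Carlo check of the formalization above (growth and risk parameters as stated; what it estimates is the chance that a randomly chosen person turns out to be in the final generation, which comes out near the quoted 9/10; the trial count and seed are arbitrary):

```python
import random

P_EXTINCT = 1 / 36   # per-generation extinction risk
GROWTH = 10          # each generation is 10x the size of the previous one
TRIALS = 50_000

total = 0.0
rng = random.Random(0)
for _ in range(TRIALS):
    # Sample the generation n in which extinction occurs (geometric distribution).
    n = 1
    while rng.random() >= P_EXTINCT:
        n += 1
    # Fraction of everyone who ever lives that belongs to that final generation.
    last_gen = GROWTH ** n
    everyone = sum(GROWTH ** g for g in range(1, n + 1))
    total += last_gen / everyone

print(f"P(a random person is in the extinction generation) ≈ {total / TRIALS:.3f}")
# ≈ 0.9, versus the per-generation hazard of 1/36 ≈ 0.028.
```
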
Irgy20

My prior expectation would be: a long comment from a specific user has more potential to be interesting than a short one, because it has more content. But a concise commenter has more potential to write interesting comments of a given length than a verbose commenter.

So while long comments might on average be rated higher, shorter versions of the same comment may well rate higher than longer versions of the same comment would have. It seems like this result does nothing to contradict that view but in the process seems to suggest people should write longer c... (read more)

Irgy00

Well that's why I called it steel-manning, I can't promise anything about the reasonableness of the common interpretation.

Irgy10

In the interest of steel-manning the Christian view: there's a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.

If you spend hours imagining the feel of the gun in your hand, the sound of the money sliding smoothly into the bag, the power and control, the danger and excitement, it would be fair to say that there's a point where you could have made the choice to stop.

1Lumifer
Yes, of course, there is a whole range of, let's say, involvement in these thoughts. But if I understand mainstream Catholicism correctly, even a brief lustful glance at the neighbor's wife is a sin. Granted, a lesser sin than constructing a whole porn movie in your head, but still a sin.
Irgy40

Another small example. I have a clock near the end of my bed. It runs 15 minutes fast. Not by accident; it's been reset many times and then set back to 15 minutes fast. I know it's fast, we even call it the "rocket clock". None of this knowledge diminishes its effectiveness at getting me out of bed sooner, and making me feel more guilty for staying up late. Works very well.

Glad to discover I can now rationalise it as entirely rational behaviour and simply the dark side (where "dark side" only serves to increase perceived awesomeness anyway).

Irgy20

Daisy isn't in a loop at all. There's apparently evidence for Dark, and that is tempered by the fact that its existence indicates a failing on Dark's part.

For Bob, to make an analogy, imagine Bob is wet. For you, that is evidence that it is raining. It could be argued that being wet is evidence that it's raining for Bob as well. But generally speaking Bob will know why Bob is wet. Given the knowledge of why Bob is wet, the wetness itself is masked off and no longer relevant. If Bob has just had a bath, then being wet no longer constitutes any evidence of rai... (read more)

2ialdabaoth
It seems like, at this level, thinking of things in terms of "evidence" and "priors" at all is no longer really relevant - Bayesian updating is just a way of computing and maintaining "belief caches", which are a highly compressed map of our "evidence" (which is itself a highly compressed map of our phenomenal experience).
Irgy420

I found myself genuinely confused by the question "You are a certain kind of person, and there's not much that can be done either way to really change that" - not by the general vagueness of the statement (which I assume is all part of the fun) but by a very specific issue, the word "you". Is it "you" as in me? Or "you" as in "one", i.e. a hypothetical person, essentially referring to everyone? I interpreted it the first way, then changed my mind after reading the subsequent questions, which seemed to be more clearly using it the second way.

Unnamed190

(Dan from CFAR here) - That question (and the 3 similar ones) came from a standard psychology scale. I think the question is intentionally ambiguous between "you in particular" and "people in general" - the longer version of the scale includes some questions that are explicitly about each, and some others that are vaguely in the middle. They're meant to capture people's relatively intuitive impressions.

You can find more information about the questions by googling, although (as with the calibration question) it's better if that informa... (read more)

6selylindi
I answered that section quickly and on the basis of intuition in the hope that those questions were chosen because there is some interesting cognitive bias affecting the answers that I was unaware of. :D
Irgy-20

Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?

No, not really. I mean, it's not that far from something I said, but it's departing from what I meant and it's not in any case the point of my reply. The mistake I'm making is persisting in trying to clarify a particular way of viewing the problem which is not the best way and which is leading us both down the g... (read more)

0Ishaan
Taboo "justification". Justification is essentially a pointer to evidence or inference. After all the inference is said and done, the person who needs to provide more evidence is the person who has the more un-parsimonious hypothesis. You reject fairies based on a lack of justification because it's not parsimonious. You can't reject Many-Worlds on those same grounds, at least not without explaining more. The difference is that the fairies interpretation of raindrops has different maths than the non-fairy interpretation of raindrops. When the mathematically-rigorous descriptions for two different hypotheses are different, there is a clear correct answer as to which is more parsimonious. Many-worlds has exactly the same mathematical description as the alternative, so it's hard to say which is more parsimonious. You can't say that Single-World is default and Many Worlds requires justification. This is why I claim that it is first a question of ontology (a question of what we choose to define as reality), and then maybe we can talk about the epistemology and whether or not the statement is "True" within our definitions...after we clarify our ontology and define the relationship between ontology and parsimony, not before.
Irgy00

By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event.

The falling of raindrops is also observable, you appear to have missed the point of my reply.

To look at it another way, there is strong empirical evidence that sentient beings will continue to exist on the colony-ship after it has left, and I do not believe there is analogous evidence for the continued existence of split-off parallel universes.

The spirit of the question is basically this:

Can the most parsimonious hypothesis ever

... (read more)
3Ishaan
Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing? Before the ship leaves, you know that sometime in the future there will be a future-ship in a location where it cannot interact with future-you. By the same token, you can observe the laws of physics and the present-state of the universe. If, for some reason, your interpretation of those laws involves Many Worlds splitting off from each other, then, before the worlds split, you know that sometime in the future there will be a future-world unable to interact with future you. For future-you, the existence of the future-ship is not a testable theory, but the fact that you have a memory of the ship leaving counts as evidence. For future-you, the existence of the Other-Worlds is not a testable theory, but if Many-Worlds is your best model, then your memory of the past-state of the universe, combined with your knowledge of physics, counts as evidence for the existence of certain specific other worlds. In your Faeries example, the Faeries do not merit consideration because it is impossible to get evidence for their existence. That's not true in the quantum bomb scenario - if we except Many Worlds, then for the survivors of the quantum bomb, the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb. So, the actual question should be: 1) Does Many-Worlds fit in our ontology - as in, do universes on other branches constructed in the Many-World format even fit within the definition of "Reality" or not? (For example, if you told me there was a parallel universe which never interacted with us in any way, I'd say that your universe wasn't Real by definition. Many Worlds branches are a gray area because they do interact, but current Other Worlds only
Irgy20

Fair point, it sounds like it's a co-incidental victory for total-utilitarianism in this particular case.

Irgy20

The departure of an intergalactic colony-ship is an observable event. It's not that the future of other worlds is unobservable, it's that their existence in the first place is not a testable theory (though see army1987's comment on that issue).

To make an analogy (though admittedly an unfair one for being a more complex rather than an arguably less complex explanation): I don't care about the lives of the fairies who carry raindrops to the ground either, but it's not because fairies are invisible (well, to grown-ups anyway).

7Ishaan
By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event. I think the spirit of the question is basically: In what situations do we give credence to hypotheses which posit systems that we can influence, but cannot influence us?
Irgy20

But the point would remain in that case that there is in principle an experiment to distinguish the theories, even if such an experiment has yet to be performed?

Although (and I admit my understanding of the topic is being stretched here) it still doesn't sound like the central issue of the existence of parallel universes with which we may no longer interact would be resolved by such an experiment. It seems more like Copenhagen's latest attempt to define the conditions for collapse would be disproven, without particularly necessitating a fundamental change of interpretation.

4Baughn
For Copenhagen, yes, but MWI and Copenhagen aren't the only two interpretations of quantum mechanics worth thinking about. In truth, you'll find few physicists who treat the Copenhagen Interpretation as anything but convenient shorthand (and not usually shorthand for MWI).
2A1987dM
If we managed to put human-sized systems into superposition, that'd rule out CI AFAICT. And before that, the larger the systems we manage to put into superposition the less likely CI will seem.
Irgy80

It seems like something has gone terribly wrong when our ethical decisions depend on our interpretation of quantum mechanics.

My understanding was that many-worlds is indistinguishable by observation from the Copenhagen interpretation. Has this changed? If not, it frightens me that people would choose a higher chance of the world ending to rescue hypothetical people in unobservable universes.

If anything, this seems like a (weak) argument in favour of total utilitarianism, in that it doesn't suffer from giving different answers according to one's choice among indistinguishable theories.

8Rob Bensinger
'Copenhagen' isn't so much an interpretation as a relatively traditional, relatively authoritative body of physics slogans. Depending on which Copenhagenist you speak to, the interpretation might amount to Objective Collapse, or Operationalism, or Metaphysical Idealism, or Quietism. The latter three aren't so much alternatives to MWI as alternatives to the very practice of mainstream scientific realism; and Objective Collapse is generally empirically distinct from MWI (and, to the extent that it has made testable predictions, these have always been falsified.) Bohmian Mechanics is an alternative to the MWI family of interpretations that really does look empirically indistinguishable. But it's about as different from Copenhagenism as you can get, and is almost universally dismissed by physicists. Also, it may not solve this problem; I haven't seen discussion of whether the complexity of the BM pilot wave is likely to itself encode an overwhelming preponderance of mental 'ripples' that crowd out the moral weight of our own world. Are particles needed for complex biology-like structure in BM?
9Luke_A_Somers
Yes. Someone has hooked up a universe-destroying bomb and is offering to make the outcome quantum. I think that covers it.
7A1987dM
According to MWI you can put arbitrarily large systems into quantum superposition, whereas according to CI when the system is sufficiently large the wavefunction will collapse.
2Stuart_Armstrong
Oh, total utilitarianism has its own problem with indistinguishable theories :-) See http://lesswrong.com/lw/g9n/false_vacuum_the_universe_playing_quantum_suicide/ and http://lesswrong.com/r/discussion/lw/j3i/another_problem_with_quantum_measure/
pengvado180

Why should you not have preferences about something just because you can't observe it? Do you also not care whether an intergalactic colony-ship survives its journey, if the colony will be beyond the cosmological horizon?

Irgy-30

I don't think the two are at odds in an absolute sense, but I think there is a meaningful anticorrelation.

tl;dr: Real morals, if they exist, provide one potential reason for AIs to use their intelligence to defy their programmed goals if those goals conflict with real morals.

If true morals exist (i.e. moral realism), and are discoverable (if they're not then they might as well not exist), then you would expect that a sufficiently intelligent being will figure them out. Indeed most atheistic moral realists would say that's what humans and progress are doing... (read more)

-3TheAncientGeek
Exactly: the space of self-improving minds can't have such a wide range of goals as total mindspace, since not all goals are conducive to self-improvement.
Irgy30

If Luke is naturally good at putting stuff he's read into practical use, and particularly if he knows it (at least subconsciously), then he would be likely to want to read a lot of self-help books. So the causality in your argument makes more sense to me the other way around. Not sure if I'm helping at all here though.

Irgy00

I've actually lost track of how this impacts my original point. As stated, it was that we're worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations considered include AIs as well as humans is an entirely orthogonal issue.

I went on in other comments to rant a bit about the human-centrism issue, which your original comment seems more relevant to though. I think you've convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I might have initially credited it, but I still see the human-centrism as a strong theme.

3Luke_A_Somers
My point is he's clearly not drawing a box tightly around what's human or not. If he's concerned with clearly-sub-human AI, then he's casting a significantly wider net than it seems you're assuming he is. And considering that he's written extensively on the variety of mind-space, assuming he's taking a tightly parochial view is poorly founded.
Irgy00

This worry about the creation and destruction of simulations doesn't make me rethink the huge ethical implications of super-intelligence at all, it makes me rethink the ethics of death. Why exactly is the creation and (painless) destruction of a sentient intelligence worse than not creating it in the first place? It's just guilt by association - "ending a simulation is like death, death is bad, therefore simulations are bad". Yes death is bad, but only for reasons which don't necessarily apply here.

To me, if anything worrying about the simulation... (read more)

Irgy00

Really? Where? I just reread it with that in mind and I still couldn't find it. The closest I came was that he once used the term "sentient simulation", which is at least technically broad enough to cover both. He does make a point there about sentience being something which may not exactly match our concept of a human - is that what you're referring to? He then goes on to talk about this concept (or, specifically, the method needed to avoid it) as a "nonperson predicate", again suggesting that what's important is whether it's human-like rather than anything more fundamental. I don't see how you could think "nonperson predicate" is covering both human and nonhuman intelligence equally.

5nshepperd
"Person" seems to be used here as the philosophical term meaning something like "sentient entity with moral value". Personhood is not limited to human beings. ETA: Also, wrt the AI itself, the directly next two articles in this sequence explicitly deal with the issue of making the AI itself nonsentient, as I'm surprised to find a comment from myself in 2011 pointing out. Did you really not read the surrounding articles?
-1Luke_A_Somers
I read this as being simpler than a real human mind. Since it's simpler, the abstractions used are going to be imperfect, and the design would end up being something that is in some way artificial. It's not as explicit as I said, but I still think the implication is pretty strong.
Irgy10

Now if you want to extend this to simulate real voter behaviour, add multiples of the rhetoric and lotteries, and then entirely remove all information about the allocator's output.

Irgy00

I tried to cover what you're talking about with my statement in brackets at the end of the first paragraph. Set the value for disagreeing too high and you're rewarding it, in which case people start deliberately making randomised choices in order to disagree. Too low and they ought to be going out of their way to try and agree above all else - except there's no way to do that in practice, and no way not to do it in the abstract analysis that assumes they think the same. A value of 9 though is actually in between these two cases - it's exactly the average o... (read more)

0Manfred
Yeah, I didn't know exactly what problem statement you were using (the most common formulation of the non-anthropic problem I know is this one), so I didn't know "9" was particularly special. The point at which I think randomization becomes better than honesty depends on my P(heads) and on what choice I think is honest, so what value of the randomization-reward is special is fuzzy. I guess I'm not seeing any middle ground between "be honest" and "pick randomization as an action", even for naive CDT where "be honest" gets the problem wrong. Somewhere in Stuart Armstrong's bestiary of non-probabilistic decision procedures you can get an effective 3/4 on the sleeping beauty problem, but I wouldn't worry about it - that bestiary is silly anyhow :P
Irgy00

Ok, thanks, that makes more sense than anything I'd guessed.

There's a difference between shortcutting a calculation and not accounting for something in the first place. In the debate between all the topics mentioned in the paper (e.g. SIA/SSA, split responsibility, precommitments and so on), not one method would give a different answer if that 0 was a 5, a 9, or a -100. It's not because they're shortcutting the maths, it's because, as I said in my first comment, they assume that it's effectively not possible for the two people to vote differently anyway. Wh... (read more)

0Manfred
Oh, okay. Looks like I didn't really understand your point when I commented :) Perhaps I still don't - you say "no method gives a probability higher than 3/4 for the coin being tails," but you've in fact been given information that should cause you to update that probability.

It's like someone had a bag with 10 balls in it. That person flipped a coin, and if the coin was heads the bag has 9 black balls and 1 white ball, but if the coin was tails the bag has 9 white balls and 1 black ball. They reach into the bag and hand you a ball at random, and it's black - what's the probability the coin was heads?

If you reward disagreement, then what you're really rewarding in this case are mixed (probabilistic) actions. The reward only pays out if the coin landed tails, so that there's someone else to disagree with. So people will give what seems to them to be the same honest answer when you change the result of disagreeing from 0 to 0+epsilon. But when the payoff from disagreeing passes the expected payoff of honesty, agents will pick mixed actions.

To be more precise: if we simplify a little and only let them choose 50/50 if they want to disagree, then we have that the expected utility of honesty is P(heads)*U(choice,heads) + P(tails)*U(choice,tails), while the expected utility of coin-flipping is pretty much P(heads)*U(average,heads) + P(tails)*U(disagree,tails). These will pass each other at different values of U(disagree,tails) depending on what you think P(heads) and P(tails) are, and also depending on which choice you think is best.
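
For the record, the arithmetic for that bag-of-balls illustration, using the numbers given in the comment (a minimal sketch; variable names are just for illustration):

```python
# Prior P(heads) = 1/2; heads -> 9 black / 1 white, tails -> 1 black / 9 white.
p_heads = 0.5
p_black_given_heads = 0.9
p_black_given_tails = 0.1

# Bayes' rule after being handed one black ball.
p_black = p_black_given_heads * p_heads + p_black_given_tails * (1 - p_heads)
print(p_black_given_heads * p_heads / p_black)   # 0.9: one black ball is strong evidence for heads
```
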
Irgy00

Multiply what by that zero? There are so many things you might mean by that, and if even one of them made any sense to me I'd just assume that was it, but as it stands I have no idea. Not a very helpful comment.

0Manfred
Well, suppose you're doing an expected utility calculation, and the utility of outcome one is U1, the utility of outcome 2 is U2, and so on. Then your expected utility looks like (some stuff)*U1 + (some other stuff)*U2, and so on. The stuff in parentheses is usually the probability of outcome N occurring, but some systems might include a correction based on collective decision-making or something, and that's fine. Now suppose that U1=0. Then your expected utility looks like (some stuff)*0 + (some other stuff)*U2, and so on. Which is equal to (that other stuff)*U2, etc, because you just multiplied the first term by 0. So the zero is in there. You've just multiplied by it.
Irgy20

I have an interesting solution to the non-anthropic problem. Firstly, the reward of 0 for voting differently is ignored in all the calculations, as it is assumed the other agent is acting identically. Therefore, its value is irrelevant (unless of course it becomes so high that the agents start deliberately employing randomisation in an attempt to try and vote differently, which would distort the problem).

However, consider what happens if you set the value to 9. In this case, you can forget about the other agent entirely. Voting heads if the coin was tails ... (read more)

0Manfred
So you'd, for example, multiply by that zero.
Irgy190

Devil's advocate time:

They don't know nothing about it. They know two things.

  1. It's a debt reduction plan
  2. It's named after Panetta and Burns

Here are some reasons to oppose the plan, based on the above knowledge:

  • We don't need a debt reduction plan, just keep doing what we're doing and it will sort itself out.

  • I like another existing plan, and this is not that one, so I oppose it.

  • I've heard of Panetta and (s)he's a complete douchebag. Anything they've come up with is clearly junk.

  • I haven't even heard of either of them, so what the heck would they kn

... (read more)
Irgy00

rightness plays no role in that-which-is-maximized by the blind processes of natural selection

That being the case, what is it about us that makes us care about "rightness" then? What reason do you have for believing that the logical truth of what is right has more influence on human behaviour than it would on any other general intelligence?

Certainly I can agree that there's reasons to worry another intelligence might not care about what's "right", since not every human really cares that much about it either. But it feels like yo... (read more)

0Manfred
Biology and socialization? You couldn't raise a human baby to have any arbitrary value system, our values are already mostly set by evolution. And it so happens that evolution has pointed us in the direction of valuing rightness, for sound causal reasons of course.
Irgy40

This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God's existence is about as meaningful as asking "do you believe in the axiom of choice?". Then, after you've failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the Bible. It's this part that's the weak link. The idea that the Bible tells us something about God (and therefore by exte... (read more)

Irgy140

I think in that specific example, they're not arguing about the meaning of the word "immoral" so much as morality itself. So the actual argument is meta-ethical, i.e. "What is the correct source of knowledge on what is right and wrong?". Another argument they won't ever resolve of course, but at least a genuine one not a semantic one.

In other situations, sometimes the argument really boils down to something more like "Was person A an asshole for calling person B label X?". Here they can agree that person B has label X accordin... (read more)

0buybuydandavis
That's my feeling as well. Morality is usually treated as a conceptual primary. If I were to substitute it out of a statement, I would substitute in its place comments about my own recursive levels of approval/disapproval and associated reward/punishment. For the most part, people can't and won't substitute out for "morality", because there is just a lot of conceptual nonsense generally tied up in it. If the OP hadn't started with morality, but had taken the more general case, I'd say he was correct. People argue over the definition of terms in an attempt to lay definitive claim to a connotation of a word. But the point of the claim is to thereby smuggle in an associated moral connotation. The characterization of what one approves/disapproves of in terms of harm, fairness, in-group solidarity, authority, etc., is usually identified as a disagreement over the "nature of morality", and not the definition of the term.
Irgy10

There's one flaw in the argument about Buy a Brushstroke vs African sanitation systems, which is the assumption/implication that if they hadn't given that money to Buy a Brushstroke they would have given it to African sanitation systems instead. It's a false dichotomy. Sure, the money would have been better spent on African sanitation systems, but you can say that about anything. The money they spent on their cars, the money I just spent on my lunch, in fact somewhere probably over 99.9% of all non-African-sanitation-system-purchases made in the first-worl... (read more)

Irgy30

So, there's direct, deterministic causation, like people usually talk about. Then there's stochastic causation, where stuff has a probabilistic influence on other stuff. Then there's pure spontaneity, things simply appearing out of nowhere for no reason, but according to easily modeled rules and probabilities. Even that last is at least theorised to exist in our universe - in particular as long as the total energy and time multiply to less than Planck's constant (or something like that). At no point in this chain have we stopped calling our universe caus... (read more)
