You can do it first or you can do it best; usually those are different artists, and each is well known. I think there are plenty of examples of both in all fields. Rachmaninov, for instance, is another classical (in the broad sense) composer in the "do it well" rather than "do it first" camp: he was widely criticised as behind the times in his own era, but listening now, no-one cares that his music sounds ~150 years old when it was written only ~100 years ago.
That's the result of compulsory voting not of preference voting.
As an Australian I can say I'm constantly baffled by the shoddy systems used in other countries. People seem to throw around Arrow's impossibility theorem to justify hanging on to whatever terrible system they have, but there's a big difference between obvious strategic-voting problems that affect everyone and a system where problems occur only in fairly extreme circumstances. The only real reason I can see why the USA system persists is that both major parties benefit from it, and the system is so good at preventing third parties from having a say that ...
(meta) Well, I'm quite relieved because I think we're actually converging rather than diverging finally.
No. Low complexity is not the same thing as symmetry.
Yes, sorry, symmetry was just how I pictured it in my head, but it's not the right word. My point was that the particles aren't acting independently; they're constrained.
Mostly correct. However, given a low-complexity program that uses a large random input, you can construct another low-complexity program that simulates it by iterating through all possible inputs and running the original program on each of them.
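To make that construction concrete, here's a toy sketch (the `toy_program` and the tiny input length are hypothetical stand-ins): the enumerator's source code stays short no matter how large the input space gets, since an input length n only costs O(log n) bits to write down.

```python
from itertools import product

def run_on_all_inputs(program, n):
    """Run `program` on every possible n-bit input and collect the outputs.

    The enumerator itself is a short program even for huge n, so the
    ensemble of all outputs has low complexity even when any single
    output (picked out by one long random input) does not.
    """
    return [program(bits) for bits in product((0, 1), repeat=n)]

# Hypothetical stand-in for a low-complexity program consuming random bits.
toy_program = lambda bits: sum(bits)
print(run_on_all_inputs(toy_program, 3))  # all 8 possible runs
```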
By t...
there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.
I have to say, this caused me a fair bit of thought.
Firstly, I just want to confirm that you agree a universe as we know it has complexity of the order of its size. I agree that an equivalently "large" universe with low complexity could be imagined, but its laws would have to be quite different to ours. Such a universe, while large, would be locked in symmetry to preserve its low complexity.
Just an aside on randomne...
Well, you'd need a method of handling infinite values in your calculations. Some methods exist: taking limits of finite cases (though much care needs to be taken), using a number system like the hyperreals or the surreals if appropriate, or comparing infinite cardinals. It would depend a little on the details of how such an infinite threat was made plausible. I think in most cases my argument about the threat being dominated by other factors would not hold in this case.
While my point about specific other actions dominating may not hold in this case...
Sorry, but I don't know which section of my reply this is addressing and I can't make complete sense of it.
an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3)
The OP is broken into two main sections, one assuming finite resources and one assuming infinite.
Our universe has finite resources; why would an assumption of finite resources in an alternative universe be vanishingly unlikely? Personally I would expect finite resources with probability ~1. I'm not including time as a ...
This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program.
I know that.
The point about memory is the memory required to store the program data, not the memory required to run the program. The program data is part of the program, and thus part of the complexity. A mistake I may have made, though, was to talk about the current state rather than the initial conditions, since the initial conditions ar...
No, it isn't. It can't be used against agents with bounded utility functions.
Ok, "fully general counterargument" is probably an exaggeration, but it does have some similar undesirable properties:
Your argument does not actually address the argument it's countering in any way. If 1/n is the correct prior to assign to this scenario, surely that's something we want to know? Surely I'm adding value by showing this?
If your argument is accepted then it makes too broad a class of statements into muggings. In fact I can't see why "arglebargle 3^^^^3 banana...
I'm not trying to ignore the problem, I'm trying to progress it. If, for example, I reduce the mugging to just a non-special instance of another problem, then I've reduced the number of distinct problems that need solving by one. Surely that's useful?
This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.
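For what it's worth, the shape of that counterargument fits in one line. Writing eps for your (nonzero) credence that the probabilities decay too slowly:

```latex
% The first term is bounded; the second is unbounded for any eps > 0,
% so the total expectation is unbounded.
\mathbb{E}[U] = (1-\varepsilon)\,\mathbb{E}[U \mid \text{fast decay}]
             + \varepsilon\,\mathbb{E}[U \mid \text{slow decay}]
```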
Standard counterargument it may be, but it seems pretty rubbish to me. It seems to have the form "You can't be sure you're right about...
If you view morality as entirely a means of civilisation co-ordinating then you're already immune to Pascal's Mugging, because you don't have any reason to care in the slightest about simulated people who exist entirely outside the scope of your morality. So why bother talking about how to bound the utility of something to which you essentially assign zero utility in the first place?
Or, to be a little more polite and turn the criticism around, if you do actually care a little bit about a large number of hypothetical extra-universal simulated beings, you ...
This assumes you have no means of estimating how good your current secretary/partner is, other than directly comparing to past options. While it's nice to know what the optimal strategy is in that situation, don't forget that it's not an assumption which holds in practice.
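As a sanity check on that classic result, here's a minimal simulation of the no-information version (the candidate count, cutoff and trial count are just the textbook setup, nothing from this exchange):

```python
import random

def secretary_trial(n=100, skip=0.37):
    """One run of the classic secretary problem: pass over the first
    skip*n candidates, then commit to the first one better than all of
    those. Returns True iff the overall best candidate was chosen."""
    ranks = list(range(n))        # n-1 marks the best candidate
    random.shuffle(ranks)
    cutoff = int(n * skip)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1     # committed to this candidate
    return False                  # the best was in the observation phase

trials = 20_000
wins = sum(secretary_trial() for _ in range(trials))
print(wins / trials)              # ~0.37, the familiar 1/e success rate
```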
Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.
And one that's also been solved, certainly to my satisfaction. Logarithmic utility and/or the Kelly Criterion will both tell you not to bet if the payout is in money, and for the right reasons rather than arbitrary, value-ignoring reasons (in that they'll tell you exactly what you should pay for the bet). If the payout is dir...
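A minimal sketch of the money-payout case, assuming logarithmic utility (the wealth level, truncation depth and prices are made-up parameters, not anything from the thread):

```python
import math

def expected_log_wealth(wealth, price, rounds=40):
    """Expected log-utility of paying `price` to play a St Petersburg
    game truncated at `rounds` flips; the leftover tail probability is
    treated as paying nothing, which only understates the game's value."""
    eu = sum(0.5 ** k * math.log(wealth - price + 2 ** k)
             for k in range(1, rounds + 1))
    return eu + 0.5 ** rounds * math.log(wealth - price)

wealth = 1000.0
for price in (2, 50):
    play = expected_log_wealth(wealth, price)
    keep = math.log(wealth)
    print(price, play > keep)  # 2 -> True, 50 -> False: log utility
                               # happily pays small prices but caps the
                               # fair price near log2(wealth), not the EV
```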
I do acknowledge that my comment was overly negative, certainly the ideas behind it might lead to something useful.
I think you misunderstand my resolution of the mugging (which is fair enough since it wasn't spelled out). I'm not modifying a probability, I'm assigning different probabilities to different statements. If the mugger says he'll generate 3 units of utility difference that's a more plausible statement than if the mugger says he'll generate 3^^^3, etc. In fact, why would you not assign a different probability to those statements? So long as the i...
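A sketch of the condition that makes this work: write N for the mugger's claimed utility and p(N) for your credence in that specific claim. It's enough that credence falls faster than the claim grows, e.g.

```latex
% Bigger claimed payoffs then have *smaller* expected value, so no
% mugging ever gets off the ground.
p(N) \le \frac{c}{N^{1+\delta}} \;(\delta > 0)
\quad\Longrightarrow\quad
N \, p(N) \le \frac{c}{N^{\delta}} \to 0 \text{ as } N \to \infty
```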
This seems to be a case of trying to find easy solutions to hard abstract problems at the cost of failing to be correct on easy and ordinary ones. It's also fairly trivial to come up with abstract scenarios where this fails catastrophically, so it's not like this wins on the abstract scenarios front either. It just fails on a new and different set of problems - ones that aren't talked about because no-one's ever found a way to fail on them before.
Also, all of the problems you list it solving are problems which I would consider to be satisfactorily solved a...
I know about both links but still find it annoying that the default behaviour for main is to list what seems to me like an arbitrary subset of the posts, and I need to click another button to get the rest of them. Unless there's some huge proportion of the reader-base who only care about "promoted" posts and don't want to see the others, the default ought to be to show everything. I'm sure there are people who miss a lot of content and don't even know they're missing it.
Meta comment (I can PM my actual responses when I work out what I want them to be): I found I really struggled with this process, because of the awkward tension between answering the questions and playing a role. I just don't understand what my goal is.
Let me call my view position 1 and the other view position A. The first time through I read just this post and thought it was simply a survey where I should "give my honest opinion", but where some of the position A questions would be nonsensical for someone of position 1, so I should just pretend a little in ord...
I think this shows how the whole "language independent up to a constant" thing is basically a massive cop-out. It's very clever for demonstrating that complexity is a real, definable thing, with properties which at least transcend representation in the infinite limit. But as you show, it's useless for doing anything practical.
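For reference, the result being called a cop-out is the invariance theorem: for any two universal languages L1 and L2 there is a constant (roughly, the length of an interpreter for one written in the other) such that

```latex
% The constant depends on the pair of languages but not on x, which is
% exactly why it guarantees nothing about any particular finite object.
K_{L_1}(x) \;\le\; K_{L_2}(x) + c_{L_1,L_2} \qquad \text{for all } x
```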
My personal view is that there's a true universal measure of complexity which AIXI ought to be using, and which wouldn't have these problems. It may well be unknowable, but AIXI is intractable anyway so what's the differ...
Thanks, interesting reading.
Fundamental or not, I think my point still stands that "the prior is infinite so the whole thing's wrong" isn't quite enough of an argument, since you still seem to conclude that improper priors can be used if handled carefully enough. A more satisfying argument would demonstrate that the 9/10 case can't be made without incorrect use of an improper prior. Though I guess it's still showing where the problem most likely is, which is helpful.
As far as being part of the foundations goes, I was just going by the fact that ...
To my mind, the 1/36 is "obviously" the right answer; what's interesting is exactly how it all went wrong in the other case. I'm honestly not all that enlightened by the argument given here or in the links. The important question is: how would I recognise this mistake easily in the future? The best I have for the moment is "don't blindly apply a proportion argument" and "be careful when dealing with infinite scenarios, even when they're disguised as otherwise". I think the combination of the two was required here; the proporti...
My prior expectation would be: a long comment from a specific user has more potential to be interesting than a short one, because it has more content; but a concise commenter has more potential to write interesting comments of a given length than a verbose commenter.
So while long comments might on average be rated higher, shorter versions of a given comment may well rate higher than longer versions of it would have. It seems like this result does nothing to contradict that view, but in the process seems to suggest people should write longer c...
Well that's why I called it steel-manning, I can't promise anything about the reasonableness of the common interpretation.
In the interest of steel-manning the Christian view; there's a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.
If you spend hours imagining the feel of the gun in your hand, the sound of the money sliding smoothly into the bag, the power and control, the danger and excitement, it would be fair to say that there's a point where you could have made the choice to stop.
Another small example. I have a clock near the end of my bed. It runs 15 minutes fast. Not by accident: it's been reset many times and then set back to 15 minutes fast. I know it's fast; we even call it the "rocket clock". None of this knowledge diminishes its effectiveness at getting me out of bed sooner, and at making me feel more guilty for staying up late. Works very well.
Glad to discover I can now rationalise it as entirely rational behaviour and simply the dark side (where "dark side" only serves to increase perceived awesomeness anyway).
Daisy isn't in a loop at all. There's apparently evidence for Dark, and that is tempered by the fact that its existence indicates a failing on Dark's part.
For Bob, to make an analogy, imagine Bob is wet. For you, that is evidence that it is raining. It could be argued that being wet is evidence that it's raining for Bob as well. But generally speaking Bob will know why Bob is wet. Given the knowledge of why Bob is wet, the wetness itself is masked off and no longer relevant. If Bob has just had a bath, then being wet no longer constitutes any evidence of rai...
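The "masking off" here is ordinary screening-off (conditional independence): once the cause of the wetness is known, the wetness itself carries no further information about rain.

```latex
P(\text{rain} \mid \text{wet}, \text{bath}) \;=\; P(\text{rain} \mid \text{bath})
```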
I found myself genuinely confused by the question "You are a certain kind of person, and there's not much that can be done either way to really change that" - not by the general vagueness of the statement (which I assume is all part of the fun) but by a very specific issue: the word "you". Is it "you" as in me? Or "you" as in "one", i.e. a hypothetical person essentially referring to everyone? I interpreted it the first way, then changed my mind after reading the subsequent questions, which seemed to be more clearly using it the second way.
(Dan from CFAR here) - That question (and the 3 similar ones) came from a standard psychology scale. I think the question is intentionally ambiguous between "you in particular" and "people in general" - the longer version of the scale includes some questions that are explicitly about each, and some others that are vaguely in the middle. They're meant to capture people's relatively intuitive impressions.
You can find more information about the questions by googling, although (as with the calibration question) it's better if that informa...
Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?
No, not really. I mean, it's not that far from something I said, but it's departing from what I meant and it's not in any case the point of my reply. The mistake I'm making is persisting in trying to clarify a particular way of viewing the problem which is not the best way and which is leading us both down the g...
By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event.
The falling of raindrops is also observable, you appear to have missed the point of my reply.
To look at it another way, there is strong empirical evidence that sentient beings will continue to exist on the colony-ship after it has left, and I do not believe there is analogous evidence for the continued existence of split-off parallel universes.
...The spirit of the question is basically this:
Can the most parsimonious hypothesis ever
Fair point, it sounds like it's a coincidental victory for total utilitarianism in this particular case.
The departure of an intergalactic colony-ship is an observable event. It's not that the future of other worlds is unobservable; it's that their existence in the first place is not a testable theory (though see army1987's comment on that issue).
To make an analogy (though admittedly an unfair one for being a more complex rather than an arguably less complex explanation): I don't care about the lives of the fairies who carry raindrops to the ground either, but it's not because fairies are invisible (well, to grown-ups anyway).
But the point would remain in that case that there is in principle an experiment to distinguish the theories, even if such an experiment has yet to be performed?
Although (and I admit my understanding of the topic is being stretched here) it still doesn't sound like the central issue, the existence of parallel universes with which we may no longer interact, would be resolved by such an experiment. It seems more like Copenhagen's latest attempt to define the conditions for collapse would be disproven, without particularly necessitating a fundamental change of interpretation.
It seems like something has gone terribly wrong when our ethical decisions depend on our interpretation of quantum mechanics.
My understanding was that many-worlds is indistinguishable by observation from the Copenhagen interpretation. Has this changed? If not, it frightens me that people would choose a higher chance of the world ending in order to rescue hypothetical people in unobservable universes.
If anything this seems like a (weak) argument in favour of total utilitarianism, in that it doesn't suffer from giving different answers according to one's choice among indistinguishable theories.
Why should you not have preferences about something just because you can't observe it? Do you also not care whether an intergalactic colony-ship survives its journey, if the colony will be beyond the cosmological horizon?
I don't think the two are at odds in an absolute sense, but I think there is a meaningful anticorrelation.
tl;dr: Real morals, if they exist, provide one potential reason for AIs to use their intelligence to defy their programmed goals if those goals conflict with real morals.
If true morals exist (i.e. moral realism) and are discoverable (if they're not, they might as well not exist), then you would expect a sufficiently intelligent being to figure them out. Indeed, most atheistic moral realists would say that's what humans and progress are doing...
If luke is naturally good at putting stuff he's read into practical use, and particularly if he knows it (at least subconsciously), then he would be likely to want to read a lot of self-help books. So the causality in your argument makes more sense to me the other way around. Not sure if I'm helping at all here though.
I've actually lost track of how this impacts my original point. As stated, it was that we're worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations considered include AIs as well as humans is an entirely orthogonal issue.
I went on in other comments to rant a bit about the human-centrism issue, which your original comment seems more relevant to though. I think you've convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I might have initially credited it, but I still see the human-centrism as a strong theme.
This worry about the creation and destruction of simulations doesn't make me rethink the huge ethical implications of super-intelligence at all, it makes me rethink the ethics of death. Why exactly is the creation and (painless) destruction of a sentient intelligence worse than not creating it in the first place? It's just guilt by association - "ending a simulation is like death, death is bad, therefore simulations are bad". Yes death is bad, but only for reasons which don't necessarily apply here.
To me, if anything worrying about the simulation...
Really? Where? I just reread it with that in mind and I still couldn't find it. The closest I came was that he once used the term "sentient simulation", which is at least technically broad enough to cover both. He does make a point there about sentience being something which may not exactly match our concept of a human; is that what you're referring to? He then goes on to talk about this concept (or, specifically, the method needed to avoid it) as a "nonperson predicate", again suggesting that what's important is whether it's human-like, rather than anything more fundamental. I don't see how you could think "nonperson predicate" covers both human and nonhuman intelligence equally.
Now if you want to extend this to simulate real voter behaviour, add multiples of the rhetoric and lotteries, and then entirely remove all information about the allocator's output.
I tried to cover what you're talking about with my statement in brackets at the end of the first paragraph. Set the value for disagreeing too high and you're rewarding disagreement, in which case people start deliberately making randomised choices in order to disagree. Too low and they ought to be going out of their way to try to agree above all else - except there's no way to do that in practice, and no way not to do it in the abstract analysis that assumes they think the same. A value of 9, though, is actually in between these two cases - it's exactly the average o...
Ok, thanks, that makes more sense than anything I'd guessed.
There's a difference between shortcutting a calculation and not accounting for something in the first place. In the debate between all the topics mentioned in the paper (e.g. SSI/SSA, split responsibility, precommitments and so on), not one method would give a different answer if that 0 were a 5, a 9, or a -100. It's not because they're shortcutting the maths; it's because, as I said in my first comment, they assume it's effectively not possible for the two people to vote differently anyway. Wh...
? Multiply what by that zero? There are so many things you might mean by that, and if even one of them made any sense to me I'd just assume that was it, but as it stands I have no idea. Not a very helpful comment.
I have an interesting solution to the non-anthropic problem. Firstly, the reward of 0 for voting differently is ignored in all the calculations, as it is assumed the other agent is acting identically. Therefore, its value is irrelevant (unless of course it becomes so high that the agents start deliberately employing randomisation in an attempt to try and vote differently, which would distort the problem).
However, consider what happens if you set the value to 9. In this case, you can forget about the other agent entirely. Voting heads if the coin was tails ...
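Here's a small sketch of why the other agent drops out at that value. The payoffs below are hypothetical stand-ins (the thread's exact numbers are partly cut off), chosen so that the split-vote reward is exactly the average of the two agreement rewards; that's what makes the total additive across the two voters:

```python
# Hypothetical payoffs: both vote A -> a, both vote B -> b, split -> d.
def payoff(v1, v2, a=10, b=8, d=9):
    if v1 == v2:
        return a if v1 == "A" else b
    return d

# When d = (a + b) / 2, the payoff decomposes into independent shares:
# each voter contributes a/2 for voting A and b/2 for voting B.
def additive(v1, v2, a=10, b=8):
    share = {"A": a / 2, "B": b / 2}
    return share[v1] + share[v2]

for v1 in "AB":
    for v2 in "AB":
        print(v1, v2, payoff(v1, v2), additive(v1, v2))
# The two columns agree exactly (d = 9 = (10 + 8) / 2), so each agent
# can optimise their own vote while forgetting the other agent entirely.
```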
Devil's advocate time:
They don't know nothing about it. They know two things.
Here are some reasons to oppose the plan, based on the above knowledge:
We don't need a debt reduction plan, just keep doing what we're doing and it will sort itself out.
I like another existing plan, and this is not that one, so I oppose it.
I've heard of Panetta and (s)he's a complete douchebag. Anything they've come up with is clearly junk.
I haven't even heard of either of them, so what the heck would they kn
rightness plays no role in that-which-is-maximized by the blind processes of natural selection
That being the case, what is it about us that makes us care about "rightness", then? What reason do you have for believing that the logical truth of what is right has more influence on human behaviour than it would on any other general intelligence?
Certainly I can agree that there's reasons to worry another intelligence might not care about what's "right", since not every human really cares that much about it either. But it feels like yo...
This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God's existence is about as meaningful as asking "do you believe in the axiom of choice?". Then, after you've failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the Bible. It's this part that's the weak link. The idea that the Bible tells us something about God (and therefore by exte...
I think in that specific example, they're not arguing about the meaning of the word "immoral" so much as morality itself. So the actual argument is meta-ethical, i.e. "What is the correct source of knowledge on what is right and wrong?". Another argument they won't ever resolve of course, but at least a genuine one not a semantic one.
In other situations, sometimes the argument really boils down to something more like "Was person A an asshole for calling person B label X?". Here they can agree that person B has label X accordin...
There's one flaw in the argument about Buy a Brushstroke vs African sanitation systems: the assumption/implication that if they hadn't given that money to Buy a Brushstroke, they would have given it to African sanitation systems instead. It's a false dichotomy. Sure, the money would have been better spent on African sanitation systems, but you can say that about anything: the money they spent on their cars, the money I just spent on my lunch - in fact somewhere probably over 99.9% of all non-African-sanitation-system purchases made in the first-worl...
So, there's direct, deterministic causation, like people usually talk about. Then there's stochastic causation, where stuff has a probabilistic influence on other stuff. Then there's pure spontaneity: things simply appearing out of nowhere for no reason, but according to easily modelled rules and probabilities. Even that last is at least theorised to exist in our universe - in particular as long as the total energy and time multiply to less than Planck's constant (or something like that). At no point in this chain have we stopped calling our universe caus...
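Presumably the relation being gestured at is the energy-time uncertainty relation, usually written

```latex
% A fluctuation of energy \Delta E can appear "from nowhere" provided
% it lasts no longer than about \hbar / \Delta E.
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```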
I think the standard for accuracy would be very different. If Watson gets something right you think "Wow, that was so clever"; if it's wrong, you're fairly forgiving. On the other hand, I feel like an automated fact checker that got even 1 in 10 things wrong would be subject to insatiable rage. I think specifically correcting others is the situation in which people have the highest standard for accuracy.
And that's before you get into the levels of subjectivity and technicality in the subject matter which something like Watson would never be subjected to.