All of The_Duck's Comments + Replies

I already have this and it's horrible.

lululu
Somewhere in between your level of discomfort from not doing things and my level (which is 0)... I think it would be kind of nice to have it embodied in an actual physical sensation like needing to pee, instead of a nagging and building sense of guilt and self-directed frustration? You could externalize those feelings and maybe it would let you train those skills without developing the same emotional ugh fields.

What about the fact that the best compression algorithm may be insanely expensive to run? We know the math that describes the behavior of quarks, which is to say, we can in principle generate the results of all possible experiments with quarks by solving a few equations. However doing computations with the theory is extremely expensive and it takes something like 10^15 floating point operations to compute, say, some basic properties of the proton to 1% accuracy.

Daniel_Burfoot
Good point. My answer is: yes, we have to accept a speed/accuracy tradeoff. That doesn't seem like such a disaster in practice. Some people, primarily Matt Mahoney, have actually organized data compression contests similar to what I'm advocating. Mahoney's solution is just to impose a certain time limit that is reasonable but arbitrary. In the future, researchers could develop a spectrum of theories, each of which achieves a non-dominated position on a speed/compression curve. Unless something Very Strange happened, each faster/less accurate theory would be related to its slower/more accurate cousin by a standard suite of approximations. (It would be strange - but interesting - if you could get an accurate and fast theory by doing a nonstandard approximation or introducing some kind of new concept).
The_Duck

I'm pretty sure cost of resurrection isn't his true rejection; his true rejection is more like 'point and laugh at weirdos'.

Also for a number of commenters in the linked thread, the true rejection seems to be, "By freezing yourself you are claiming that you deserve something no one else gets, in this case immortality."

Lumifer
Heh. Any true-believer Christian would laugh at cryonics and point out that the way to everlasting life is much simpler -- just accept Jesus... X-D Oh, and any true-believer Buddhist would be confused as to why you would want to linger on your way to enlightenment.
skeptical_lurker
This is almost identical to the argument against free-market medical care: "Why should you get better treatment just because you can afford it?" I wonder how many commentators would agree with both arguments.

Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so the net force experienced from gravity and (illusory) centrifugal force is straight "down" into it?

Sure, this would work in principle. But I guess it would be fantastically expensive compared to a simple building. The centrifuge would need to be really big and, unlike in 0g, would have to be powered by a big motor and supported against Mars gravity. And Mars gravity isn't that low, so it's unclear why you'd want to pay this expense.

A big pie, rotating in the sky, should have an apparently shorter circumference than a non-rotating one, both with the same radius.

I can't swallow this. Not because it is weird, but because it is inconsistent.

There is no inconsistency. In one case you are measuring the circumference with moving rulers, while in the other case you are measuring the circumference with stationary rulers. It's not inconsistent for these two different measurements to give different results.
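A quick way to see why the two measurements differ (a sketch using special relativity only): rulers laid along the rim by co-rotating observers move at the rim speed v = ωr and are Lorentz-contracted, so more of them fit around the rim than around an identical non-rotating circle.

```latex
% Stationary rulers: ordinary Euclidean circumference
C_{\text{stationary}} = 2\pi r
% Co-rotating rulers: each ruler is shortened by 1/\gamma,
% so the ruler count around the rim is larger by \gamma
C_{\text{co-rotating}} = 2\pi r\,\gamma
  = \frac{2\pi r}{\sqrt{1 - \omega^2 r^2/c^2}} \;>\; 2\pi r
```

The two numbers describe two different operational procedures, so there is no contradiction in their disagreeing.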

Thomas
No. I am measuring from here, from the centre, with some measuring device. First I measure the stationary pie, then I measure the rotating one. Those red-white stripes either stay constant, or they shrink. If they are shrinking, they should multiply as well. If they are not shrinking, what happened to Mr. Lorentz's contraction?
The_Duck

You don't need GR for a rotating disk; you only need GR when there is gravity.

[anonymous]
Rotation drags spacetime.

Having dabbled a bit in evolutionary simulations, I find that, once you have unicellular organisms, the emergence of cooperation between them is only a matter of time, and from there multicellulars form and cell specialization based on division of labor begins.

I'm very curious: in what evolutionary simulations have you seen these phenomena evolve?

Shmi
If I recall, you start with a single cell like an amoeba, which has to be smart enough not to accidentally eat its own pseudopods, so the relevant mutation sticks, and results in it also not eating its clones and other cells of the same type. This only sticks if there is enough food around, so that there is no competition between them. This is how you get cooperation with your own kind.

At this point the mutation disappears if you reduce the food supply, as defection (evolving cannibalism) becomes the dominant adaptation. However, if you provide the right conditions for collections of cells (colonies) to win over single cells (because feeding in packs gives you better odds of eating vs. being eaten), then simple defections do not stick, as single defectors lose to colonies of cooperators. The most fit organisms are those which create colonies right away, with each division, not waiting for a chance to cooperate.

Once you have cell colonies competing, the division of labor is next. A relatively simple mutation which lets a cell become either a hunter, if it is outside-facing, or a food processor, if it is surrounded by its own kind during the first part of its life, is a simple model of how cell specialization might appear. Colonies with two kinds of dedicated cells are more efficient and win out. And so on. The immune system also appears naturally, as hunter cells already perform this role.

The models above are, naturally, a gross oversimplification, but they show how multicellulars could evolve. The simulation code itself is almost trivially simple; I can probably dig it out at some point. I don't recall doing much more than what I've described, but presumably a communication subsystem would increase genetic fitness, eventually resulting in the appearance of a nervous system. I kind of lost interest when it got overly complicated to code. I bet there are people out there who do this for a living.
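The kind of toy model described can indeed be sketched in a few lines. The following is a minimal reconstruction, not Shmi's actual code, and every parameter is invented for illustration: a well-mixed population of cooperators and defectors under selection and mutation, where cooperators gain a "pack feeding" bonus proportional to how many cooperators exist, and defectors gain from cannibalism when food is scarce.

```python
import random

def simulate(food, colony_bonus, generations=300, pop_size=300, mut=0.01, seed=1):
    """Toy model of the cooperation story above. True = cooperator,
    False = defector (cannibal). Fitness is frequency-dependent:
    cooperators feed in packs, defectors profit when food is scarce."""
    rng = random.Random(seed)
    pop = [i % 2 == 0 for i in range(pop_size)]  # start half-and-half
    for _ in range(generations):
        coop_frac = sum(pop) / pop_size
        # Pack feeding: cooperators do better the more cooperators exist.
        # Cannibalism: defectors do better the scarcer the food.
        weights = [1.0 + colony_bonus * coop_frac if c else 1.0 + (1.0 - food)
                   for c in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)  # selection
        pop = [c if rng.random() > mut else not c for c in pop]  # mutation
    return sum(pop) / pop_size  # final cooperator fraction
```

With a pack-feeding bonus, cooperation takes over even when food is scarce; set the bonus to zero and cannibalism dominates, matching the "reduce the food supply" observation in the comment above.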

A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it to a different path.

Careful!--a lot of people will bite the bullet and call the rock+stick system conscious if you put a complicated enough pattern of sticks in front of it and provide the rock+stick system with enough input and output channels by which it can interact with its surroundings.

This doesn't seem like a good analogy to any real-world situation. The null hypothesis ("the coin really has two tails") predicts the exact same outcome every time, so every experiment should get a p-value of 1, unless the null-hypothesis is false, in which case someone will eventually get a p-value of 0. This is a bit of a pathological case which bears little resemblance to real statistical studies.

dvasya
While the situation admittedly is oversimplified, it does seem to have the advantage that anyone can replicate it exactly at very moderate expense (a two-headed coin will also do, with a minimum amount of caution). In that respect it may actually be more relevant to the real world than any vaccine/autism study. Indeed, every experiment should get a pretty strong p-value (though never exactly 1), but what gets reported is not the actual p but whether it is above .95 (an arbitrary threshold proposed once by Fisher, who never intended it to play the role it currently plays in science, but meant it merely as a rule of thumb for whether a hypothesis is worth a follow-up at all). But even exact p-values refer to only one possible type of error, and the probability of the other is generally not (1-p), much less (1-alpha).
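The difference between the two nulls can be made concrete. A sketch in pure Python (function names invented): under the "coin has two tails" null the p-value is degenerate, exactly 1 or exactly 0, while under a "fair coin" null an all-tails run gives the familiar shrinking p-value.

```python
from math import comb

def p_two_tails_null(flips):
    """H0: the coin has two tails, so P(heads) = 0. The data are either
    exactly as predicted (p = 1) or impossible under H0 (p = 0)."""
    return 1.0 if all(f == 'T' for f in flips) else 0.0

def p_fair_null(flips):
    """H0: the coin is fair. One-sided exact binomial p-value for
    seeing at least the observed number of tails."""
    n, k = len(flips), sum(1 for f in flips if f == 'T')
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
```

Twenty tails in a row gives p = 1 under the two-tails null but p = 0.5^20 (about 10^-6) under the fair-coin null, which is the degeneracy being pointed at above.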

The analogy seems pretty nice. The argument seems to be that, based on the historical record, we're doomed to collective inaction in the face of even extraordinarily dangerous risks. I agree that the case of nukes does provide some evidence for this.

I think you paint things a little too grimly, though. We have done at least a little bit to try to mitigate the risks of this particular technology: there are ongoing efforts to prevent proliferation of nuclear weapons and reduce nuclear stockpiles. And maybe a greater risk really would provoke a more serious response.

I think the Born rule falls out pretty nicely in the Bohmian interpretation.

What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?

Having recognized this danger, you should probably be more skeptical of verbal arguments.

This is essentially the standard argument for why we have to quantize gravity. If the sources of the gravitational field can be in superposition, then it must be possible to superpose two different gravitational fields. But (as I think you acknowledge) this doesn't mean that quantum mechanical deviations from GR have to be detectable at low energies.

Shmi
Sort of. The problem first appears because the LHS of the EFE is a classical tensor, while the RHS is an operator, two different beasts. And using expectation value of the stress energy tensor does not work that well. The cosmological constant problem does not help, either. The MWI ontology just makes the issues starker. That's why I am surprised that Carroll completely avoids discussing it even though GR is his specialty.

I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.

I'm pretty sure I've seen a paper discussing this and probably you can find data if you google around for "iq income correlation" and similar.

Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.

Note that because of Bell's theorem, any classical system is going to have real trouble emulating all of quantum mechanics; entanglement is going to trip it up. I know you said "replicate many aspects of quantum mechanics," but it's probably important to emphasize that this sort of thing is not going to lead to a classical model underlying all of QM.
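Bell's point can be checked numerically in a few lines. A sketch of the standard CHSH setup (not tied to the particular model under discussion): enumerating every deterministic local strategy gives the classical bound of 2, while the singlet-state correlation E(a, b) = -cos(a - b) evaluated at the usual angles reaches 2√2.

```python
import itertools
import math

# Classical side: a deterministic local strategy assigns outcomes +/-1
# to Alice's two settings (a0, a1) and Bob's two settings (b0, b1).
classical_max = max(
    abs(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
    for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4)
)  # the CHSH bound for any local hidden-variable model

# Quantum side: singlet-state correlations at the standard angles.
E = lambda a, b: -math.cos(a - b)
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
quantum_S = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
```

Shared randomness only mixes over the deterministic strategies, so no classical model can beat the enumerated maximum of 2; the quantum value of 2√2 is exactly the gap a classical emulation has to fake.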

I read it as saying that people have many interests in common, so pursuing "selfish" interests can also be altruistic to some extent.

johnlawrenceaspden
If that is the intended reading, then it's an example of sounding wise while saying nothing.
The_Duck

every time we discover something new we find that there are more questions than answers

I don't think that's really true though. The advances in physics that have been worth celebrating--Newtonian mechanics, Maxwellian electromagnetism, Einsteinian relativity, the electroweak theory, QCD, etc.--have been those that answer lots and lots of questions at once and raise only a few new questions like "why this theory?" and "what about higher energies?". Now we're at the point where the Standard Model and GR together answer almost any quest... (read more)

Fair enough. I can see the appeal of your view if you don't think there's a theory of everything. But given the success of fundamental physics so far, I find it hard to believe that there isn't such a theory!

Shmi
Given that every time we discover something new we find that there are more questions than answers, I find it hard to believe that the process should converge some day.
The_Duck

What would it mean then for a Universe to not "run on math"? In this approach it means that in such a universe no subsystem can contain a model, no matter how coarse, of a larger system. In other words, such a universe is completely unpredictable from the inside. Such a universe cannot contain agents, intelligence or even the simplest life forms.

I think when we say that the universe "runs on math," part of what we mean is that we can use simple mathematical laws to predict (in principle) all aspects of the universe. We suspect that ... (read more)

Shmi
Yeah, I don't see this as likely at all. As I repeatedly said here, it's models all the way down.

Quantum fluctuations are not dynamical processes inherent to a system, but instead reflect the statistical nature of measurement outcomes.

I'm no expert at all, but while that sounds agreeable on an intuitive level, I've read that the opposite is true - i.e., that QM processes are inherently fuzzy.

I don't quite understand why you think that this is the opposite of what you quoted. The point is that the "inherent fuzziness" is there, but it is not because of literal unobserved "fluctuations" of the system over time. Speaking of "fl... (read more)

The_Duck

something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it

I don't know if I agree with your estimate of the relative probabilities, but I admit that I exaggerated slightly to make my point. I agree that this strategy is at least worth thinking about, especially if you think it is at all plausible that we are in a simulation. Something along these lines is the only one of the listed strategies that I thou... (read more)

Froolow
This is an excellent comment, and it is extremely embarrassing for me that in a post on the plausible 'live forever' strategy space I missed three extremely plausible strategies for living forever, all of which are approximately complementary to cryonics (unless they're successful, in which case, why would you bother?). I'd like to take this as evidence that many eyes on the 'live forever' problem genuinely do result in utility increase, but I think the more plausible explanation is that I'm not very good at visualising the strategy space!
The_Duck

Personally, I don't find any of the strategies you mention to be plausible enough to be worth thinking about for more than a few seconds. (Most of them seem obviously insufficient to preserve anything I would identify as "me.") I'm worried this may produce the opposite of this post's intended effect, because it may seem to provide evidence that strategies besides cryonics can be easily dismissed.

Froolow
I think the plausibility of the arguments depends in very great part on how plausible you think cryonics is; since the average on this site is about 22%, I can see how other strategies which are low-likelihood/high-payoff might appear almost not worth considering.

On the other hand, something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it, and both rely on the invention of technology which appears logically possible but well outside the realms of current science (overcoming death vs. overcoming computational limits on simulations). But simulation preservation is three orders of magnitude cheaper than cryonics, which suggests to me that it might be worthwhile to consider. That is to say, if you seriously dismissed it in a couple of seconds, you must have very, very strong reasons to think the strategy is - say - about four orders of magnitude less likely than cryonics. What reason is that? I wonder if maybe I assumed the simulation problem was more widely accepted than it actually is. I'm a bit concerned about this line of reasoning, because all of my friends dismiss cryonics as 'obviously not worth considering', and I think they adopt this argument because the probabilistic conclusions are uncomfortable to contemplate.

With respect to your second point, that this post could be counter-productive, I am hugely interested by the conclusion. A priori it seems hugely unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics), and that each of those strategies would be anathema to the other group. If the 'plausible strategy-space' is not large, I would take that as evidence that the strategy-space is in fact zero and people are just good at aggregating around plausible-but-flawed strategies. Can you think about any other major human accomplish

"There are numbers you can't remember if I tell them to you" is not at all the same claim that "there are ideas I can't explain to you."

But they might be related. Perhaps there are interesting and useful concepts that would take, say, 100,000 pages of English text to write down, such that each page cannot be understood without holding most of the rest of the text in working memory, and such that no useful, shorter, higher-level version of the concept exists.

Humans can only think about things that can be taken one small piece at a tim... (read more)

I can't turn it into equations.

Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
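As a generic illustration of the kind of translation meant here (the example sentence is invented, since the original quote is not reproduced above), an English claim like "every human action has a cause" goes over directly into predicate calculus:

```latex
\forall x \,\big(\mathrm{Action}(x) \wedge \mathrm{Human}(x)
  \;\rightarrow\; \exists y\, \mathrm{Causes}(y, x)\big)
```

The predicates are placeholders; the point is only that declarative sentences of this shape formalize mechanically.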

I see a future pattern emerging in the United States:

Few atheists among an overwhelming Christian majority -> shrinking Christianity, growing atheism -> atheist tribalism growing well-connected and strong -> natural tribal impulse to not tolerate different voices -> war between atheists and Christians.

The last arrow seems like quite a jump. In the US we try to restrain the impulse to intolerance with protections for free speech and such. Do you think these protections are likely to fail? Why are religious divisions going to cause a war when other divi... (read more)

I don't think that line makes him a compatibilist, because I don't think that's the notion of free will under discussion.

What exactly is the notion of free will that is under discussion? Or equivalently, can you explain what a "true" compatibilist position might look like? You cited this paper as an example of a "traditionally compatibilist view," but I'm afraid I didn't get much from it. I found it too dense to extract any meaning in the time I was willing to spend reading it, and it seemed to make some assertions that, as I interpr... (read more)

[anonymous]
Well, I suppose I picked a form of compatibilism I find appealing and called it 'traditional'. It's not really traditional so much as slightly old, and related to a very old compatibilist position described by Kant. But there are lots of compatibilist accounts, and I do think EY's probably counts as compatibilist if one thinks, say, Hobbes is a compatibilist (where freedom means simply 'doing what you want without impediment').

A simple explanation of a version of compatibilism: suppose you take free will to be the ability to choose between alternatives, such that an action is only freely willed if you could have done otherwise. The thought is that since the physical universe is a fully determined, timeless mathematical object, it involves no 'forking paths'. Now imagine a scenario like this, courtesy of the philosopher who came up with this argument. The thought is, Jones is responsible for shooting Smith, he did so freely, he was morally responsible, and in every way one could wish for, he satisfied the notion of 'free will'. Yet there was no 'fork in the road' for Jones, and he couldn't have chosen to do otherwise. Hence, whatever kind of freedom we're talking about when we talk about 'free will' has nothing to do with being able to do otherwise. This sort of freedom is wholly compatible with a universe in which there are no 'forking paths'.

I think this is his conclusion:

...if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.

Then I could say something like: "This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."

This is a condition that can fail in the presence of jail

... (read more)
[anonymous]
True, and EY seems to be taking up Isaiah Berlin's line about this: suggesting that the problem of free will is a confusion because 'freedom' is about things like not being imprisoned, and that has nothing to do with natural law one way or the other. I absolutely grant that EY's definition of free will given in the quote is compatible with natural determinism. I think everyone would grant that, but it's a way of saying that the sense of free will thought to conflict with determinism is not coherent enough to take seriously.

So I don't think that line makes him a compatibilist, because I don't think that's the notion of free will under discussion. It's consistent with our having free will in EY's sense that all our actions are necessitated by natural law (or whatever), and I take it to be typical of compatibilism that one tries to make natural law consistent with the idea that actions are non-lawful, or if lawful, nevertheless free. Maybe free will in the relevant sense is a silly idea in the first place, but we don't get to just change the topic and pretend we've addressed the question. And he does a very good job of that, but this work shouldn't be confused with something one might call a 'solution' (which is how the sequence is titled), and it's not a compatibilist answer (just because it's not an attempt at an answer at all).

I'm not saying EY's thoughts on free will are bad, or even wrong. I'm just saying 'It seems to me that EY is not a compatibilist about free will, on the basis of what he wrote in the free will sequence'.

my confidence that the ultimately correct and most useful Next Great Discovery (e.g. any method to control gravity) will not come from a physics department is above 50%.

If you care to expand on this, I'm curious to hear your reasoning.

Daniel_Burfoot
Think of Steve Jobs vs. the business school professor who wrote a book about entrepreneurship.
DanielLC
My interpretation is that having an explanation for something is useless if you can't actually make it happen. And even if you don't fully understand how something works, it's good to be able to use it. For example, I would much rather be able to use a computer than know how it works. Also, if you can't do it, that calls into question whether your explanation is actually valid. Anyone can explain something, so long as they're not required to actually make the explanation useful.
soreff
I don't know, but it sounds similar to "It's smarter to be lucky than it's lucky to be smart."
philh
I interpret it as related to expert-at versus expert-on. If you assume that an expert-on is always an expert-at, then someone explaining something they can't do is clearly not an expert. I'm not sure that assumption is true, though I could believe it's a useful rule of thumb.

Computer simulation of the strong interaction part of the Standard Model is a big research area: you may want to read about lattice QCD. I've written a simple lattice QCD simulation in a few hundred lines of code. If you Google a bit you can probably find some example code. The rest of the Standard Model has essentially the same structure and would only be a few more lines of code.
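For a feel of what the skeleton of such a simulation looks like, here is a sketch. This is not lattice QCD but the same Metropolis-on-a-lattice pattern applied to the much simpler case of a free scalar field in one dimension; all parameters are invented for illustration. Real lattice QCD replaces the site values with SU(3) link matrices and this action with the plaquette action, but the update loop has the same shape.

```python
import math
import random

def metropolis_scalar_1d(n_sites=32, mass2=1.0, sweeps=2000, step=0.5, seed=0):
    """Metropolis sampling of a free scalar field on a periodic 1D lattice.
    Action: S = sum_x [ (phi[x+1] - phi[x])^2 / 2 + mass2 * phi[x]^2 / 2 ].
    Returns the final field configuration and the acceptance rate."""
    rng = random.Random(seed)
    phi = [0.0] * n_sites

    def local_action(x, val):
        # Only the two bonds touching site x and its mass term change.
        left, right = phi[(x - 1) % n_sites], phi[(x + 1) % n_sites]
        return ((right - val) ** 2 + (val - left) ** 2) / 2 + mass2 * val ** 2 / 2

    accepted = 0
    for _ in range(sweeps):
        for x in range(n_sites):
            old, new = phi[x], phi[x] + rng.uniform(-step, step)
            dS = local_action(x, new) - local_action(x, old)
            # Metropolis rule: always accept downhill, otherwise exp(-dS)
            if dS <= 0 or rng.random() < math.exp(-dS):
                phi[x] = new
                accepted += 1
    return phi, accepted / (sweeps * n_sites)
```

Measuring observables (here, say, the average of phi^2 over configurations) on fields sampled this way is the lattice analogue of "computing basic properties of the proton," just many orders of magnitude cheaper.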

Strilanc
Assuming you're right about it only being a few more lines of code, and that you didn't use a lot of external libraries, that puts an upper bound at... a few dozen kilobits? I'm guessing that could be made a lot smaller, since you were likely not focused on minimizing the lines of code at all costs.

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post

Yes; I meant for the phrase "divide up food equally" to be shorthand for something more correct but less compact, like "a complicated algorithm whose rough outline includes parts like, '...When a group of people are dividing up resources, divide them according to the following weighted combination of need, ownership, equality, who discovered the resources first, ...'"

The_Duck

I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc. I like the idea that "fair" points to a logical algorithm whose properties we can discuss objectively, but when you insist on using the word "fair," and no other word, as your pointer to this algorithm, people inevitably get confused. It seems like you are insisting that words have objective meanings, or that your morality is universally compelling, ... (read more)

[anonymous]
See lukeprog's Pluralistic Moral Reductionism.

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post -- i.e., "three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory." But once you start tampering with these conditions -- suppose that one of them owned the land, or one of them baked the pie, or two were we... (read more)

Upvoted because of the frank and detailed reduction of pleasure, pain, and preferences in general.

This seems very insightful to me. In physics, it's definitely my experience that over time I gain fluency with more and more powerful concepts that let me derive new things in much faster and simpler ways. And I find myself consciously working ideas over in my mind with, I think, the explicit goal of advancing this process.

The funny thing about this is that before I gain these "superpowers," I'll read an explanation in a textbook, which is in terms of high-level ideas that I haven't completely grasped yet, so the reading doesn't help as much as ... (read more)

I'm not disputing that we should factor in the lost utility from the future-that-would-have-been.

The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn't want to die, anywhere in the future history of the universe. [To be clear, by "future history of the universe" I mean everything that ever gets simulated by the simulator's computer, if our universe is a simulation.]

That's the negative utility I'm weighing against whatever utility w... (read more)

Rob Bensinger
I would feel obliged to have as many children as possible, if I thought that having more children would increase everyone's total well-being. Obviously, it's not that simple; the quality of life of each child has to be considered, including the effects of being in a large family on each child. But I stick by my utilitarian guns. My felt moral obligation is to make the world a better place, including factoring in possible, potential, future, etc. welfares; my felt obligation is not just to make the things that already exist better off in their future occurrences.

Both of our basic ways of thinking about ethics have counter-intuitive consequences. A counter-intuitive consequence of my view is that it's no worse to annihilate a universe on a whim than it is to choose not to create a universe on a whim. I am in a strong sense a consequentialist, in that I consider utility to be about what outcomes end up obtaining, and I do not care a whole lot about active vs. passive harm.

Your view is far more complicated, and leads to far stranger and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn't be obvious that this has much moral relevance. Your view also requires a third metaphysically tenuous assumption: that the future of my timeline has some sort of timeless metaphysical reality, and specifically a timeless metaphysical reality that other possible timelines lack. My view requires no such assumptions, since the relevant calculation can be performed in the same way even if all that ever exists is a succession of present moments, with no reification of the future

I am having my doubts that time travel is even a coherent concept.

But Eliezer gave you a constructive example in the post!

MugaSofer
In fairness, his example assumed a universal timeframe (experienced by the simulators).
scav
OK then, I am having doubts that my mind is coherent enough to discuss time travel usefully.

I compute utility as a function of the entire future history of the universe and not just its state at a given time. I don't see why this can't fall under the umbrella of "utilitarianism." Anyway, if your utility function doesn't do this, how do you decide at what time to compute utility? Are you optimizing the expected value of the state of the universe 10 years from now? 10,000? 10^100? Just optimize all of it.
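The distinction can be written out directly. A toy formalization (the welfare function and discount factor are invented placeholders): a state-based utilitarian scores one snapshot, which forces an arbitrary choice of horizon, while the view described here scores the whole trajectory.

```python
def state_utility(state):
    # Hypothetical per-moment welfare of the universe's state.
    return state["welfare"]

def snapshot_utility(history, t):
    """Score only the state at a chosen time t: this is where the
    arbitrary '10 years? 10,000? 10^100?' choice creeps in."""
    return state_utility(history[t])

def history_utility(history, discount=1.0):
    """Score the entire future history: a (possibly discounted) sum
    over every moment, so no horizon needs to be chosen."""
    return sum(discount ** t * state_utility(s) for t, s in enumerate(history))
```

On this formulation, "just optimize all of it" means maximizing expected `history_utility`, and a death at any moment of the trajectory subtracts from the total no matter when it occurs.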

Rob Bensinger
I'm not disputing that we should factor in the lost utility from the future-that-would-have-been. I'm merely pointing out that we have to weigh that lost utility against the gained utility from the future-created-by-retrocausation. Choosing to go back in time means destroying one future, and creating another. But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree? If we weigh the future just as strongly as the present, why should we not also weigh a different timeline's future just as strongly as our own timeline's future, given that we can pick which timeline will obtain?

If you could push a button and avert nuclear war, saving billions, would you?

Of course.

Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?

Because if time travel works by destroying universes, it causes many more deaths than it averts. To be explicit about assumptions, if our universe is being simulated on someone's computer I think it's immoral for the simulator to discard the current state of the simulation and restart it from a modified version of a past saved state, beca... (read more)

Rob Bensinger
But the cost of destroying this universe has to be weighed against the benefit of creating the new universe. Choosing not to create a universe is, in utilitarian terms, no more morally justifiable than choosing to destroy one.

either way there is an equal set of people-who-won't-exist. It's only a bad thing if you have some reason to favor the status-quo of "A exists"

My morality has a significant "status quo bias" in this sense. I don't feel bad about not bringing into being people who don't currently exist, which is why I'm not on a long-term crusade to increase the population as much as possible. Meanwhile I do feel bad about ending the existence of people who do exist, even if it's quick and painless.

More generally, I care about the process by which we ... (read more)

chaosmosis
Why do you think that death is bad? Perhaps that would clarify this conversation. I personally can't think of a reason that death is bad except that it precludes having good experiences in life. Nonexistence does the exact same thing, so I think they're rationally morally identical. Of course, if you're using a naturalist-based intuitionist approach to morality, then you can recognize that it's illogical to value existing persons more than potential ones and yet still accept that those existing people really do have greater moral weight, simply because of the way you're built. This is roughly what I believe, and why I don't push very hard for large population increases.
0wuncidunci
Instead of time traveling from time T' to T, suppose you were given a choice at time T of which universe you would prefer: A or B. If B were better, you would clearly pick it. Now suppose someone instead gave you the choice between B and "B plus A until time T', when A gets destroyed". If A is by itself a better universe than nothing, surely having A around for a short while is better than not having A around at all. So "B plus A until time T', when it gets destroyed" is better than B, which in turn is better than A. So if you want your preferences to be transitive, you should prefer the scenario where you destroy A at time T' by time traveling to B.

There are two weaknesses in the above. First, perhaps A is better than oblivion, but A between the times T and T' is really horrible (i.e., it is better in the long term but has negative value in the short term). Then you wouldn't prefer having A around for a while over not having it at all. But this is a very exceptional scenario, not the "world goes on as usual but you go back and change something for the better" case we seem to be discussing.

Second, the argument can fail if you don't think it is meaningful to say that both B and A (for a while) exist. I agree that it is not obvious what this would actually mean, since the existence of universes is not something that's measurable inside said universes. You would need to invent some kind of meta-time and meta-universe, kind of like the simulation scenario EY was describing in the main article. But if you are uncomfortable with this, you should be equally uncomfortable with saying that A used to exist but now doesn't, since this is also a statement about universes which only makes sense if we posit some kind of meta-time outside of the universes.
2handoflixue
If you could push a button and avert nuclear war, saving billions, would you? Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war? Either way, you're choosing between two alternate time lines. I'm failing to grasp how the "cause" of the choice being time travel changes ones valuations of the outcomes.

Suppose we pick out one of the histories marked with a 1 and look at it. It seems to contain a description of people who remember experiencing time travel.

Now, were their experiences real? Did we make them real by marking them with a 1 - by applying the logical filter using a causal computer?

I'd suggest that if this is a meaningful question at all, it's a question about morality. There's no doubt about the outcome of any empirical test we could perform in this situation. The only reason we care about the answer to such questions is to decide whether it... (read more)

Thanks, I wish someone had pointed out this isomorphism to me earlier. I think angles might well be more intuitive than correlation coefficients.
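The isomorphism in question can be made concrete: the Pearson correlation coefficient of two data series is exactly the cosine of the angle between their mean-centered vectors. A minimal sketch in Python (the helper name and sample data are illustrative, not from the thread):

```python
import math

def pearson_r(x, y):
    """Pearson correlation, computed as the cosine of the angle
    between the mean-centered versions of x and y."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    dot = sum(a * b for a, b in zip(xc, yc))
    norm = math.sqrt(sum(a * a for a in xc) * sum(b * b for b in yc))
    return dot / norm

# A correlation of r corresponds to an angle of acos(r) between the vectors,
# so r = 1 means parallel (0 degrees) and r = 0 means orthogonal (90 degrees).
r = pearson_r([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
angle = math.degrees(math.acos(r))  # small angle for near-perfect correlation
```

On this view, "uncorrelated" just means "at right angles", which is arguably easier to picture than a coefficient of zero.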

The_Duck110

The examples make the point that it's possible to be too pessimistic, and too confident in that pessimism. However, maybe we can figure out when we should be confidently pessimistic.

For example, we can be very confidently pessimistic about the prospects for squaring the circle or inventing perpetual motion. Here we have mathematical proofs of impossibility. I think we can be almost as confidently pessimistic about the near-term prospects for practical near-light-speed travel. Here we have a good understanding of the scope of the problem and of the capabil... (read more)

3lukeprog
Yes, an important question, though not one I wanted to tackle in this post! In general, we seem to do better at predicting things when we use a model with moving parts, and we have the opportunity to calibrate our probabilities for many parts of the model. If we built a model that made a negative prediction about the near-term prospects for a specific technology after we had calibrated many parts of the model on lots of available data, that should be a way to increase our confidence about the near-term prospects for that technology.

The most detailed model for predicting AI that I know of is The Uncertain Future (not surprisingly, an SI project), though unfortunately the current Version 1.0 isn't broken down into parts so small that they are easy to calibrate. For an overview of the motivations behind The Uncertain Future, see Changing the Frame of AI Futurism: From Storytelling to Heavy-Tailed, High-Dimensional Probability Distributions.

How can utilities not be comparable in terms of multiplication?

"The utility of A is twice the utility of B" is not a statement that remains true if we add the same constant to both utilities, so it's not an obviously meaningful statement. We can make the ratio come out however we want by performing an overall shift of the utility function. The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities. But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.

0gwern
Kindly says the ratios do have relevance to considering bets or risks. Yes, I think I see my error now, but I think the force of the numbers is clear: log utility in money may be more extreme than most people would intuitively expect.

Yes, clearly my Google-fu is lacking. I think I searched for phrases like "sun went around the Earth," which fails because your quote has "sun went round the Earth."

7gwern
There's your problem: you got overly specific. When you're formulating a search, you want to balance how many hits you get - the broader your formulation, the more likely the hits will include your target (if it exists), but the more hits you'll have to sift through. In this case, my reasoning would go something like this, laid out explicitly: '"Wittgenstein" is almost guaranteed to be on the same page as any instance of this quote, since the quote is about Wittgenstein; LW, however, doesn't discuss Wittgenstein very much, so there won't be many hits in the first place; to find this quote, I only need to narrow down those hits a little, and after "Wittgenstein", the most fundamental core word to this quote is "Earth" or "sun", so I'll toss one of them in and... ah, there's the quote.'

If I were searching the general Internet, my reasoning would go more like: "'Wittgenstein' will be on like a million websites; I need to narrow that down a lot more to hope to find it; so maybe 'Wittgenstein' and 'Earth' and 'Sun'... nope, nothing on the first page; toss in 'goes around' OR 'go around', ah, there it is!" (Actually, for the general Internet, just 'Wittgenstein earth sun' turns up a first page mostly about this quote, several of which include all the details one could need aside from Dawkins's truncated version.)

Thanks; I thought it was likely to have been posted, but I tried to search for it and didn't find it.

2gwern
Mm. If you had googled for 'wittgenstein earth', which seems to me to be the most obvious search phrase, you would've found 2 links on the first page...

"Tell me," the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, "why do people always say it was natural for man to assume that the sun went around the Earth rather than that the Earth was rotating?"

His friend replied, "Well, obviously because it just looks as though the Sun is going around the Earth."

Wittgenstein responded, "Well, what would it have looked like if it had looked as though the Earth was rotating?"

-related by Richard Dawkins in The God Delusion

0tut
Like I was standing still and the Earth was rotating.
5gwern
Versions of this quote have been posted twice before; the best version of the quote includes the friend's reply to Wittgenstein: http://lesswrong.com/lw/94r/rationality_quotes_january_2012/5kib

I have an objection to this:

So branching is the consequence of a particular type of physical process: the "measurement" of a microscopic superposition by its macroscopic environment. Not all physical processes are of this type, and it's not at all obvious to me that the sorts of processes usually involved in our deaths are of this sort.

I think that essentially all processes involving macroscopic objects are of this type. My understanding is that the wave function of a macroscopic system at nonzero temperature is constantly fissioning into vast... (read more)

1Armok_GoB
And then there's the branch with extremely small amplitude that separated 30 seconds ago where the bus explodes from proton decay.