...on signing up for cryopreservation with the Cryonics Institute.

(No, it's not a joke.)

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton,
because no matter what else she does wrong, and what else you do right,
all of it together can't outweigh the life consequences of that one little decision.

Congratulations, Paris.  I look forward to meeting you someday.

Addendum:  On Nov 28 '07, Paris Hilton denied being signed up for cryonics.  Oh well.


Wow; I'm impressed by her (in a different way than before). Of course that consequence-outweighing claim depends crucially on the probability of cryonics working.

She doesn't understand the process: "And if you're immediately cooled, you can be perfectly preserved."

bw:

She does not understand that life gives meaning to life. I am starting to wonder whether she is really as brilliant as I thought.

If this is not a hoax and she doesn't do a Leary, we will have her around for a long time. Maybe one day she will even grow up. But seriously, I think Eli is right. In a way, given that I consider cryonics likely to be worthwhile, she has demonstrated that she might be more mature than I am.

To get back to the topic of this blog, cryonics and cognitive biases is a fine subject. There are a lot of biases to go around here, on all sides.

Benquo:
Maybe I should follow the same heuristic and find some magicians to listen to.

I would think that SIAI is a better investment than cryonics.

(I agree, Peter.)

no matter what else she does wrong, and what else you do right, all of it together can't outweigh the life consequences of that one little decision.

I think a person's life should be evaluated by what effect they have on civilization (or more precisely, on the universe) not by how long they live. I think that living a long time is a merely personal end, and that a properly lived life is devoted to ends that transcend the personal. Isn't that what you think, Eliezer?

Reworded. Apparently this comment was very unclear. Original in italics below.

See the comment below: My primary reason for signing up for cryonics was that I got sick of the awkwardness, in important conversations, of trying to explain why cryonics was a good idea but I wasn't signed up for cryonics.

The secondary considerations, though they did not swing the decision, are probably of much greater relevance to readers of this blog.

I have found that it's not possible to be good only in theory. I have to hold open doors for little old ladies, even if i... (read more)

SIAI welcomes its new Director of Sex Appeal.

On cryonics, basically I understand the wager as that for a (reasonably) small sum I can obtain a small, but finite, possibility of immortality, therefore the bet has a very high return, so I should be prepared to make the bet. But this logic to me has the same flaw as Pascal’s wager. There are many bets that appear to have similar payoff. For instance, although I am an atheist, I cannot deny there is a small, probably smaller than cryonics, chance that belief in a God is correct. Could cryonics be like religion in this way, an example of exposure bias, re... (read more)

Tell me again: you advise a person to spend on their personal cryonic preservation before they donate to SI?

(Cryopreservation has very low expected payoff till after the singularity, does it not?)

I perceive a fundamental tension between personal goals and goals that transcend the personal. I.e. until ~400 years ago civilization advanced mainly as a side effect of people's advancing their personal interests, but the more a person's environment diverges from the EEA, the more important it is for the person to choose to advance civilization directly, as an e... (read more)

But this logic to me has the same flaw as Pascal’s wager.

Pascal's Wager, quantifying the complexity penalty in Occam's Razor, has a payoff probability on the order of 2^-bits(Christianity). Imagine a decimal point, followed by a string of 0s the length of the Bible, followed by a 1.
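A minimal sketch of that arithmetic (an illustration of the point, assuming a Bible of roughly 3.5 million characters at a naive 8 bits per character; the figures are stand-ins, not anyone's actual estimate):

```python
import math

# Occam's Razor assigns a hypothesis that takes K bits to describe a prior
# on the order of 2**-K. Assumed figure: ~3.5 million characters, 8 bits each.
bible_chars = 3_500_000
bits = bible_chars * 8
decimal_zeros = int(bits * math.log10(2))  # roughly how many zeros follow the decimal point

print(f"prior ~ 2^-{bits:,} ~ 10^-{decimal_zeros:,}")
# A decimal point, millions of zeros, then a 1: no finite payoff written into
# the wager can compensate for a probability that small.
```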

Cryonics simply "looks like it ought to work". The technical probability seems better than one-half, setting aside the separate question of whether humanity itself survives.

The problem with Pascal's Wager is not the large payoff but the tiny, unsupported probability.

Cryonics would b... (read more)

“The problem with Pascal's Wager is not the large payoff but the tiny, unsupported probability”

Why is the unsupported probability a problem? As long as there is any probability, the positive nature of the wager holds. My problem with Pascal’s Wager is that there are any number of equivalent bets, so why choose Pascal’s bet over any of the others available? Better not to choose any, and spend the resources on a sure bet, i.e. utility in today’s life rather than a chance at a future one.

On Cryonics, while the technical nature of the process is clearly more (by a huge amoun... (read more)

We should make it clear that most Overcoming Bias readers probably place a low probability on cryonics working, and on needing to deal with rogue AIs anytime soon. Some apparently place high probabilities on these. Me, I see them as likely enough to be worth considering, but still far less likely than not.

Holy crap. Paris Hilton actually did something smart for once.


because you believe humanity needs to move forward and get over the Death thing

In the end, this sort of rhetoric is false. Cryonics offers you more time, that's all. "Something or other must make an end of you some day... Everything happens to everybody sooner or later if there is time enough", as George Bernard Shaw put it in his own immortalist essay. More time to look for genuine immortality, if you wish, but that search has no scientific argument behind it remotely comparable to the argument for reversibility of cryostasis. A chance to reach ... (read more)

I'm horribly confused by this thread.

Eliezer: That I still have to hold open doors for old ladies, even at the cost of seconds, because if I breeze right past them, I lose more than seconds. I have to strike whatever blows against Death I can.

Why? What is wrong with taking an Expected Utility view of your actions? We're all working with limited resources. If we don't choose our battles to maximum effect, we're unlikely to achieve very much.

I understand your primary reason (it's easier to argue for cryonics if you're signed up yourself), but that one only a... (read more)

As shocked as I am by the Paris thing, it doesn't compare to how shocked I am by Eliezer thinking that cryonics is higher priority than SIAI, or even than asteroid defense or the very best African aid charities such as Toby Ord recommends.

I'm totally with Sebastian Hagen here.

Richard Hollerith: We haven't spoken yet, and I think that we should. E-mail me, OK? michaelaruna at yahoo dot com

mitchellporter: Realistically speaking there are some proposals for living forever which make sense, but beyond this there's the chance that our preferences, when converted into utility functions, are satisfiable with the resources that will be at hand post singularity.

I should qualify this. I'm totally with Sebastian in theory. In practice we can't re-write ourselves as altruists, and if we were to do so we would have to ditch lots of precomputed solutions to every-day problems. We have limited willpower both to behave non-automatically and to rewrite our automatic behaviors, and we should be using it in better ways than by not tipping people in restaurants that we aren't going to return to.

CI, not Alcor? That's a little surprising.

A more optimistic take on the (very interesting) cryonics vs. SIAI debate is that, since Ms. Hilton has proven herself open to cryonics, she may be more open than most celebrities to sponsoring and advocating low-visibility, high-impact charities. Her money could do a lot of good and her fame could generate a lot more money for SIAI/Lifeboat/CRN/.... OTOH, as long as she's considered stupid, her support could be bad PR for a fringe-sounding organization that wants to be taken seriously in public policy. Anyway, she... (read more)

Vassar's second post makes me see another good reason to focus on cryonics vs. other luxury goods. Even if cryonics has a higher expected return, it's so deferred, abstract, and uncertain that the willpower cost of not purchasing cryonics is very low compared to other luxuries.

BTW, the first comment on that article is depressing: "I feel sorry for society in the future!"

This isn't an attack on cryonics or Eliezer (I'm in favour of both), just venting frustration at a bias (one he's quite frustrated by himself) that tends to pop up when very smart people predict the future. This is a biases blog, after all. To be kinder/less pedantic, just replace each "can't" with "won't"; it's still pretty bad.

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton, because no matter what else she does wrong, and what else you do right, all of it together can't outweigh the life consequences of ... (read more)

Recovering Irrationalist: Only a couple daft things. OK 6 at least, maybe 7. "cryonics can't fail" has to be replaced by "the chance of cryonics working is not tiny", which seems to be a reasonable evaluation. Likewise for all subsequent uses of "can't". We always work with probability distributions here. The possible daft thing is taking Eliezer too literally. He clearly didn't literally mean that we had lost our right to criticize Paris. I'm welcome to criticize anyone, and have been known to criticize lots of people ... (read more)

Carl: He seems like a respectable 'debunking' magician. Like Houdini, Penn and Teller, and Randi, he argues against the supernatural, so taking him seriously doesn't seem like a strong criticism of the reasoning process leading Paris to cryonics, though Alcor would seem like the obvious choice rather than the Cryonics Institute.

Michael: I said I wasn't attacking cryonics, but I guess I overlooked being interpreted as protecting my right to insult celebrities! I'll be more explicit.

My problem is with the words: "all of it together can't outweigh the life consequences of that one little decision". I'm not saying cryonics isn't worthwhile, and I'm not saying Eliezer's wrong to praise Paris Hilton. If you say "I don't eat people because humans are poisonous", and I dispute your reasoning, that doesn't mean I called you a cannibal.

Even with probability distributions ... (read more)

Recovering: I'm not signed up for cryonics, though I think I may sign up eventually when the marginal benefit in singularity risk of a dollar spent saving the world is much lower. I definitely don't think that everyone should sign up, but I didn't take the claim literally. My main point was that unlike Eliezer, you did seem to be speaking literally/precisely, and

- "Cryonics can't fail."
- "Humanity can't fail to survive all existential risks."
- "My revival can't be prevented by anything else, unknown unknowns included."
- "Paris's can't."
- "I can't make it without cryonics."
- "Paris can't."

don't follow from his statement.

I do not think that people should prioritize cryonics over SIAI, so stop being shocked, Vassar. I think people should prioritize cryonics over eating at fancy restaurants or over having a pleasant picture in their room. If anyone still does this I don't want to hear them asking whether SIAI or cryonics has higher priority.

I wish people would try a little harder to read into my statements, though not for Straussian reasons. By saying "life consequences" I specifically meant to restrict the range to narrower than "consequences in general"... (read more)

The results of a successful cryonics experiment seem to me to be the creation of a very good copy of me.

This is a black hole that sucks up arguments. Beware.

I'll deal with this on Overcoming Bias eventually, but I haven't done many of the preliminary posts that would be required.

Meanwhile, I hope you've noticed your confusion about personal identity.

In underlying physics, there are no distinct substances moving through time: all electrons are perfectly interchangeable, the only fundamentally real things are points in configuration space rather than indivi... (read more)

Michael: I see that's true if his statement is a measure of probability distributions. I thought he meant there's no possible future where anything I could have done would have made me better off than Paris's one decision made her better off. Looks like I've assumed a common meaning for something used on this blog as a technical term - if so, I apologize.

If I mention that Douglas Hofstadter's "Gödel, Escher, Bach" is one of the Greatest Books Ever, people don't jump up and say: "Are you saying it would be better to buy that book than to donate to SIAI?"

No, it isn't. Neither does it feel to me like it would be wise to advise people to go through life bookless, or even that I should advise people to only take books out from the library. If you're going to own any book, you may as well own that one; and it is not totally unconnected to AI, so perhaps there will be trickle effects.

To me ... (read more)

Why pick that particular occasion to make the protest, rather than, say, someone buying a speedboat?

Off the top of my head, that someone is interested in cryonics is very strong Bayesian evidence that they'll be easier than average to persuade to donate to SIAI. On the other hand, this would equally justify suggesting to them to cut back on other luxuries to donate. But like Michael Vassar suggested and I elaborated on, since the benefit of cryonics is far-off and uncertain, it may take less willpower to give up than other luxuries. But surely not that muc... (read more)
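As a toy illustration of what "very strong Bayesian evidence" cashes out to here (all three numbers below are assumptions made up for the example, not anyone's actual estimates):

```python
# Toy Bayes update: how strongly "signed up for cryonics" shifts the odds
# that someone is persuadable to donate. All inputs are illustrative guesses.
prior_donor = 0.01                # P(persuadable donor) before observing anything
p_cryo_given_donor = 0.20         # P(signs up for cryonics | persuadable donor)
p_cryo_given_nondonor = 0.0005    # P(signs up for cryonics | not persuadable)

posterior = (p_cryo_given_donor * prior_donor) / (
    p_cryo_given_donor * prior_donor
    + p_cryo_given_nondonor * (1 - prior_donor)
)
print(f"P(persuadable | signed up) = {posterior:.0%}")  # ~80%: a 400:1 likelihood ratio
```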

that someone is interested in cryonics is very strong Bayesian evidence that they'll be easier than average to persuade to donate to SIAI.

That is it! That is what bothers me about Eliezer's advocacy of cryonics, which I will grant is no more deserving of reproach than most personal expenditures. IIUC, his livelihood depends on donations to the SIAI. Someone once quipped that it is impossible to convince a man of the correctness of some proposition if his livelihood depends on his not believing it. Sometimes I worry that his enthusiasm for cryonics is ... (read more)

Nick: Personally, I forgo cryonics in favor of luxury goods all the time rather unapologetically. I don't see how this could constitute conspicuous self-sacrifice signaling. Spending on things like cryonics or SIAI is generally going to be driven by idealized semi-aspirational self-models which are not hyperbolic discounters, not heavy discounters, and extend their self-concept fairly broadly rather than confining it according to biological imperatives. For such a self-model, there's not much self to sacrifice. For the self that makes most of my small... (read more)

g:

It's hard to see how not signing up for cryonics could be "conspicuous" (except for a small minority of professional transhumanists who might face questioning about it, like Eliezer) since (1) to an excellent first approximation no one has signed up for cryonics, so the signal gives rather little information, and (2) it's only going to become public if you are close to death (or if you put out a press release about it like Paris Hilton, I guess).

To most people, abstaining from other luxury goods in favour of cryonics is going to look and feel muc... (read more)

Sometimes I worry that his enthusiasm for cryonics is a sign that his dependency on donations will bias his judgement on important things, not just cryonics.

I do not understand the logic of this. I have no livelihood interest in cryonics.

It would reassure me if singularitarian leaders had secure incomes that derive from a source that does not depend on the opinions and perceptions of prospective donors.

Anyone wants to buy me an annuity, go for it. It would reassure me too.

Do you have livelihood interest in donations to SIAI?

Yes, obviously.

...did I say something unclear? I'm a bit worried because I seem to be misinterpreted a lot, in this thread, and looking back, I can't see why.

Is it okay to say "I don't want to be cryonically preserved because I don't want to be brought back to life in the future after I die normally?"

The livelihood argument only goes to motivation, and Eliezer's motivation is of no interest to me. Why should it be? I don't need to trust his motivation - I only need to read and evaluate his arguments. Or am I missing something?

What you miss is that Eliezer has chosen to accept an immense responsibility (IIUC because no one else had accepted it), namely to guide the world through the transition from evolved intelligence to engineered intelligence. Consequently, Eliezer's thought habits are of high interest to me.

TEXTAREAs in Firefox 1.5 have a disease in which a person must exercise constant vigilance to prevent stray newlines. Hence the ugly formatting.

I'm a bit worried because I seem to be misinterpreted a lot, in this thread, and looking back, I can't see why.

In my case, maybe I need to learn when and how to interpret statements as describing expected utility or probability distributions rather than sets of actual events.

Is there a link that explains this clearly, and is it just a BayesCraft thing or is there reading material outside the Bayesphere I should be able to interpret like this?

RI, in this comment section, you can probably safely replace "utility function" with "goal" and drop the word "expected" altogether.

Tom:

"Congratulations, Paris. I look forward to meeting you someday.

Posted by Eliezer Yudkowsky"

Pffff hahahaha

You neglected to mention that her motivation for signing up for cryonics was to be with her (similarly frozen) pet chihuahua. So Eliezer will have a rival for his affections.

For the love of cute kittens, I didn't mean it that way. "I look forward to meeting you someday" is what I would say of any human being who signed up for cryonics.

Eliezer clarified earlier that this blog entry is about personal utility rather than global utility. That presents me with another opportunity to represent a distinctly minority (many would say extreme) point of view, namely, that personal utility (mine or anyone else's) is completely trumped by global utility. This admittedly extreme view is what I have sincerely believed for about 15 years, and I know someone who held it for 30 years without his becoming an axe murderer or anything horrid like that. To say it in other words, I regard humans as means t... (read more)

Eliezer “This is a black hole that sucks up arguments. Beware. I'll deal with this on Overcoming Bias eventually, but I haven't done many of the preliminary posts that would be required. Meanwhile, I hope you've noticed your confusion about personal identity.”

I look forward to the posts on consciousness, and yes, I don’t feel like I have a super coherent position on this. I struggle to understand how me is still me after I have died, my dead body is frozen, mashed up and then reconstituted some indefinite time in the future. Quarks are quarks but a human i... (read more)

If we equate the decision to undergo cryonics with the decision to live forever, then I think calling it a small decision is problematic. Suppose I were to say, "You will live forever. That is your nature." It seems most people have one of two ways of dealing with this possibility-- 1) create an endlessly beautiful future (heaven) or, 2) deny the possibility (death is an ultimate end). These actions do not seem to me to be based on the notion that living forever is a small decision.

Here's my data point:

  1. Like Michael Vassar, I see the rationality of cryonics, but I'm not signed up myself. In my case, I currently use altruism + inertia (laziness) + fear of looking foolish to non-transhumanists + "yuck factor" to override my fear of death and allow me to avoid signing up for now. Altruism is a constant state of Judo.

  2. My initial gut emotional reaction to reading that Eliezer signed up for cryonics was irritation that Eliezer asks for donations, and then turns around and spends money on this perk that most people, including me

... (read more)

Richard, if morality is a sort of epiphenomenon with no observable effects on the universe, how could anyone know anything about it?

g:

Where did Richard say anything resembling "with no observable effects on the universe"?

Yes, TGGP, I've reread my comment and cannot see where I . . .

TGGP, I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty, and a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills as physicists.) The identification of that goal can have a very drastic effect on the universe, e.g. by inspiring a group of bright 20-year-olds to implement a seed AI with that goal as its utility function. But that does not answer your question, does it?

I have to admire a blog that can go from Paris Hilton to the metaphysics of morality in only a few short hops.
Richard said, "The first half of physics, the part we know, asks how reality can be bent towards goals humans already have."
That's engineering, not physics. Then later you say:
"I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty, and a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills as physicists.)"
While yo... (read more)

So, then Richard, do you assert that morality does have observable effects on the universe? Do you think that a physicist can do an experiment that will grant him/her knowledge of morality? You have been rather vague by saying that just as we discovered many positive facts with science, so we can discover normative ones, even if we have not been able to do so before. You haven't really given any indication as to how anyone could possibly do that, except by analogizing again to fields that have only discovered positive rather than normative facts. It would seem to me the most plausible explanation for this difference is that there are none of the latter.

Richard, assuming that you're thinking the way my past self was thinking, you should find the following question somewhat disturbing:

How would you recognize a moral discovery if you saw one? How would you recognize a criterion for recognizing moral discoveries if you saw one? If you can't do either of these things, how can you build an AI that makes moral discoveries, or tell whether or not a physicist is telling the truth when she says she's made a moral discovery?

Thanks for the nice questions.

Handily, the Templeton Foundation took out a two-page ad in the New York Times today where a number of luminaries discuss the purpose of the universe. Presumably our personal goals should be in tune with the overarching goal of the universe, if there is one.

Non sequitur, mtraven.

"Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it." -- T. H. Huxley

Certainly ethical naturalism has encouraged many oppressions and cruelties. Ethical naturalists must remain constantly aware of that potential.

Er, they needn't remain constantly aware. They need only take it into account in all their public statements.

You surely realize you haven't answered any of the tough questions here. Evolution is a natural process, but not an ethical one. The second law of thermodynamics is a universal trend but this doesn't make entropy a terminal value. So how would you recognize a natural ethical process if you saw one?

Eliezer, in what way do you mean "altruism" when you use it? I only ask for clarity.

I don't understand how altruism, as selfless concern for the welfare of others, enters into the question of supporting the singularity as a positive factor. This would open a path for a singularity in which I am destroyed to serve some social good. I have no interest in that. I would only want to support a singularity that benefits me. Similarly, if everyone else who supports the efforts to achieve singularity is simply altruistic, no one is looking out for their own welfare. Selfish concern (rational self interest) seems to increase the chance for a safe singularity.

I think it was the Stoics who said one's ethical duty was to act in accordance with the Universe. Marcus Aurelius did a lousy job of making sure his son was competent to run the empire though.

"So how would you recognize a natural ethical process if you saw one?"

Suppose that you observe process A- maybe you look at it, or poke around a bit inside it, but you don't make a precise model. If you extrapolate A forward in time, you will get a probability distribution over possible states (including the states of all the other stuff that A touches). If A consistently winds up in very small regions of this distribution, compared to what your model is, and there's no way to fix your model without making it extremely complex, you can say A is a... (read more)
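A rough sketch of that test (the outcome space, probabilities, and surprise threshold below are all assumed for illustration): score how improbable A's actual trajectory is under your non-agent model; consistently high accumulated surprise is the "winds up in very small regions of the distribution" condition the comment describes.

```python
import math

def surprisal_bits(p: float) -> float:
    """Bits of surprise for an outcome the background model gave probability p."""
    return -math.log2(p)

def looks_optimized(model_probs, observed, threshold_bits=20.0):
    """True if the process keeps landing in regions the model calls very improbable."""
    total = sum(surprisal_bits(model_probs[o]) for o in observed)
    return total > threshold_bits

# Toy usage: the background model spreads probability evenly over 1000 outcomes,
# yet the observed process hits the same narrow target ten times running.
model = {i: 1 / 1000 for i in range(1000)}
print(looks_optimized(model, [42] * 10))  # ~99.7 bits of surprise -> True
```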

People who believe that the universe has a goal (I'm not really one of them, except on Thursdays) also tend to believe that humans are the culmination or at least the instrument of that goal. Humans are free to try to combat the universe's goals if they want to, but they may just be fulfilling the universe's goal by rebelling against it.

"How would you recognize a natural ethical process if you saw one?"
How would you recognize an ethical process if you saw one? If you saw an ethical process would you think it unnatural, or supernatural, or what exactly? (Sorry if that's a silly question)

I'm new to this whole cryonics debate, so I have a question: How long do you all believe you'll be frozen for? If you think that being revived is scientifically possible, what are the developments that need to be achieved to get to that point?

Off the top of my head, I would think you'd need, at the very least, (1) prevention of cellular damage of the brain cells during the freezing and reviving processes; (2) the ability to revive dead tissue in general; and (3) the ability to perfectly replicate consciousness after it has been terminated.

I would thi... (read more)

BS - Cost of cryonics: "no less than US$28,000 and rarely more than US$200,000". One way to fund this is with a life insurance policy.

The cryonics organizations themselves are always seeking better methods of suspension, and your contract is with a suspension provider, not with a suspension technology, so the point about technological advance is moot.

There are in general two conceptions of how revival might work. One is through nanotechnological repair of freezing damage to the cells (along with whatever condition originally caused a person's d... (read more)

For the sake of brevity, I borrow from Pascal's Mugger.

If a Mugger appears in every respect to be an ordinary human, let us call him a "very unconvincing Mugger". In contrast, an example of a very convincing Pascal's Mugger is one who demonstrates an ability to modify fundamental reality: he can violate physical laws that have always been (up to now) stable, global, and exception-free. And he can do so in exactly the way you specify.

For example, you say, "Please Mr Mugger follow me into my physics laboratory." There you repeat the Mi... (read more)

Richard, my objections in my e-mail to you still stand. I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our reality, but not everyone accepts that and you have not established that we ought to. If I am a simulation, I take my simulation as my reality and care as little about the space-time simulating me (outside of how they affect me) as another simulation someone in our reality might create. Outside of the issue of importance, you still have not established how we obtain oughts... (read more)

The ought is, You ought to do whatever the very credible Mugger tells you to do if you find yourself in a situation with all the properties I list above. Blind obedience does not have a very good reputation; please remember, reader, that the fact that the Nazis enthusiastically advocated and built an interstate highway system does not mean that an interstate highway system is always a bad idea. Every ethical intelligent agent should do his best to increase his intelligence, his knowledge of reality and to help other ethical intelligent agents do the same... (read more)

I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our [spacetime].

In the scenario with all the properties I list above, I assign most of the intrinsic good to obeying the Mugger. Some intrinsic good is assigned to continuing to refine our civilization's model of reality, but the more investment in that project fails to yield the ability to cause effects that persist indefinitely without the Mugger's help, the more intrinsic good gets heaped on obeying the Mugger. Nothing else ge... (read more)

You have not established that one ought to "do his best to increase his intelligence, his knowledge of reality and to help other ethical intelligent agents do the same". Where is the jump from is to ought? I know Robin Hanson gave a talk saying something along those lines, but he was greeted with a considerable amount of disagreement from people whose ethical beliefs aren't especially different from his.

That entails consistently resisting tyranny and exploitation.
If a tyrant's goal was to increase their knowledge of reality and spread it which t... (read more)

The blog "item" to which this is a comment started 5 days ago. I am curious whether any besides TGGP and I are still reading. One thing newsgroups and mailing lists do better than blogs is to enable conversational threads to persist for more than a few days. Dear reader, just this once, as a favor to me, please comment here (if only with a blank comment) to signal your presence. If no one signals, I'm not continuing.

Why is a "civilization" the unit of analysis rather than a single agent?
Since you put the word in quotes, I take it ... (read more)

g:

I am still reading. I'm inclined to agree with you that if some sort of moral realism is correct and if some demonstrably-godlike being tells you "X is good" then you're probably best advised to believe it. I don't understand how you get from there to the idea that we should be studying the universe like physicists looking for answers to moral questions; so far, so far as I know, all even-remotely-credible claims to have encountered godlike beings with moral advice to offer have been (1) from people who weren't proceeding at all like physicists a... (read more)

TGGP pointed out a mistake, which I acknowledged and tried to recover from by saying that what you learn about reality can create a behavioral obligation. g pointed out that you don't need to consider exotic things like godlike beings to discover that. If you're driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there's a person trying to cross the road immediately in front of you. So now I have to retreat again.

There are unstated premises that go into the braking-sharply conclusion. Wha... (read more)

I'm still reading.

It is not obvious why creating a causal chain that goes on indefinitely is uniquely morally relevant. (Nor is it obvious that the concept is meaningful in reality - a causal chain with a starting point can be unboundedly long but at no actual point in time will it be infinite.) I do see it as valuable to look for ways to escape this space-time continuum, because I presently want (and think I will continue to want) (post)humanity to continue existing and growing indefinitely, but I don't believe there is any universal validity to this valu... (read more)

Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time. Why would you even expect a laboratory experiment to have such a result? It also seems you are postulating an extra-agent (the Mugger), which limits the amount of control experimenters have and in turn makes the experiment unrepeatable.

This was Eliezer's point: how could you ever recognize which ones are good and which ones are evil? How could you even recognize a process for recognizing objective good and evil?

I have only one suggestion so far, which is that if you find yourself in a situation which satisfies all five of the conditions I just listed, obeying the Mugger initiates an indefinitely-long causal chain that is good rather than evil. I consider, "You might as well assume it is good," to be equivalent to, "It is good." Now that I have an example I can try t... (read more)

Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time.
As far as I know, none of them are looking for a message from beyond the space-time continuum. Maybe I will try to interest them in making the effort. My main interest however is a moral system that does not break down when thinking about seed AI and the singularity. Note that the search for a message from outside space-time takes place mainly at the blackboard and only at the very end mov... (read more)

I don't think you've established that "You might as well consider it good"; I might as well not consider it good or bad. You haven't given a reason to consider it more good than bad, just hope. I might hope my lottery ticket is a winner, but I have no reason to expect it to be.

If you want to persuade physicists to start looking for messages from beyond the space-time continuum, you'd better be able to offer them a method. I am completely at a loss for how one might go about it. I certainly don't know how you are going to do it at the blackboard. ... (read more)

In cryptography, you try to hide the message from listeners (except your friends). In anticryptography, you try to write a message that a diligent and motivated listener can decode despite his having none of your biological, psychological and social reference points.

I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time.
I meant that most of the difficulty of the project is in understanding our laws of physics well enough to invent a possible novel method f... (read more)

No blog yet, but I now have a wiki anyone can edit. Click on "Richard Hollerith" to go there.

In quantum experiments the random outcomes are the same for all experimenters, so it can be repeated and the same probabilities will be observed. When you have someone else sending messages, you can't rely on them to behave the same for all experimenters. If there are a larger group of Muggers that different scientists could communicate with, than experiments might reveal statistical information about the Mugger class of entity (treating them as experimental subjects), but it's a stretch.

Do you consider the following a fair rephrasing of your last comment? A quantum measurement has probability p of going one way and 1 - p of going the other way, where p depends on a choice made by the measurer. That is an odd property for the next bit in a message to have, and makes me suspicious of the whole idea.

If so, I agree. Another difficulty that must be overcome is, assuming one has obtained the first n bits of the message, to explain how one obtains the next bit.

Nevertheless, I believe my primary point remains: since our model of physics does no... (read more)

I do not consider your rephrasing to be accurate. I wasn't giving the measurers choice, they are all supposed to follow the same procedure in order to obtain the same (probabilistic) results. It is the Mugger, or outside agent, that is making choices and therefore preventing the experiment from being controlled and repeatable.

What do you see as the major deficiencies in our model of reality? That the behavior of quantum particles is probabilistic rather than deterministic?

Don't believe everything the tabloids say.

"Paris Sets The Record Straight On 'Ellen'":

http://cbs5.com/entertainment/Paris.Hilton.Ellen.2.598168.html

quote:

The tabloids even stooped so low as to discuss her plans after death. Hilton was quoted as saying "It's so cool, all the cells in your body are still alive when death is pronounced and if you're immediately cooled, you can be perfectly preserved. My life could be extended by hundreds and thousands of years."

Hilton denied ever making those comments and pointed out to DeGeneres that she ... (read more)

Thanks Antoine,

I'll file this with Walt Disney.

Simon Cowell now too, apparently. No, I don't read the Daily Mail!

Paul Crowley:
Now says he was joking. Not sure if eternity strictly needs the man, but I don't dislike him enough to wish him dead!

If we accept MWI, cryonics is a backdoor to Quantum Immortality, one which waiting and hoping may not offer.