All of HalFinney's Comments + Replies

HalFinney200

Years ago, before coming up with even crazier ideas, Wei Dai invented a concept that I named UDASSA. One way to think of the idea is that the universe actually consists of an infinite number of Universal Turing Machines running all possible programs. Some of these programs "simulate" or even "create" virtual universes with conscious entities in them. We are those entities.

Generally, different programs can produce the same output; and even programs that produce different output can have identical subsets of their output that may include ... (read more)

2cousin_it
Yep, I already arrived at that answer elsewhere in the thread. It's very nice and consistent and fits very well with UDT (Wei Dai's current "crazy" idea). There still remains the mystery of where our "subjective" probabilities come from, and the mystery why everything doesn't explode into chaos, but our current mystery becomes solved IMO. To give a recent quote from Wei, "There are copies of me all over math".
1red75
Should we stop at UDASSA? Can we consider a universe that consists of a continuum of UDASSAs, each running some (infinite) subset of the set of all possible programs?
6DanielVarga
This is a cool theory, but it is probably equivalent to another, less cool theory that yields identical predictions and does not reference infinite virtual universes. :)

You make a lot of interesting points, but how do you apply them to the question at hand: what should you have for dinner, and why?

This is a fascinating topic, and I hope it attracts more commentary. As Bentarm says, it is important and relevant to each of us, yet the topic is fraught with uncertainty, and it is expensive to try to reduce the uncertainty.

I do not believe Taubes. No one book can outweigh the millions of pages of scientific research which have led to the current consensus in the field. Taubes is polemical, argumentative, biased, and one-sided in his presentation. He makes no pretense of offering an objective weighing of the evidence for and against various nutritional h... (read more)

8Paul Crowley
I roughly buy this argument. However, I'd be interested to know more about how you distinguish this from the rejection of cryonics by cryobiologists.
HalFinney290

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now curiously although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, they know that the ot... (read more)
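The iterated exchange can be simulated directly. Below is a minimal sketch of my own, following the standard Geanakoplos–Polemarchakis agreement dialogue: each round, both players announce their conditional estimates simultaneously, and only the outcomes in which both would have announced exactly those values survive as the new public event.

```python
from fractions import Fraction

# All 36 equally likely outcomes (die1, die2).
OMEGA = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def estimate(own_index, own_value, event, target):
    """One player's posterior P(target | own die, public event)."""
    cell = [w for w in event if w[own_index] == own_value]
    return Fraction(sum(1 for w in cell if target(w)), len(cell))

def dialogue(a, b, target, max_rounds=10):
    """Simultaneous exchange of estimates for the actual roll (a, b).
    Each round, both announcements become public, and only outcomes in
    which both players would have said exactly that remain possible."""
    event = list(OMEGA)  # outcomes not yet ruled out by the announcements
    history = []
    for _ in range(max_rounds):
        e1 = estimate(0, a, event, target)
        e2 = estimate(1, b, event, target)
        history.append((e1, e2))
        event = [w for w in event
                 if estimate(0, w[0], event, target) == e1
                 and estimate(1, w[1], event, target) == e2]
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # announcements are now stable and common knowledge
    return history

sum_is_9 = lambda w: w[0] + w[1] == 9
print(dialogue(3, 6, sum_is_9))  # both say 1/6, then both settle at 1/4
```

In this run the players first exchange matching 1/6 estimates, yet (as described above) that agreement is not common knowledge; after the first announcements rule out the 1-2 rolls, both settle at 1/4. Running `dialogue(3, 6, lambda w: w[0] + w[1] in (7, 8))` reproduces the 7-or-8 variation: the estimates pass through several rounds of disagreement before settling at an accurate 0.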

HalFinney180

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of a... (read more)

Let me give an argument in favor of #4, doing what the others do, in the thermometer problem. Now we seem to have them behaving badly. I think in practice many people would in fact look at other thermometers too in making their guesses. So why aren't they doing it? Two possibilities: they're stupid; or they have a good reason to do it. An example good reason: some thermometers don't read properly from a side angle, so although you think you can see and read all of them, you might be wrong. (This could be solved by #3, writing down the average of the cards,... (read more)

Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the "Omega appears" part of this thought experiment, it may not be as absurd as it seems.

Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie "Yes Man" is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.

HalFinney100

When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly a SAI could figure out what your body must have been like just from your brain, not sure.

Now recently I have contracted a disease which will kill most of my motor neurons. So the body will be of less value and I may change to just the head.

The way motor neurons work is there is an upper motor neuron (UMN)... (read more)

2loqi
I'm much less worried by this than I am by the prospect that I'd have to do the same for many of my normal thought patterns due to unforeseen inter-dependencies. Indeed, that's one of the reasons why I prefer thinking about it solely in terms of stored information: a redundant copy only really constitutes a pointer's worth of information. It's even conceivable that a SAI could reconstruct missing neural information in non-obvious ways, like a few stray frames of video. Not worth betting on, though. Thanks for the informative reply.
7AdeleneDawner
Fascinating. Citation?

Like others, I see some ambiguity here. Let me assume that the substrate includes not just the neurons, but the glial and other support cells and structures; and that there needs to be blood or equivalent to supply fuel, energy and other stuff. Then the question is whether this brain as a physical entity can function as the substrate, by itself, for high level mental functions.

I would give this 95%.

That is low for me, a year ago I would probably have said 98 or 99%. But I have been learning more about the nervous system these past few months. The brain's w... (read more)

Another sample problem domain is crossword puzzles:

Don't stop at the first good answer - You can't write in the first word that seems to fit, you need to see if it is going to let you build the other words.

Explore multiple approaches simultaneously - Same idea, you often can think of a few different possible words that could work in a particular area of the puzzle, and you need to keep them all in mind as you work to solve the other words.

Trust your intuitions, but don't waste too much time arguing for them - This one doesn't apply much because usually peo... (read more)

A perhaps similar example, sometimes I have solved geometry problems (on tests) by using analytical geometry. Transform the problem into algebra by letting point 1 be (x1,y1), point 2 be (x2,y2), etc, get equations for the lines between the points, calculate their points of intersection, and so on. Sometimes this gives the answer with just mechanical application of algebra, no real insight or pattern recognition needed.
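The "just mechanical algebra" step can be made concrete. A minimal sketch (the triangle-and-medians example is my own illustration): encode each line as A·x + B·y = C and intersect by Cramer's rule, with no geometric insight required.

```python
from fractions import Fraction as F

def line_through(p, q):
    """Coefficients (A, B, C) of the line A*x + B*y = C through points p and q."""
    (x1, y1), (x2, y2) = p, q
    A, B = y2 - y1, x1 - x2
    return A, B, A * x1 + B * y1

def intersect(l1, l2):
    """Intersection of two non-parallel lines by Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1  # zero iff the lines are parallel
    return F(c1 * b2 - c2 * b1, det), F(a1 * c2 - a2 * c1, det)

# Example: two medians of the triangle (0,0), (6,0), (0,6) meet at the centroid.
m1 = line_through((0, 0), (3, 3))  # from (0,0) to the midpoint of the opposite side
m2 = line_through((6, 0), (0, 3))  # from (6,0) to the midpoint of the opposite side
x, y = intersect(m1, m2)
print(x, y)  # 2 2
```

The answer (2, 2) is the centroid, obtained purely by grinding through the equations, exactly the kind of pattern-recognition-free solution described above.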

2Paul Crowley
When I was a kid this was how I solved all nontrivial geometry problems, because I was much better at algebra than geometry!

I wouldn't be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are. There are probably strong limits to the persuadability of human beings, so it wouldn't be a disaster. And there is a long tradition of advice regarding the (claimed) wisdom of learning to enjoy life as you find it.

6Wei Dai
Suppose the AI we build (AI1) finds itself insufficiently intelligent to persuade us. It decides to build a more powerful AI (AI2) to give it advice. AI2 wakes up and modifies AI1 into being perfectly satisfied with the way things are. Then, mission accomplished, they both shut down and leave humanity unchanged. I think what went wrong here is that this formulation of utilitarianism isn't reflectively consistent. If there are such limits to our persuadability, then the AI would modify us physically instead.
4magfrump
Why do you say these "strong limits" exist? What are they? I do think that everyone being persuaded to be Bodhisattvas is a pretty good possible future, but I do think there are better futures that might be given up by that path. (immortal cyborg-Bodhisattvas?)
0[anonymous]
Strong limits? You mean the limit of how much the atoms in a human can be rearranged and still be called 'human'?

I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically, we could encourage people to think for themselves, so we can free-ride on the costs they experience.

1Wei Dai
I guess we already do something like that, namely award people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.
HalFinney130

Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?

Wouldn't we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn't stink?

Would we wonder why the part of the brain for hearing high pitches didn't sound like a high pitch? Why the part which feels a punch in the nose doesn't actually reach out and punch us in the nose when we lean close?

I can't help feeling that this line of questioning is bizarre and unproductive.

-2Mitchell_Porter
Hal, what would be more bizarre - to say that the colors, smells, and sounds are somewhere in the brain, or to say that they are nowhere at all? Once we say that they aren't in the world outside the brain, saying they are inside the brain is the only place left, unless you're a dualist.

Most people here are saying that these things are in the brain, and that they are identical with some form of neural computation. My objection is that the brain, as currently understood by physics, consists of large numbers of particles moving in space, and there is no color, smell, or sound in that. I think the majority response to that is to say that color, smell, sound is how the physical process in question "feels from the inside" - to which I say that this is postulating an extra property not actually part of physics, the "feel" of a physical configuration, and so it's property dualism.

If the redness, etc, is in the brain, that doesn't mean that the brain part in question will look red when physically examined from outside. Every example of redness we have was part of a subjective experience. Redness is interior to consciousness, which is interior to the thing that is conscious. How the thing that is conscious looks when examined by another thing that is conscious is a different matter.
HalFinney110

An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.

I don't think your question is well represented by the phrase "where is computation".

Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.

For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Wh... (read more)
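A toy version of that example (my own illustration): one and the same state sequence, read through the whole register as "increment", or through its lowest bit alone as "complement the bit".

```python
# One physical process, two descriptions: an 8-bit register being repeatedly
# incremented, and (attending only to bit 0) a bit being repeatedly complemented.
reg = 0
register_view, bit_view = [], []
for _ in range(4):
    reg = (reg + 1) & 0xFF
    register_view.append(reg)
    bit_view.append(reg & 1)

print(register_view)  # [1, 2, 3, 4]
print(bit_view)       # [1, 0, 1, 0]
```

Both descriptions are true of the identical physical process; they differ only in which part of the state we choose to attend to.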

0Mitchell_Porter
If people want to say that consciousness is computation, they had better be able to say what computation is, in physical terms. Part of the problem is that computational properties often have a representational or functional element, but that's the problem of meaning. The other part of the problem is that computational states are typically vague, from a microphysical perspective.

Using the terminology from thermodynamics of microstates and macrostates - a microstate is a complete and exact description of all the microphysical details, a macrostate is an incomplete description - computational states are macrostates, and there is an arbitrariness in how the microstates are grouped into macrostates. There is also a related but distinct sorites problem: what defines the physical boundary of the macro-objects possessing these macrostates? How do you tell whether a given elementary particle needs to be included, or not?

I don't detect much sympathy for my insistence that aspects of consciousness cannot be identified with vague entities or properties (and possibly it's just not understood), so I will try to say why. It follows from insisting that consciousness and its phenomena do actually exist. To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists. (Just to be clear, I'm not saying that the object of every perception exists - if that were true, there would be no such thing as perceptual error - but I'm saying that perceptions themselves do exist.)

But computational macrostates are not exactly defined from a micro level. So they are either incompletely specified, or else, to become completely specified, the fuzziness must be filled out in a way that is necessarily arbitrary and can be done in many ways. The definitional criteria for computational or functional states are simply not strict enough to compel a unique micro completion. Also, macrostates...

Thomas Nagel's classic essay What is it like to be a bat? raises the question of a bat's qualia:

Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as

... (read more)
3Mitchell_Porter
Being a bat shouldn't be incomprehensible (and in fact Nagel makes some progress in his essay). You still have a body and a sensorium, they're just different. Getting your sense of space by yelling at the world and listening to the echoes - it's weird, but it's not beyond imagining. The absence of higher cognition might be the hardest thing for a human to relate to, but everyone has experienced some form of mindless behavior in themselves, dominated by sensation, emotion, and physical activity. You just have to imagine being like that all the time.

Being a quantum holist[*] and all that, when it comes to consciousness, I don't believe in qualia for Deep Blue because I don't think consciousness arises in that way. If it's like something to be a rock, then maybe the separate little islands of silicon and metal making up Deep Blue's processors still had that. But I'm agnostic regarding how to speak about the being of the very simplest things, and whether it should be regarded as lying on a continuum with the being of conscious beings.

Anyway, I answer both your questions yes, and I think other people may as well be optimistic too, even if they have a different theoretical approach. We should expect that it will all make sense one day.

[*] ETA: What I mean by this is the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these. So it's a brain-as-quantum-computer hypothesis, with an ontological twist thrown in.

A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?

One question on your objections: how would you characterize the state of two human rationalist wannabes who have failed to reach agreement? Would you say that their disagreement is common knowledge, or instead are they uncertain if they have a disagreement?

ISTM that people usually find themselves rather certain that they are in disagreement and that this is common knowledge. Aumann's theorem seems to forbid this even if we assume that the calculations are intractable.

The rational way to characterize the situation, if in fact intractability is a practical o... (read more)

3Wei Dai
I would say that one possibility is that their disagreement is common knowledge, but they don't know how to reach agreement. From what I've learned so far, disagreements between rationalist wannabes can arise from 3 sources:

* different priors
* different computational shortcuts/approximations/errors
* incomplete exchange of information

Even if the two rationalist wannabes agree that in principle they should have the same priors, the same computations, and full exchange of information, as of today they do not have general methods to solve any of these problems, and can only try to work out their differences on a case-by-case basis, with high likelihood that they'll have to give up at some point before they reach agreement.

Your suggestion of what rationalist wannabes should do intuitively makes a lot of sense to me. But perhaps one reason people don't do it is because they don't know that it is what they should do? I don't recall a post here or on OB that argued for this position, for example.
0timtyler
You mean "common knowledge" in the technical sense described in the post? If so, your questions do not appear to make sense.

Try a concrete example: Two dice are thrown, and each agent learns one die's value. In addition, each learns whether the other die is in the range 1-3 vs 4-6. Now what can we say about the sum of the dice?

Suppose player 1 sees a 2 and learns that player 2's die is in 1-3. Then he knows that player 2 knows that player 1's die is in 1-3. It is common knowledge that the sum is in 2-6.

You could graph it by drawing a 6x6 grid and circling the information partition of player 1 in one color, and player 2 in another color. You will find that the meet is a partiti... (read more)
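The grid-and-partitions picture can be checked mechanically. A minimal sketch of my own: compute the meet of the two information partitions as connected components of the graph linking any two outcomes that share a cell of either player's partition.

```python
from itertools import product

OMEGA = list(product(range(1, 7), repeat=2))  # the 6x6 grid of outcomes

def half(x):
    return 0 if x <= 3 else 1

# Player 1 knows die 1 exactly plus which half die 2 lies in; player 2 symmetrically.
def cell1(w): return (w[0], half(w[1]))
def cell2(w): return (half(w[0]), w[1])

def meet(states, cell_labels):
    """Meet of the information partitions: connected components of the graph
    linking states that share a cell of either partition (via union-find)."""
    parent = {w: w for w in states}
    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    for label in cell_labels:
        first_seen = {}
        for w in states:
            key = label(w)
            if key in first_seen:
                parent[find(w)] = find(first_seen[key])
            else:
                first_seen[key] = w
    components = {}
    for w in states:
        components.setdefault(find(w), []).append(w)
    return list(components.values())

components = meet(OMEGA, [cell1, cell2])
print(len(components))  # 4 quadrants: {1-3}x{1-3}, {1-3}x{4-6}, etc.
```

For the actual roll (2, 1) the meet-cell is the quadrant {1-3}x{1-3}, whose sums range over 2-6, matching the common-knowledge claim in the example.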

0Psy-Kosh
Not sure... what happens when the ranges are different sizes, or otherwise the type of information learnable by each player is different in non symmetric ways? Anyways, thanks, upon another reading of your comment, I think I'm starting to get it a bit.
1janos
What I don't like about the example you provide is: what player 1 and player 2 know needs to be common knowledge. For instance if player 1 doesn't know whether player 2 knows whether die 1 is in 1-3, then it may not be common knowledge at all that the sum is in 2-6, even if player 1 and player 2 are given the info you said they're given. This is what I was confused about in the grandparent comment: do we really need I and J to be common knowledge? It seems so to me. But that seems to be another assumption limiting the applicability of the result.

How about Scott Aaronson:

http://www.scottaaronson.com/papers/agree-econ.pdf

He shows that you do not have to exchange very much information to come to agreement. Now maybe this does not address the question of the potential intractability of the deductions to reach agreement (the wannabe papers may do this) but I think it shows that it is not necessary to exchange all relevant information.

The bottom line for me is the flavor of the Aumann theorem: that there must be a reason why the other person is being so stubborn as not to be convinced by your own tenaci... (read more)

1timtyler
To quote from the abstract of Scott Aaronson's paper: "A celebrated 1976 theorem of Aumann asserts that honest, rational Bayesian agents with common priors will never 'agree to disagree': if their opinions about any topic are common knowledge, then those opinions must be equal."

Even "honest, rational, Bayesian agents" seems too weak. Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to effectively deceive others. Their self-deception makes their opinions more credible - since they honestly believe them. If an agent honestly believes what they are saying, it is difficult to accuse them of dishonesty - and such an agent's understanding of Bayesian probability theory may be immaculate. Such agents are not constrained to agree by Aumann's disagreement theorem.
6Wei Dai
I haven't read the whole paper yet, but here's one quote from it (page 5): Scott is talking about the computational complexity of his agreement protocol here. Even if we can improve the complexity to something that is considered practical from a computer science perspective, that will still likely be impractical for human beings, most of whom can't even multiply 3 digit numbers in their heads.
-2timtyler
The reason is often that you regard your own perceptions and conclusion as trustworthy and in accordance with your own aims - whereas you don't have a very good reason to believe the other person is operating in your interests (rather than selfishly trying to manipulate you to serve their own interests). They may reason in much the same way. Probably much the same circuitry continues to operate even in those very rare cases where two truth-seekers meet, and convince each other of their sincerity.

I agree about the issue of unresolved arguments. Was agreement reached and that's why the debate stopped? No way to tell.

Particularly the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.

BTW sorry to see that linkrot continues to be a problem in the future.

7Wei Dai
I took the liberty of a creating a wiki page about the AI-foom debate, with links to all of the posts collected in one place, in case anyone wants to refer to it in the future.
1Wei Dai
I find myself reluctant to support this idea. I think the main reason is that it seems very hard to translate my degrees of belief into probability numbers. So I'm afraid that I'll update my beliefs correctly in response to other people's arguments, but state the wrong numbers. Is this a skill that we can learn to perform better? Right now I just try to indicate my degrees of belief using English words, like "I'm sure", "I think it's likely", "perhaps", etc., which has the disadvantage of not being very precise, but the advantage of requiring little mental effort (which I can redirect into for example thinking about whether an argument is correct or not). ETA: It does seem that there are situations where the extra mental effort required to state probability estimates would be useful, like in the AI-Foom debate, where there is persistent disagreement after an extensive discussion. The disputants can perhaps use probability estimates to track down which individual beliefs (e.g., conditional probabilities) are causing their overall disagreement.
1wedrifid
Would that be desirable? I know, for example, that when reading Robin's posts on that topic I often updated away from Robin's position (weak arguments from a strong debater is evidence that there are not stronger arguments). Given this possibility, having public numbers diverging in such a way would be rather dramatic and decidedly favour dishonesty. In general there are just far too many signalling reasons to avoid having 'probability estimates' public. Very few discussions even here are sufficiently rational as to make those numbers beneficial.

Yes, I think that's a good explanation. One question it raises is ambiguity in thinking of QM via "many worlds". What constitutes a "world"? If we put a system into a coherent superposition, does that mean there are two worlds? Then if we transform it back into a pure state, has a world gone away? What about the fact that whether it is pure or in a superposition depends arbitrarily on the chosen basis? A pure-state vertically polarized photon is in a superposition of states using the diagonal basis. How many worlds are there, two or one... (read more)

-2[anonymous]
This makes me wonder something. It seems that the many-worlds theory involves exponential branching: if there's 1 world one moment, there are 2 the next, then 4, then 8, and so on. (To attempt to avoid the objection you just raised: if 1 pure state, defined intuitively, has significant amplitude one moment, then . . .) Since this grows exponentially, won't it eventually grow to cover every possible state? Admittedly, the time this would take is more or less proportional to the number of particles in the universe, and so I really don't know how long it would take for coinciding to happen, but it seems that this would produce observable consequences eventually, maybe-maybe-not while minds are still around.
2MichaelVassar
I agree that unitary wavefunction evolution is a MUCH better name than the misleading "Many Worlds". Then, of course, you say that computation takes place within the evolving wavefunction and that you are part of that computation everywhere certain patterns in that computation take place. Still some handwaving here, but SO MUCH better than the standard misunderstandings of Many Worlds.
0pre
Interesting. Sounds like you're saying that the entire process of quantum computation aims to keep the system coherent, and so avoid splitting the universe. Which make sense. They tell me the difficulty, in an engineering sense, is to stop the system de-cohering. Is that remotely accurate?

Here are the four papers relating to influence from the future and the LHC:

http://arxiv.org/find/physics/1/au:+Ninomiya_M/0/1/0/all/0/1

The basic idea is that these physicists have a theory that the Higgs particle would be highly unusual, such that its presence in a branch of the multiverse would greatly decrease the measure of that branch. Now I don't claim to understand their math, but it seems that this might produce a different result than the usual anthropic-type arguments regarding earth-destroying experiments.

The authors refer to an "influence f... (read more)

Wei, I understand the paper probably less well than you do, but I wanted to comment that p~, which you call r, is not what Robin calls a pre-prior. He uses the term pre-prior for what he calls q. p~ is simply a prior over an expanded state space created by taking into consideration all possible prior assignments. Now equation 2, the rationality condition, says that q must equal p~ (at least for some calculations), so maybe it all comes out to the same thing.

Equation 1 defines p~ in terms of the conventional prior p. Suppressing the index i since we have o... (read more)

0Wei Dai
Yes, it looks like I got a bit confused about the notation. Thanks for the correction, and for showing how the mathematical formalism works in detail.
HalFinney120

The voice banking software I'm using is from the Speech Research Lab at the University of Delaware. They say they are in the process of commercializing it; hopefully it will still be free to the disabled. Probably not looking for donations though.

Another interesting communications assistance project is Dasher. They have a Java applet demo as well as programs for PC and smart phones. It does predictive input designed to maximize effective bandwidth. A little confusing at first but supposedly after some practice you can type fast with only minimal use of the... (read more)

cjb150

Hi Hal. I'm sorry to hear of your diagnosis.

I spent two years as the maintainer of Dasher, and would be happy to answer questions on it. It's able to use any single analog muscle for control, as a worst case (and a two-axis precise device like a mouse as a best case). There's a video of using Dasher with one axis here -- breath control, as measured by diaphragm circumference:

http://www.inference.phy.cam.ac.uk/dasher/movies/BreathDasher.mpg

and there are videos using other muscles (head tracking, eye tracking) here:

http://www.inference.phy.cam.ac.uk/dashe... (read more)

3dfranke
How confident are you of this? I'd be surprised if there weren't some there who understood.
3eirenicon
I understand - it reminds me of the Max Barry story "Machine Man" where the protagonist, a robotics researcher, loses a leg, so he designs an artificial one to replace it. Of course, it's a lot better than his old leg... so he "loses" the other one. Of course, two out of four artificial limbs is just a good start (and so forth). I wouldn't wish your condition on anyone, but you might just have been lucky enough to live in a time when the meat we were born with isn't relevant to a happy life. Best wishes regardless.
3whpearson
I've played around briefly with Dasher and, like many of these alternate text inputs, it is not designed with coding in mind. I can't remember the forms of punctuation it uses, but the frequencies will be all wrong to start with. What you really want is a cross of Dasher and the Visual Studio style auto-complete, so the words/letters it puts largest are the variables in scope or from libraries included, or the member functions for the object you are accessing. You'll probably need to specialize your tools to a single language to start with, which is a shame. Pick wisely! I'd love to play around with controlling a computer with my eyes.
HalFinney700

I want to thank everyone for their good wishes and, um, hugs :)

As it stands, my condition is quite good. In fact at the time of my diagnosis two months ago, I was skeptical that it was correct. The ALS expert seemed rather smug that he had diagnosed me so early, saying that I was the least affected of any of his patients. Not only were my symptoms mild, I had had little or no progression in the three months at that time since I had first noticed anything wrong.

However, since then there has been noticeable progression. My initial symptoms were in my speech,... (read more)

hfinney440

My response may seem out of context with the others, because I do not know you personally. However, because we share the same name (I have always wondered if your 'real' first name is Harold like mine) and you have so much involvement in the technology field...you are a top Google result when I Google "our" name. My grandfather (also named Hal Finney) was a baseball player for the Pittsburgh Pirates in the 1930's. His baseball stats are also high ranking Google results.

Bottom line, I am sorry to hear of your diagnosis with ALS. I have followe... (read more)

HalFinney270

It was actually extremely reassuring as the reality of the diagnosis sunk in. I was surprised, because I've always considered cryonics a long shot. But it turns out that in this kind of situation, it helps tremendously to have reasons for hope, and cryonics provides another avenue for a possibly favorable outcome. That is a good point that my circumstances may allow for a well controlled suspension which could improve my odds somewhat.

You're right though that with this diagnosis, life insurance is no longer an option. In retrospect I would be better off if... (read more)

3dfranke
Neuro is cheaper than whole-body, isn't it? Take some equity out of your cryonics insurance plan and use it for your (pre-deanimation) care.
HalFinney590

I am indeed signed up, having been an Alcor client for 20 years.

Ironically I chose full-body suspension as opposed to so-called neurosuspension (head only) on the theory that the spinal cord and peripheral nervous system might include information useful for reconstruction and recovery. Now it turns out that half of this data will be largely destroyed by the disease. Makes me wonder if I should convert to neuro.

Indeed even the popular (mis)conception of head-only revival wouldn't be that bad for me, not unlike the state I will have lived in for a while. In ... (read more)

Psy-Kosh170

Oh, not sure if you heard about this, but apparently there was some Alcor and CI sponsored research, and the upshot was basically that it's a really good idea to make arrangements to begin being cooled immediately if anything happens to you, and better still, to have your blood washed out. India ink and rat ( :( ) experiments suggest that staying a warm body for even a couple of hours is enough to cause effects like thickening blood that more or less prevent any significant amount of cryoprotectant from actually ending up... (read more)

9gwern
I think you probably should. There's no real upside to preserving your body as you say, and there's a very real cost. (What's Alcor's differential? IIRC, it was many thousands of dollars.) You could direct the excess money somewhere else, like your family (presumably ALS will have a big economic impact on them - treatment expenses, reduced earnings, etc. - even if you live out a natural lifespan). Or you could donate it straight to Alcor: I'm sure they have better things to do with say $20,000 than spend it on freezing some meat that doesn't need freezing.

I am indeed signed up, having been an Alcor client for 20 years.

That is very, very, very good to hear. Sorry, I had to ask that question first before I knew to say:

I'm sorry to hear about your diagnosis. I wish you the best in staying alive. I congratulate you on the wisdom that you have shown and are showing in making your decisions well and in advance. And may you be a lesson and exemplar to all those other readers who will, in one future world or another, walk a path much like yours.

I'm glad to hear you're already signed up and already have life ... (read more)

Morendil160

To what extent, if any, did your choice of signing up years ago modify the impact of the bad news ?

From a certain point of view, your diagnosis enhances the value of having purchased the cryonics option. You can be reasonably certain that when the end comes it will be predictable and you will be in an environment that makes suspension and transport easier.

Also I imagine that financing suspension with a life insurance policy becomes a different proposition, financially, after you've been diagnosed with ALS.

I've been putting it off, myself, for a bunch of re... (read more)

1pdf23ds
Well, you need your torso, but perhaps not your limbs. shudder

"[the mind] could be a physical system that cannot be recreated by a computer"

Let me quote an argument in favor of this, despite the apparently near universal consensus here that it is wrong.

There is a school of thought that says, OK, let's suppose the mind is a computation, but it is an unsolved problem in philosophy how to determine whether a given physical system implements a given computation. In fact there is even an argument that a clock implements every computation, and it has yet to be conclusively refuted.

If the connection between physic... (read more)

-4SilasBarta
Sure thing. I solved the problem here and here in response to Paul Almond's essays on the issue. So did Gary Drescher, who said essentially the same thing in pages 51 through 59 of Good and Real. (I assume you have a copy of it; if not, don't privately message me and ask me how to pirate it. That's just wrong, dude. On so many levels.)

Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then lea... (read more)

0AndyWood
In a non-deterministic universe, can Omega be as reliable a forecaster as the problem requires? If so, how?

Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then lea... (read more)

Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then lea... (read more)

Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then lea... (read more)

2Alicorn
There are three of these.

We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligen... (read more)

Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.

I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can expe... (read more)

1quwgri
The holy problem of qualia may actually be close to the question at hand here. What do you mean when you ask yourself: "Does my neighbor have qualia?" Do you mean: "Does my neighbor have the same experiences?" No. You know for sure that the answer is "No." Your brains and minds are not connected. What's going on in your neighbor's head will never be your experiences. It doesn't matter whether it's (ontologically) magical blue fire or complex neural squiggles. Your experiences and your neighbor's brain processes are different things anyway.

What do you mean when you ask yourself: "Are my neighbor's brain processes similar to my experiences?" What degree of similarity or resemblance do you mean? Some people think that this is purely a value question. It is an arbitrary decision by a piece of the Universe about which other pieces of the Universe it will empathize with. Yes, some people try to solve this question through Advaita. One can try to view the Universe as a single mind suffering from dissociative disorder.

I know that if my brain and my neighbor's brain are connected in a certain way, then I will feel his suffering as my suffering. But I also know that if my brain and an atomic bomb are connected in a certain way, then I will feel the thermonuclear explosion as an orgasm. Should I empathize with atomic bombs?

We can try to look at the problem a little differently. The main difference between my sensation of pain and my neighbor's sensation of pain is the individual neural encoding. But I do not sense the neural encoding of my sensations. Or I do not sense that I sense it. If you make a million copies of me, whose memories and sensations are translated into different neural encodings (while maintaining informational identity), then none of them will be able to say with certainty what neural encoding it currently has. Perhaps, when analyzing the question "what is suffering", we should discard the aspect of individual neural encoding. That is, suffering is

I thought maybe we were hearing about the LOTR story through something like the chronophone - the translation into English also translated the story into something analogous for us.

HalFinney270

I remember reading once about an experiment that was said to make rats superstitious.

These rats were used in learning experiments. They would be put into a special cage and they'd have to do something to get a treat. Maybe they'd have to push a lever, or go to a certain spot. But they were pretty good at learning whatever they had to do. They were smart rats. They knew the score, they knew what the cage was for.

So they did a new experiment, where they put them into the training cage as usual. But instead of what they did bringing the treat, they always got... (read more)

2wedrifid
It's a fascinating anecdote, but not relevant. They did everything they could to combat the overwhelming odds. And the anthropic principle suggests that we should expect to find ourselves in a world that does just that. Particularly when facing an enemy that learns from its failures. As Peter alludes to, those worlds that don't do something remarkable yet still manage to survive would be sliced incredibly thin.
9CronoDAS
On the other hand, for all we know, since the laws of physics in this universe allow for magic, the spell might actually do what the Council thinks it does - summons a hero who brings along the proper kind of luck for getting through the current crisis. "I summon Deus Ex Machina!" I know what Eliezer intended the story to mean, but narrative causality seems like a more likely culprit than the anthropic principle for this particular world's survival. Considering this is a world in which the events of Lord of the Rings actually happened, if I were the hero, I'd be assuming that there's a writer of some kind involved.

Given the anthropic effect we are postulating, they don't actually have to do anything - a certain fraction of the worlds will get lucky and survive.

No, the fraction of worlds which "get lucky and survive" is determined by the strategies the people use.

Actually, why doesn't the Hero's world have a Counter-Force? Shouldn't every world have something like it? How many times have our world escaped from the brink of nuclear annihilation, for example?

Right, like the way the LHC keeps breaking before they can turn it on and have it destroy the universe. Sooner or later we'll figure out what's happening.

3Eliezer Yudkowsky
This just in, apparently: http://arstechnica.com/science/news/2009/07/lhc-delayed-again-due-to-vacuum-leaks.ars

I agree with the logic of this analysis, but I have a problem with one of the implicit premises: that "we" should care about political issues at all, and that "we" make governmental decisions. I think this is wrong, and its wrongness explains the seemingly puzzling phenomenon of jumping from tree to forest.

There was no need for anyone beyond the jury to have an opinion on the Duke lacrosse case. We weren't making any decisions there. I certainly wasn't, anyway. So of course when people do express an interest, it is for entertainment and... (read more)

A typical comment from an anti-Cheerios advocate. Is this what LW is coming to? Cheerios lovers unite!

Anyway it was probably not clear but I was a little tongue in cheek with my Cheerios rant. I think what I wrote is correct but mostly I was having fun pretending that there could be a big political battle over even the narrow issue of the Cheerios study and what it means.

I'm afraid I have to take issue with your Cheerios story in the linked comment. You say of the 4% cholesterol lowering claim, "This is false. It is based on a 'study' sponsored by General Mills where subjects took more than half their daily calories from Cheerios (apparently they ate nothing but Cheerios for two of their three daily meals)." You link to http://www.askdeb.com/blog/health/will-cheerios-really-help-lower-your-cholesterol/ but that says nothing about how much Cheerios subjects ate.

I found this article that describes the 1998 Cheerios... (read more)

0Scott Alexander
Hm, I had an article from which I got my numbers, but now I can't find it anymore. I do see several that say three cups of Cheerios per day and 450 calories of Cheerios out of a 1900 calorie diet, but I have no idea where I got that "half your total calories" phrase. Possibly I made a mistake and multiplied 450 × 3, when the 450 is already 3 × 150, or possibly copied from an article that did so.
1mps
I think the author's point was not to claim one side was right and the other wrong, but to say that one's determination of who is right or wrong in a situation like this should probably be more independent of one's political party affiliation than it actually is. I take it that no one actually studied the correlation between people's opinions on this matter and their party affiliations; my impression was that the author was speculating that such a correlation would exist.

Let me try restating the scenario more explicitly, see if I understand that part.

Omega comes to you and says, "There is an urn with a red or blue ball in it. I decided that if the ball were blue, I would come to you and ask you to give me 1000 utilons. Of course, you don't have to agree. I also decided that if the ball were red, I would come to you and give you 1000 utilons - but only if I predicted that if I asked you to give me the utilons in the blue-ball case, you would have agreed. If I predicted that you would not have agreed to pay in the blue-... (read more)
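Under this restatement, the arithmetic behind precommitment is simple. Here is a sketch (my own illustrative framing, not from the thread): if P(red) = p, an agent whom Omega predicts would pay in the blue-ball case gains 1000 utilons with probability p and loses 1000 with probability 1 - p, so the paying policy beats refusal exactly when p > 1/2.

```python
def ev_pay_policy(p_red: float, reward: float = 1000.0, cost: float = 1000.0) -> float:
    """Expected utilons for an agent Omega predicts would pay in the blue-ball case."""
    p_blue = 1.0 - p_red
    return p_red * reward - p_blue * cost

def ev_refuse_policy() -> float:
    """An agent predicted to refuse gets nothing in either case."""
    return 0.0

for p_red in (0.3, 0.5, 0.7):
    better = "pay" if ev_pay_policy(p_red) > ev_refuse_policy() else "refuse"
    print(f"P(red)={p_red}: EV(pay)={ev_pay_policy(p_red):+.0f}, better policy: {better}")
```

The interesting part is of course not this arithmetic but where the probability p comes from, which is what the post is asking.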

0Vladimir_Nesov
Randomness is uncertainty, and determinism doesn't absolve you of uncertainty. If you find yourself wondering what exactly was that deterministic process that fits your incomplete knowledge, it is a thought about randomness. A coin flip is as random as a pre-placed ball in an urn, both in deterministic and stochastic worlds, so long as you don't know what the outcome is, based on the given state of knowledge. The tricky part is what this "partial information" is, as, for example, looking at the urn after Omega reveals the actual color of the ball doesn't count. In the original problem, payoffs differ so much in order to counteract the lack of identity between amount of money and utility, so that the bet does look better than nothing. For example, even if $10000*0.5 - $100*0.5 > 0, it doesn't guarantee that U($10000)*0.5 + U(-$100)*0.5 > 0. In this post, the values in utilons are substituted directly to place the 50/50 bet exactly at neutral. They could, you'd just need to compute that tricky answer that is the topic of this post, to close the deal. This question actually appears in legal practice; see hindsight bias.

Thanks for the answer, but I am afraid I am more confused than before. In the part of the post which begins, "So, new problem...", the coin is gone, and instead Omega will decide what to do based on whether an urn contains a red or blue marble, about which you have certain information. There is no coin. Can you restate your explanation in terms of the urn and marble?

0Vladimir_Nesov
I'm not sure what exactly confuses you. Coin, urn, what does it matter? See the original post for the context in which the coin is used. Consider it a rigged coin, one probability of which landing on each side was a topic of that debate MBlume talks about.

I don't see where Omega the mugger plays a central role in this question. Aren't you just asking how one would guess whether a marble in an urn is red or blue, given the sources of information you describe in the last paragraph? (Your own long-term study, a suddenly-discovered predictions market.)

Isn't the answer the usual: do the best you can with all the information you have available?

1Vladimir_Nesov
No. Usually, by probability of event X you mean "probability of X given the facts that create this situation where I'm estimating the probability". In this case, you are asking about probability of coin landing on one of the sides given that it was thrown at all, not given that you are seeking the answer. This is an utterly alien question about probability estimation, as a point of view enforced on you doesn't correspond to where you are, as it always has been.

Maybe a better heuristic is to consider whether your degree of assurance in your position is more or less than your average degree of assurance over all topics on which you might encounter disagreements. Hopefully there would be less of a bias on this question of whether you are more confident than usual. Then, if everyone adopted the policy of believing themselves if they are unusually confident, and believing the other person if they are less confident than usual, average accuracy would increase.
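This heuristic can be checked in a toy simulation. Everything below is my own made-up model (calibrated confidences drawn uniformly, a fixed 0.75 "usual" confidence, and the chance of being the one who's right set to your confidence normalized against the other person's), so it only illustrates the direction of the effect, not its size:

```python
import random

def simulate(n: int = 100_000, seed: int = 0) -> tuple[float, float]:
    """Toy model of the heuristic above (my own setup, not from the thread).

    Two agents disagree on a yes/no question. Each has a calibrated
    confidence drawn uniformly from [0.5, 1.0]; the chance that *you*
    are the one who's right is your confidence normalized against theirs.
    Policy A: always believe yourself.
    Policy B: believe yourself only when your confidence exceeds your
    long-run average (0.75 here); otherwise defer to the other person.
    """
    rng = random.Random(seed)
    a_correct = b_correct = 0
    for _ in range(n):
        mine = rng.uniform(0.5, 1.0)
        theirs = rng.uniform(0.5, 1.0)
        i_am_right = rng.random() < mine / (mine + theirs)
        a_correct += i_am_right                                  # always trust yourself
        b_correct += i_am_right if mine >= 0.75 else (not i_am_right)
    return a_correct / n, b_correct / n

a, b = simulate()
print(f"always believe self: {a:.3f}")
print(f"defer when below-average confidence: {b:.3f}")
```

Under these assumptions the deferring policy comes out a few percentage points more accurate, which is the direction the argument predicts; whether real confidences are calibrated enough for this to work is the open question.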

I'd agree that "in general, you should believe yourself" is a simpler rule than "in general, you should believe yourself, except when you come across someone else who has a different belief". And simplicity is a plus. There are good reasons to prefer simple rules.

The question is whether this simplicity outweighs the theoretical arguments that greater accuracy can be attained by using the more complex rule. Perhaps someone who sufficiently values simplicity can reasonably argue for adopting the first rule.

ETA: Maybe I am wrong about the ... (read more)

0thomblake
I think we may have exhausted any disagreement we actually had. As I noted early on, I agree that coming across someone else with a different belief is a good occasion for re-evaluating one's beliefs. From here, it will be hard to pin down a true substantive difference.

I meant, do you have a sense of what percentage of top-level posts have comments which show the problem?

5Alicorn
I'm not sure. I have the impression that it is way too many, but that doesn't mean a whole lot, since any positive number is too many and any annoyingly high positive number is way too many. I think most of the top-level posts that mention anything having to do with gender probably have at least one offending comment.

I'd like to see a more popular discussion of Aumann's disagreement theorem (and its follow-ons), and of what I believe is called Kripkean possible-world semantics, the formalization of knowledge used in Aumann's original proof. The proof is very short, just a couple of sentences, but explaining the possible-world formalism is a big job.

Tiiba, Wei's earlier post pointed to this article:

http://weidai.com/black-holes.txt

You might also need to know that computation can be done in principle almost without expending energy, and the colder you do the computation, the less energy is wasted. Hence being cold is a good thing, and black holes are very cold.
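The quantitative version of this is Landauer's principle: erasing one bit of information costs at least k_B * T * ln 2 of energy, so the floor on waste heat falls linearly with temperature. A quick sketch with my own illustrative temperatures (the thread doesn't give numbers):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum energy to erase one bit at a given temperature (Landauer's principle)."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_limit_joules(300.0)  # roughly room temperature
cmb = landauer_limit_joules(2.7)     # cosmic microwave background temperature
print(f"room temp: {room:.3e} J/bit")
print(f"CMB temp:  {cmb:.3e} J/bit")
print(f"erasure is {room / cmb:.0f}x cheaper near the CMB temperature")
```

Reversible computation can in principle avoid even this cost; the bound applies only to the bits you irreversibly erase.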

1Tiiba
I didn't get it right away, but now that I do, it's pretty ingenious. Let me see if I got it right. Build a big ball in space. If the ball was empty, starlight and cosmic background would heat it up, the inner surface would emit photons, and they would bounce around the shell - so you're back to square one. But the black hole at the center can absorb those photons without becoming hot. And the photons are unusable because they are ambient. On the other hand, there is now a temperature difference between the inside and the outside. Can it be used to make usable energy?