Comment author: JoshuaZ 04 June 2010 05:08:36PM 2 points [-]

That works better for you? That's deeply surprising. Using entities like Omega and transmutation seems to make things more abstract and to make it much harder to understand what the heck is going on. I must need to massively update my notions about what sort of descriptors can make things clear to people.

Comment author: Sideways 04 June 2010 05:19:15PM 1 point [-]

I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.

"If Monty 'replaced' a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket."

"Monty wants to keep the diamond for himself, so if he's offering to trade with me, he probably thinks I have it and wants to get it back."

It might seem paradoxical, but using 'transmute at random' instead of 'replace', or 'Omega' instead of 'Monty Hall', actually simplifies the problem for me by establishing that all facts relevant to the problem have already been included. That never seems to happen in the real world, so the world of the analogy is usefully unreal.

Comment author: logical 04 June 2010 04:43:08PM -8 points [-]

Are you serious? Are you buying this? Ok - let me make this easy: There NEVER WAS a 33% chance. Ever. The 1-in-3 choice is a ruse. No matter what door you choose, Monty has at least one door with a goat behind it, and he opens it. At that point, you are presented with a 1-in-2 choice. The prior choice is completely irrelevant at this point! You have a 50% chance of being right, just as you would expect. Your first choice did absolutely nothing to influence the outcome! This argument reminds me of the time I bet $100 on black at a roulette table because it had come up red for like 20 consecutive times, and of course it came up red again and I lost my $$. A guy at the table said to me "you really think the little ball remembers what it previously did and avoids the red slots??". Don't focus on the first choice, just look at the second - there's two doors and you have to choose one (the one you already picked, or the other one). You got a 50% chance.

Comment author: Sideways 04 June 2010 05:02:04PM 3 points [-]

Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.

If you've really thought about XiXiDu's analogies and they haven't helped, here's another; this is the one that made it obvious to me.

Omega transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bucket, and offers to let you trade buckets.

Each bucket analogizes to a door that you may choose; the sand analogizes to probability mass. Seen this way, it's clear that what you want is to get as much sand (probability mass) as possible, and Omega's bucket has more sand in it. Monty's unopened door doesn't inherit anything tangible from the opened door, but it does inherit the opened door's probability mass.
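For anyone who'd rather see frequencies than probability mass, here's a quick Monte Carlo sketch of the standard Monty Hall setup (my own illustration, not part of the original analogy): staying wins about a third of the time, switching about two thirds.

```python
import random

def monty_trial(switch):
    doors = [0, 0, 0]
    doors[random.randrange(3)] = 1              # one door hides the prize
    pick = random.randrange(3)
    # Monty opens a goat door that isn't the player's pick
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:                                  # take the one remaining door
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick]

n = 100_000
stay = sum(monty_trial(False) for _ in range(n)) / n
swap = sum(monty_trial(True) for _ in range(n)) / n
print(stay, swap)   # roughly 0.33 and 0.67
```

When Monty has a choice of two goat doors, the sketch opens the lower-numbered one; the 1/3 vs. 2/3 result doesn't depend on how that tie is broken.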

Comment author: Morendil 08 April 2010 04:17:21PM 1 point [-]

if you try to imagine something that's emotionally implausible to you, you'll likely fail

How do you mean that? I often find myself imagining things that are totally implausible emotionally, but quite possible physically. For instance, once in a while I imagine throwing myself off a bridge that I'm crossing, and I can feel my guts churning. (When I say "imagine" here, I mean I actually visualize myself falling; it's a stronger thing to me than idly considering the notion of falling.)

Comment author: Sideways 08 April 2010 05:17:58PM -1 points [-]

As a tentative rephrasing, something that's "emotionally implausible" is something that "I would never do" or that "could never happen to me." Like you, I can visualize myself falling with a high degree of accuracy; but I can't imagine throwing myself off the bridge in the first place. Suicide? I would never do that.

It occurs to me that "can't imagine" implies a binary division when ability to imagine is more of a continuum: the quality of imagination drops steadily between trying to imagine brushing my teeth (everyday), calling 911 (very rare, but I've done it before), punching through a wall (never done it, but maybe if I was mad enough), and jumping off a bridge (I would never do that).

For all four, I can imagine the physical events as bare facts; but for the first two I can easily place myself in the simulation, complete with cognitive and emotional states. That's much harder in the third case; in the fourth, I'm about as confident in my imagination as I am in trying to imagine a world where 1+1=3.

Comment author: gregconen 07 April 2010 09:08:50PM 1 point [-]

If you haven't already, do check out Eby's Instant Irresistible Motivation video for learning how to create positive motivation.

Interesting. In fact, it seems to mesh with the process I've successfully used to do things like cleaning my desk.

Unfortunately, many of the tasks I have to do don't lend themselves to the visualization in step 1. How does one visualize having studied for an exam, or completed an exercise routine?

Comment author: Sideways 07 April 2010 09:46:37PM *  0 points [-]

If you've exercised before, you can probably remember the feeling in your body when you're finished--the 'afterglow' of muscle fatigue, endorphins, and heightened metabolism--and you can visualize that. If you haven't, or can't remember, you can imagine feelings in your mind like confidence and self-satisfaction that you'll have at the end of the exercise.

As for studying, the goal isn't to study, per se; it's to do well on the test. Visualizing the emotional rewards of success on the test itself can motivate you to study, as well as get enough sleep the night before, eat appropriately the day of, take performance enhancing drugs, etc.

Imagination is a funny thing. You can imagine things that could physically never happen--but if you try to imagine something that's emotionally implausible to you, you'll likely fail. Just now I imagined moving objects with my mind, with no trouble at all; then I tried to imagine smacking my mother in the face and failed utterly. If you actually try to imagine having something--not just think about trying--and fail, it's probably because deep down you don't believe you could ever have it.

Comment author: mattnewport 16 February 2010 08:48:34PM 2 points [-]

The human experience of colour is not really about recognizing a specific wavelength of light. We've discussed this before here. Our rods and cones are sensitive to the wavelength of light but the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.

Comment author: Sideways 16 February 2010 10:29:59PM 0 points [-]

The human experience of colour is not really about recognizing a specific wavelength of light.

True, but irrelevant to the subject at hand.

the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.

No, the qualia of color have nothing to do with the observed object. This is the pons asinorum of qualia. The experience of color is a product of the invariant surface properties of objects; the qualia of color are a product of the relationship between that experience and other similar experiences.

A human looking at an optical illusion might say, "That looks red, but it's really white," acknowledging that spectral color is objective, but psychophysical color is more malleable. But compare that sentence to "that sounds good, but it's really bad." Statements about color aren't entirely subjective--to some extent they're about fact, not opinion.

Statements about qualia are about the subjective aspect of an experience: e.g., red is the color of rage; of love; the color that means 'stop.'

Comment author: Tiiba 12 January 2010 12:10:03AM -1 points [-]

I'm a foolish layman, but the problem of consciousness seems very easy to me. Probably because I'm a foolish layman.

Qualia are simply holes in our knowledge. The qualium of redness exists because your brain doesn't record the details of the light. If you were built to feel its frequency, or the chemical composition of food and smells, you'd have qualia for those. It's also possible to have qualia for things like "the network card driver crashing, SIAI damn I hate that".

Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine.

(It is my understanding that consciousness, as used here, is the state of having qualia. Correct me if I'm wrong.)

Comment author: Sideways 16 February 2010 07:35:50PM -1 points [-]

Your eyes do detect the frequency of light, your nose does detect the chemical composition of smells, and your tongue does detect the chemical composition of food. That's exactly what the senses of sight, smell, and taste do.

Our brains then interpret the data from our eyes, noses, and tongues as color, scent, and flavor. It's possible to 'decode', e.g., color into a number (the frequency of light), and vice versa; you can find charts on the internet that match frequency/wavelength numbers to color. Decoding taste and scent data into the molecules that produce them is more difficult, but people find ways to do it--that's how artificial flavorings are made.
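The chart lookup described here can be sketched as a simple band table. The band boundaries below are approximate (real charts disagree at the edges); only the 635-700nm red band is taken from this thread.

```python
# Approximate visible-spectrum bands in nanometers. Boundaries are rough;
# the red band matches the 635-700nm range mentioned in this thread.
BANDS = [(380, 450, "violet"), (450, 485, "blue"), (485, 500, "cyan"),
         (500, 565, "green"), (565, 590, "yellow"), (590, 635, "orange"),
         (635, 700, "red")]

def color_name(wavelength_nm):
    """Decode a wavelength into its rough color-word, chart-style."""
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible spectrum"

print(color_name(650))   # red
```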

There are lots of different ways to encode data, and some of them are more useful in some situations, but none of them are strictly privileged. A non-human brain could experience the 'color' of light as a number that just happens to correspond to its frequency in oscillations/second, but that wouldn't prevent it from having qualia, any more than encoding numbers into hexadecimal prevents you from doing addition.

So it's not the 'redness' of light that's a quale; 'red' is just a code word for 'wavelength 635-700 nanometers.' The qualia of redness are the associations, connections, emotional responses that your brain attaches to the plain sensory experience.

Comment author: MrHen 10 February 2010 08:30:57PM 5 points [-]

I don't know how to respond to this or Morendil's second comment. I feel like I am missing something obvious to everyone else but when I read explanations I feel like they are talking about a completely unrelated topic.

Things like this:

You seem to be confused about free will. Keep reading the Sequences and you won't be.

Confuse me because as far as I can tell, this has nothing to do with free will. I don't care about free will. I care about what happens when a perfect predictor enters the room.

Is such a thing just completely impossible? I wouldn't have expected the answer to this to be Yes.

If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you'll hear. For example, if I walk up to someone and say, "I'm good at predicting people in simple problems, I'm truthful, and I predict you'll give me $5," they won't give me anything. Since I know this, I won't make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.

Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component to free will? A bad definition of "perfect predictor"? What?

To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn't have happened, Omega wouldn't predict X.

I don't see how including "knowledge of the prediction" into X makes any difference. I don't see how whatever definition of free will you are using makes any difference.

"Go read the Sequences" is fair enough, but I wouldn't mind a hint as to what I am supposed to be looking for. "Free will" doesn't satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, "You cannot predict past a free will choice?"

As it is right now, I haven't learned anything other than, "You're wrong."

Comment author: Sideways 10 February 2010 09:28:50PM *  0 points [-]

When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.

Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. But in this scenario, Omega doesn't have to be a general perfect predictor; it only has to be a perfect predictor of you.

From Omega's perspective, after running the simulation, your actions are determined. But you don't have access to Omega's simulation, nor could you understand it even if you did. There's no way for you to know what the results of the computations in your brain will be, without actually running them.

If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer's concept of free will.

(1) ETA: On second thought this need not be the case. For example, f(x) = ( (x *10) / 10 ) +1 is accurately modeled by f(x) = x+1. Presumably Omega is a "well-formed" mind without any such rent-shirking spandrels.
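A toy version of that footnote: when the extra structure in a computation is inert, a strictly simpler program predicts it perfectly. (Function names here are mine, purely for illustration.)

```python
def f_complex(x):
    return ((x * 10) / 10) + 1   # takes a pointless detour through *10, /10

def f_simple(x):
    return x + 1                 # the detour stripped out

# The simpler model predicts the more complex one on every tested input.
assert all(f_complex(x) == f_simple(x) for x in range(-1000, 1000))
```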

Comment author: Sideways 08 September 2009 09:36:24PM -4 points [-]

The more I think about this, the more I suspect that the problem lies in the distinction between quantum and logical coin-flips.

Suppose this experiment is carried out with a quantum coin-flip. Then, under many-worlds, both outcomes are realized in different branches. There are 40 future selves--2 red and 18 green in one world, 18 red and 2 green in the other world--and your duty is clear:

(50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20.

Don't take the bet.
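The expected-value arithmetic above can be checked mechanically: one branch has 18 selves at +$1 and 2 at -$3, the other reverses the counts, and each branch gets weight one half.

```python
# 50/50 over the two branches of the quantum coin-flip
ev = 0.5 * (18 * 1 + 2 * -3) + 0.5 * (18 * -3 + 2 * 1)
print(ev)   # -20.0
```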

So why Eliezer's insistence on using a logical coin-flip? Because, I suspect, it prevents many-worlds from being relevant. Logical coin-flips don't create possible worlds the way quantum coin-flips do.

But what is a logical coin-flip, anyway?

Using the example given at the top of this post, an agent that was not only rational but clever would sit down and calculate the 256th binary digit of pi before answering. Picking a more difficult logical coin-flip just makes the calculation more difficult; a more intelligent agent could solve it, even if you can't.
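"Sit down and calculate" really is mechanical here. As a sketch (one of several ways to do it; the BBP formula extracts hex digits directly), pure-integer fixed-point arithmetic with Machin's formula recovers the 256th binary digit of pi:

```python
def arctan_inv(x, prec):
    """atan(1/x) as a fixed-point integer scaled by 2**prec."""
    total = term = (1 << prec) // x
    x2 = x * x
    k, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // k)
        sign, k = -sign, k + 2
    return total

def pi_binary_digit(n, guard=48):
    """The n-th binary digit of pi after the point (1-indexed)."""
    prec = n + guard                     # guard bits absorb truncation error
    # Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239)
    pi_fixed = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    frac = pi_fixed - (3 << prec)        # fractional part of pi, scaled
    return (frac >> (prec - n)) & 1

print(pi_binary_digit(256))   # → 1
```

Integer arithmetic avoids floating-point rounding entirely; the accumulated truncation error from the series stays far below the guard bits, so the extracted digit is exact.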

So there are two different kinds of logical coin-flips: the sort that are indistinguishable from quantum coin-flips even in principle, in which case they ought to cause the same sort of branching events under many-worlds--and the sort that are solvable, but only by someone smarter than you.

If you're not smart enough to solve the logical coin-flip, you may as well treat it as a quantum coin-flip, because it's already been established that you can't possibly do better. That doesn't mean your decision algorithm is flawed; just that if you were more powerful, it would be more powerful too.

Comment author: Sideways 07 September 2009 11:47:05PM 0 points [-]

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hashed out on OB/LW before. But the Boltzmann-brain hypothesis doesn't require more than one of you--just a lot of other people, something the 50%-ers have no philosophical problem with. It's a challenge for a solipsist, not a 50%-er.

Comment author: Sideways 07 September 2009 11:33:32PM *  3 points [-]

[Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard
