Comment author: MrHen 10 February 2010 08:30:57PM 5 points [-]

I don't know how to respond to this or Morendil's second comment. I feel like I am missing something obvious to everyone else, but when I read explanations, I feel like they are talking about a completely unrelated topic.

Things like this:

You seem to be confused about free will. Keep reading the Sequences and you won't be.

Confuse me because as far as I can tell, this has nothing to do with free will. I don't care about free will. I care about what happens when a perfect predictor enters the room.

Is such a thing just completely impossible? I wouldn't have expected the answer to this to be Yes.

If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you'll hear. For example, if I walk up to someone and say, "I'm good at predicting people in simple problems, I'm truthful, and I predict you'll give me $5," they won't give me anything. Since I know this, I won't make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.
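One way to read that argument: the announced prediction has to be a fixed point of "announce it, then watch how the agent reacts." A minimal toy sketch in Python (every name here is invented for illustration, nothing from the thread):

    def agent_reaction(announced_prediction):
        # Toy agent from the $5 example: it won't hand over the money
        # merely because the predictor said it would.
        return "keeps the $5"

    def predictor_announce(candidates):
        # Announce a prediction only if it stays true once the agent hears it.
        for prediction in candidates:
            if agent_reaction(prediction) == prediction:
                return prediction
        return None  # no self-consistent announcement exists for this agent

    print(predictor_announce(["gives the $5", "keeps the $5"]))  # -> keeps the $5

Since "gives the $5" fails the check, it never gets announced; only the self-fulfilling prediction survives.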

Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component to free will? A bad definition of "perfect predictor"? What?

To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn't have happened, Omega wouldn't predict X.

I don't see how including "knowledge of the prediction" into X makes any difference. I don't see how whatever definition of free will you are using makes any difference.

"Go read the Sequences" is fair enough, but I wouldn't mind a hint as to what I am supposed to be looking for. "Free will" doesn't satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, "You cannot predict past a free will choice?"

As it is right now, I haven't learned anything other than, "You're wrong."

Comment author: Sideways 10 February 2010 09:28:50PM *  0 points [-]

When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.

Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. But in this scenario, Omega doesn't have to be a general perfect predictor; it only has to be a perfect predictor of you.

From Omega's perspective, after running the simulation, your actions are determined. But you don't have access to Omega's simulation, nor could you understand it even if you did. There's no way for you to know what the results of the computations in your brain will be, without actually running them.

If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer's concept of free will.

(1) ETA: On second thought this need not be the case. For example, f(x) = ((x * 10) / 10) + 1 is accurately modeled by f(x) = x + 1. Presumably Omega is a "well-formed" mind without any such rent-shirking spandrels.
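As a concrete illustration of that ETA (a sketch only, with invented function names): a strictly cheaper model can reproduce a computation's output exactly without redoing its internal steps.

    def wasteful(x):
        # Does redundant work: multiply by 10, divide by 10, then add 1.
        return ((x * 10) / 10) + 1

    def cheap_model(x):
        # Algebraically equivalent: one operation instead of three.
        return x + 1

    # The model "predicts" the wasteful computation perfectly while using
    # strictly fewer operations.
    assert all(wasteful(x) == cheap_model(x) for x in range(1000))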

Comment author: Sideways 07 September 2009 11:47:05PM 0 points [-]

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hashed out on OB/LW before. But the Boltzmann-brain hypothesis doesn't require more than one of you--just a lot of other people, something the 50%-ers have no philosophical problem with. It's a challenge for a solipsist, not a 50%-er.

Comment author: Sideways 07 September 2009 11:33:32PM *  3 points [-]

[Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard

Comment author: Psychohistorian 07 August 2009 08:59:06PM 4 points [-]

If you are responding to a hypothetical that tests a mathematical model, and your response doesn't use math, and doesn't hinge on a consciousness, infinity, or impossibility from the original problem domain, your response is likely irrelevant.

"The model used in this hypothetical does not meaningfully correspond to reality" seems relevant and not to fall under those categories, though it may count as impossibility. A lot of objections to hypotheticals, from what I've seen, stem from this conceptual problem but people rarely come out and say this bluntly.

Comment author: Sideways 08 August 2009 12:58:29AM -1 points [-]

IAWY and this also applies to hypotheticals testing non-mathematical models. For instance, there isn't much isomorphism between Newcomblike problems involving perfectly honest game players who can predict your every move, and any gamelike interaction you're ever likely to have.

Comment author: thomblake 21 July 2009 06:27:30PM *  1 point [-]

You need to do some formatting on that link. Looks like your (] got switched around.

Comment author: Sideways 21 July 2009 06:30:01PM 0 points [-]

Thanks for the heads-up. Fixed.

Comment author: Sideways 21 July 2009 05:50:02PM *  13 points [-]

I may be in the minority in this respect, but I like it when Less Wrong is in crisis. The LW community is sophisticated enough to (mostly) avoid affective spirals, which means it produces more and better thought in response to a crisis. I believe that, e.g., the practice of going to the profile of a user you don't like and downvoting every comment, regardless of content, undermines Less Wrong more than any crisis has or will.

Furthermore, I think the crisis paradigm is what a community of developing rationalists ought to look like. The conceit of students passively absorbing wisdom at the feet of an enlightened teacher is far from the mark. How many people can you think of, who mastered any subject by learning in this way?

That said... both "sides" of the gender crisis are repeating themselves, which strongly suggests they have nothing new to say. So I say Eliezer is right. If you can't understand the other side's perspective by now--if you still have no basis for agreement after all this discussion--you need to acknowledge that you have a blind spot here and either re-read with the intent to understand rather than refute, or just avoid talking about it.

Comment author: spuckblase 14 July 2009 07:51:05AM -2 points [-]

Thanks but no thanks. I do know this really really basic stuff - I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.

Comment author: Sideways 14 July 2009 09:00:24AM 3 points [-]

Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible.

Just because what you believe happens to be true, doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.

Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hume-world, any more than we can know the "true nature" of our world. Their Hume-world theory, like yours, cannot be based on reading reality's source code; the only way to justify Hume-world theory is by demonstrating that it makes accurate predictions.

Arguably, it does make at least one prediction: that any causal model of reality will eventually break down. This prediction, to put it mildly, does not hold up well to our investigation of our universe.

Alternatively, you could assert that if all possibilities are randomly realized, we might (with infinitesimal probability) be living in a world that just happened to exactly resemble a causal world. But without evidence to support such a belief, you would not be right to believe it, even if it turns out to be true. Not to mention that, as others have mentioned in this thread, unfalsifiable theories are a waste of valuable mental real estate.

In response to comment by Sideways on The enemy within
Comment author: Roko 05 July 2009 04:16:27PM 2 points [-]

Humans, like all known life on earth, are adaptation executers.

Well, being a consequentialist is a particular adaptation you can execute. "Consequentialist" is a subset of "Adaptation Executer".

Humans certainly come much closer to pure consequentialism - of explicitly representing a goal and calculating optimal actions based upon the environment you observe to achieve that goal - than any other creature does.

In response to comment by Roko on The enemy within
Comment author: Sideways 05 July 2009 06:47:25PM 0 points [-]

I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"excecut[ing] certain adaptions without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved in parallel.

I'm not sure I agree that humans are closer to pure consequentialism than animals; if anything, the imperfect match between prediction and decision faculties makes us less consequentialist. Eating or not eating one strip of bacon won't have an appreciable impact on your social status! Rather, I would say that future-prediction allows us to have more complicated and (to us) interesting goals, and to form more complicated action paths.

In response to The enemy within
Comment author: Sideways 05 July 2009 03:47:32PM 1 point [-]

All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...

Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the brain in the same way as immediate sense input.

For example, consider the act of eating bacon. A human can extrapolate from the bacon to a pattern of bacon-eating to a future of obesity, health risks, and reduced social status (including greater difficulty finding a mate). This explains why humans can dither over whether to eat bacon, while a dog just scarfs it down--dogs can't predict the future that way. (The frontal lobes also distinguish between bad/good/better/best actions--hence the vegetarian's decision to abstain from bacon on moral grounds.)

Eliezer's body of writing on evolutionary psychology and P.J. Eby's writing on PCT and personal effectiveness seem to be regarded as incompatible by some commenters here (and I don't want to hijack this thread into yet another PCT debate), but they both support the proposition that akrasia and other "sub-optimal" mental states result from a brain processing future-predictions with systems that evolved to handle data from proximate environmental inputs and memory.

Comment author: RobinHanson 04 July 2009 07:50:24PM 3 points [-]

But is it true? Do young folks have more of an ability to unlearn falsehoods than old folks?

Comment author: Sideways 04 July 2009 08:13:18PM 1 point [-]

I think the point of the quote is not that young folks are more able to unlearn falsehoods; it's that they haven't learned as many falsehoods as old people, just by virtue of not having been around as long. If you can unlearn falsehoods, you can keep a "young" (falsehood-free) mind.
