
In response to Timeless Control
Comment author: Frank_Hirsch 09 June 2008 09:19:48AM 0 points [-]

Frank, Demonstrated instances of illusory free-will don't seem to me to be harder or easier to get rid of than the many other demonstrated illusory cognitive experiences. So I don't see anything exceptional about them in that regard.

HA, I do. It is a concept I suspect we are genetically biased to hold, an outgrowth of the distinction between subject (has a will) and object (has none). Why are we biased to do so? Because, largely, it works very well as a pattern for explanations about the world. We are built to explain the world using stories, and these stories need actors. Even when you are convinced that choice does not exist, you'll still be bound to make use of that concept, if only for practical reasons. The best you can do is try to separate the "free" from the "choice" in an attempt to avoid the flawed connotation. But we have trouble conceptualising choice if it's not free; because then, how could it be a choice? All that said, I seem to remember someone saying something like: "Having established that there is no such thing as a free will, the practical thing to do is to go on and pretend there was."

In response to Timeless Control
Comment author: Frank_Hirsch 08 June 2008 09:34:24PM 0 points [-]

HA: How come you think I defend any "non-illusory human capacity to make choices"? I am just wondering why the illusion seems so hard to get rid of. Did I fail so miserably at making my point clear?

In response to Timeless Control
Comment author: Frank_Hirsch 07 June 2008 02:45:14PM 0 points [-]

If your mind contains the causal model that has "Determinism" as the cause of both the "Past" and the "Future", then you will start saying things like, "But it was determined before the dawn of time that the water would spill - so not dropping the glass would have made no difference".

Nobody could be that screwed up! Not dropping the glass wouldn't have been an option. =)

About all that free-will stuff: The whole "free will" hypothesis may be so deeply rooted in our heads because the explanatory framework of identifying agents with beliefs about the world, objectives, and the "will" to change the world according to these beliefs and objectives just works so remarkably well. Much like Newton's theory of gravity: In terms of the ratio of predictive_accuracy_in_standard_situations to operational_complexity, Newton's gravity kicks donkey. So does the Free Will (TM). But that don't mean it's true.

Comment author: Frank_Hirsch 05 June 2008 11:39:51AM -2 points [-]

steven: Too much D&D? I prefer chaotic neutral... Hail Eris! All hail Discordia! =)

In response to Timeless Identity
Comment author: Frank_Hirsch 03 June 2008 09:32:58AM 2 points [-]

[Eliezer says:] And if you're planning to play the lottery, don't think you might win this time. A vanishingly small fraction of you wins, every time.

I think this is, strictly speaking, not true. A more extreme example: While I was recently talking with a friend, he asserted that "In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!". I said: "Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future.". Although your assertion about the lottery is much weaker, I don't believe it's strictly true either.

Comment author: Frank_Hirsch 02 June 2008 10:13:26PM 0 points [-]

The Taxi anecdote is ultra-geeky - I like that! ;-)

Also, once again I accidentally commented on Eliezer's last entry, silly me!

Comment author: Frank_Hirsch 02 June 2008 09:35:53PM 0 points [-]

[Unknown wrote:] [...] you should update your opinion [to] a greater probability [...] that the person holds an unreasonable opinion in the matter. But [also to] a greater probability [...] that you are wrong.

In principle, yes. But I see exceptions.

[Unknown wrote:] For example, since Eliezer was surprised to hear of Dennett's opinion, he should assign a greater probability than before to the possibility that human level AI will not be developed within the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann's religion, he should assign a greater probability to the Jewish religion, even if only to a slight degree.

Well, admittedly, the Dennett quote depresses me a bit. If I were in Eliezer's shoes, I'd probably also choose to defend my stance - you can't dedicate your life to something with just half a heart!

About Aumann's religion: That's one of the cases where I refuse to adapt my assigned probability one iota. His belief about religion is the result of his prior alone. So is mine, but it is my considered opinion that my prior is better! =)

Also, if I may digress a bit, I am sceptical about Robin's hypothesis that humans in general update too little from other people's beliefs. My first intuition was that the opposite is the case (because of premature convergence and resistance to paradigm shifts). After having second thoughts, I believe the amount is probably just about right. Why? 1) Taking other people's beliefs as evidence is an evolved trait, and so the approximate amount is probably evolved too. 2) Evolution is smarter than I (and Robin, I presume).
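The kind of updating Unknown describes can be made concrete with a small Bayes'-rule sketch. All numbers here are illustrative assumptions of mine, not anyone's actual probabilities:

```python
# Hedged sketch: Bayes' rule for revising a belief after learning that a
# respected thinker disagrees with you. The likelihoods are made-up
# placeholders chosen only to show the direction of the update.

def update(prior, p_disagree_if_true, p_disagree_if_false):
    """Return P(H | disagreement) given P(H) and the two likelihoods."""
    numerator = p_disagree_if_true * prior
    denominator = numerator + p_disagree_if_false * (1 - prior)
    return numerator / denominator

# H = "human-level AI will be developed within the foreseeable future".
prior = 0.8
# Assume a thoughtful expert is more likely to voice disagreement
# when H is false (0.6) than when H is true (0.2).
posterior = update(prior, p_disagree_if_true=0.2, p_disagree_if_false=0.6)
# posterior ≈ 0.571: the disagreement drags the belief down, but
# doesn't flip it - which is all Unknown's point requires.
```

If the likelihoods are equal (the person disagrees just as often either way), the posterior equals the prior, which matches the intuition that an uninformative disagreement should move you not at all.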

Comment author: Frank_Hirsch 02 June 2008 06:03:17AM 3 points [-]

Unknown: Well, maybe yeah, but so what? It's just practically impossible to completely re-evaluate every belief you hold whenever someone says something that asserts the belief to be wrong. That has nothing at all to do with "overconfidence", but everything to do with sanity. The time to re-evaluate your beliefs is when someone gives a possibly plausible argument about the belief itself, not just an assertion that it is wrong. E.g. whenever someone argues anything, and the argument is based on the assumption of a personal god, I dismiss it out of hand without thinking twice - sometimes I do not even take the time to hear them out! Why should I, when I know it's gonna be a waste of time? Overconfidence? No, sanity!

Comment author: Frank_Hirsch 01 June 2008 10:38:00PM 0 points [-]

I thought the assumption was that the SI is too stupid to get any ideas about world domination?

Comment author: Frank_Hirsch 01 June 2008 09:48:00PM 0 points [-]

Makes me think:
Wouldn't it be rather recommendable if, instead of heading straight for a (risky) AGI, we worked on (safe) SIs and then had them solve the problem of Friendly AGI?
