All of Frank_Hirsch's Comments + Replies

[Cyan wrote:] In reply to Q1, I'd want to introduce new terminology like "implicit understanding" and "explicit understanding" (paralleling the use of that terminology in reference to memory).

You mean like the distinction between competence and performance?

Laura: In a comment marked as general I do not expect to find a sharply asymmetric statement about a barely (if at all) asymmetric issue.

[Laura ABJ:] While I think I have insight into why a lot of men might FAIL with women, that doesn't mean I get THEM...

You are using highly loaded and sexist language. Why is it only the men who fail with the women? Canst thou not share in the failure, because thou art so obviously superior?

Q:
Did Sarah understand Mike? She could articulate important differences, but seemed unable to act accordingly, to accept his actions, to communicate her needs to him, or even to understand why V-Day went sour.
A:
Sarah and Mike seem to be in exactly the same position. Either they learn it or they learn to live with it. Or not.

Q:
Question #2: How far does understanding need to go? Some understanding of differences is helpful, but only when it's followed by acceptance of the differences. That's an attitude rather than an exercise in logic.
A:
This is even stranger than #1. Sorry, does not compute.


Well, I wonder how gender is actually defined, if six have been claimed.
Can you give a line on the model which is used?
My very rough first model allows for (2+n)·(2+m)·2·2·2 combinations; that's at least 32 for the corner cases alone. I say: if it's worth doing, then it's worth doing right.
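Purely as a sanity check of that arithmetic (the factor structure is my own reading of the rough model: two axes with two "corner" values plus n or m intermediate ones each, and three further binary axes):

```python
def combinations(n=0, m=0):
    # Two axes with (2 + intermediates) values each, times three
    # binary axes: (2+n) * (2+m) * 2 * 2 * 2 -- my guess at the model.
    return (2 + n) * (2 + m) * 2 * 2 * 2

# With no intermediate values (n = m = 0), the corner cases alone
# already give 2 * 2 * 2 * 2 * 2 = 32 combinations:
assert combinations() == 32
```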

Frank, Demonstrated instances of illusory free-will don't seem to me to be harder or easier to get rid of than the many other demonstrated illusory cognitive experiences. So I don't see anything exceptional about them in that regard.

HA, I do. It is a concept I suspect we are genetically biased to hold, an outgrowth of the distinction between subject (has a will) and object (has none). Why are we biased to do so? Because, largely, it works very well as a pattern for explanations about the world. We are built to explain the world using stories, and these s... (read more)

HA: How come you think I defend any "non-illusory human capacity to make choices"? I am just wondering why the illusion seems so hard to get rid of. Did I fail so miserably at making my point clear?

If your mind contains the causal model that has "Determinism" as the cause of both the "Past" and the "Future", then you will start saying things like, "But it was determined before the dawn of time that the water would spill - so not dropping the glass would have made no difference".

Nobody could be that screwed up! Not dropping the glass would have been no option. =)

About all that free-will stuff: The whole "free will" hypothesis may be so deeply rooted in our heads because the explanatory framework of i... (read more)

steven: Too much D&D? I prefer chaotic neutral... Hail Eris! All hail Discordia! =)

[Eliezer says:] And if you're planning to play the lottery, don't think you might win this time. A vanishingly small fraction of you wins, every time.

I think this is, strictly speaking, not true. A more extreme example: While I was recently talking with a friend, he asserted that "In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!" I said: "Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future." Although your assertion in the lottery case is much weaker, I don't believe it's strictly true.

The Taxi anecdote is ultra-geeky - I like that! ;-)

Also, once again I accidentally commented on Eliezer's last entry, silly me!

[Unknown wrote:] [...] you should update your opinion [to] a greater probability [...] that the person holds an unreasonable opinion in the matter. But [also to] a greater probability [...] that you are wrong.

In principle, yes. But I see exceptions.

[Unknown wrote:] For example, since Eliezer was surprised to hear of Dennett's opinion, he should assign a greater probability than before to the possibility that human-level AI will not be developed within the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann
... (read more)

Unknown: Well, maybe yeah, but so what? It's just practically impossible to completely re-evaluate every belief you hold whenever someone merely asserts that the belief is wrong. That has nothing at all to do with "overconfidence", but everything to do with sanity. The time to re-evaluate your beliefs is when someone gives a possibly plausible argument about the belief itself, not just an assertion that it is wrong. Like e.g. whenever someone argues anything, and the argument is based on the assumption of a personal god, I dismiss ... (read more)

Nick:
I thought the assumption was that the SI is too stupid to get any ideas about world domination?

Makes me think:
Wouldn't it be rather recommendable if, instead of heading straight for a (risky) AGI, we worked on (safe) SIs and then had them solve the problem of Friendly AGI?

botogol:

Eliezer (and Robin) this series is very interesting and all, but.... aren't you writing this on the wrong blog?

I have the impression Eliezer writes blog entries in much the same way I read Wikipedia: Slowly working from A to B in a grandiose excess of detours... =)

Wow, good teaser for sure! /me is quivering with anticipation ^_^

Caledonian:

One of the very many problems with today's world is that, instead of confronting the root issues that underlie disagreement, people simply split into groups and sustain themselves on intragroup consensus. [...] That is an extraordinarily bad way to overcome bias.

I disagree. What do we have to gain from bringing all-and-everyone in line with our own beliefs? While it is arguably a good thing to exchange our points of view, and how we are rationalising them, there will always be issues where the agreed evidence is just not strong enough to refu... (read more)

Will Pearson [about tiny robots replacing neurons]: "I find this physically implausible."

Um, well, I can see it would be quite hard. But that doesn't really matter for a thought experiment. To ask "What would it be like to ride on a light beam?" is quite as physically implausible as it gets, but seems to have produced a few rather interesting insights.

[Warning: Here be sarcasm] No! Please let's spend more time discussing dubious non-disprovable hypotheses! There's only a gazillion more to go, then we'll have convinced everyone!

Apart from Occam's Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles: Anything inconsequential is indiscernible from anything that does not exist at all, therefore inconsequential equals nonexistent.

Admittedly, zombiism is not really irresistibly falsifiable... but that's only yet another reason to be sceptical about it! There are gazillions of that kind of theory floating around in the observational vacuum. You can pick any one of those, if you want to ind... (read more)

I must say I found this rather convincing (but I might just be confirmation biased). Also, I have a question on the topic: The zombiists assume that the universe U of existing things is split into two exclusive parts, physical things P and epiphenomenal things E. The physical things P presumably develop something like P(t+1)=f(P(t),noise), as we have defined that E does not influence P. But how does E develop? Is it E(t+1)=f(P(t)[,noise]), or is it E(t+1)=f(P(t),E(t)[,noise])? I have somehow always assumed the first, but I do not remember having read it spelled out so unmistakably.
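Purely as an illustration (all update rules here are arbitrary toys of my own invention), the two candidate dynamics can be written down directly; the key structural point is that in neither variant does E ever feed back into P:

```python
import random

def step_P(P):
    # Physical state evolves from physical state (plus noise) only.
    return (3 * P + random.randint(0, 1)) % 100

def step_E_memoryless(P):
    # Variant 1: E(t+1) = f(P(t)) -- epiphenomenal, no own history.
    return P % 7

def step_E_with_history(P, E):
    # Variant 2: E(t+1) = f(P(t), E(t)) -- epiphenomenal, with memory.
    return (P + E) % 7

random.seed(0)
P, E1, E2 = 42, 0, 0
for _ in range(10):
    E1 = step_E_memoryless(P)
    E2 = step_E_with_history(P, E2)
    P = step_P(P)  # P never reads E1 or E2: no downward causation
```

Either way, deleting E from the simulation leaves the trajectory of P exactly unchanged, which is the zombiist's premise.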

Richard: Yes, there is a reality beyond reality! Sure, it's not real in the sense that it is measurable or measurably interacts with our drab scientific reductionist reality, but it's... real! Really! I can feel it! So speak the Searle-addled...

Caledonian: "Since we can't extrapolate our physics that far, we don't know whether they're truly compatible with our understanding of physics or not." For the sake of argument, I'll let that stand (as a conflict of minor importance). Still, why should we go and assume a non-reductionist model? That's multiplying entities beyond necessity.

Caledonian: Sure you do. That's why we have biology and chemistry and neuroscience instead of having only one field: physics.

That's just a matter of efficiency (as I have tried to illuminate). There is nothing about those high level descriptions that is not compatible with physics. They are often more convenient and practical, but they do not add one iota of explanatory power.

PK: I don't see the ++ in your nice example, it's perfectly valid C... =)

Caledonian, Ian C.: I know of no models of reality that have superior explanatory power to the standard reductionist one-level-to-bind-them-all position (apologies for the pun). So why add more? In a certain way "our maps [are] part of reality too", but not in any fundamental sense. To simulate a microchip doing a FFT, it's quite sufficient to simulate the physical processes in its logic gates. You need not even know what the chip is actually supposed to do. You just need... (read more)
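A miniature version of that point (my own sketch, not anything from the original discussion): the simulator below knows only one "physical law", the NAND gate. That the wired-up circuit happens to be a half-adder is a higher-level description we impose afterwards; the gate-level simulation gets along fine without it.

```python
def nand(a, b):
    # The only "physics" the simulator knows.
    return 0 if (a and b) else 1

def circuit(a, b):
    # A fixed wiring of NAND gates; the simulator is ignorant of
    # the fact that this happens to compute a half-adder.
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from NANDs
    c = nand(n1, n1)                    # AND built from NANDs
    return s, c

# Calling the outputs "sum" and "carry" describes, but does not
# add to, what the gates already do:
assert circuit(1, 1) == (0, 1)
assert circuit(1, 0) == (1, 0)
```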

• Sarah is hypnotized and told to take off her shoes when a book drops on the floor. Fifteen minutes later a book drops, and Sarah quietly slips out of her loafers. “Sarah,” asks the hypnotist, “why did you take off your shoes?” “Well . . . my feet are hot and tired,” Sarah replies. “It has been a long day.”
• George has electrodes temporarily implanted in the brain region that controls his head movements. When neurosurgeon José Delgado (1973) stimulates the electrode by remote control, George always turns his head. Unaware of the remote stimulation, he ... (read more)

Frank Hirsch: How do you propose to lend credibility to your central tenet "If you seem to have free will, then you have free will"?

Ian C.: I'm not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory. If you introspect on yourself making a decision, the process is not (as you would expect): consideration (of pros and cons) -> decision -> option selected. It is in fact: consideration -> 'will' yourself to decide -> knowledge of option chose... (read more)

Frank Hirsch: "I don't think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of "free will" that contradicts causality-plus-randomness at the physical level."

Ian C.: More abstract ideas are proven by reference to more fundamental ones, which in turn are proven by direct observation. Seeing ourselves choose is a direct observation (albeit an introspective one). If an abstract theory (such as the whole universe being governed by billiard ball causatio... (read more)

Nominull: I believe Eliezer would rather be called Eliezer...

Ian C.: We observe a lack of predictability at the quantum level. Do quarks have a free will? (Yup, a shameless rip-off of Doug's argument, tee-hee! =) Btw. I don't think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of "free will" that contradicts causality-plus-randomness at the physical level.

RomanDavis:
How about, Eliezer-sensei?

I wish I knew where Reality got its computing power. Hehe, good question that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.

I know a really bad one which nearly turned my stomach: Some newspaper wrote "Survey uncovers that X's have the property Y!" (I forget the details). I read the article and it turned out that, according to some survey, most people believe that X's have the property Y. Argh!

Frank, what does that have to do with the quality of the paper I linked?

James, everything. The paper looks very much like the book in a nutshell plus an actual experiment. What does the paper have to do with "And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem."? I find these 13 and 12 year old papers more exciting. And here is some practical image recognition (although no general captcha) stuff.

James Blair: I've read JH's "On Intelligence" and find him overrated. He happens to be well known, but I have yet to see his results beating other people's results. Pretty theories are fine with me, but ultimately results must count.

Oh, and the Liar Paradox makes much more sense once we overcome our obsession with recursion: If we take the equally valid stance of viewing it as an iteration, it is easy to see that the whole problem is that the proposition does not converge; that's all there is to it.
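To make that iteration view concrete (my own sketch): treat "This sentence is false" as the update L ← not L and watch it oscillate instead of settling on a fixed point.

```python
def iterate_liar(start, steps):
    """Iterate L <- not L and return the visited truth values."""
    values = []
    L = start
    for _ in range(steps):
        L = not L
        values.append(L)
    return values

# The iteration never converges -- it flips forever:
iterate_liar(True, 6)   # [False, True, False, True, False, True]
```

By contrast, "This sentence is true" (L ← L) converges trivially from either starting value, which is why it is merely vacuous rather than paradoxical.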

I think the trouble about "Have you stopped beating your wife?" is that it is not about a state but about a state transition. It asks "10?", and the answer "no" really leaves three possibilities open (including that the questionee has recently started beating his wife). The sentence structure implies a false choice between answers 10 and 11, because we are used to asking (and answering) yes/no questions about 1-bit issues while here we deal with a 2-bit issue. But you probably knew all that... =)
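Spelling out the four states in the comment's bit notation (my own tabulation; first bit = "used to beat", second bit = "beats now"):

```python
# States as (past, present) bit pairs; the question asks "is it 10?"
states = ["00", "01", "10", "11"]

# Answering "no" to "Have you stopped?" (i.e. "not 10") leaves three
# possibilities: never did, recently started, or still does.
no_answers = [s for s in states if s != "10"]
assert no_answers == ["00", "01", "11"]
```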

[having read the comments]

Kriti et al: I'd recommend this and this to anybody who hasn't already read it. Otherwise I have not much idea for introductory texts right now.

[Without having read the comments]

WTF? You say: [...] I was actually advised to post something "fun", but I'd rather not [...]

I think it was fun!

BTW, could we increase the probability of people being honest by basing the reward not on individual choices, but on the log-likelihood over a sample of similar choices? (For a given meaning of similar.)
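A hedged sketch of why that might work (the setup and names are mine): the log-likelihood reward is the logarithmic scoring rule, which is proper, meaning that reporting your true probability maximizes your expected score. So over a sample of similar choices, honesty should win in expectation.

```python
import math

def expected_log_score(true_p, reported_p):
    """Expected log score when the event occurs with probability
    true_p but the agent reports reported_p."""
    return (true_p * math.log(reported_p)
            + (1 - true_p) * math.log(1 - reported_p))

true_p = 0.7
honest = expected_log_score(true_p, 0.7)      # report the truth
shaded = expected_log_score(true_p, 0.9)      # exaggerate
hedged = expected_log_score(true_p, 0.5)      # play it safe
assert honest > shaded and honest > hedged    # honesty maximizes
```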

tcpkac: The important caveat is: 'boundaries around where concentrations of unusually high probability density lie, to the best of our knowledge and belief'. All the imperfections in categorisation in existing languages come from that limitation.

This strikes me as a rather bold statement, but "to the best of our knowledge and belief" might be fuzzy enough to make it true. Some specific factors that distort our language (and consequently our thinking) might be:

  • Probability shifts in thingspace invalidating previously useful clusterings. Natural
... (read more)

Okay, now let's code those factory objects!
• 1 bit for blue not red
• 1 bit for egg not cube
• 1 bit for furred not smooth
• 1 bit for flexible not hard
• 1 bit for opaque not translucent
• 1 bit for glows not dark
• 1 bit for vanadium not palladium

Nearly all objects we encounter code either 1111111 or 0000000. So we compress all objects into two categories and define: 1 bit for blegg (1111111) not rube (0000000). But, alas, the compression is not lossless, because there are objects which are neither perfect bleggs nor rubes: A 1111110 object will be innocently accused... (read more)
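A toy version of that lossy compression (entirely my own illustration): classify each 7-bit object by majority vote, and note what gets lost for a near-blegg like 1111110.

```python
def compress(obj):
    """Lossy 7-bit -> 1-bit compression: majority vote of the bits."""
    ones = obj.count("1")
    return "blegg" if ones >= 4 else "rube"

assert compress("1111111") == "blegg"
assert compress("0000000") == "rube"
# The near-blegg is filed as a blegg, and its one deviant bit
# (say, palladium instead of vanadium) is silently thrown away:
assert compress("1111110") == "blegg"
```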

atorm:
I love the rewards.

Just a small one, because I can't hold it in: You can't judge the usefulness of a definition without specifying what you want it to be useful for. And now I'm off to bed... =)

Hi, I'm back from the city, and a bit sleepy. I'll try my best with my comment. =) Michael: I was not so much commenting on this specific post as on the whole series. Your example seems to me to boil down to a case of bait-and-switch. Eliezer: "When people start violently arguing over their communication signals while they (a) understand what each other are trying to say" Here the problem is already at full swing, and it's the same as philosophers arguing about the "real" definition of X. As soon as you have managed to get your point across, any... (read more)

Ben: I think you're right, we are on the same page! =) How about "Useful definitions will still be distorted by our mental mechanisms. Malignant and careless definitions are bad no matter what."?

Rolf: "What do you think of, say, philosophers' endless arguments of what the word 'knowledge' really means?" I think meh! "This seems to me one example where many philosophers don't seem to understand that the word doesn't have any intrinsic meaning apart from how people define it." Well, if they like to do so, let 'em. At least they're off the streets. =) What's worse is the kind of philosophers who flourish by sidestepping honest debate by complicating matters until nobody (including themselves) can possibly tell a left hand from a right f... (read more)

Eliezer, I must admit I really don't get your problem with definitions. Or, more precisely, I can't get myself to share it. It seems to me you attack definitions mainly because they enable malignant (and/or confused) arguers to do a bait-and-switch. Without defining what is being talked about, there is no obvious switching anymore, so that seems to be your solution. But to me that is like leaving an important variable unbound, which makes the whole argument underdefined and therefore practically worthless. IMHO it is precisely because two people have a com... (read more)

Eisegetes:
"Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their society.

From a practical POV, without any ambitions to look under the hood, we can just draw this "ordinary language defense line", as I'd call it. Where it gets interesting from an Evolutionary Psychology POV is exactly those "inner shame/disgust/disapproval registers". Th... (read more)

ZMD:
C'mon gimme a break, I said it's not satisfying!
I get your point, but I dare you to come up with a meaningful but unassailable one-line definition of morality yourself!
BTW birth control certainly IS moral, and overeating is just overdoing a beneficial adaptation (i.e. eating).

orthonormal:
If that's what you see as the goal, then you didn't get his point. (Context, since the parent came before the OB-LW jump: Frank asserted that "A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes", and Z.M. Davis pointed out the flaws in that statement.)

Eisegetes:
Well I (or you?) really maneuvered me into a tight spot here.
About those options, you made a good point.
To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
I don't know of any satisfying definition of morality. It probably must involve actions that are tailored for neither personal nor inclusive fitness. I suppose the best I can come up with is "A moral action is one which you choose (== that makes you feel good) with... (read more)

Eisegetes (please excuse the delay):

That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. [...] Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions -- in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember -- we are dealing here not with all possible brain states, but with all possible states of the porti... (read more)
