Sniffnoy

I'm Harry Altman. I do strange sorts of math.

Posts I'd recommend:


Comments


We could also point to sleepwalkers of various sorts: even when executing complex actions (like murdering someone), I've never seen any accounts which mention deeply felt emotions. (WP emphasizes their dullness and apathetic affect.)

Nitpick: Sleepwalking proper apparently happens during non-REM sleep; acting out a dream during REM sleep is different and has its own name. Although it seems like sleepwalkers may also be dreaming somehow even though they aren't in REM sleep? I don't know -- this is definitely not my area -- and arguably none of this is relevant to the original point; but I thought I should point it out.

Ha! OK, that is indeed nasty. Yeah, I guess CASes can solve this kind of problem these days, can't they? Well -- I say "these days" as if this hasn't been the case for, like, my entire life; I've just never gotten used to making routine use of them...

One annoying thing in reading Chapter 3: it states that for l=2,4,8, the optimal scoring rules can be written in terms of elementary functions. However, you only actually give the full formula for the case l=8 (for l=2 you give it only on half the interval). What are the formulas for the other cases?

(But also, this is really cool, thanks for posting this!)

I think some cases of what you're describing as derivation-time penalties may really be can-you-derive-that-at-all penalties. E.g., with MWI and no Born rule assumed, it doesn't seem that there is any way to derive it. I would still expect a "correct" interpretation of QM to be essentially MWI-like, but I still think it's correct to penalize MWI-w/o-Born-assumption -- not for the complexity of deriving the Born rule, but for the fact that deriving it doesn't seem to be possible at all. Similarly with attempts to eliminate time, or its distinction from space, from physics; it seems like it simply shouldn't be possible in such a case to get something like Lorentz invariance.

Why do babies need so much sleep then?

Given that at the moment we don't really understand why people need to sleep at all, I don't think this is a strong argument for any particular claimed function.

Oh, that's a good citation, thanks. I've used that rough argument in the past, knowing I'd copied it from someone, but I had no recollection of what specifically or that it had been made more formal. Now I know!

My comment above was largely just intended as "how come nobody listens when I say it?" grumbling. :P

I should note that this is more or less the same thing that Alex Mennen and I have been pointing out for quite some time, even if the exact framework is a little different. You can't both have unbounded utilities and insist that expected utility works for infinite gambles.

IMO the correct thing to abandon is unbounded utilities, but whatever assumption you choose to abandon, the basic argument is an old one due to Fisher, and I've discussed it in previous posts! (Even if the framework is a little different here, this seems essentially similar.)
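For concreteness, here's a minimal sketch of the standard St. Petersburg-style version of that incompatibility (my own gloss, in my own notation -- not necessarily the exact argument of the post, of Fisher, or of our earlier writeups). Suppose u is unbounded above, so we can pick outcomes x_1, x_2, ... with u(x_n) >= 2^n, and let G be the gamble paying x_n with probability 2^{-n}. Then

$$\mathbb{E}[u(G)] = \sum_{n=1}^{\infty} 2^{-n}\, u(x_n) \ge \sum_{n=1}^{\infty} 1 = \infty.$$

Now let G' pay a strictly worse outcome y_n in place of each x_n, say with u(y_n) = u(x_n) - 1. Statewise dominance demands that G' be strictly worse than G, yet

$$\mathbb{E}[u(G')] = \sum_{n=1}^{\infty} 2^{-n}\, \bigl(u(x_n) - 1\bigr) = \infty$$

as well, so expected utility assigns both gambles the same infinite value and can't recover the strict preference. With bounded utilities every such sum is finite and the problem disappears.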

I'm glad to see other people are finally taking the issue seriously, at least...

Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.

This seems exactly backwards; if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but it is what I'm concluding from what you wrote. Regardless, I'm saying people are biased towards this mistake.

Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.

Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased; or the errors are part of a built-in strategy they're executing not deliberately, but by default, without thinking about it -- one that requires effort not to execute.

We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?

I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug>

I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say, moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences and calling yourself a "rationalist" does not in any way obviate the need to do the actual work of better group epistemology -- of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a "rationalist" is supposed to do, it's still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.
