Dweomite
Dweomite has not written any posts yet.

If it has an integral gain, it will notice this and try to add more and more heat until it stops being wrong. If it can't, it's going to keep asking for more and more output, and keep expecting that this time it'll get there. And because it lacks the control authority to do it, it will keep being wrong, and maybe damage its heating element by asking for more than it can safely do. Sound familiar yet?
From tone and context, I am guessing that you intend for this to sound like motivated reasoning, even though it doesn't particularly remind me of motivated reasoning. (I am annoyed that you are forcing me to...
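To make the windup dynamic in the quoted passage concrete, here is a minimal Python sketch; the gains, saturation limit, and plant model are illustrative assumptions of mine, not anything from the original exchange:

```python
# Minimal sketch of integral windup: a PI controller whose integral term
# keeps growing while the saturated heater cannot close the error.
# All gains, limits, and the plant model are illustrative assumptions.

def simulate_windup(setpoint=80.0, steps=50):
    temp = 20.0           # current temperature
    integral = 0.0        # accumulated error (the integral state)
    kp, ki = 0.5, 0.1     # illustrative controller gains
    max_power = 5.0       # heater saturates here: limited control authority
    heat_loss = 0.2       # plant loses heat proportional to temperature

    for step in range(steps):
        error = setpoint - temp
        integral += error                    # windup: grows as long as error persists
        demanded = kp * error + ki * integral
        applied = min(demanded, max_power)   # heater can't deliver more than this
        temp += applied - heat_loss * temp
        print(f"step {step:2d}: temp={temp:5.1f} "
              f"demanded={demanded:7.1f} applied={applied:.1f}")

simulate_windup()
```

With these numbers the heater can only hold the room at 25 degrees, so the error never closes: the applied power pins at the limit while the demanded power climbs without bound, which is the "keep asking for more and more output" behavior described above.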
At some point, a temperature control system needs to take actions to control the temperature. Choosing the correct action depends on responding to what the temperature actually is, not what you want it to be, or what you expect it to be after you take the (not-yet-determined) correct action.
If you are picking your action based on predictions, you need to make conditional predictions based on different actions you might take, so that you can pick the action whose conditional prediction is closer to the target. And this means your conditional predictions can't all be "it will be the target temperature", because that wouldn't let you differentiate good actions from bad actions.
It is...
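Here is a minimal Python sketch of the action-selection scheme this comment describes: make a conditional prediction for each candidate action, then pick the action whose predicted outcome is closest to the target. The plant model and the candidate set are illustrative assumptions, not part of the original comment:

```python
# Sketch of choosing an action via conditional predictions:
# predict the outcome of each candidate action, then pick the action
# whose predicted temperature lands closest to the target.

def predict_temperature(current_temp, heater_power, heat_loss=0.2):
    """Conditional prediction: next temperature IF we apply heater_power."""
    return current_temp + heater_power - heat_loss * current_temp

def choose_action(current_temp, target, candidate_powers):
    # The predictions necessarily differ across actions; if the model
    # predicted "target temperature" for every action, it could not
    # rank good actions above bad ones.
    return min(candidate_powers,
               key=lambda p: abs(predict_temperature(current_temp, p) - target))

best = choose_action(current_temp=20.0, target=25.0,
                     candidate_powers=[0.0, 1.0, 3.0, 5.0])
print(f"chosen heater power: {best}")
```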
Your explanation of the short-term planner optimizing against the long-term planner seems to suggest that we should see motivated reasoning only in cases where there is a short-term reward for it.
It seems to me that motivated reasoning also occurs in cases like gamblers thinking their next lottery ticket has positive expected value, or competitors overestimating their chances of winning a competition, where there doesn't appear to be a short-term benefit (unless the belief itself somehow counts as a benefit). Do you posit a different mechanism for these cases?
I've been thinking for a while that motivated reasoning sort of rhymes with reward hacking, and might arise any time you have a generator-part Goodharting an...
... except that you have a natural immunity (well, aversion) to adopting complex generators, and a natural affinity for simple explanations. Or at least I think both of those are true of most people.
It seems pretty important to me to distinguish between "heuristic X is worse than its inverse" and "heuristic X is better than its inverse, but less good than you think it is".
Your top-level comment seemed to me like it was saying that a given simple explanation is less likely to be true than a given complex explanation. Here, you seem to be saying that a simple explanation is more likely to be true, but that people have a preference for simple explanations that is stronger than the actual effect, and so you want to push people back to having a preference that is weaker but still in the original direction.
"Possible" is a subtle word that means different things in different contexts. For example, if I say "it is possible that Angelica attended the concert last Saturday," that (probably) means possible relative to my own knowledge, and is not intended to be a claim about whether or not you possess knowledge that would rule it out.
If someone says "I can(not) imagine it, therefore it's (not) possible", I think that is valid IF they mean "possible relative to my understanding", i.e. "I can(not) think of an obstacle that I don't see any way to overcome".
(Note that "I cannot think of a way of doing it that I believe would work" is a weaker...
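One way to make the "possible relative to my knowledge" reading precise (my formalization, not the commenter's):

$$\mathrm{Possible}_K(P) \;\equiv\; \neg\big(K \vdash \neg P\big)$$

That is, $P$ is possible relative to a knowledge base $K$ exactly when $K$ does not rule $P$ out. Since different speakers carry different $K$, the same "it is possible that..." sentence can be true for one speaker and false for another without contradiction.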
I interpreted the name as meaning "performed free association until the faculty of free association was exhausted". It is, of course, very important that exhausting the faculty does not guarantee that you have exhausted the possibility space.
Alas, unlike in cryptography, it's rarely possible to come up with "clean attacks" that clearly show that a philosophical idea is wrong or broken.
I think the state of philosophy is much worse than that. On my model, most philosophers don't even know what "clean attacks" are, and will not be impressed if you show them one.
Example: Once in a philosophy class I took in college, we learned about a philosophical argument that there are no abstract ideas. We read an essay where it was claimed that if you try to imagine an abstract idea (say, the concept of a dog), and then pay close attention to what you are imagining, you will...
An awful lot of people, probably a majority of the population, sure do feel a deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc.
This is rather tangential to the main thrust of the post, but a couple of people used a react to request a citation for this claim.
One noteworthy source is Aella's surveys on fetish popularity and tabooness. Here is an older one that gives the % of people reporting interest, and here is a newer one showing the...
If you're a moral realist, you can just say "Goodness" instead of "Human Values".
I notice I am confused. If "Goodness is an objective quality that doesn't depend on your feelings/mental state", then why would the things humans actually value necessarily be the same as Goodness?
Sure, give me meta-level feedback.