Sure, give me meta-level feedback.
If it has an integral gain, it will notice this and try to add more and more heat until it stops being wrong. If it can't, it's going to keep asking for more and more output, and keep expecting that this time it'll get there. And because it lacks the control authority to do it, it will keep being wrong, and maybe damage its heating element by asking for more than it can safely deliver. Sound familiar yet?
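(For anyone who hasn't run into this failure mode before, here's a minimal toy sketch of the dynamic I mean; the gains, the heat-loss term, and the power limit are all made up purely for illustration:)

```python
# Toy illustration of integral windup: a controller with an integral term
# keeps accumulating "demand" when it lacks the authority to close the gap.

def simulate(setpoint, ambient, max_power, ki=0.5, loss=0.3, steps=20):
    temp = ambient
    integral = 0.0  # accumulated error: "how wrong I've been, for how long"
    for t in range(steps):
        error = setpoint - temp
        integral += error                      # grows as long as we fall short
        requested = ki * integral              # what the controller asks for
        delivered = min(requested, max_power)  # what the heater can actually do
        temp += delivered - loss * (temp - ambient)  # crude heat balance
        print(f"t={t:2d}  temp={temp:6.2f}  requested={requested:7.1f}  delivered={delivered:5.1f}")

# The setpoint is unreachable with this heater, so `requested` climbs without
# bound while `delivered` stays pinned at the cap -- the controller keeps
# expecting that asking harder will work this time.
simulate(setpoint=80.0, ambient=20.0, max_power=10.0)
```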
From tone and context, I am guessing that you intend for this to sound like motivated reasoning, even though it doesn't particularly remind me of motivated reaso...
At some point, a temperature control system needs to take actions to control the temperature. Choosing the correct action depends on responding to what the temperature actually is, not what you want it to be, or what you expect it to be after you take the (not-yet-determined) correct action.
If you are picking your action based on predictions, you need to make conditional predictions based on different actions you might take, so that you can pick the action whose conditional prediction is closer to the target. And this means your conditional predictions can...
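Here's a minimal sketch of the structure I'm describing; the prediction model and the candidate actions are stand-ins I invented for illustration, not a claim about how any real controller works:

```python
# Sketch: make a prediction conditional on each candidate action,
# then pick the action whose predicted outcome lands closest to the target.

def predict(temp, action, loss=0.3, ambient=20.0):
    """Conditional prediction: what we expect the temperature to be IF we take `action`."""
    return temp + action - loss * (temp - ambient)

def choose_action(temp, target, candidate_actions):
    # The prediction is allowed to depend on the action; that's the whole point.
    return min(candidate_actions, key=lambda a: abs(predict(temp, a) - target))

best = choose_action(temp=25.0, target=30.0, candidate_actions=[0.0, 2.0, 5.0, 7.0])
print(best)  # picks whichever heat input is predicted to land nearest the target
```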
Your explanation about the short-term planner optimizing against the long-term planner seems to suggest we should only see motivated reasoning in cases where there is a short-term reward for it.
It seems to me that motivated reasoning also occurs in cases like gamblers thinking their next lottery ticket has positive expected value, or competitors overestimating their chances of winning a competition, where there doesn't appear to be a short-term benefit (unless the belief itself somehow counts as a benefit). Do you posit a different mechanism for these case...
... except that you have a natural immunity (well, aversion) to adopting complex generators, and a natural affinity for simple explanations. Or at least I think both of those are true of most people.
It seems pretty important to me to distinguish between "heuristic X is worse than its inverse" and "heuristic X is better than its inverse, but less good than you think it is".
Your top-level comment seemed to me like it was saying that a given simple explanation is less likely to be true than a given complex explanation. Here, you seem to me like you're saying ...
"Possible" is a subtle word that means different things in different contexts. For example, if I say "it is possible that Angelica attended the concert last Saturday," that (probably) means possible relative to my own knowledge, and is not intended to be a claim about whether or not you possess knowledge that would rule it out.
If someone says "I can(not) imagine it, therefore it's (not) possible", I think that is valid IF they mean "possible relative to my understanding", i.e. "I can(not) think of an obstacle that I don't see any way to overcome".
(Note tha...
I interpreted the name as meaning "performed free association until the faculty of free association was exhausted". It is, of course, very important that exhausting the faculty does not guarantee that you have exhausted the possibility space.
Alas, unlike in cryptography, it's rarely possible to come up with "clean attacks" that clearly show that a philosophical idea is wrong or broken.
I think the state of philosophy is much worse than that. On my model, most philosophers don't even know what "clean attacks" are, and will not be impressed if you show them one.
Example: Once in a philosophy class I took in college, we learned about a philosophical argument that there are no abstract ideas. We read an essay where it was claimed that if you try to imagine an abstract idea (say, the concept of a dog...
An awful lot of people, probably a majority of the population, sure do feel deep yearning to either inflict or receive pain, to take total control over another or give total control to another, to take or be taken by force, to abandon propriety and just be a total slut, to give or receive humiliation, etc.
This is rather tangential to the main thrust of the post, but a couple of people used a react to request a citation for this claim.
One noteworthy source is Aella's surveys on fetish popularity and tabooness. Here is an older one that gives the % of people...
If you're a moral realist, you can just say "Goodness" instead of "Human Values".
I notice I am confused. If "Goodness is an objective quality that doesn't depend on your feelings/mental state", then why would the things humans actually value necessarily be the same as Goodness?
What would you want such a disclaimer or hint to look like?
(I am concerned that if a post says something like "this post is aimed at low-level people who don't yet have a coherent foundational understanding of goodness and values" then the set of people who actually continue reading will not be very well correlated with the set of people we'd like to have continue reading.)
A smart human-like mind looking at all these pictures would (I claim) assemble them all into one big map of the world, like the original, either physically or mentally.
On my model, humans are pretty inconsistent about doing this.
I think humans tend to build up many separate domains of knowledge and then rarely compare them, and even believe opposite heuristics by selectively remembering whichever one agrees with their current conclusion.
For example, I once had a conversation about a video game where someone said you should build X "as soon as possible", an...
I don't think Eliezer's actual real-life predictions are narrow in anything like the way Klurl's coincidentally-correct examples were narrow.
Also, Klurl acknowledges several times that Trapaucius' arguments do have non-zero weight, just nothing close to the weight they'd need to overcome the baseline improbability of such a narrow target.
Thank you for being more explicit.
If you write a story where a person prays and then wins the lottery as part of a demonstration of the efficacy of prayer, that is fictional evidence even though prayer and winning lotteries are both real things.
In your example, it seems to me that the cheat is specifically that the story presents an outcome that would (legitimately!) be evidence of its intended conclusion IF that outcome were representative of reality, but in fact most real-life outcomes would have supported the conclusion much less than that. (i.e. there ...
I notice I am confused about nearly everything you just said, so I imagine we must be talking past each other.
On the contrary: This is perhaps the only way the story could avoid generalizing from fictional evidence. Your complaint about Klurl's examples is that they are "coincidentally" drawn from the special class of examples that we already know are actually real, which makes them not fictional. Any examples that weren't special in this way would be fictional evidence, and readers could object that we're not sure if those examples are actually possible.
If you think that the way the story played out was misleading, that seems like a disagreement about reality, n...
I would agree that, while reality-in-general has a surprising amount of detail, some systems still have substantially more detail than others, and this model applies more strongly to systems with more detail. I think of computer-based systems as being in a relatively-high-detail class.
I also think there are things you can choose to do when building a system to make it more durable, and so another way that systems vary is in how much up-front cost the creator paid to insulate the system against entropy. I think furniture has traditionally fallen into a high-durability category, as an item that consumers expect to be very long-lived...although I think modernity has eroded this tradition somewhat.
I have a tentative model for this category of phenomenon that goes something like:
On my reading, most of Klurl's arguments are just saying that Trapaucius is overconfident. Klurl gives many specific examples of ways things could be different than Trapaucius expects, but Klurl is not predicting that those particular examples actually will be true, just that Trapaucius shouldn't be ruling them out.
..."I don't recall you setting an exact prediction for fleshling achievements before our arrival," retorted Trapaucius.
"So I did not," said Klurl, "but I argued for the possibility not being ruled out, and you ruled it out. It is sometimes po
I think you do a good job of arguing (in the earlier part of the article) that it is logically possible to drop the independence axiom without being money-pumped, by giving up consequentialism but keeping dynamic consistency. However, I think you do a poor job of arguing (in the later parts) that we should give up consequentialism.
You examine 3 in-depth examples to try to show that we'd be fine if we dropped independence: ergodicity economics, the Allais Paradox, and the Ellsberg Paradox. In all 3 cases, I think your argument is missing a critical ...
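(For reference, and using the standard textbook Allais payoffs rather than anything from the post: the usual Allais pattern of preferences is inconsistent with every expected-utility function, because the two choice pairs differ by exactly the same expected-utility margin no matter what utility function you pick. A minimal sketch:)

```python
# The standard Allais lotteries, written as (probability, payoff) lists.
A1 = [(1.00, 1_000_000)]                                    # sure $1M
B1 = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
A2 = [(0.11, 1_000_000), (0.89, 0)]
B2 = [(0.10, 5_000_000), (0.90, 0)]

def eu(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# For ANY utility function u, EU(A1) - EU(B1) equals EU(A2) - EU(B2),
# so preferring A1 over B1 but B2 over A2 can't come from maximizing expected utility.
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: (x + 1) ** 0.1):
    print(eu(A1, u) - eu(B1, u), eu(A2, u) - eu(B2, u))
```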