It seems to me - and I'm a depressive - that even if depressed people really do have more accurate self-assessment, your third option is still the most likely.

One recurrent theme on this site is that humans are prone to indulge cognitive biases which _make them happy_. We try to avoid the immediate hedonic penalty of admitting errors, foreseeing mistakes, and so on. We judge by the availability heuristic, not by probability, when we imagine a happy result like winning the lottery.

When I'm in a depressed state, I literally _can't_ imagine a happy result. I imagine that all my plans will fail and that striving will be useless.

This is still not a rational state of mind. It's not _inherently_ more accurate. But it's a state of mind that's inherently more resistant to certain specific errors - such as over-optimistic probability assessment or the planning fallacy.

These errors of optimism are common, especially in self-assessment - which might well be why depressed people make more accurate self-assessments: humans as a whole have a cognitive bias toward personal overconfidence.


But it's also inherently more resistant to optimistic conclusions, _even when they're backed by the evidence_.

(It's more rational to be accurate and sad than delusional and happy - because happiness based on delusion frequently crashes into real-world disasters, whereas if you're accurate and sad you can _use_ the accuracy to reduce the things you're sad about.)

Exactly. Note that the writers intended to give them the extra feature "behaves logically", and failed completely. They managed "behaves like a human, then complains that it's not logical", which is very far from being the same thing.

Only until you build a self-modifying super-intelligent bull. Because the first thing it will do is become smart enough to persuade you to give the ring to someone else, whom it has calculated it can con into taking the ring off.

Human minds are really badly adapted for defence against con artists operating on the same level; how on earth would we defend ourselves against an exponentially smarter one?

As I was taught, that's also a little unfair, or at least oversimplified. That everyone confesses to everything is not just primitive anonymisation; it's a declaration of communal responsibility. It's supposed to be deliberate encouragement to take responsibility for the actions of your community as a whole, not just your own.

Zendo is my go-to exercise for explaining just about any idea in inductive investigation. (But it's even more useful as a tool for reminding myself to do better. After years, the number of Zendo games I lose due to positive bias is still far higher than I'd like... even when I think I've taken steps to avoid that.)

Yes, they do. If the world is controlled by an intelligent entity, then statistical proofs tell you about the behaviour of that entity rather than about impersonal laws of physics, but they still tell you what's likely to happen.