Upon rethinking it, I've decided my original position missed the mark somewhat, because it's not clear how "rationality" fits into an id-ego-superego model. The triad could map to either (short-term desires, decider, long-term desires) or (immoral desires, decider, moral desires); the first mapping seems more useful for this discussion.
It seems to me that rationality is not superego strengthening but ego strengthening, and the best way to do that is to elevate whichever agent isn't present at the moment. If your superego wants you to embark on some plan, consult your id before committing (and making negative consequences immediate is a great way to do that); if your id wants you to avoid some work, consult your superego before abandoning it.
And so I think what you've written is spot on for half of the problem, and I agree your scheme solves that half well (and offers insights about the other half).
I would actually dispute this, but that goes into what you actually mean by a "self". It seems obvious to me that there are multiple agents at work; the problem of akrasia is, then, deciding which agent actually gets to pilot your brain at that instant. I suspect this is alleviated, to some extent, by increased self-awareness: if you can pick out modes of thought that you don't actually want to "endorse" (like the "I want to be a physicist" versus "I don't want to do physics" example below), you are probably more likely to be able to override what you label as "not endorsed" than if you are sitting there wondering "wait, is this what I really think? Which mode is me?"