I have a few recurrent self-actualization fantasies that make use of fanciful abilities and resources. Sometimes the ability is time travel, which made this tweet by Liron Shapira stand out to me:
A time machine is a mechanism that lets you pretend like something far from you is actually near you, with respect to causal distance.
Likewise with your telekinesis and "vanishing into the floor", I propose that daydreams (recurrent, unproductive consideration of situations involving plans that are, in reality, non-actionable because they rely on fanciful skills and resources) commonly serve as agency-superstimuli: imagined successes that rely on expanded abilities (such as abilities which reduce the effort, cost, or uncertainty of achieving some material effect) support an inference, valid within the pretense, of one's own exceptional personal character.
Maybe it's worth distinguishing "wishing for an outcome", and "imagining the experience of the desired outcome (eating breakfast)", and "imagining a fantastical plan for achieving the outcome" as having different effects on one's motivation / decisions.
For your copywriting example, you list a few interesting techniques which show up later in only abbreviated form in the section of responses to Type 3 problems. Rephrasing and expanding a little bit: if you're worried about poor task performance, you might motivate yourself by 1) highlighting to yourself that you are uncertain about your performance quality, rather than certain that it will be bad, and that you're thus neglecting the possibility that you will do well at the task, 2) highlighting your comparative advantage in solving the problem for reasons other than skill (such as being in a unique position / time / place to solve the problem, or having special access to relevant resources, or having a title with related useful liberties / authorities), or 3) highlighting your (role-consonant) duty to try / to perform, while trivializing your duty to evaluate your performance (perhaps diffusing that responsibility by deciding that it belongs to some non-specified others).
I might add that questioning whether your own performance will be adequate / sufficient probably has at least these three functions: 1) to make you change/improve your plans, or give up on your plans if they seem inadequate, 2) to motivate you to ask other people for information about your current/future performance, and 3) to excuse future failure ("I knew I couldn't do this. I kept saying I didn't know how. Everyone heard me. I shouldn't have been forced to do this. This isn't my fault. This outcome shouldn't/doesn't justify an inference decreasing anyone's estimation of my skill / social standing."). (Please note that I'm using "you" rhetorically. I don't know the specifics of your work with CFAR, haven't perceived any failure, and am not trying to accuse you of any; I don't know what you'd call that, "epistemic misfeasance" maybe.)
These suggest a few ways to suppress (the decision relevance / import of) such worries: 1) making it more resolute in your mind that a) nothing more can or should be done to improve your plans / your expected future performance, and b) "backing out" would be more costly than continuing, 2) asking aloud for others to help evaluate / improve your performance, or 3) verifying, to your satisfaction in advance of the performance, a communal perception of the validity of your excuse by confirming that others will not (or persuading others that they should not) make such a judgment of bad character, given whatever circumstances are in effect.
I think one of the most interesting parts of this post is your conceptualization of System 1 and System 2 as not just parts of your decision-making apparatus, but as separate persons with their own preferences, beliefs, and signature behavioral characteristics. Is there other literature which suggests that dual processes of decision making are paired with dual processes of motivation (appetitive/aversive drives (and maybe also some preference-like psychological state behind habitual / scripted action) vs. reflective / higher-order, verbally endorsed, ego-syntonic preferences)?
A time machine is a mechanism that lets you pretend like something far from you is actually near you, with respect to causal distance.
This is only true for what people typically mean by "time machine" if causal distances may be negative.
Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.
Attempted telekinesis
The case of the munching noises
The ad copy writer who doesn’t know if she’s “good enough”
Useful “telekinesis”: Separating babies from bathwater
How to distinguish?
Task type:
Type 1: Problems that System 1 can solve by itself:
Examples: Making breakfast; causing someone to know you care about them.
Suggested response: This sort of wishing is healthy, and may prompt actions that make a lot more sense than those system 2 would plan (e.g., your nonverbals as you apologize are likely to be far better if you viscerally care about your interlocutor). Leave system 1 be.
Type 2: Problems that are worth solving, but that need help from System 2:
Examples: “There’s nothing good to eat” (situation: you notice that several times, over the last hour, you’ve gone to the fridge, opened it, stared inside, closed it... and then opened it again a few minutes later -- as though to see if something good has magically materialized into the closed fridge); Feeling 'stuck' at one's job (or in a relationship); Not having enough money. (The distinguishing feature here is that system 1 has been looping on the problem for a while to no effect, and that system 2 has not yet taken a good look at the problem.)
Suggested response: Raise the problem to conscious attention; then, try to figure out what is bothering system 1; finally, decide what to do about it. As you do this, parts of the wishing will naturally shift from the general problem ("Somehow make work less stuck-feeling") to the specific strategy you've chosen ("Figure out how to renegotiate with my manager").[4]
Type 3: “Problems” that should be given up on:
Examples: “Make the munching noises go away” (in a case where you’ve decided not to); “Make San Franciscans be better drivers”; “Let me vanish into the floor.” (The distinguishing feature here is simply that these are "problems" that, on reflection, you do not wish to take action on.)
Suggested response: Find a way to let system 1 know that solving this problem isn't worth the cost, or that keeping this problem on your internal "worry/fume about" list is quite unlikely to have positive effects.
Type 4: Problems that are worth solving, but not right now:
Examples: The problem of locating a workshop venue (during the hour at which I was trying to write the workshops ad, that October); the situation with your roommates and the dishes (while you're at work solving a coding problem).
Suggested response: Designate a particular future-you to do the task. Dialog with your "inner simulator" (your system 1 anticipations) until both system 1 and system 2 are convinced that that specific you will actually do the task, and that there is no additional positive effect to be gained via staying preoccupied now.
Type 5: Problems that System 2 needs "shower-thoughts" help with:
Examples: Archimedes' problem of measuring the king's crown; "My relationship with Fred is broken, and I can't figure out what to do about it"; "How the heck can I solve that math riddle?" (The distinguishing feature here is that both: (1) the problem has already been raised to conscious attention at some point (and system 2 failed to instantly solve it); and (2) the problem is a worthy use of your shower-thoughts -- either for what it'll accomplish directly, or for the improvement it may give to your pattern of thought.)
Suggested response: This sort of wishing is healthy. Leave system 1 be.
Emotional tone:
Wishes often seem to me to have emotional tones. Some tones are simple desire (“Breakfast... mmm....”). Others have an overlaid hopelessness or bitter resignation about them (“I just always have to put up with how everyone else is incompetent”); others, still, have a tone (at least in me) of hammed-up flailing, self-pity, or desire for outside help -- as though if I just feel helpless enough, somehow a grown-up will come to the rescue ("Make the workshop crisis not be in this state... Make the workshop crisis not be in this state...").
It seems to me that it's worth installing an "alert" that sounds, in your head, whenever it hears either the hopeless/bitter/resigned tone, or the flailing/save-me tone. Both are often signs of buggy "attempted telekinesis" situations that are worth conscious debugging (a la the schema above). And the emotional tones can be easier to automatically flag.