I think it is simply the planning fallacy in action. More precisely: you are already in the middle of the old project, so you already see many of its problems, and the planning fallacy is weaker there. In the new project you haven't encountered any problems yet, so the planning fallacy is strong there. This is what gives you the illusion that the new project is much simpler. Maybe it is not on a cognitive level but on an emotional level -- the old project feels like "a lot of hard, boring work" and the new project feels like "exciting low-hanging fruit". Am I right?
At least for me this effect is rather strong, sometimes absurdly strong. I'll give you an example:
I was writing a book, a hundred pages already finished, a hundred more waiting to be written. My emotions gradually went from excitement, through "flow", to boredom, progressing to hate. So I stopped writing and started thinking: perhaps I should give up on this book and do something else instead. But what? What are my long-term goals and what are my dreams? So I imagined some possibilities, weighed their emotional appeal, made some connections like "this would help me do that" or "this and this could be made at the same time", and finally it became obvious to me that the coolest thing I could do would be writing a book -- more specifically, exactly the kind of book I was writing right now. And the feeling was strong: my emotions really wanted to stop writing the old boring book and start writing the new exciting exactly-the-same book. Logically it did not make any sense, but my emotions felt that this time it would be perfect, because the imaginary book is always shinier and easier to write than the real book.

So... it's easy to see the mistake when your emotions are telling you that "X is much better than X". It's more difficult if your emotions, following the same algorithm, come to the conclusion that "Y is much better than X". But the essence is that when you compare the real with the imaginary, the imaginary always wins. The new project seems better because it is still imaginary.
I was writing a book, a hundred pages already finished, a hundred more waiting to be written. My emotions gradually went from excitement, through "flow", to boredom, progressing to hate.
If all of the writers I know are to be believed, this actually happens with every book -- halfway through, no one wants to finish. They're usually glad they do, though. I wonder what that's all about.
I have a problem with never finishing the things I want to work on. I get enthusiastic about them for a while, but then find something else to work on. This problem seems to be powered partially by my debiasing hook against the sunk cost fallacy.
When faced with the choice of finishing my current project or starting a shiny new project, my sunk-cost hook activates and says "evaluate future expected utility and ignore sunk costs". The new project looks very shiny compared to the old one, enough that it looks like a better thing to work on than the rest of the current project. The trouble is that this always seems to be the case. It seems weird that the awesomeness of my project ideas would grow exponentially over time, so there must be something else going on here.
The "evaluate future expected utility and ignore sunk costs" heuristic works well for life-planning where people get status quo bias and financial-utility things where utility is easy to calculate consistently. In fact it seems like generally good decision theory, except that I'm running on corrupted hardware. My corrupted hardware seems to decay the perceived value of a project over the time that I know of it, or inflate it when it seems new and exciting, which of course throws off the sunk costs hook's assumption that I can evaluate utility *consistently*.
So my sunk-cost hook has a bad assumption. I don't want to modify the hook in a way that would break its applicability to economic and life-planning situations or its theoretical correctness, so I'll just add "this assumes a consistent utility function". Of course this doesn't actually help me in the project-planning case; I also need to put a hook on evaluating the utility of a project, one that makes the utility function consistent.
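To make that concrete, here is a minimal sketch of what I mean, as a toy Python model with entirely made-up numbers (the novelty curve, the half-life, and the function names are all my own invention, just for illustration): if perceived utility is true utility times a novelty multiplier that decays as I get used to a project, then dividing that multiplier back out is the consistency hook I'm after.

```python
import math

# Toy model with made-up numbers: the perceived utility of a project is its
# true utility times a "novelty multiplier" that starts high and decays
# toward 1 the longer I have known about the project.
def novelty_multiplier(days_known, peak=3.0, half_life=30.0):
    """How much shinier a project looks than it really is (hypothetical curve)."""
    return 1.0 + (peak - 1.0) * 2.0 ** (-days_known / half_life)

def perceived_utility(true_utility, days_known):
    return true_utility * novelty_multiplier(days_known)

def deskewed_utility(perceived, days_known):
    """Divide out the estimated novelty inflation before comparing projects."""
    return perceived / novelty_multiplier(days_known)

# Two projects with the same true remaining utility: the old one I'm bored
# with, and a brand-new shiny one.
old_perceived = perceived_utility(10.0, days_known=90)  # ~12.5
new_perceived = perceived_utility(10.0, days_known=1)   # ~29.5: looks way better

print(old_perceived, new_perceived)  # naive comparison: the new project wins
print(deskewed_utility(old_perceived, 90),
      deskewed_utility(new_perceived, 1))  # corrected: it's a tie (10.0, 10.0)
```

Obviously the real skew isn't a tidy exponential, but the sketch shows how the naive "future expected utility" comparison systematically favors whatever I learned about yesterday.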
There are some things that might de-skew my evaluation of exciting new projects, like the novelty correction sketched above.
So I'll see how this works out.
I think this situation is probably not unique. Many of our debiasing hooks are formulated to combat specific biases, but they can catch situations outside their intended domain. In this example, the sunk cost fallacy is a real thing, but the hook meant to catch it also fired in a situation where sunk costs were not the primary bias, and it actually ended up contributing to bias.
It might be valuable to think about what other situations a hook might catch, and modify it so it doesn't screw things up before we install it. Also, other biases may act in the opposite direction, and hooking only one of them might make things worse.
Anyways, those are my thoughts on a specific bit of debiasing. Maybe you all have some other examples of this sort of thing, and maybe this will be useful for people who have the same problem.