A relevant anecdote from a friend:
He's been playing this new game called Generals. There's basically one dominant strategy against all human opponents, so when he's playing against a human, he sticks with that strategy and focuses on execution - implementing it faster and more reliably. This is Rajas - point at the target, then go after it fast.
But the leaderboard is dominated by AIs, and eventually he got to that level. At that point the important work started happening between games; you can't beat an AI on reaction time. So he thought about how Lee Sedol had beaten AlphaGo in one game. Answer: by pushing it into a part of Go-space it hadn't explored. It turned out that if he played the strategy that's only second or third best against humans, it was totally outside the AI's experience, and he could wipe the floor with it. This is Sattva - think your way around the problem. Take perspective.
People who are new to online strategy games tend to spend their initial games trying to stay alive instead of pursuing their actual goal (destroying the other player). This is Tamas. Just keep your head above water. Keep enough food in your body. Hide from threats. Live to fight another day. Not very adaptive in online games, where death isn't very costly, but adaptive when facing real-life threats.
The descriptions already seem to mix different things together. For example, why is "much effort" connected with pleasure and selfishness, but not with virtue or delusion? A possible interpretation is that "much effort" represents internal conflict (which the virtuous and the delusional are supposed not to have). But then we obviously cannot map this onto Freud's model of three subagents in the brain, because it doesn't represent a specific subagent, but rather a conflict between them.
If I had to make a 1:1 mapping onto Freud's concepts, though, I would choose "ego" for the first line (ignoring the connotations of "virtue"), because the second one is obviously "id", and the third one is some dark version of "superego".
In other words, imagine that someone takes Freud's model, likes the concept of three subagents, but then decides that none of them has enough applause lights. This "id" guy seems like all fun and no brain, not even self-preservation. This "superego" guy is just another word for lawful stupid. And this "ego" guy is the only smart one of the trinity, but having no goals of his own, he becomes a slave to the remaining two, working hard to keep all three of them alive.
Wouldn't it be nice if we could pretend that socially learned good advice and socially learned bad advice are two things that have absolutely nothing in common? And if we could furthermore pretend that all the socially learned good advice is actually a matter of pure rationality, derived from first principles by the smart "ego" guy? Or, for the less smart of us, who can't independently derive everything from first principles, we can still pretend that recognizing a more rational teacher and being swayed by his superior rhetorical skills is something completely unlike all the situations where we learn bad things from authorities or peer pressure.
So the end model is this "ego" plus the good parts of the "superego" (which together get all the applause lights for being both smart and virtuous), then the "id", and then the bad parts of the "superego". Now, instead of mere subagents with conflicting goals, we know which one should win!