Vladimir_Golovin comments on A Sense That More Is Possible - Less Wrong
Couldn't agree more. Execution is crucial.
I can come out of a probability trance with a perfect plan, an ideal path of least resistance through the space of possible worlds, but now I have to trick, bribe or force my messy, kludgy, evolved brain into actually executing the plan.
A recent story from my experience. I had (and still have) a plan involving a relatively large chunk of work, around a month of full-time effort. Nothing challenging, just a 'sit down and do it' sort of thing. But for some reason my brain is unable to see how this chunk of work will benefit my genes, so it switches into procrastination mode whenever exposed to it. I tried to force myself through, but now I get an absolutely real feeling of 'mental nausea' every time I approach this task – yes, I literally want to hurl when I think about it.
For a non-evolved being, say an intelligently-designed robot, the execution part would be a non-issue – it gets a plan, it executes it as perfectly as it can, give or take some engineering inefficiencies. But for an evolved being trying to be rational, it's an entirely different story.
An idea on how to make the execution part trivial – a rational planner should treat his own execution module as a part of the external environment, not as a part of 'himself'. This approach will produce plans that take into account the inefficiencies of one's execution module and plan around them.
I hope you realize this is potentially recursive, if this 'execution module' happens to be instrumental to rationality. Not that that's necessarily a bad thing.
No, I don't (yet) -- could you please elaborate on this?
Funny how this got rerun on the same day as EY posted about progress on Löb's problem.
Well, ideally one considers the whole of themselves when doing the calculations, but it does make the calculations tricky.
And that still doesn't answer exactly how to take it into account. I.e., "okay, I need to take into account the properties of my execution module and find ways to actually get it to do stuff. How?"
However, treating the execution module as external and fixed may demotivate attempts to improve it.
(Related: Chaotic Inversion)
If one had public metrics of success at rationality, the usual status seeking and embarrassment avoidance could encourage people to actually apply their skills.
Shouldn't common-sense 'success at life' (money, status, free time, whatever) be the real metric of success at rationality? Shouldn't a rationalist, as a General Intelligence, succeed over a non-rationalist in any chosen orderly environment, according to any chosen metric of success -- including the common metrics of that environment?
No.
Point-by-point:
1. Agreed. Let's throw away the phrase about General Intelligence -- it's not needed there.
2. Obviously, if we're measuring one's reality-steering performance we must know the target region (and perhaps some other parameters, like planned time expenditure) in advance.
3. The measurement should measure the performance of a rationalist at his/her current level, not taking into account the time and resources he/she spent to level up. Measuring 'the speed or efficiency of leveling-up in rationality' is a different measurement.
4. The definitions at the beginning of the original post will do.
5. On one hand, the reality-mapping and reality-steering abilities should work for any activity, no matter whether the performer is hardware-accelerated for that activity or not. On the other hand, we should somehow take this into account -- after all, excelling at things one is not hardware-accelerated for is a good indicator. (If only we could reliably determine who is hardware-accelerated for what.)
(Edit: cool, it does numeric lists automatically!)
Public metrics aren't enough - society must also care about them. Without that, there's no status attached and no embarrassment risked.
To get this going, you'd also need a way to keep society's standards on-track, or even a small amount of noise would lead to a positive feedback loop disrupting its conception of rationality.
Everyone has at least a little bit of rationality. Why not simply apply yourself to increasing it, and finding ways to make yourself implement its conclusions?
Just sit under the bodhi tree and decide not to move away until you're better at implementing.