Vladimir_Golovin comments on A Sense That More Is Possible - Less Wrong

61 Post author: Eliezer_Yudkowsky 13 March 2009 01:15AM




Comment author: Vladimir_Golovin 13 March 2009 11:48:15AM *  9 points [-]

It doesn't take a formal probability trance to chart a path through everyday life - it was in following the results

Couldn't agree more. Execution is crucial.

I can come out of a probability trance with a perfect plan, an ideal path of least resistance through the space of possible worlds, but now I have to trick, bribe or force my messy, kludgy, evolved brain into actually executing the plan.

A recent story from my experience: I had (and still have) a plan involving a relatively large chunk of work, around a full-time month. Nothing challenging, just a 'sit down and do it' sort of thing. But for some reason my brain is unable to see how this chunk of work will benefit my genes, so it switches into procrastination mode whenever exposed to it. I tried to force myself to do the work, but now I get an absolutely real feeling of 'mental nausea' every time I approach the task – yes, I literally want to hurl when I think about it.

For a non-evolved being, say an intelligently-designed robot, the execution part would be a non-issue – it gets a plan, it executes it as perfectly as it can, give or take some engineering inefficiencies. But for an evolved being trying to be rational, it's an entirely different story.

Comment author: Vladimir_Golovin 13 March 2009 12:48:17PM 13 points [-]

An idea on how to make the execution part trivial: a rational planner should treat his own execution module as part of the external environment, not as part of 'himself'. This approach produces plans that take the inefficiencies of one's execution module into account and plan around them.

Comment author: thomblake 13 March 2009 09:15:22PM 4 points [-]

I hope you realize this is potentially recursive, if this 'execution module' happens to be instrumental to rationality. Not that that's necessarily a bad thing.

Comment author: Vladimir_Golovin 14 March 2009 06:32:52PM 3 points [-]

No, I don't (yet) -- could you please elaborate on this?

Comment author: Luke_A_Somers 24 March 2013 01:21:43AM 0 points [-]

Funny how this got rerun on the same day as EY posted about progress on Löb's problem.

Comment author: Psy-Kosh 13 March 2009 09:12:14PM 3 points [-]

Well, ideally one considers the whole of oneself when doing the calculations, but that does make the calculations tricky.

And that still doesn't answer exactly how to take it into account. I.e., "okay, I need to take into account the properties of my execution module and find ways to actually get it to do stuff. How?"

Comment author: Nick_Tarleton 13 March 2009 09:25:02PM *  1 point [-]

However, treating the execution module as external and fixed may demotivate attempts to improve it.

(Related: Chaotic Inversion)