atucker comments on What a reduction of "could" could look like - Less Wrong

53 Post author: cousin_it 12 August 2010 05:41PM


Comment author: atucker 13 August 2010 01:13:50AM 0 points

Would I be right in thinking that this implies you can't apply "could" to world()? I.e., "our universe could have a green sky" wouldn't be meaningful without some sort of metaverse program that references what we would consider the normal world program?

Or should the computed utility also depend on some set of facts about the universe?

I think both of those would require that the agent have some way of determining the facts about the universe (I suppose it could figure them out from the source code, but that seems somewhat illegitimate to me).

Comment author: cousin_it 13 August 2010 05:06:22AM 0 points

This post only gives a way to apply "could" to yourself. That doesn't imply you can never apply "could" to the world; we might find another way to do that someday.

Comment author: atucker 14 August 2010 05:02:41PM 1 point

It seems like there are two kinds of "could" at work here: one that applies to yourself and is based on consistent action-to-utility relationships, and another that involves uncertainty about which actions cause which utilities (based on counterfactuals about the universe).
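The distinction might be sketched in code. This is only a hedged toy illustration, not anything from the original post: the function names (`world`, `could_self`, `could_world`, `green_sky_world`) and payoff numbers are all hypothetical. The first kind of "could" enumerates an agent's possible actions against a fixed world program; the second treats "could" as uncertainty over which world program is actually running.

```python
# Toy sketch of the two senses of "could" discussed above.
# All names and payoffs are illustrative, not from the original post.

def world(action):
    # A fixed world program: maps the agent's action to a utility.
    return {"cooperate": 2, "defect": 1}[action]

def could_self(actions, world_fn):
    # "Could" applied to yourself: the consistent action-to-utility
    # relationships the agent can enumerate under one fixed world.
    return {a: world_fn(a) for a in actions}

def could_world(actions, candidate_worlds, credences):
    # "Could" applied to the world: uncertainty over which world
    # program is running, expressed as expected utility per action.
    return {
        a: sum(p * w(a) for w, p in zip(candidate_worlds, credences))
        for a in actions
    }

def green_sky_world(action):
    # A counterfactual world ("the sky could have been green")
    # where the payoffs happen to be reversed.
    return {"cooperate": 1, "defect": 2}[action]

actions = ["cooperate", "defect"]
print(could_self(actions, world))
# -> {'cooperate': 2, 'defect': 1}
print(could_world(actions, [world, green_sky_world], [0.9, 0.1]))
# -> {'cooperate': 1.9..., 'defect': 1.1...}
```

In the first sense, the agent reasons about its own possible actions inside a known program; in the second, the uncertainty lives in which world function applies, which is where counterfactuals about the universe would enter.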

Comment author: cousin_it 14 August 2010 05:26:04PM 0 points

Thanks, good point about uncertainty. I'm making a mental note to see how it relates to counterfactuals.