lukeprog comments on A model of UDT with a halting oracle - Less Wrong

41 Post author: cousin_it 18 December 2011 02:18PM


Comment author: lukeprog 11 January 2012 04:08:35AM 2 points

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

This totally deserves to be polished a bit and published in a mainstream journal.

Comment author: cousin_it 11 January 2012 06:20:33PM *  3 points

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

That's a question of degree. Some past posts of mine are similar to this one in formality.

Nesov also said in an email on Jan 4 that we can now write this stuff up. I think Wei and Gary should be listed as coauthors too.

Comment author: Vladimir_Nesov 12 January 2012 10:35:56AM 0 points

I still want to figure out games (like PD) in the oracle setting first. After the abortive attempt on the list, I didn't yet get around to rethinking the problem. Care to take a stab?

Comment author: cousin_it 12 January 2012 12:41:27PM *  1 point

The symmetric case (identical payoffs and identical algorithms) is trivial in the oracle setting. Non-identical algorithms seem moderately difficult: our candidate solutions in the non-oracle setting only work because they privilege one of the outcomes a priori, like Löbian cooperation. Non-identical payoffs seem very difficult; we have no foothold at all.
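
[Editorial aside: a toy illustration of why the identical-algorithms case is easy. This is a hypothetical sketch, not the oracle formalism from the post; the "CliqueBot" strategy and the source-string representation are assumptions. In the source-code-swap Prisoner's Dilemma, a program that cooperates exactly with syntactic copies of itself achieves mutual cooperation against itself and is never exploited by a defector.]

```python
# Source-code-swap Prisoner's Dilemma (hypothetical toy model): each
# program receives the other's source text and returns "C" (cooperate)
# or "D" (defect).

# CliqueBot cooperates exactly with syntactic copies of itself, which is
# why the fully symmetric case (identical algorithms) is trivial.
CLIQUE_SRC = 'return "C" if opponent_src == CLIQUE_SRC else "D"'

def clique_bot(opponent_src):
    return "C" if opponent_src == CLIQUE_SRC else "D"

DEFECT_SRC = 'return "D"'

def defect_bot(opponent_src):
    return "D"

# Identical algorithms: each sees its own source and cooperates.
print(clique_bot(CLIQUE_SRC), clique_bot(CLIQUE_SRC))  # C C

# Non-identical algorithms: CliqueBot is not exploited by a defector.
print(clique_bot(DEFECT_SRC), defect_bot(CLIQUE_SRC))  # D D
```

[Syntactic equality is exactly what fails for non-identical algorithms: there, cooperation needs the agents to prove things about each other, which is where Löbian considerations enter.]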

I think we have a nice enough story for "fair" problems (where easy proofs of moral arguments exist), and no good story for even slightly "unfair" problems (like ASP or non-symmetric PD). Maybe the writeup should emphasize the line between these two kinds of problems. It's clear enough in my mind.

Comment author: Vladimir_Nesov 12 January 2012 05:39:11PM 0 points

Part of the motivation was to avoid specifying agents as algorithms, specifying them instead as (more general) propositions about actions. It's unclear to me how to combine this with the possibility of reasoning about such agents (by other agents).

Comment author: cousin_it 12 January 2012 06:32:50PM *  0 points

That's very speculative; I don't remember any nontrivial results in this vein so far. Maybe the writeup shouldn't have to wait until this gets cleared up.

Comment author: Vladimir_Nesov 12 January 2012 10:35:35AM 0 points

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

(It's not "advanced"; it's not even in its infancy yet. On the other hand, there is a lot of decision theory that's actually advanced, but it solves different problems.)

Comment author: Jayson_Virissimo 16 June 2012 08:48:08AM *  2 points

I think Luke meant "advanced" as in superrationality, not "advanced" as in highly developed.

BTW, nice work.