Squark comments on Solomonoff Cartesianism - Less Wrong

21 Post author: RobbBB 02 March 2014 05:56PM

Comment author: Squark 22 March 2014 08:43:11PM 1 point [-]

...I mean it sounds a bit like expecting a solution to the One True Prior to fall out of the development of a principled probability theory...

I believe my new formalism circumvents the problem by avoiding strong prior sensitivity.

Same reply, plus specific mild skepticism relating to how current work on the Löbian obstacle hasn't yet taken a shape that looks like it fills the logical-counterfactual symbol in UDT...

My proposal does look that way. I hope to publish an improved version soon which also admits logical uncertainty in the sense of being unable to know the zillionth digit of pi.
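As a minimal illustrative sketch (my own, not the formalism under discussion): logical uncertainty of the kind mentioned here can be modeled by assigning a subjective distribution over a quantity that is mathematically determined but which the reasoner cannot compute, and then updating on computed facts exactly as one would on empirical observations. The uniform prior over digit values is a hypothetical choice for illustration only.

```python
from fractions import Fraction

def logical_prior_over_digit():
    # A reasoner who cannot compute the zillionth digit of pi may still
    # assign a subjective probability to each possible value. Here we
    # assume (hypothetically) a uniform prior over the ten digits.
    return {d: Fraction(1, 10) for d in range(10)}

def condition(dist, predicate):
    # Conditioning on a newly computed logical fact updates the
    # distribution just as conditioning on a sensory observation would.
    mass = sum(p for v, p in dist.items() if predicate(v))
    return {v: p / mass for v, p in dist.items() if predicate(v)}

prior = logical_prior_over_digit()
# Suppose the reasoner manages to prove only that the digit is odd:
posterior = condition(prior, lambda d: d % 2 == 1)
assert posterior[1] == Fraction(1, 5)  # five odd digits share the mass
```

The point of the sketch is just that "probability" here attaches to the reasoner's ignorance, not to any randomness in pi itself.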

Thinking about this in a natively naturalized mode, it doesn't seem too unnatural to me to try to adopt a bridge hypothesis to an AI that can choose to treat arbitrary events in RAM as sensory observations and condition on them.

In my formalism input channels and arbitrary events in RAM have similar status.
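To illustrate what "similar status" could mean here (a toy sketch of my own, not the actual formalism): if the agent's machine state is just a block of memory, then reading a designated input cell and reading an arbitrary internal cell are the same kind of operation, so either can serve as an observation to condition on.

```python
# Toy machine state: RAM modeled as a byte array.
ram = bytearray(16)

def read_input_channel(state):
    # A conventional sensory channel: a designated input cell (address 0).
    return state[0]

def observe(state, address):
    # Under a bridge hypothesis of this kind, the agent may treat the
    # contents of ANY address as a sensory observation.
    return state[address]

ram[0] = 7    # value arriving on the designated input channel
ram[9] = 42   # value in an arbitrary internal cell

# Both reads have the same status as observations:
assert read_input_channel(ram) == observe(ram, 0) == 7
assert observe(ram, 9) == 42
```

The design point is that nothing in the read operation distinguishes "input" from "internal state"; the distinction lives only in which addresses the agent's hypothesis class singles out.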

Comment author: Vulture 15 April 2014 11:44:40PM 0 points [-]

Minor formal note: I have a mildly negative knee-jerk reaction when someone repeatedly links to/promotes something referred to only as "my _". Giving your formalism a proper name might make you sound less gratuitously self-promotional (which I don't think you are).

Comment author: Squark 17 April 2014 07:02:34PM 0 points [-]

Hi Vulture, thanks for your comment!

Actually, I already have a name for the formalism: I call it the "updateless intelligence metric". My intuition was that referring to my own invention by the serious-sounding name I gave it myself would sound more pompous / self-promotional than referring to it as just "my formalism". Maybe I was wrong.