Stuart_Armstrong comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: Wei_Dai 15 May 2012 07:24:52PM 5 points

Perhaps I should have said "To conclude that this possibility is very unlikely" instead of "To deny this possibility". My own intuition seems to assign a probability to it that is greater than "very unlikely" and this was largely unchanged after reading your paper. For example, many of the items in the list in section 4.5, that have to be true if orthogonality was false, can be explained by my hypothesis, and the rest do not seem very unlikely to begin with.

Comment author: Stuart_Armstrong 17 May 2012 12:52:37PM 1 point

Looking at your post at http://lesswrong.com/lw/2id/metaphilosophical_mysteries, I can see the sketch of an argument. It goes something like "we know that some decision theories/philosophical processes are objectively inferior, hence some are objectively superior, hence (wave hands furiously) it is at least possible that some system is objectively best".

I would counter:

1) The argument is very weak. We know some mathematical axiomatic systems are contradictory, hence inferior. It doesn't follow that there is any "best" system of axioms.
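The point about axiom systems can be sketched as a toy partial order (a sketch only; the system names and the "beats" relation are invented for illustration): identifying some dominated options does not make one option dominate all the rest.

```python
# Toy illustration: some systems are objectively "inferior" (dominated),
# yet no unique "best" system exists, since the survivors are incomparable.
systems = {"A", "B", "C", "D"}

# "x beats y" relation: everything beats D, but A, B, C are incomparable.
beats = {("A", "D"), ("B", "D"), ("C", "D")}

dominated = {y for (_, y) in beats}     # the objectively inferior systems
undominated = systems - dominated       # candidates for "best"

print(sorted(dominated))    # ['D'] -- an inferior system clearly exists
print(sorted(undominated))  # ['A', 'B', 'C'] -- yet no single best emerges
```

Ruling out D narrows the field, but nothing in the relation singles out A, B, or C as superior to the others.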

2) A lot of philosophical progress is entirely akin to mathematical progress: showing the consequences of the axioms/assumptions. This is useful progress, but not really relevant to the argument.

3) All the philosophical progress seems to lie on the "how to make better decisions given a goal" side; none of it lies on the "how to have better goals" side. Even the expected utility maximisation result just says "if you are unable to predict effectively over the long term, then to achieve your current goals, it would be more efficient to replace these goals with others compatible with a utility function".
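The instrumental reading of the expected utility result can be illustrated with the standard money-pump argument (a toy sketch; the items, fee, and trading loop are all invented for illustration): an agent with cyclic preferences, which no utility function can represent, pays to trade around a cycle and ends up strictly worse off by its own lights.

```python
# Toy money-pump: cyclic preferences A < B < C < A. The agent accepts
# each preferred trade for a small fee, completes a full cycle, and
# returns to its original holding while strictly poorer -- the
# instrumental pressure toward utility-function-compatible goals.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}  # (better, worse) pairs

def trade(holding, offered, money, fee=1):
    """Accept `offered` (paying `fee`) iff the agent prefers it to `holding`."""
    if (offered, holding) in prefers:
        return offered, money - fee
    return holding, money

holding, money = "A", 10
for offer in ["B", "C", "A"]:       # one full cycle of offers
    holding, money = trade(holding, offer, money)

print(holding, money)  # "A" 7 -- same item as before, three units poorer
```

Nothing in this argument says which goals to have; it only says that, whatever the goals, cyclic preferences are an inefficient way to pursue them.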

However, despite my objections, I have to note that the argument is at least an argument, and provides some small evidence in that direction. I'll try to figure out whether it should be included in the paper.