Unknowns comments on Two straw men fighting - Less Wrong

Post author: JanetK 09 August 2010 08:53AM


Comment author: wedrifid 09 August 2010 03:55:52PM 2 points

Consider my reply to be to the claim:

If you ask it when it made it's decision, it will point to the time when it analyzed the code.

If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.

I avoided commenting on the 'subjective experience' side of things because I thought it embodied a whole different kind of confusion. It assumes that the AI executes some kind of 'subjective experience' reasoning similar to that of humans (or some subset thereof). That quirk of human reasoning relies on the absence of any strong boundaries between thought processes: people usually can't predict their decisions without making them. For both the general case and the specific case of the code, I gave a correctly implemented module that, if labelled 'subjective experience', would see the difference between prediction and analysis.
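The claim above, that an honest AI can only date its decision to the moment its analysis completes, can be sketched as a toy program. This is purely illustrative; all names here are my own invention, not code from the thread:

```python
# Toy agent whose "decision" is the output of an explicit analysis step.
# The decision timestamp is recorded only when that analysis finishes,
# so an honest self-report can never point to an earlier moment.
import time

class ToyAgent:
    def __init__(self):
        self.decision = None
        self.decision_time = None  # set only after analysis completes

    def analyze_and_decide(self, options):
        # The analysis IS the decision procedure; no "decision" exists
        # inside the agent before this step runs.
        best = max(options, key=lambda o: o["utility"])
        self.decision = best["name"]
        self.decision_time = time.monotonic()
        return self.decision

    def when_did_you_decide(self):
        # Reporting any time before decision_time would simply be wrong.
        return self.decision_time

agent = ToyAgent()
t_before = time.monotonic()
choice = agent.analyze_and_decide(
    [{"name": "cooperate", "utility": 3}, {"name": "defect", "utility": 1}]
)
assert agent.when_did_you_decide() >= t_before
```

Note that an outside predictor could only "predict" this agent's choice by running the same analysis itself, which is the sense in which prediction and decision collapse into one process unless a module boundary separates them.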

I upvoted the parent for the use of it's. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.

Comment author: Unknowns 09 August 2010 04:02:50PM * -1 points

"If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong."

I use "decision" precisely to refer to the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don't have (and can't have) any mathematical proof of it.

(I corrected this comment so that it says "mathematical proof" instead of proof in general.)

Comment author: thomblake 09 August 2010 04:15:21PM 2 points

So you may believe yourself right about this, but you don't have (and can't have) any proof of it.

If you believe that we can't have any proof of it, then you're wasting our time with arguments.

Comment author: Unknowns 09 August 2010 04:20:36PM * -2 points

You might have a proof of it, but not a mathematical proof.

Also note that your comment that I would be "wasting our time" implies you think you couldn't be wrong.

Comment author: Emile 09 August 2010 04:14:33PM * 2 points

I think most people on LessWrong are using "decision" in the sense used in Decision Theory.

Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.

Comment author: wedrifid 09 August 2010 04:51:11PM * 3 points

Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.

It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from "not quite technically true" to, basically, nonsense (at least inasmuch as it claims to pertain to AIs).

Comment author: Unknowns 09 August 2010 04:22:00PM -2 points

I don't think my definition is either exotic or inconsistent with the sense used in decision theory.

Comment author: wedrifid 09 August 2010 04:53:52PM 3 points

I don't think my definition is ... inconsistent with the sense used in decision theory.

You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn't even remotely compatible with the sense used in decision theory.

Comment author: wedrifid 09 August 2010 04:38:16PM 0 points

How many legs does an animal have if I call a tail a leg and believe all animals are quadrupeds?