Wes_W comments on Be Wary of Thinking Like a FAI - Less Wrong

Post author: kokotajlod 18 July 2014 08:22PM

Comment author: kokotajlod 20 July 2014 12:37:12PM 0 points

Hmm, okay. I'd be interested to hear your thoughts on the particular cases then. Are there any examples that you would endorse?

Comment author: Wes_W 21 July 2014 04:35:43AM 1 point

I agree with the points about Boltzmann Brains and mind substrates. In those cases, though, I'm not sure the FAI heuristic saves you any work, compared to just directly asking what the right answer is.

The ideal FAI wouldn't care about its personal identity over time

Almost certainly not true if taken literally: one of the defining features of an FAI (as opposed to a regular AGI) is that certain traits, its values in particular, must remain stable under self-improvement. An FAI would care very strongly about certain kinds of changes to itself. But on a less literal reading, I can see what you're going for here: yes, an ideal FAI might be indifferent to copying/deletion except to the extent that those help or hinder its goals.

I'm not sure how that belief, applied to oneself, cashes out to anything at all, at least not with current technology. I also don't see any reason to go from "the FAI doesn't care about identity" to "I shouldn't think identity exists."

The ideal FAI would use UDT/TDT/etc. Therefore I should too.

(Disclaimer: I am not a decision theorist. This part is especially likely to be nonsense.)

You should use which one?

The less snappy version is that TDT and UDT both have problem cases. We don't really know yet what an ideal decision theory looks like.

Second, I doubt any human can actually implement a formal decision theory all the time, and doing it only part-time could get you "valley of bad rationality"-type problems.

Third, I suspect you could easily run into problems like what you might get by saying "an ideal reasoner would use Solomonoff Induction, so I should too". That's a wonderful idea, except that even approximating it is computationally insane, and in practice you won't get to use any of the advantages that make Solomonoff Induction theoretically optimal.
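To make the "computationally insane" point concrete, here's a toy back-of-the-envelope sketch (mine, not from the comment): Solomonoff Induction weights every program consistent with the data by 2^-length, so even a crude approximation has to consider all binary programs up to some length, and that count doubles with every extra bit.

```python
# Toy illustration: counting the candidate programs a Solomonoff
# approximation would have to enumerate. The number of binary strings
# of length <= max_len is 2 + 4 + ... + 2^max_len = 2^(max_len+1) - 2,
# which explodes long before programs get interesting.

def num_programs(max_len: int) -> int:
    """Number of binary strings (candidate programs) of length <= max_len."""
    return sum(2**n for n in range(1, max_len + 1))

for bits in (10, 20, 40):
    print(f"programs up to {bits} bits: {num_programs(bits)}")
```

And length is the easy part: each enumerated program must also be run, and in general you can't even tell which ones halt.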

If you instead mean things like "an ideal FAI would cooperate in PD-like scenarios given certain conditions", then sure. But again, I'm not sure the FAI heuristic is saving you any work.
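One of those "certain conditions" is the mirror-match case, which is simple enough to sketch (a hypothetical toy model, not anything the comment specifies): if an agent knows its opponent runs the same decision procedure, both outputs are necessarily identical, so the outcome is forced onto the diagonal of the payoff matrix and the agent just picks the better diagonal entry.

```python
# Hypothetical sketch: one-shot Prisoner's Dilemma where an agent may
# know its opponent is an exact copy of itself.

PAYOFF = {  # (my_move, their_move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def decide(opponent_is_my_copy: bool) -> str:
    if opponent_is_my_copy:
        # Both copies necessarily output the same move, so only the
        # diagonal outcomes (C, C) and (D, D) are reachable.
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    # Against an unrelated opponent, defection dominates in one-shot PD.
    return "D"

print(decide(True), decide(False))  # C D
```

The interesting (and harder) question is how far this extends to opponents that are merely similar rather than exact copies.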

The ideal FAI would ignore uncomputable possibilities. Therefore I should too.

An actual FAI might, for mere practical reasons. I don't see why an ideal FAI normatively should ignore them, though.

Comment author: kokotajlod 21 July 2014 03:50:25PM 1 point

Ok, thanks.

I also don't see any reason to go from "the FAI doesn't care about identity" to "I shouldn't think identity exists."

I don't either, now that I think about it. What motivated me to make this post is that I realized that I had been making that leap, thanks to applying the heuristic. We both agree the heuristic is bad.

Why are we talking about a bad heuristic? Well, my past self would have benefited from reading this post, so perhaps other people would as well. Also, I wanted to explore the space of applications of this heuristic, to see if I had been unconsciously applying it in other cases without realizing it. Talking with you has helped me with that.