
Russell: thanks for the response. By "external party that had privileged access to your mind", I just meant a human-like party that knows your current state and knows you as well as you know yourself (not better), but that doesn't share the interests you had in the experiment as a participant. Running against a copy is interesting, but assuming it's a high-fidelity copy, that's a completely different scenario with (in my estimation) a radically different likelihood of the AI getting out, as you noted when talking about "ordinary sources of information about me".

To play the devil's advocate a bit regarding your comment on probability estimates without statistical data: wasn't your response itself a "probability estimate without statistical data" (albeit without using numbers)? That is, when you say "at no point was I on the verge of doing it", I think that's just another way of expressing some unspecified probability estimate (like "no greater than about 0.9", or whatever threshold "being on the verge of" subjectively means for you).

Russell: did you seriously think about letting it out at any point, or was that never a serious consideration?

If there were an external party that had privileged access to your mind while you were engaging in the experiment, and that knew you as well as you know yourself, and if that party kept a running estimate of the likelihood that you would let the AI out, what would the highest probability estimate have been? And at what part of the time period would that highest estimate have occurred (just a ballpark of 'early', 'middle', or 'end' would be helpful)?

Thanks for sharing this info if you respond.