NancyLebovitz comments on Stupid questions thread, October 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Someone downvoted the question above. What the hell? (My guess: it's VoiceOfRa doing his downvote-the-enemy thing again.)
To the actual question: first of all, I think it's entirely possible that we have additional layers ("system 1" means fast heuristic reasoning, "system 2" means slow deliberate reasoning; we surely have a big bag of heuristics, and I bet there are cases where we have extra-fast heuristics, fastish heuristics, and slow deliberate reasoning). And it seems like one could envisage an AI with (1) nothing like system 1 at all, because its "proper" reasoning is cheap enough to be used all the time; (2) a human-like bag of heuristics that get used when circumstances allow, producing much the same distinction as we have; (3) smoothly varying how-much-approximation knobs that adjust according to how valuable quicker answers are, interpolating continuously between "system 1" and "system 2"; and probably (4) all sorts of other things I haven't thought of.
The sort of provably-safe AI that MIRI would like to see would presumably either be in category 1, or else be designed so that sufficiently consequential decisions always get made "properly" in some sense. The latter seems like it would be hard to reason about. (Er, or it might be in category 4, in which case by definition I have nothing to say about it.)
I was wondering about VoR and that downvote.
Now that I think about it, we do have a range of speeds, including the occasional sudden revelation (as when an addict realizes that they really can and must stop).