I have what feels like a naive question: is there any reason we can't keep appealing to even higher-order preferences? When I find that I have these sorts of inconsistencies, I find myself making an additional moral judgment that tries to resolve the inconsistency. So couldn't you show the human (or, if the AI is doing all this in its 'head', a suitably accurate simulation of the human) that their preference depends on which philosopher we introduce them to? Or, in other cases where, say, ordering matters, show them multiple orderings, or their simulation's reactions to every possible ordering where that's feasible, and so on. Maybe this would elicit a new judgment that we would consider morally relevant. But this all relies on simulation; I don't know whether you can get the same effect without that capability, and this solution doesn't seem even close to being fully general.
I imagine this might not do much to resolve your confusion, however. It doesn't do much to resolve mine.
(Not very familiar with math.)
The Heyting-algebraic definition of implication makes intuitive sense to me, or at least it does now that you've stated your confusion. 'One circle lies inside the other' is like saying that A is a subset of B, which is a statement describing a relation between two sets, not a statement describing a set, so we shouldn't expect that mental image to correspond to a set. Furthermore, the definition of implication you've given is very similar to the material implication rule: that we may substitute 'P implies Q' with 'not-P or Q'.
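To spell out that parallel, here's a sketch of the standard definitions as I understand them (I'm assuming the post is using the usual open-sets-of-a-topological-space example): in a Heyting algebra, implication is the largest element whose meet with $A$ lies below $B$,
$$A \Rightarrow B \;=\; \bigvee \{\, C \;\mid\; C \wedge A \le B \,\},$$
and in the algebra of open sets this works out to
$$A \Rightarrow B \;=\; \operatorname{int}\bigl(A^{c} \cup B\bigr),$$
the interior of "not-$A$ or $B$", which is exactly where the resemblance to material implication shows up.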
Also, I have personally been enjoying your recent posts with few prerequisites. (Seems to be a thing.)