In a previous post, I argued that nihilism is often shortchanged around here. However, I'm far from certain that nihilism is correct, and in the meantime I think we should be careful not to discard our values one at a time by engaging in "selective nihilism" when faced with an ontological crisis, without even realizing that's what's happening. Karl recently reminded me of Eliezer Yudkowsky's post Timeless Identity, which I noticed seems to be an instance of this.
As I mentioned in the previous post, our values seem to be defined in terms of a world model where people exist as ontologically primitive entities ruled heuristically by (mostly intuitive understandings of) physics and psychology. In this kind of decision system, both identity-as-physical-continuity and identity-as-psychological-continuity make perfect sense as possible values, and it seems humans do "natively" have both values. A typical human being is both reluctant to step into a teleporter that works by destructive scanning, and unwilling to let their physical structure be continuously modified into a psychologically very different being.
If faced with the knowledge that physical continuity doesn't exist in the real world at the level of fundamental physics, one might conclude that it's crazy to continue to value it, and this is what Eliezer's post argued. But if we apply this reasoning in a non-selective fashion, wouldn't we also conclude that we should stop valuing things like "pain" and "happiness" which also do not seem to exist at the level of fundamental physics?
In our current environment, there is widespread agreement among humans as to which macroscopic objects at time t+1 are physical continuations of which macroscopic objects existing at time t. We may not fully understand what exactly we're doing when we judge such physical continuity, the agreement tends to break down when we start talking about more exotic situations, and if/when we do fully understand our criteria for judging physical continuity, those criteria are unlikely to have a simple definition in terms of fundamental physics. But all of this is true for "pain" and "happiness" as well.
I suggest we keep all of our (potential/apparent) values intact until we have a better handle on how we're supposed to deal with ontological crises in general. If we convince ourselves that we should discard some value, and that turns out to be wrong, the error may be unrecoverable once we've lived with it long enough.
It's possible to detect tulips, but there are many alternative things one could detect, so there needs to be some motivation for the detection of tulips in particular to actually take place. For natural concepts, that motivation is efficient world modeling (which your AI, by assumption, doesn't need to care about); for morality-related concepts, it's value judgments (these will require different concepts for different AIs, though the AIs may agree on the utility of keeping track of the "fundamental" physical facts).
(On a different note, "Are tulips in the territory?" sounds like a question about definitions. Some more specific relevant query may be similar, but I'm not sure how to find one.)
So you're saying that my AI (with infinite computational power) would never discover the existence of tulips?
I don't intend it to be. I think tulips exist, unlike shmulips (similar to tulips, except they have golf balls instead of flowers), which don't. I don't think I have a firm grip on the map-territory distinction, but I was trying to use it in the way Wei was using it.
Anyway, here's the basis of my question: tulips do exist. They're real, mind ...