Yeah, but answer the question "why should an agent care about 'preparing'?" Then any answer you give will yield another "why this?", ad infinitum. So this chain of "whys" cannot be stopped unless you specify some terminal point. And the moment you do specify such a point, you introduce an "ought" statement.
I mean, you suppose that an agent should care about possibly caring in the future, but this itself constitutes an "ought" statement.
But isn't "don't lose a lot", for example, a goal by itself?
If I understand you right, you're basically saying that once we postulate consciousness as some basic, irreducible building block of reality, the confusion related to consciousness will evaporate. Maybe it will help partially, but I don't think it will solve the problem completely. Why? Let's say consciousness is some terminal node in our world-model; this still leaves the question "What systems in the world are conscious?". And I guess the current hypotheses for answering that question are rather confusing. We didn't have the same level of confusion with other models of ba...
I don't have a precise answer to your question, but I have a question which can perhaps be useful in answering it.
Namely: what about space? I mean, you talk here about time, something along the lines of "We can imagine our Universe as a solid eternal block of spacetime, so why am I experiencing the present moment instead of all moments at once?" But what about "We can imagine our universe as a solid eternal block of spacetime, so why am I experiencing this 'here' location instead of all places at once?" I think these are very similar questions.
We can go...