All of Rob Dost's Comments + Replies

Answer by Rob Dost

I don't have a precise answer to your question, but I have a question which can perhaps be useful in answering it.

Namely: what about space? I mean, you talk here about time, something along the lines of "We can imagine our Universe as a solid eternal block of spacetime, so why am I experiencing the present moment instead of all moments at once?" But what about "We can imagine our universe as a solid eternal block of spacetime, so why am I experiencing this 'here' locality instead of all places at once?" I think these are very similar questions.

We can go...

Ben
Related to this idea of space is maybe asking "why am I me, and not someone else?". The question in quotes is obviously nonsense, but I think it can get quite confusing, especially if we start assuming that people can be replicated (perhaps using digital copies). If you are one of 5 copies of a digital personality, does it make sense for you to be grateful you are not a different one of those copies? The world would not in any mechanical way be different if you were one of the copies and they were you. So it becomes complicated to think about, because it seems to imply that two mechanically identical universes can be subjectively different for "me" (for some value of "me"). The time question in the original post is, I think, kind of equivalent. They are sort of thinking that there are many, many "me"'s at different times, all with different experiences, but that I am right now only one of those "me"'s. What is special about that one, such that it is the one I am experiencing right now?

Yeah, but try to answer the question "why should an agent care about 'preparing'?" Any answer you give will yield another "why this?" ad infinitum. So this chain of "whys" cannot be stopped unless you specify some terminal point. And the moment you do specify such a point, you introduce an "ought" statement.

Donatas Lučiūnas
Why do you think the assumption that there is no inherent "ought" statement is better than the assumption that there is?

I mean, you suppose that an agent should care about possibly caring in the future, but this itself constitutes an 'ought' statement.

Donatas Lučiūnas
Yes, but this 'ought' statement is not assumed. Let me share a different example, hope it helps. In my opinion it is the same here. An agent without a known goal is not a goalless agent. It needs to know everything to come to the conclusion that it is goalless. Which implies that this 'ought' statement is inherent, not assumed.

But isn't "don't lose a lot", for example, a goal by itself?

Donatas Lučiūnas
In my opinion - no. The fact that an agent does not care now does not prove that it will not care in the future. The Orthogonality Thesis is correct only if the agent is completely certain that it will not care about anything else in the future. Which cannot be true, because the future is unpredictable.

If I understand you right, you basically say that once we postulate consciousness as some basic, irreducible building block of reality, the confusion related to consciousness will evaporate. Maybe it will help partially, but I think it will not solve the problem completely. Why? Let's say that consciousness is some terminal node in our world-model; this still leaves the question "What systems in the world are conscious?". And I guess that current hypotheses for an answer to this question are rather confusing. We didn't have the same level of confusion with other models of ba...

interstice
Sort of. I consider the stuff about the 'meta-hard problem', aka providing a mechanical account of an agent that would report having non-mechanically-explicable qualia, to be more fundamental. The postulation of consciousness as basic is then one possible way of relating that to your own experiences. (Also, I wouldn't say that consciousness is a 'building block of reality' in the same way that quarks are. Asking if consciousness is physically real is not a question with a true/false answer; it's a type error within a system that relates world-models to experiences.)

Relating this meta-theory to other minds and morality is somewhat trickier. I'd say that the theory in this post already provides a plausible account of which other cognitive systems will report having non-mechanically-explicable qualia (and thus provides as close an answer to "which systems are conscious" as we're going to get). On the brain side, I think this is implemented intuitively by seeing which parts of the external world can be modeled by re-using part of your brain to simulate them, then providing a built-in suite of social emotions towards such things. This can probably be extrapolated to a more general theory of morality towards entities with a mind architecture similar to ours (thus providing as close an answer as we're going to get to 'which physical systems have positive or negative experiences?').