As Tegmark argues, the idea of a "final goal" for AI is likely incoherent, at least if (as he states), "Quantum effects aside, a truly well-defined goal would specify how all particles in our Universe should be arranged at the end of time."

But "life is a journey not a destination".  So really, what we should be specifying is the entire evolution of the universe through its lifespan.  So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)*.

I hypothesize that experience is related to, if not a product of, change. I further propose, counter-intuitively and with an eye towards later "refinement" (to put it mildly),** that we treat experience as inherently positive and not try to distinguish between positive and negative experiences.

Then it seems to me the (still rather intractable) question is: how does the rate of entropy increase relate to the quantity of experience produced? Is it simply linear (in which case, it doesn't matter, ethically)? My intuition is that it is more like the fuel efficiency of a car: non-linear, with a sweet spot somewhere between a lengthy boredom and a flash of intensity.
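To make that intuition slightly more concrete (this is purely an illustrative guess, not something argued for above): writing r for the rate of entropy production and E(r) for the rate at which experience is generated, the "fuel efficiency" picture would be something like E(r) = r · exp(−r/r*), which rises roughly linearly for small r, peaks at r = r*, and falls off again for very fast dissipation, so that neither a lengthy boredom (r near 0) nor a flash of intensity (r far above r*) maximizes the total.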



*I'm not super up on cosmology; are there other theories I ought to be considering?

**One idea for refinement: successful "prediction" (undefined here) creates positive experiences; frustrated expectations negative ones.


15 comments

By the way, welcome to the club of posters on LW! Try not to let it get you too down if your first posts aren't well received; it's quite common, I promise.

It's an interesting idea, but it's not at all new. Most moral philosophers would agree that certain experiences are part (or all) of what has value, and that the precise physical instantiation of these experiences does not necessarily matter (in the same way many would agree on this same point in philosophy of consciousness).

There's a further meta-issue, which is why the post is being downvoted. Surely it is vague and maybe too short, but it seems to have the goal of initiating discussion and refining the view being presented rather than adequately defending or specifying it. I have posted tentative discussions - much more developed than this one - on meta-ethics or other abstract issues in ethics directly related to rationality and AI safety, and I wasn't exactly warmly met. Given that many of the central problems being discussed here are within ethics, why the disdain for meta-ethics? Of course, it might just be a coincidence, or all those posts might have been fundamentally flawed in an obvious way.

Yeah I am not happy about the way I'm being received. Any advice, other than avoiding interesting meta-ethics questions?

Wrt how new it is: how about if I put it this way:

Maybe experience is fundamentally not a function of brain state, but a function of brain state over time. Note that this is not strongly anti-physicalist. It works especially well if you believe in discrete time, in which case you can have experience be a function of the transitions that occur between states in successive time-steps:

Experience = f(s_t, s_{t-1}).
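A minimal sketch of that picture in code (the names here are hypothetical placeholders, just to make the bookkeeping explicit; whatever f really is, the point is that it takes pairs of successive states rather than single states):

```python
# Toy sketch: experience as a function of state *transitions*, not of single states.
# `transition_experience` is a hypothetical stand-in for f; here it just measures
# how much changed between two successive (discrete-time) states.

def transition_experience(prev_state, state):
    """f(s_t, s_{t-1}): experience generated by one discrete time-step."""
    return sum(abs(a - b) for a, b in zip(state, prev_state))

def total_experience(trajectory):
    """Sum per-transition experience over a whole discrete history of states."""
    return sum(
        transition_experience(prev, cur)
        for prev, cur in zip(trajectory, trajectory[1:])
    )

# A static history generates nothing; a changing one generates more.
print(total_experience([(0, 0), (0, 0), (0, 0)]))  # 0
print(total_experience([(0, 0), (1, 0), (1, 2)]))  # 3
```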

Maybe check out Christopher Alexander's The Nature of Order.

It sounds like his wholeness function is what you are poking around at.

So: how can the universe "enjoy itself" as much as possible before the Big Crunch (or before and during the heat death)?*

Maybe read the Fun Theory sequence?

I dunno if the universe can read, jmmcd. ;P

It can, but it doesn't have the time...

Maybe tell me why I should? My time is valuable.

I'm afraid I won't have time to give you more help. There's a short summary of each sequence under the link at the top of the page, so it won't take you forever to see the relevance.

EDIT: you're wondering elsewhere in the thread why you're not being well received. It's because your post doesn't make contact with what other people have thought on the topic.

I put "enjoy itself" in quotes, because I don't mean it literally. The questions that that sequence addresses according to the summary don't seem relevant to what I am trying to get at.

I guess I need to be more precise. I just mean: how can we maximize the integral of experience through time (whether we let experience take negative values is a detail)? This was already one of Tegmark's proposals in that paper, except that he writes in terms of a final goal instead of a process, which was the point of my post...

"The amount of consciousness in our Universe, which Giulio Tononi has argued corresponds to integrated information"

In AI research, intelligent agents typically have a clear-cut and well-defined final goal, e.g., win the chess game or drive the car to the destination legally. The same holds for most tasks that we assign to humans, because the time horizon and context is known and limited. (...) a truly well-defined goal would specify how all particles in our Universe should be arranged at the end of time.

We typically only care about the arrangement of particles at the end of the task, because that is the nature of the simple tasks we usually use machines for today. Actually, even that is not true: when "driving the car to the destination legally" we care not only about the arrangement of the particles of the car at the end of the trip, but also about what happened on the way -- that's what "legally" means here. (Unless we also count "police sending us tickets" as particles. But I guess the car is supposed to follow the laws even when the police aren't looking.)

We can define "journey" goals e.g. by calculating score at each time interval, and trying to maximize the sum or the average (or some other function) of all the intervals. This can make sense even if we don't know how long the task will last.

treat experience as inherently positive and not try to distinguish between positive and negative experiences.

This sounds wrong. But I am not even sure what exactly we would measure here, if both positive and negative experiences count the same. Is it the intensity of the experience (in either direction) which counts? (That is, would you rather be tortured than bored? Would you rather be tortured really painfully than enjoy a mild pleasure?) Or is it the duration of the experience? (That is, do we want to maximize the subjective time of sentient beings, regardless of what happens during that time? Would you rather live 1001 years in hell than 1000 years in heaven?)

This sounds wrong.

Of course. That's why I proposed refining it.

But I am not even sure what exactly we would measure here

I thought it was obvious. It is the integral of total experience (suitably defined) through time that counts.
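In symbols (mine, and only schematic): if E(t) is the total experience occurring at time t, suitably defined, the quantity to maximize is the integral ∫ E(t) dt over the history in question; the "refinement" question is just whether E(t) is allowed to go negative.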

At the end of a drive there would either be or not be a configuration of particles in the shape of a paper ticket memorializing your transgression of the law. And if not that, there is a configuration of particles in the heads of the law enforcement officials recalling your transgression and planning on writing you a ticket or whatever. Any universe-wide configuration of particles contains the history of all of the events preceding it, even if they are opaque to us, because the possibility space of particle configurations is (probably?) larger than the space of utility-relevant histories.

Any universe-wide configuration of particles contains the history of all of the events preceding it

So there is no way that we can arrive at the same state from different starting points? That seems ridiculous to me.

I'm talking about particles at the quantum level here. Subatomic particles are ridiculously small. The amount of empty space in the universe is incomprehensibly vaster than the volume taken up by the particles that inhabit it, so it wouldn't be surprising to me if it were impossible to arrive at the same universe-wide state from different starting points, but I don't know that that's true as a matter of fact. But if there were a perfectly symmetrical perturbation, by definition it would be unobservable to us, since we would end up in the exact same state along either pathway.