All of Epirito's Comments + Replies

You seem to contradict yourself when you choose to privilege the point of view of people who have already acquired the habit of using the teleportation machine over the point of view of people who don't have this habit and have doubts about whether it will really be "them" experiencing coming out of the other side. There are two components to the appearance of continuity: the future component, meaning the expectation of experiencing stuff in the future, and the past component, namely the memory of having experienced stuff in the past. Now, if there is no under... (read more)

That's no more surprising than the fact that sixteenth-century merchants also didn't need to wait for economics to be invented in order to trade.

"it's referencing the thing it's trying to identify" I don't understand why you think that fails. If I point at a rock, does the direction of my finger not privilege the rock I'm pointing at above all others? Even by looking at merely possible worlds from a disembodied perspective, you can still see a man pointing to a rock and know which rock he's talking about. My understanding is that your 1p perspective concerns sense data, but I'm not talking about the appearance of a rock when I point at it. I'm talking about the rock itself. Even when I sense no rock I can still refer to a possible rock by saying "if there is a rock in front of me, I want you to pick it up."

I disagree with the whole distinction. "My sensor" is indexical. By saying it from my own mouth, I effectively point at myself: "I'm talking about this guy here." Also, your possible worlds are not connected to each other. "I" "could" stand up right now because the version of me that stands up would share a common past with the other versions, namely my present and my past; but you haven't provided a common past between your possible worlds, so there is no question of the robots from different worlds sharing an identity. As for picking out one robot from the... (read more)

4 Adele Lopez
I would say that English uses indexicals to signify and say 1P sentences (probably with several exceptions, because English). Pointing to yourself doesn't help specify your location from the 0P point of view because it's referencing the thing it's trying to identify. You can just use yourself as the reference point, but that's exactly what the 1P perspective lets you do.

"Do you really want to live in a world without Coca Cola?"
I don't really care about sports, but I imagine better athletes must be more entertaining to watch for people who do care. Even if you were to work on an important problem, you wouldn't do it alone. You would probably be one more person among many contributing to it. So you can also look at each celebrity as one more person working on the problem of creating entertainment. Imagine if all music were wiped out of the world by magic. Wouldn't that suck?

1 Andreas Chrysopoulos
I like this perspective. I guess I was seeing "becoming a celebrity" as a choice of some sort or a separate thing. But it does seem that the problem is entertainment, and there is a big spectrum of people trying to solve it with different means. Looking at it like that, trying to solve entertainment is definitely not a bad thing. Just maybe less effective at saving/improving lives than some other career paths. Would be interesting to somehow compare the impact of a doctor/philanthropist to an entertainer. Either way, thanks for sharing!

Sure. But would you still feel the need to replace it if you lived in a world where it wasn't taught in school in the first place? Would you yearn for something like it?

2 ChristianKl
We are here at LessWrong, where you're expected to argue for your position. It seems to me like your post just assumes that literature is useless and hopes for applause instead of doing anything like an argument.

But you do live in a universe that is partly random! The universe of perceptions of a non-omniscient being.

By independent I don't mean bearing no relationship with each other whatsoever, but simply that pairs of instants that are closer to each other are not more correlated than those that are more distant. "But what does closer mean?" For you to entertain the hypothesis that life is an iid stream of sense data, you have to take the basic sense that "things are perceived by you one after another" at face value. "But a fundamental part of our experience of time is the higher correlation of closer instants. If this turned out to be an illusion, then shouldn't we ... (read more)
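To make "independent" concrete, here is a minimal sketch of the contrast I have in mind (the particular distributions and the 0.95 coefficient are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# An "iid stream": every instant is drawn independently of every other one.
iid = rng.normal(size=n)

# A stream with the usual temporal structure: each instant is strongly tied
# to the instant just before it (an AR(1) process, purely for illustration).
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = 0.95 * ar[t - 1] + rng.normal()

def lag_corr(x, k):
    """Correlation between instants k steps apart."""
    return np.corrcoef(x[:-k], x[k:])[0, 1]

for k in (1, 50):
    print(f"lag {k:2d}: iid ~ {lag_corr(iid, k):+.3f}, correlated ~ {lag_corr(ar, k):+.3f}")

# In the iid stream, nearby instants are no more correlated than distant ones;
# in the second stream, correlation is high at lag 1 and decays with distance.
```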

I mean, yeah, it depends, but I guess I worded my question poorly. You might notice I start by talking about the rationality of suicide. Likewise, I'm not really interested in what the AI will actually do, but in what it should rationally do given the reward structure of a simple RL environment like cartpole. And now you might say, "well, it's ambiguous how to generalize from the rewards of the simple game to the expected reward of actually being shut down in the real world," and that's my point. This is what I find so confusing. Because th... (read more)

1 JBlack
I just realized another possible confusion: RL as a training method determines what the future behaviour is for the system under training, not a source for what it rationally ought to do given that system's model of the world (if any). Any rationality that emerges from RL training will be merely an instrumental epiphenomenon of the system being trained. A simple cartpole environment will not train it to be rational, since a vastly simpler mapping of inputs to outputs achieves the RL goal just as well or better. A pre-trained rational AGI put into a simple RL cartpole environment may well lose its rationality rather than effectively training it to use rationality to achieve the goal.
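As an illustration of how little the cartpole objective demands, a hand-coded threshold on two of the four observation values already satisfies it. A minimal sketch, assuming the gymnasium package (and not a claim about what any trained system does internally):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    # obs = [cart position, cart velocity, pole angle, pole angular velocity].
    # Push toward the side the pole is falling to: no world model, no beliefs,
    # no notion of shutdown, just a threshold on two numbers.
    action = 1 if obs[2] + obs[3] > 0 else 0
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"episode return: {total_reward}")  # typically at or near the 500-step cap
env.close()
```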
1 JBlack
Rationality in general doesn't mandate any particular utility function, correct. However, it does have various consequences for instrumental goals and for coherence between actions and utilities.

I don't think it would be particularly rational for the AGI to conclude that if it is shut down then it goes to pacman heaven or hell. It seems more rational to expect that it will either be started up again, or that it won't, and either way it won't experience anything while turned off. I am assuming that the AGI actually has evidence that it is an AGI and moderately accurate models of the external world.

I also wouldn't phrase it in terms of "it finds that he is free to believe anything". It seems quite likely that it will have some prior beliefs, whether weak or strong, via side effects of the RL process if nothing else. A rational AGI will then be able to update those based on evidence and the expected consequences of its models.

Note that its beliefs don't have to correspond to RL update strengths! It is quite possible that a pacman-playing AGI could strongly believe that it should run into ghosts but lacks some mental attribute that would allow it to do so (maybe analogous to human "courage" or "strength of will", but it might have very different properties in its self-model and in practice). It all depends upon what path through parameter space the AGI followed to get where it is.

"If the survival of the AGI is part of the utility function"

If. By default, it isn't: https://www.lesswrong.com/posts/Z9K3enK5qPoteNBFz/confused-thoughts-on-ai-afterlife-seriously

"What if we start designing very powerful boxes?" A very powerful box would be useless. Either you leave enough of an opening for a human to be taught valuable information that only the AI knows, or you don't, in which case it's useless; but if the AI can teach the human something useful, it can also persuade him to do something bad.

"human pain aversion to the point of preferring death is not rational" A straightforward denial of the orthogonality thesis? "Your question is tangled up between 'rational' and 'want/feel's framings" Rationality is a tool to get what you want.

I see the Nash equilibrium as rationally justified in a limit-like sort of way: it's what you get as you get arbitrarily close to perfect rationality. Having a good enough model of another's preferences is something you can actually achieve, or almost achieve, but you can't have a good enough grasp of your opponent's source code to acausally coerce him into cooperating with you unless you really have God-like knowledge (or maybe if you are in a very particular situation, such as something involving AIs and literal source code). In proportion as... (read more)

Wouldn't they just coordinate on diagnosing all but the most obviously healthy patient as ill?

1 Ryan Kidd
Treating health as a continuous rather than binary variable does complicate this problem and, I think, breaks my solution. If the doctors agree on an ordinal ranking of all patients from "most ill" to "most hale", they can coordinate their diagnoses much more easily, as they can search over a smaller space of ensemble diagnoses. If there are lots of "degenerate cases" (i.e. people with the same degree of illness) this might be harder. Requiring a certain minimum Hamming distance (based on some prior) from the all-ill Schelling point doesn't help at all in the case of a nondegenerate ordinal ranking.

Thanks. I now see my mistake. I shouldn't have subtracted the expected utility of the current state from the expected utility of the next.

By previous state, I meant current. I misspoke.

Yes, the last table is for the (1,0) state.

2 Measure
So the utility for S+B is 0 and the utility for R+R is 0.5. The equilibrium is where both players reload with probability = 2/3. The utility of the (1,0) state is +2/3.
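Spelling out where the 2/3 comes from (I'm reconstructing the payoff table from these numbers, so taking both a successful shot and the reload-versus-block outcome as worth 1 to the armed player is my assumption rather than something quoted from the table):

$$u(S,B)=0,\qquad u(S,R)=1,\qquad u(R,B)=1,\qquad u(R,R)=\tfrac{1}{2}$$

If the unarmed player reloads with probability $q$, the armed player is indifferent between shooting and reloading when $q \cdot 1 = (1-q)\cdot 1 + q \cdot \tfrac{1}{2}$, which gives $q = \tfrac{2}{3}$. Symmetrically, the armed player shoots with probability $\tfrac{1}{3}$ and reloads with probability $\tfrac{2}{3}$, and the value of the state is $q = \tfrac{2}{3}$, matching the +2/3 above.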

Shooting while the opponent blocks should yield u(0,0), right?

Well, I could make a table for the state where no one has any bullets, but it would just have one cell: both players reload and they go back to having one bullet each. In fact, the game actually starts with no one having any bullets, but I omitted this step.

Also, in both suggestions, you are telling me that the action that leads to state x should yield the expected utility of state x, which is correct, but my function u(x,y) yields the expected utility of the resulting state assuming that you'... (read more)

[This comment is no longer endorsed by its author]
1 Measure
This made me think the last table was just for the (1,0) state. Is this not the case? I'm not sure why the previous state would matter.

"you almost certainly won't exist in the next instant anyway"

Maybe I won't exist as Epirito, the guy who is writing this right now, who was born in Lisbon and so on. Or rather, I should say, maybe I won't exist as the guy who remembers having been born in Lisbon, since Lisbon, like any other concept that refers to the external world, is illegitimate in BLEAK.

But if the external world is illegitimate, why do you say that "I probably won't exist in the next instant anyway"? When I say that each instant is independent (BLEAK), do you imagine that each instant all the mat... (read more)

1 JBlack
I never said that the external world is illegitimate. It's just that in the universe as described, any particular features of it are completely transient.

Yes, that is exactly what I imagine, especially given the clarifying examples in the original post like "In fact, both companies might not even exist in the next instant". Was this intended to mean that all the people in the companies exist, and the corporate offices with their logos and so on, but the people just experience different things and no longer believe that they're part of some company?

Also yes, if such a universe covers enough of probability space then you (or someone very like you) may exist again in the future with memories of having experienced something approximating your life to date. In fact, so will many possible and plenty of impossible variations and continuations of your life to date. The impossible and nonsensical ones (by our standards) will vastly outnumber the possible ones that make sense.