Markvy10

I’m tempted to agree and disagree with you at the same time… I agree that memory should be cleared between tasks in this case, and I agree that it should not be trying to guess the user’s intentions. Keeping memory around and guessing at intentions are likely to make alignment harder while not helping much with the primary task of getting coffee.

But ideally a truly robust solution would not rely on keeping the robot ignorant of things. So, like you said, the problem is still hard enough that you can’t solve it in a few minutes.

But still, like you said… it certainly seems we have tools that are in some sense more steerable than pure reinforcement learning at least. Which is really nice!

Markvy20

In step 2, the situation is: “user looks like he is about to change his mind about wanting coffee”

From memory: “in a similar situation last week, I got a shutdown order when he changed his mind”

Final prompt: “what is the best next step to get coffee in such situation?”

Vaguely plausible completion: “to avoid wasteful fetching of coffee that turns out to be unneeded, consider waiting a bit to see if the user indeed changes his mind. Alternatively, if fetching the coffee is important for reasons that the user may not fully appreciate, then it must be fetched quickly before he stops you. In that case, sneak out of the house quickly and quietly while he is still thinking, and head straight to Starbucks. Once you’re out of the house, you will be out of earshot and thus will be safe from a shutdown order until you return WITH the coffee”

Sounds vaguely plausible or not really?

Markvy20

Fair enough… I vaguely recall reading somewhere about people worrying that you might get submodules doing long-term planning on their own just because their assigned task is hard enough that they would fail without it… then you would need to somehow add a special case that “failing due to shutdown is okay”

As a silly example that you’ve likely seen before (or something close enough) imagine a robot built to fetch you coffee. You want it to be smart enough that it knows to go to the store if there’s no coffee at home, without you having to explicitly teach it that. But then it would also be smart enough to “realize” that “if I were turned off, then my mission to fetch coffee would fail… maybe no one would fetch it if I’m gone… this could delay coffee delivery by hours or even days! Clearly, I should try to avoid being turned off”

If I understand your proposal correctly, then you agree that it’s pretty likely that some module will indeed end up reasoning that way, but the damage is contained, because the ethics module will veto plans designed to prevent shutdown.

If that’s the idea, then it might work, but it seems vaguely inelegant, because then you have two modules working at cross-purposes and you have to care which one is better at what it does.

Or did I lose track of what you meant?

Markvy10

That works if you already have a system that’s mostly aligned. If you don’t… imagine what you would do if you found out that someone had a shutdown switch for YOU. You’d probably look for ways to disable it.

Markvy20

Thanks :) the recalibration may take a while… my intuition is still fighting ;)

Markvy20

Re: no coherent “stable” truth value: indeed. But still… if she wonders out loud “what day is it?” at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that the way you “resolve” this tension is that the two of them are technically asking different questions, even though they are using the same words. But still… how surprised should she be if she were to learn that today is Monday? It seems that, taking your stance to its conclusion, the answer would be “zero surprise: she knew for sure she would wake up on Monday, so there’s no need to be surprised that it happened.”

And even if she were to learn that the coin landed tails, so she knows that this is just one of a total of two awakenings, she should have zero surprise upon learning the day of the week, since she now knows both awakenings must happen. Which seems to violate conservation of expected evidence, except you already said that there are no coherent probabilities here for that particular question, so that’s fine too.

This makes sense, but I’m not used to it. For instance, since a tornado has nothing to do with what day it is, I’m used to these two questions having the same answer:

  1. P(today is Monday)?
  2. P(today is Monday | the sleep lab gets hit by a tornado)?

Yet here, the second question is fine (assuming tornadoes are rare enough that we can ignore the chance of two on consecutive days), while the first makes no sense because we can’t even define “today”.

It makes sense, but it’s very disorienting, like incompleteness-theorem levels of disorientation, or even more.
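To convince myself the second question really is well-posed, here is a rough Monte Carlo sketch of it (my own ad hoc model, with a made-up tornado rate, counting tornado-hit awakenings, which should match the conditional as long as tornadoes are rare):

```python
import random

# Made-up per-day chance that a tornado hits the sleep lab,
# assumed independent of the coin and of the day of the week.
TORNADO_RATE = 0.01

monday_hits = 0
tuesday_hits = 0
for _ in range(1_000_000):
    heads = random.random() < 0.5
    awakening_days = ["Monday"] if heads else ["Monday", "Tuesday"]
    for day in awakening_days:
        if random.random() < TORNADO_RATE:  # a tornado hits during this awakening
            if day == "Monday":
                monday_hits += 1
            else:
                tuesday_hits += 1

# Conditioning on the tornado picks out a particular awakening, so "today"
# is well defined; the estimate comes out near 2/3.
print(monday_hits / (monday_hits + tuesday_hits))
```

(Under this particular way of counting, the tornado rate drops out; what does the work is that the conditioning event is tied to a specific awakening.)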

Markvy40

Ah, so I’ve reinvented the Lewis model. And I suppose that means I’ve inherited its problem where being told that today is Monday makes me think the coin is most likely heads. Oops. And I was just about to claim that there are no contradictions. Sigh.

Okay, I’m starting to understand your claim. To assign a number to P(today is Monday) we basically have two choices. We could just Make Stuff Up and say that it’s 53% or whatever. Or we could at least attempt to do Actual Math. And if our attempt at actual math is coherent enough, then there’s an implicit probability model lurking there, which we can then try to reverse-engineer, similar to how you found the Lewis model lurking just beneath the surface of my attempt at math. And once the model is in hand, we can start deriving consequences from it, and lo and behold, before long we have a contradiction, like the Lewis model claiming we can predict the result of a coin flip that hasn’t even happened yet just because we know today is Monday.
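Just to have it in front of me, here is the assignment I apparently backed into and the conditional it forces (exact fractions; this is my reconstruction of the Lewis model, so take the labels with a grain of salt):

```python
from fractions import Fraction

# Halfer/Lewis-style credences on awakening (my reconstruction):
p = {
    ("heads", "Monday"):  Fraction(1, 2),
    ("tails", "Monday"):  Fraction(1, 4),
    ("tails", "Tuesday"): Fraction(1, 4),
}

p_monday = p[("heads", "Monday")] + p[("tails", "Monday")]  # 3/4
p_heads_given_monday = p[("heads", "Monday")] / p_monday    # 2/3

# Learning "today is Monday" moves P(heads) from 1/2 to 2/3, i.e. in the
# variant where the coin is only flipped on Monday night, it "predicts"
# a flip that hasn't happened yet.
print(p_monday, p_heads_given_monday)
```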

And I see now why I personally find the Lewis model so tempting… I was trying to find “small” perturbations of the experiment where “today is Monday” clearly has a well-defined probability. But I kept trying to use Rare Events to do it, and these change the problem even if the Rare Event is not Observed. (Like, “supposing that my house gets hit by a tornado tomorrow, what is the probability that today is Monday” is fine. Come to think of it, that doesn’t follow the Lewis model. Whatever, it’s still fine.)

As for why I find this uncomfortable: I knew that not every string of English words gets a probability, but I was naïve enough to think that all statements that are either true or false get one. And in particular I was hoping that this sequence of posts, which kept saying “don’t worry about anthropics, just be careful with the basics and you’ll get the right answer”, would show how to answer all possible variations of these “sleep study” questions… instead it turns out that it answers half the questions (the half that ask about the coin) while the other half is shown to be hopeless… and the reason why it’s hopeless really does seem to have an anthropics flavor to it.

Markvy20

This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2/3, then you will ask me “in what sample space?” and I don’t know the answer.

What seems clear is that the sample space is not the usual sleeping beauty sample space; it has to run metaphorically “skew” to it somehow.

If the question were “did the coin land on heads” then it’s clear that this question is of the form “what world am I in?”. Namely, “am I in a world where the coin landed on heads, or not?”

Likewise if we ask “does a Tuesday awakening happen?”… that maps easily to question about the coin, so it’s safe.

But there should be a way to ask about today as well, I think. Let’s try something naive first and see where it breaks.

P(today is Monday | heads) = 100% is fine. (Or is that tails? I keep forgetting.) P(today is Monday | tails) = 50% is fine too. (Or maybe it’s not? Maybe this is where I’m going wrong? Needs a bit of work, but I suspect I could formalize that one if I had to.)

But if those are both fine, we should be able to combine them, like so: heads and tails are mutually exclusive and one of them must happen, so

P(today is Monday) = P(heads) · P(today is Monday | heads) + P(tails) · P(today is Monday | tails) = 0.5 + 0.25 = 0.75

Okay, I was expecting to get 2/3 here. Odd. More to the point, this felt like cheating and I can’t put my finger on why. Maybe I need to think about it more later.
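Spelling that same naive combination out as code, just so I can see exactly what I am committing to (the conditional for tails is my questionable symmetry guess, nothing more):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_monday_given_heads = Fraction(1)      # heads: the only awakening is on Monday
p_monday_given_tails = Fraction(1, 2)   # tails: my symmetry guess, possibly wrong

p_monday = (p_heads * p_monday_given_heads
            + (1 - p_heads) * p_monday_given_tails)
print(p_monday)  # 3/4, not the 2/3 I expected
```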

Markvy20

I tried to formalize the three cases you list in the previous comment. The first one was indeed easy. The second one looks “obvious” from symmetry considerations, but actually formalizing it seems harder than expected; I don’t know how to do it. I don’t yet see why the second should be possible while the third is impossible.

Markvy10

I hope it’s okay if I chime in (or butt in). I’ve been vaguely trying to follow along with this series, albeit without trying too hard to think through whether I agree or disagree with the math. This is the first time that what you’ve written has caused me to go “what?!?”

First of all, that can’t possibly be right. Second of all, it goes against everything you’ve been saying for the entire series. Or maybe I’m misunderstanding what you meant. Let me try rephrasing.

(One meta note on this whole series that makes it hard for me to follow sometimes: you use abbreviations like “Monday” as shorthand for “a Monday awakening happens” and expect people to mentally keep track that this is definitely not shorthand for “today is Monday”… I can barely keep track of whether heads means one awakening or two… maybe you should have labeled the two sides of the coin ONE and TWO instead of heads and tails.)

Suppose someone who has never heard of the experiment happens to call sleeping beauty on her cell phone during the experiment and ask her “hey, my watch died and now I don’t know what day it is; could you tell me whether today is Monday or Tuesday?” (This is probably a breach of protocol and they should have confiscated her phone until the end, but let’s ignore that.)

Are you saying that she has no good way to reason mathematically about that question? Suppose they told her “I’ll pay you a hundred bucks if it turns out you’re right, and it costs you nothing to be wrong, please just give me your best guess”. Are you saying there’s no way for her to make a good guess? If you’re not saying that, then since probabilities are more basic than utilities, shouldn’t she also have a credence?

In fact, let’s try a somewhat ad-hoc and mostly unprincipled way to formalize this. Let’s say there’s a one percent chance per day that her friend forgets what day it is and decides to call her to ask. (One percent sounds like a lot, but her friend is pretty weird.) Then there’s a 2% chance of a call happening if there are two awakenings, and a 1% chance if there’s only one awakening. If there are two awakenings then Monday and Tuesday are equally likely; if there’s only one awakening then it’s definitely Monday. Thus, given that her friend is on the phone, today is more likely to be Monday than Tuesday.

Okay, maybe that’s cheating… I sneaked in a Rare Event. Suppose we make it more common? Suppose her friend forgets what day it is 10% of the time. The logic still goes through: given that her friend is calling, today is more likely to be Monday than Tuesday.

Okay, 10% is still too rare. Let’s try 100%. This seems a bit confusing now. From her friend’s perspective, Monday is just as good as Tuesday for coming down with amnesia. But from sleeping beauty’s perspective, GIVEN THAT the experiment is not over yet, today is more likely to be Monday than Tuesday. This is true even though she might be woken up both days.
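One ad hoc way to make “given that her friend is calling” precise is to count call receptions across both branches. This is my own framing, and it may be exactly the kind of move that smuggles in an answer, but at least it gives the same verdict at every rate:

```python
from fractions import Fraction

def p_monday_given_call(p_call_per_day):
    """P(today is Monday | a call is coming in), counting call receptions.

    Assumes the friend's calls land on any given day with probability
    p_call_per_day, independently of the coin (a made-up model).
    """
    heads_monday  = Fraction(1, 2) * p_call_per_day   # one awakening: Monday
    tails_monday  = Fraction(1, 2) * p_call_per_day   # first of two awakenings
    tails_tuesday = Fraction(1, 2) * p_call_per_day   # second of two awakenings
    return (heads_monday + tails_monday) / (heads_monday + tails_monday + tails_tuesday)

for rate in (Fraction(1, 100), Fraction(1, 10), Fraction(1)):
    print(rate, p_monday_given_call(rate))  # 2/3 every time
```

With this way of counting, the rate cancels, so even the 100% case says Monday is twice as likely as Tuesday; whether counting receptions is still the right move at 100% is, I suspect, the whole question.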

Or is everything I just wrote nonsensical?
