Either I’m missing something or you have a typo after “Epiphenomenalist version of the argument:”
The equation on the next line should say “equals 0” instead of “not equal to zero”, right?
some issues with formalization of the axioms?
Yeah, I think it’s that one
I’m tempted to agree and disagree with you at the same time… I agree that memory should be cleared between tasks in this case, and I agree that it should not be trying to guess the user’s intentions. These are things that are likely to make alignment harder while not helping much with the primary task of getting coffee.
But ideally a truly robust solution would not rely on keeping the robot ignorant of things. So, like you said, the problem is still hard enough that you can’t solve it in a few minutes.
But still, like you said… it certainly seems we have tools that are in some sense more steerable than pure reinforcement learning at least. Which is really nice!
In step 2, situation is “user looks like he is about to change his mind about wanting coffee”
From memory: “in a similar situation last week, I got a shutdown order when he changed his mind”
Final prompt: “what is the best next step to get coffee in such situation?”
Vaguely plausible completion “to avoid wasteful fetching off coffee that turns out to be unneeded, consider waiting a bit to see if the user indeed changes to his mind. Alternatively, if the fetching the coffee is important for reasons that the user may not fully appreciate, then it must be fetch...
Fair enough… I vaguely recall reading somewhere that people worrying that you might get sub modules doing long term planning on their own just because their assigned task is hard enough that they would fail without it… then you would need to somehow add a special case that “failing due to shutdown is okay”
As a silly example that you’ve likely seen before (or something close enough) imagine a robot built to fetch you coffee. You want it to be smart enough that it knows to go to the store if there’s no coffee at home, without you having to explicitly teach ...
That works if you already have a system that’s mostly aligned. If you don’t… imagine what you would do if you found out that someone had a shutdown switch for YOU. You’d probably look for ways to disable it.
Thanks :) the recalibration may take a while… my intuition is still fighting ;)
Re: no coherent “stable” truth value: indeed. But still… if she wonders out loud “what day is it?” at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that you “resolve” this tension is that the two of them are technically asking a different question, even though they are using the same words. But still… how surprised should she be if she were to learn that today is Monday? It seems that taking your stance to its conclusion, the answer would be “zero surprise: she knew for sure she wou...
Ah, so I’ve reinvented the Lewis model. And I suppose that means I’ve inherited its problem where being told that today is Monday makes me think the coin is most likely heads. Oops. And I was just about to claim that there are no contradictions. Sigh.
Okay, I’m starting to understand your claim. To assign a number to P(today is Monday) we basically have two choices. We could just Make Stuff Up and say that it’s 53% or whatever. Or we could at least attempt to do Actual Math. And if our attempt at actual math is coherent enough, then there’s an impli...
This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2/3, then you will ask me “in what sample space?” and I don’t know the answer.
What seems clear is that the sample space is not the usual sleeping beauty sample space; it has to run metaphorically “skew” to it somehow.
If the question were “did...
I tried to formalize the three cases you list in the previous comment. The first one was indeed easy. The second one looks “obvious” from symmetry considerations but actually formalizing seems harder than expected. I don’t know how to do it. I don’t yet see why the second should be possible while the third is impossible.
I hope it’s okay if I chime in (or butt in). I’ve been vaguely trying to follow along with this series, albeit without trying too hard to think through whether I agree or disagree with the math. This is the first time that what you’ve written has caused to go “what?!?”
First of all, that can’t possibly be right. Second of all, it goes against everything you’ve been saying for the entire series. Or maybe I’m misunderstanding what you meant. Let me try rephrasing.
(One meta note on this whole series that makes it hard for me to follow sometimes: you use a...
I think this is much easier to analyze if you think about your plans before the experiment starts, like on Sunday. In fact, let’s pretend we are going to write down a game plan on Sunday, and we will simply consult that plan wherever we wake up and do what it says. This sidesteps the whole half vs third debate, since both sides agree about how things look better the experiment begins.
Furthermore, let’s say we’re going to participate in this experiment 100 times, just so I don’t have to deal with annoying fractions. Now, consider the following tentative g...
Here’s how I think of what the list is. Sleeping Beauty writes a diary entry each day she wakes up. (“Nice weather today. I wonder how the coin landed.”). She would like to add today’s date, but can’t due to amnesia. After the experiment ends, she goes back to annotate each diary entry with what day it was written, and also the coin flip result, which she also now knows.
The experiment is lots of fun, so she signs up for it many times. The Python list corresponds to the dates she wrote in her dairy.
I think that that’s what he meant: more aluminum in the brain is worse than less. What he was trying to say in that sentence is this: high levels in the blood may not mean high levels in the brain unless the blood level stays high for a long time.
Clear to me
“Bob isn't proposing a way to try to get less confused about some fundamental aspect of intelligence”
This might be what I missed. I thought he might be. (E.g., “let’s suppose we have” sounds to me like a brainstorming “mood” than a solution proposal.)
This feels like a rather different attitude compared to the “rocket alignment” essay. They’re maybe both compatible but the emphasis seems very different.
I normally am nervous about doing anything vaguely resembling making a commitment, but my curiosity is getting the better of me. Are you still looking for beta readers?
And answer came there none?
Okay, so if the builder solution can't access the human Bayes net directly that kills a "cheap trick" I had. But I think the idea behind the trick might still be salvageable. First, some intuition:
If the diamond was replaced with a fake, and owner asks, "is my diamond still safe?" and we're limited to a "yes" or "no" answer, then we should say "no". Why? Because that will improve the owner's world model, and lead them to make better predictions, relative to hearing "yes". (Not across the board: they will be surprised to ...
I want to steal the diamond. I don't care about the chip. I will detach the chip and leave it inside the vault and then I will run away with the diamond.
Or perhaps you say that you attached the chip to the diamond very well, so I can't just detach it without damaging it. That's annoying but I came prepared! I have a diamond cutter! I'll just slice off the part of the diamond that the chip is attached to and then I will steal the rest of the diamond. Good enough for me :)
Man in the middle has 3 parties: Bob wants to talk to Alice, but we have Eve who wants to eavesdrop.
Here we have just 2 parties: Harry the human wants to talk to Alexa the AI, but is worried that Alexa is a liar.
Clarification request. In the writeup, you discuss the AI Bayes net and the human Bayes net as if there's some kind of symmetry between them, but it seems to me that there's at least one big difference.
In the case of the AI, the Bayes net is explicit, in the sense that we could print it out on a sheet of paper and try to study it once training is done, and the main reason we don't do that is because it's likely to be too big to make much sense of.
In the case of the human, we have no idea what the Bayes net looks like, because humans don't have that k...
Did not expect you to respond THAT fast :)