Comments

Sorry, could not reply due to rate limit.

In reply to your first point: I agree that in a deterministic world with perfect predictors the whole question is moot. I think we agree there.

Also, yes, assuming "you have a choice between two actions", what you will do has not yet been decided by you. That is different from "Hence the information what I will do cannot have been available to the predictor." If the latter statement were correct, then how could the predictor have "often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation"? Presumably some information about your decision-making process is available to the predictor in this particular situation, or else the problem setup would not be possible, would it?

If you think that you are a very special case, and other people like you are not really like you, then yes, it makes sense to decide that you can get lucky and outsmart the predictor, precisely because you are special. If you think that you are not special, and other people in your situation reasoned the same way, two-boxed, and lost, then maybe your logic is not airtight and your conclusion to two-box is flawed in some way that you cannot quite put your finger on, but the experimental evidence tells you that it is flawed.

I cannot see a third case here, though maybe I am missing something. Either you are like the others, and one-boxing gets you more money than two-boxing, or you are special and not subject to the setup at all, in which case two-boxing is a reasonable approach.

I should decide to try two-boxing. Why? Because that decision is the dominant strategy: if it turns out that indeed I can decide my action now, then we're in a world where the predictor was not perfect but merely lucky, and in that world two-boxing is dominant.

Right, that is, I guess, the third alternative: you are like the other people who lost when two-boxing, but they were merely unlucky; the predictor did not have any predictive powers after all. Which is a possibility: maybe you were fooled by a clever con or dumb luck. Maybe you were also fooled by a clever con or dumb luck when the predictor "has never, so far as you know, made an incorrect prediction about your choices". Maybe this all led to this moment, where you finally get to make a decision, and the right decision is to two-box rather than one-box and leave money on the table.

I guess in a world where your choice is not predetermined and you are certain that the predictor is fooling you or is just lucky, you can rely on using the dominant strategy, which is to two-box. 

So, the question is: what kind of world do you think you live in, given Nozick's setup? The setup does not say explicitly, so it is up to you to evaluate the probabilities (which also applies in a deterministic world, only there your evaluation would itself be predetermined).
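To make "evaluate the probabilities" concrete, here is a minimal expected-value sketch — my own illustration, not part of Nozick's text or the original comment — assuming the usual payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box, with a single parameter p for the chance that the predictor calls your choice correctly:

```python
# Sketch: expected value of one-boxing vs. two-boxing as a function of the
# predictor's accuracy p. The payoffs are the standard Nozick numbers and
# are assumptions for illustration only.

SMALL = 1_000        # transparent box, always yours if you take it
BIG = 1_000_000      # opaque box, filled only if one-boxing was predicted

def one_box_ev(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the big box.
    return p * BIG

def two_box_ev(p: float) -> float:
    # With probability p the predictor foresaw two-boxing and left the big box empty.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.5005, 0.9, 0.99):
    print(f"p={p}: one-box {one_box_ev(p):>12,.0f}   two-box {two_box_ev(p):>12,.0f}")
```

Under these assumed numbers the break-even accuracy is about 0.5005: the predictor only has to be a hair better than a coin flip for one-boxing to come out ahead, which is why the "you are like the others who lost" branch dominates unless you believe the predictor has essentially no edge at all.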

What would a winning agent do? Look at other people like itself who won, and take one box, or look at other people ostensibly like itself who nevertheless lost, and still two-box?

I know what kind of an agent I would want to be. I do not know what kind of an agent you are, but my bet is that if you are the two-boxing kind, then you will lose when push comes to shove, like all the other two-boxers before you, as far as we both know.

There is no possible world with a perfect predictor where a two-boxer wins without breaking the condition of it being perfect.

shminux · 2mo

People constantly underestimate how hackable their brains are. Have you ever changed your mind and your life based on what you read or watched? It happens all the time and feels like your own volition, yet it comes from external stimuli.

shminux · 2mo

Note that it does not matter in the slightest whether Claude is conscious. Once/if it is smart enough it will be able to convince dumber intelligences, like humans, that it is indeed conscious. A subset of this scenario is a nightmarish one where humans are brainwashed by their mindless but articulate creations and serve them, kind of like the ancients served the rock idols they created. Enslaved by an LLM, what an irony.

shminux · 3mo

Not into ancestral simulations and such, but figured I'd comment on this:

I think "love" means "To care about someone such that their life story is part of your life story."

I can understand how it makes sense, but that is not the central definition for me. What comes to mind when I think of this feeling is a willingness to sacrifice your own needs and change your own priorities in order to make the other person happier, if only a bit and if only temporarily. This is definitely not the feeling I would associate with villains, but I can see how other people might.

shminux · 4mo

Thank you for checking! None of the permutations seem to work with LW, but all my other feeds seem fine. Probably some weird incompatibility with protopage.

shminux · 4mo

Neither worked... Something with the app, I assume.

shminux · 4mo

Could be the app I use. It's protopage.com (which is the best clone of the defunct iGoogle I could find).

shminux · 4mo

Thankfully, human traits are rather dispersive. 

shminux · 4mo

No, I assumed I would not be the only person having this issue, and if I were the only one, it would not be worth the team's time to fix it. Also, well, it's not as important anymore; it's mostly a stream of dubious AI takes.
