Comments

tslarm · 2d · 20

It's based on a scenario described by Derek Parfit in Reasons and Persons.

I don't have the book handy so I'm relying on a random pdf here, but I think this is an accurate quote from the original:

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert. This happens to me because I am never self-denying. It would have been better for me if I had been trustworthy, disposed to keep my promises even when doing so would be worse for me. You would then have rescued me.

(It may be objected that, even if I am never self-denying, I could decide to keep my promise, since making this decision would be better for me. If I decided to keep my promise, you would trust me, and would rescue me. This objection can be answered. I know that, after you have driven me home, it would be worse for me if I gave you the promised reward. If I know that I am never self-denying, I know that I shall not keep my promise. And, if I know this, I cannot decide to keep my promise. I cannot decide to do what I know that I shall not do. If I can decide to keep my promise, this must be because I believe that I shall not be never self-denying. We can add the assumption that I would not believe this unless it was true. It would then be true that it would be worse for me if I was, and would remain, never self-denying. It would be better for me if I was trustworthy.)

tslarm · 23d · 10

Got it, thanks! For what it's worth, doing it your way would probably have improved my experience, but impatience always won. (I didn't mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)

I think two-axis voting is a huge improvement over one-axis voting, but in this case it's hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.

tslarm · 23d · 85

If eating ice cream at home, you need to take it out of the freezer at least a few minutes before eating it

I'm curious whether this is true for most people. (I don't eat ice cream any more, but back when I occasionally did, I don't think I ever made a point of taking it out early and letting it sit. Is the point that it's initially too hard to scoop?)

tslarm · 1mo · 10

Pretty sure it's "super awesome". That's one of the common slang meanings, and it fits with the paragraphs that follow.

tslarm · 1mo · 42

Individual letters aren't semantically meaningful, whereas (as far as I can tell) the meaning of a Toki Pona multi-word phrase is always at least partially determined by the meanings of its constituent words. So knowing the basic words would allow you to have some understanding of any text, which isn't true of English letters.

Answer by tslarm · Feb 22, 2024 · 10

As a fellow incompatibilist, I've always thought of it this way:

There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.

That might sound glib, but I mean it sincerely and I think it is sound. 

It does require you to reject the notion that libertarian free will is an inherently incoherent concept, as some people argue. I've never found those arguments very convincing, and from what you've written it doesn't sound like you do either. In any case, you only need to have some doubt about their correctness, which you should on grounds of epistemic humility alone.

(Technically you only need >0 credence in the existence of free will for the argument to go through, but of course it helps psychologically if you think the chance is non-trivial. To me, the inexplicable existence of qualia is a handy reminder that the world is fundamentally mysterious, and that the most confidently reductive worldviews always turn out to be ignoring something important or defining it out of existence.)

To link this more directly to your question --

Why bother with effort and hardship if, at the end of the day, I will always do the one and only thing I was predetermined to do anyway?

-- it's a mistake to treat the effort and hardship as optional and your action at the end of the day as inevitable. If you have a choice whether to bother with the effort and hardship, it isn't futile. (At least not due to hard determinism; obviously it could be a waste of time for other reasons!)

tslarm · 2mo · 21

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for $5000. 

I don't know whether your conclusion is right or wrong, but it honestly doesn't look like you're committed to finding the truth and convincing thoughtful people of it.

tslarm · 3mo · 51

Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?

tslarm · 3mo · 10

Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?

If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some legal tactic would work but chooses not to use it is arguably not playing the game in good faith. 

I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present the game as one which the Gatekeeper was seriously motivated to win.

edit: for clarity, I'm saying this because the technique is explicitly allowed by the rules:

The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

tslarm · 3mo · 21

There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.

I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the AI gets out. Then you could raise the stake until it's a meaningful disincentive for the Gatekeeper. 

(If the AI and the Gatekeeper are too friendly with each other to care much about a wealth transfer, they could find a third party, e.g. a charity, that they don't actually think is evil but would prefer not to give money to, and make it the beneficiary.)
