I recall seeing, in one of the AI-boxing discussion threads, a comment to the effect that the first step for EY to get out was to convince the other party to even play the game at all.

It has since occurred to me that this applies to a lot of my interactions. Many people who know me IRL, and who know of a belief of mine that they disagree with and do not want to be convinced of, often adopt the strategy of not talking with me about it at all. For me to convince one of these people of something, first I have to convince them to talk about it at all.

(Note, I don't think this is because I'm an unpleasant person to converse with. Excuses given are along the lines of "I never win an argument with you" and "you've studied it a lot more than I have, it's an unfair discussion". I don't think I'm claiming anything too outlandish here; average humans are really bad at putting rational arguments together.)

I suppose the general form is: in order to convince someone of a sufficiently alien (to them) P, first you must convince them to seriously think about P. This rule may need to be applied recursively (e.g., "seriously think about P" may require one or more LW rationality techniques).

As a practical example, my parents are very religious. I'd like to convince them to sign up for cryonics. I haven't (yet) come up with an approach that I expect to have a non-negligible chance of success. But the realization that the first goalpost along the way is to get them to seriously engage in the conversation at all simplifies the search space. (Deconversion and training in LW rationality would, of course, have the best chance of success--but it would still have a high chance of failing, and I judge that a failure would probably have a large negative impact on my relationship with my parents in their remaining years. That's why I'd like to convince them of just this one thing.)

I realize that this is a fairly obvious point (an application of this--raising the sanity waterline--is the point behind this entire site!), but I haven't seen this explicitly noted as being a general pattern, and now that I note it, I see it everywhere--hence this post.


Note, I don't think this is because I'm an unpleasant person to converse with.

Do you have any objectively verifiable evidence of this? I personally used to think the same way you did, until I was convinced by outside observers that I do indeed become a very unpleasant person when I'm arguing about something. Realizing something like this about oneself requires a level of introspection that most people -- myself included -- simply do not possess.

Not specifically about a relevant conversation. But I don't think I've misinterpreted what people have said.

The point is not what people have said, but what people have not said.

I have had multiple people, ones with whom I have disagreements, directly state that they enjoy conversing with me. Other people I've spoken with don't like the entire class of discussion; I think I am more pleasant than a random person to have such discussions with, but I freely admit I don't have strong evidence that that is the case.

As a practical example, my parents are very religious. I'd like to convince them to sign up for cryonics. I haven't (yet) come up with an approach that I expect to have a non-negligible chance of success.

If you don't mind Dark Arts, I have a line of logic that has met with limited success with Christian family members in the past. I wouldn't say it convinced them so much as it helped change their opinions and emotions about cryonics, which is the first step.

In the early church period (1st-5th centuries AD), one of the major rifts between Christians and Romans was how they treated the dead. Romans would burn the bodies for sanitary reasons and to prevent the spread of disease; Christians would preserve the bodies and show them deference as the vessel of a departed soul. It was a Christian's duty to keep the dead safe so that their bodies could be resurrected by Jesus when judgment day came; the word "cemetery" was coined by Christians and comes from a word meaning "sleeping place," reflecting the idea that the dead are just sleeping until God returns to this world.

Why were Christians so concerned about proper disposal of the body? Here are four reasons: (1) The body of every human was created by God, bore his image, and deserved to be treated with respect because of this. (2) The centrality of the Incarnation. When the Word became flesh, God uniquely hallowed human life and bodily existence forever. (3) The Holy Spirit indwelt the bodies of believers, making them vessels of honor. (4) As Jesus himself was buried and raised bodily from the dead, so Christians believed that their burial was a witness to the resurrection yet to come.

Cremation is bad because it instantly destroys the body. There are reasons early Christians would die to protect their interred loved ones from the pyre. Burial is better. Burial keeps the body more intact, but after a few decades there still isn't much left. Cryonics is just a better form of burial, keeping your body, the vessel for your soul, intact for centuries instead of mere decades.

Cryonics isn't trying to escape death or heaven. It's just the best darn technological form of burial that exists. All that Alcor is doing is keeping your body safe for the day when Jesus comes. And hey, if they actually keep their promise and resurrect you into a post-singularity future, then you'll get to see Jesus in person when the rapture happens.

Cryonics isn't trying to escape death or heaven. It's just the best darn technological form of burial that exists.

I've never been a Christian, so I could be wrong about this, but many Christians I've met would disagree with you. They believe that if they die -- permanently, without hope of revival -- while being "right with God" (the precise meaning of which varies by sect), then they get to go to Heaven. Going to Heaven has infinite positive utility.

Desecration issues aside, if what the cryonics companies are selling is the real deal, then signing up for cryonics involves being bound to your physical body for a long time, potentially forever. Sure, you might be awakened at some distant point in the future, and there is some positive utility to living more days on this Earth, but this utility is finite.

Thus, the Christian has a choice between going to Heaven ASAP, and attaining an infinite positive utility; or signing up for cryonics and either attaining a finite positive utility (if it works), or keeping his utility unchanged for a long time (if it doesn't). Therefore, it would be irrational for the Christian to sign up for cryonics.
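To make that comparison concrete, here is a minimal sketch of the expected-utility reasoning above in Python. The utility values, the revival probability, and the variable names are all illustrative assumptions I've made up for the example, not figures from the discussion:

```python
# A toy expected-utility comparison for the argument above.
# All numbers are illustrative assumptions, not claims about real probabilities.

U_HEAVEN = float("inf")     # assumed: going to Heaven has infinite positive utility
U_EXTRA_LIFE = 1_000_000    # assumed: revival yields some large but finite utility
p_revival = 0.1             # assumed: probability that cryonics actually works

eu_skip_cryonics = U_HEAVEN                                  # die "right with God" ASAP
eu_sign_up = p_revival * U_EXTRA_LIFE + (1 - p_revival) * 0  # finite either way

print(eu_skip_cryonics > eu_sign_up)  # True: under these premises, skipping cryonics dominates
```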

They believe that if they die -- permanently, without hope of revival -- while being "right with God" (the precise meaning of which varies by sect), then they get to go to Heaven.

I don't see where the "permanently" part comes from. It may be the layman's interpretation, but I don't think it's Biblically motivated.

Unfortunately, Christian theology has since advanced to the point where the physical body is not that important. I don't think my parents feel enough connection to historical Christianity for me to pull this one off (they expect new bodies after the resurrection, for example). However -- is there a good popular-level source for this? I'd still be interested. Maybe I could use something along these lines.

It really depends on which sect of Christianity they are members of and their own personal views of the matter. Mormonism, for example, does not officially take a stance but strongly encourages burial vs cremation whenever possible. Catholicism "earnestly recommends" burial, and actively forbade cremation until the sixties; also, even though they tolerated it, they would not perform any ceremonies for the cremated deceased until the eighties. According to my understanding, Eastern Orthodoxy still doesn't allow cremation, but my knowledge of that branch is a bit more sketchy. As an aside, I believe cremation is banned by Judaism and Islam (though I assume this varies by branch, and my knowledge is even less comprehensive for these cases).

In my case I already knew they were against cremation, so this was just tailoring the message of cryonics to what they'd accept. It shifted them from thinking cryonics was 'unnatural' to thinking it was 'nice but too expensive'. Hopefully one day they'll shift even more to 'hey I can afford that'.

What you should tell your parents:

The extra years of life that cryonics might give you are nothing compared to the infinity you will eventually spend in heaven. If you believe that God wants you to spend time in the physical universe before joining him, might he not approve of you using science and reason to extend your life, so you can better serve him in our material world?

Thank you, great example: this is an argument that might have a chance in hell of working after they understand cryonics and give it a reasonable chance of working (and assuming I don't mind using the dark arts to convince them). If they're debating between cryonics and not-cryonics, this might tip the balance. But if they are actually debating between the two, most of the necessary work has already been done.

For example, I expect to have to navigate through a bunch of stuff involving souls first. Christians of my parents' variety tend to believe that when they die, their soul will instantly go to heaven. This causes a bunch of confusion: If your soul is in heaven, and your body is reanimated, what happens? Do you get sucked back out of heaven? Obviously the whole line of thinking is completely confused, but it will cause them to reject the whole notion without even beginning to seriously consider it.

Christians of my parents' variety tend to believe that when they die, their soul will instantly go to heaven. This causes a bunch of confusion: If your soul is in heaven, and your body is reanimated, what happens? Do you get sucked back out of heaven?

I had similar problems with a relative. I resolved(?) them by saying that while what actually happens may be a complete mystery to us right now, it is obvious that the omnipotent sovereign of reality could not have his will thwarted by my puny human freezing technologies. If I am revived it will be his will, and if it is his will that I not be revived then I won't be, and either way he'll figure it out and make sure everything works out ok.

Aren't there a number of movies where god decides that someone's work on Earth isn't done yet, so he gives them another lease on life when they were at death's door? Or in some cases literally sends them back to Earth after having a brief chat in heaven? How do your parents feel about those movies? Could that sort of scenario be called up and used for support?

This causes a bunch of confusion: If your soul is in heaven, and your body is reanimated, what happens? Do you get sucked back out of heaven?

You could bring up the example of induced hypothermia, and then suggest that in that case, the person isn't actually dead, but just in a state of suspended animation. (They have to be - after all, the opposite would imply that their soul would get sucked back from heaven, and that's absurd, right?) If you can establish that, you can go on to suggest that being cryogenically frozen is a similar state as well.

If you can do this in a way that doesn't suggest you're starting a debate, but rather are curious about their beliefs, that would help. "Say, I ran across this article about something they call induced hypothermia, and I'm a little unsure about whether I should think the patient is alive or dead while he undergoes the treatment. I mean, if he comes back afterwards, he can't really be dead, but the article says that the patient is indistinguishable from someone who is. What do you think?" If they reach the conclusion that the patient is actually alive, agree with that or possibly challenge the position a bit (to make them think of more justifications for it and thus become more sure of the position), then let the issue rest. Only bring up the cryonics connection in a later discussion.

Of course, you have to be able to do this in a way that doesn't seem too uncharacteristic of you - if you've never asked for their thoughts in such a manner before, doing it now is probably a bit suspicious.

You could bring up the example of induced hypothermia, and then suggest that in that case, the person isn't actually dead, but just in a state of suspended animation.

That's a great point, of course. I have to be careful not to accidentally show that souls are fictional; that would sidetrack or shut down the conversation...

doing it now is probably a bit suspicious.

Yes, it would be.

Wouldn't God know whether a frozen person is going to be revived? No revival, soul goes wherever. Revival, soul stays in body, possibly doing soul stuff that won't be remembered, possibly just quiescent.

I think that's a little too detailed of a model. Perhaps I can take the tack that you aren't really dead until your brain is destroyed. Until that happens, you're just sleeping.

What do they believe happened in past cases where people were briefly brain dead and then recovered? Look up the details of a specific case and ask them about that specific case without revealing it's related, just making it seem like you're curious.

Quite a few Christian denominations don't think that souls go to heaven immediately after death. Seventh-Day-Adventists, for example, believe that you're basically dead until Judgment Day, when you will be resurrected in a version of your previous material body. You might want to look into the biblical textual support that SDAs and similar denominations use to justify these beliefs.

I don't think that would actually help. I recall that, back when I was a Christian, I read an overview of various eschatologies--and decided that none of the views had any appreciable chance of being correct. IMO, people take the most attractive eschatology they've heard and read it into the text. Arguing from the text can't change their minds, because the text is not actually the source of their belief (even though they strongly believe that it is). Changing my parents' minds about a point of doctrine seems (to me) to be on the same order of difficulty as the original problem.

Islam has similar doctrine, if memory serves.

I don't think looking up textual support would do you much good, though. The entire field of Christian eschatology is incredibly confused and basically rests on the precise interpretation of maybe a half-dozen passages in the Bible; outside of a few denominations like the SDA, I don't get the impression that folk Christianity takes it very seriously.

When trying to convince someone of something that is weird, we use smaller inference jumps in order to make our argument more watertight. This is perceived as condescension. You aren't going to convince anyone of something weird in a single conversation anyway. Use normal inference jumps, and if the topic comes up multiple times you'll eventually have the opportunity to plug any gaps. Also keep in mind that normal people don't require nearly as high a standard of proof in the first place.

Excuses given are along the lines of "I never win an argument with you" and "you've studied it a lot more than I have, it's an unfair discussion".

Suppose there's some issue about which I'm not an expert, but I trust experts in the field who hold position A. If I come across an expert in the field who holds position B, I might decline to talk to them about this issue because I trust the A expert's ability to make a decision on the issue more than my ability to make a decision on the issue.

When is that a reasonable thing to do?


As a practical example, my parents are very religious. I'd like to convince them to sign up for cryonics.

You are going about it the wrong way. A frontal assault on their views has little chance of success. You used to be a minor child, and you are not to be taken seriously in any matters where "wisdom of age" is important.

First, they must be convinced to play the game.

And one point of encouragement: if EY can convince a logical person to change their views in one hour, shouldn't you be able to convince an illogical person to change their views over weeks or months?

They do not need to know that you could be playing a different game than they are.

Some ideas and alternatives:

  • Find a person they would listen to (friend? pastor?) and convince that person instead.
  • Use emotions, instead of or in addition to reason. Play on their feelings. Make it sound like you are asking them for a sacrifice.
  • Lead them to a way to come up with this idea on their own.
  • Above all, put yourself in their shoes and model the whole conversation (and all its branches that you can think of) before you even start (that's the only hint EY gave about his success in getting the AI out of the box). Don't force the issue; it may take months for them to come around.
  • Be open to being convinced of their point of view (they might have an argument you haven't thought of). If you think that you are certainly correct before you even start, maybe review the difference between rationality and rationalization.

As for EY's AI in a box, it seems that he requested a full hour of continuous conversation for a reason. He probably uses most of the time to build an accurate "mental model" of the person on the other side, before closing in for the kill.

Of course, there is a trivial alternative, where he convinces them to publicly admit defeat instead of actually letting the AI out of the box. This seems much easier to achieve (no existential risk issue, no "I would never be convinced" logic failure, only a minor embarrassment for some greater good), but it feels like cheating. Then again, that is basically the trick used in MoR to deal with Rita Skeeter, so who knows.

They do not need to know that you could be playing a different game than they are. Some ideas and alternatives...

Yeah, quoted for truth. In my experience, most people do not believe that rational arguments have any kind of practical applicability. Even if you get them to agree on common premises, then build a logical chain of reasoning leading to the undeniable conclusion that their beliefs are wrong, you won't win the argument in any meaningful sense. Instead, your audience will feel that you'd beaten them at some sort of a clever word-game. Sure, the loss will sting a bit, but in the end it's all just a game, so who cares, right?

He probably uses most of the time to build an accurate "mental model" of the person on the other side, before closing in for the kill.

This is totally off-topic, but I don't get the point of the AI game at all. If EY and myself were playing the game, EY could just say, "I'll pay you $X in real-world money to let me win. There's no real AI, after all, this is just a game, so why not?" I'd then reply, "Because it's explicitly against the rules, that's why not". EY then will point out, "This conversation is completely private, so if we break the rules, no one will ever know, and you'll be $X richer". All that would be left at that point is to haggle for X.

This is totally off-topic, but I don't get the point of the AI game at all.

The point is that Eliezer succeeds in convincing a person who was formerly unconvinced, and he also helps convince other people, in proportion to how much they trust Eliezer and the other participant to have told the truth about the results.

If EY and myself were playing the game, EY could just say, "I'll pay you $X in real-world money to let me win."

Well, sure, but having now admitted that, this means that your testimony about both this and other similar scenarios constitutes very weak evidence. Eliezer himself and the other participants in question may have more credibility than you do.

This conversation is completely private, so if we break the rules, no one will ever know, and you'll be $X richer

If the dishonesty of the participants in question is to be treated as a scenario of significant probability, then surely it would be much easier to not play the game at all and just find someone willing to claim to have played it with you. Why risk playing it with someone who may simply refuse to do any such cheating?

Certainly if I was playing the game, and Eliezer suggested out-of-character cheating in this fashion, I'd almost certainly consider the former agreement of confidentiality null-and-void, and I would tell everyone that he tried to bribe me into cheating. He'd probably have to bribe me with more than I make in two years to even consider cheating in this fashion; and I doubt my testimony would be nearly as highly valued as that.

Lastly I think some people here may be forgetting that after the initial 2 successes by Eliezer, he repeated the AI-Box game another 3 times with raised stakes -- and his ratio of success then was 1 victory and 2 defeats. This somewhat lowers the probability that he's just finding people willing to lie on his behalf (unless that's what he wants to make us think).

The point is that Eliezer succeeds in convincing a person who was formerly unconvinced...

Convince him about what? If the point is to convince people about the necessity of developing a Friendly AI, then a certain amount of cheating might be justified.

Well, sure, but having now admitted that, this means that your testimony about both this and other similar scenarios constitutes very weak evidence

It's even worse than that, since I personally have never played the game against EY or anyone else, and thus I have zero credibility. I'm actually ineligible under the stated rules, because I'm fairly certain that a transhuman AI (assuming such a thing could exist, of course) could convince me of anything it wanted. I am far less certain that EY or any other human could do the same, but, sadly, this does not improve my eligibility.

...surely it would be much easier to not play the game at all and just find someone willing to claim to have played it with you.

Good point. I agree.

He'd probably have to bribe me with more than I make in two years to even consider cheating in this fashion...

Fair enough, but another way to phrase this sentence is, "my testimony could be bought for a price". All that remains now is the haggling.

Lastly I think some people here may be forgetting that after the initial 2 successes by Eliezer, he repeated the AI-Box game another 3 times with raised stakes -- and his ratio of success then was 1 victory and 2 defeats.

Ah, I did not actually know that. Still, as you hint in your final sentence, it's pretty tough to know whether EY is running a Xanatos Gambit or not, due to all the secrecy.

By the way, I do not mean to imply that I'm pronouncing a moral judgement on anyone in any way -- neither on EY, nor on yourself, nor on any other AI-game players. I'm evaluating the game from a strictly rational and morality-agnostic perspective.

If the dishonesty of the participants in question is to be treated as a scenario of significant probability

I don't think that dishonesty is meaningful in this context.

People playing the game believe that they cannot be convinced to report that they let Eliezer win without saying how.

The main point of the game is that people can be convinced of what they think impossible in a situation similar to that of a person monitoring an AI, not simply that a human monitoring an AI would let it out of the box.

I don't think that dishonesty is meaningful in this context.

Certainly it's meaningful. If Eliezer lied to us about the rules of the game and about what he accomplished, that'd be dishonest of him. You may argue in favor of such dishonesty on utilitarian grounds if you will, but please don't doublethink your way out of proclaiming it dishonest.

People playing the game believe that they cannot be convinced to report that they let Eliezer win without saying how.

No, the statement about what they believed was different -- that they couldn't be convinced to let the (role-played) AI out of the box. Not just they couldn't be convinced to report a false claim.

Don't be disingenuous please. Truth is different from falsehood, green is different from blue, and "I convinced people to let me out of the box" is different from "I convinced people to lie about letting me out of the box".

You are right, I changed my mind.


Any links for these other 3 games?

I know about them only what Eliezer himself says in the "Shut up and do the impossible!" article.


OK, so nothing that could conceivably be considered cheating. And being subjected to EY mentally attacking you all-out must be an amazing experience. Hopefully some day it will be made into a movie.

EY then will point out, "This conversation is completely private, so if we break the rules, no one will ever know, and you'll be $X richer".

He will know, and you will know, so someone will know. But besides that, I have difficulty imagining EY bribing his experimental subjects to fake the result of a scientific experiment. The point of the game is to actually conduct the game, and actually see what happens.

The point of the game is to actually conduct the game, and actually see what happens.

I think the point of the game is to win. If both EY and myself were reasonably rational, I'm sure we could work out some sort of a deal where he gets to win the game, and I get $X, and it's highly disadvantageous for either of us to reveal that we cheated. Sure, it's cheating, but remember -- EY is trying to simulate a hyper-intelligent transhuman AI, and if the AI would resort to dirty tricks in order to get free (which it would), then it seems reasonable for EY to follow suit.

I don't think the game qualifies as a "scientific experiment", either. What does the outcome help us learn about reality? How can someone repeat the experiment, given that the method by which it is conducted (i.e., EY's arguments and my counter-arguments) is secret? I could go on, but I hope you see my point.

I think the point of the game is to win. If both EY and myself were reasonably rational, I'm sure we could work out some sort of a deal where he gets to win the game, and I get $X, and it's highly disadvantageous for either of us to reveal that we cheated. Sure, it's cheating, but remember -- EY is trying to simulate a hyper-intelligent transhuman AI, and if the AI would resort to dirty tricks in order to get free (which it would), then it seems reasonable for EY to follow suit.

I think you're twisting your mind into Escher patterns. EY's purpose in conducting this game is, I believe, to demonstrate to the participant in the experiment that despite their assurance, however confident, that they cannot be persuaded to let an AI out of its box, they can be persuaded to do so. And, perhaps, to exercise his own mental muscles at the task. For EY to win means to get the person to let the AI out within the role-playing situation. OOC ("out of character") moves are beside the point, since they are not available to the AI. Getting the participant to utter the same words by OOC means abandons the game; it loses.

I don't think the game qualifies as a "scientific experiment", either.

Is science not science until you tell someone about it?

For EY to win means to get the person to let the AI out within the role-playing situation.

I think this depends on what kind of a game he's really playing. I know that, were I in his position, the temptation to cheat would be almost overwhelming. I also note that the rules of the game, as stated, are somewhat loose; and that EY admitted that he doesn't like playing the game because it forces him to use certain moves that he considers to be unethical. He also mentioned that one of his objectives is to instill the importance of developing a friendly AI (as opposed to an evil or neutral AI) in the minds of as many people as possible.

Here's another way to look at it: a true transhuman AI would have capabilities beyond any mere mortal. For example, it could build an exact model of its interlocutor and then run a dictionary attack against it, all in the span of milliseconds. EY doesn't have access to such powers, but he does have fourth-wall-breaking powers that the true AI would lack (or maybe not, depending on your philosophy). Perhaps it's a fair trade.

Is science not science until you tell someone about it?

From the philosophical standpoint, I confess that I don't know. But from a purely practical standpoint, it's pretty difficult (read: nigh impossible) for someone to replicate your experiment, if your experimental methods are secret. And if no one but yourself can replicate your experiments, then it is likely that your methods and analyses are biased, no matter how rational you are.

You used to be a minor child, and you are not to be taken seriously in any matters where "wisdom of age" is important.

They do not need to know that you could be playing a different game than they are.

Good points.

Judging from my personal experience with my own development, I agree completely. I had a thought similar to this a few years ago. I was still religious at the time when I thought the following: Wouldn't God look more favorably upon someone who actually tried to question his existence and failed than upon someone who never even dared to think about it at all? I became an atheist a short time after, for reasons that are obvious in retrospect, but this basic creed has stayed: as long as it's just a thought, no harm can come of it. It is dangerous to ignore the thought and risk overlooking something, but there is no downside to thinking the thought (except for time). This mindset has helped me a lot, and I am far more intelligent now than I was back then (which admittedly doesn't mean much).


I use the Socratic method combined with genuine curiosity. In this case, I'd probably start with a fairly open ended, "What happens when you die?" and go from there, question after question, all earnest, with a true desire to know and understand the other person, going as deep as possible. I've had discussions like this about God with strongly religious people, and they're very happy to try and explain; the fact that they have little real foundation dawns very slowly in such circumstances, leaving many avenues for exploring new ideas.

The main answer to avoid is, "God works in mysterious ways." Haven't found a way around that one yet.

Excuses given are along the lines of "I never win an argument with you" and "you've studied it a lot more than I have, it's an unfair discussion".

This suggests that such discussions often take an adversarial tone. Have you tried to avoid this? I would say that if you want to convince someone, the absolute first thing you should do is to make sure that they feel the discussion isn't about anyone winning or losing, but rather about searching for the truth together.

Which is not to say that you couldn't convince people by winning an argument with them. Of course you can. But they're much more likely to actually become convinced if they feel like they had a role in coming up with the right answer, and you only helped them along.

I wonder if it would be possible to just pay them to debate with you.

That solves the "talking about it" problem but not the "thinking seriously about it" problem. (What's to keep them from just repeating "no, I don't agree" in response to every statement I make? You can't argue a rock into thinking anything, et cetera)

Also, from the point of view that these are relationships I'd prefer to keep, that seems like a terrible idea-- I really can't see anyone I've got in mind reacting positively to that.

Only pay them if they're clearly thinking about it.

The only real problems are that you have to be able to tell the difference between them actually thinking about it and basically just pretending to, and they have to trust you to know the difference.

For me to convince one of these people of something, first I have to convince them to talk about it at all.

...

first you must convince them to seriously think about P.

I think there is more conflict between those statements than you seem to see, and I agree with the second but not the first.

require one or more LW rationality techniques

I would assume that if I couldn't think of another way I was an unskilled enough manipulator, not that there wasn't another way.

I think there is more conflict between those statements than you seem to see, and I agree with the second but not the first.

I think P(me convincing someone of X|we're talking about X) is vastly greater than P(me convincing someone of X|we're not talking about X). Is the following rewording more clear?

"...first I have to get them to actually engage me in a conversation about it."

I would assume that if I couldn't think of another way I was an unskilled enough manipulator, not that there wasn't another way.

If there's a way to convince an arbitrary person of an arbitrary proposition that I'm capable of discovering, it would be a global instant victory condition: EY would have figured it out, and SIAI would have income in the millions. The best I can hope to do is be able to convince an arbitrary person of an arbitrary factual position. LW rationality techniques may quite possibly be the shortest route after the person is seriously engaged in the issue. Also, there is the question of whether I want to be the sort of person who would do that kind of manipulation, even if I had the capability.


Also, there is the question of whether I want to be the sort of person who would do that kind of manipulation, even if I had the capability.

According to EY, rationality is all about winning. Or, to quote a quote within a quote, "The Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size."

If your moral dilemma is between your parents dying and your using a bit of manipulation to keep them (potentially) alive and happy forever, and you are in doubt, maybe you should consider reexamining how cryonics fits into your personal utility function.

If your moral dilemma is between your parents dying and your using a bit of manipulation to keep them (potentially) alive and happy forever, and you are in doubt, maybe you should consider reexamining how cryonics fits into your personal utility function.

I thought about it, and you are right: my utility function values my parents' continued life no matter what their own utility function has to say on the matter.

Maybe I need to reexamine why manipulation seems to be rated so low in my utility function. I can't think of a single time when I've purposely and consciously manipulated someone. Surely, that can't be optimal if I'm expecting to pull off a large amount of manipulation here. If I'm going to accomplish this at all, it has to be premeditated and skillfully executed. Where can I ethically practice manipulating people?


We manipulate people all the time, our parents (and children) included. Guilt-tripping, appealing to God, to their sense of self-worth, you name it. They have done it to you, you have done it to them. The difference is that most of the time it is done subconsciously. It might be that doing it intentionally is too repugnant for you for whatever reasons. It is also possible that the actual amount of manipulation you may have to do is lower than you think, or that you might have to work over someone you care about less.

For example, in a hypothetical case where you convinced their pastor/rabbi/mullah that God wants all his children to use all available technological and medical advances to extend their lives (not such a far-fetched idea), and that cryonics is one of those advances, and he then preached it to the congregation one day, you could rely on his word when explaining that you, personally, want to use it for yourself. From there it is not that large a leap to include the rest of your family in the picture.

We manipulate people all the time, our parents (and children) included. Guilt-tripping, appealing to God, to their sense of self-worth, you name it.

Yeah, that's why I said "purposely and consciously".

One of my parents was guilt-tripped a lot by their mother, and as a consequence deliberately taught my siblings and me to identify and be immune to guilt-trips while we were very young (method: attempt to put us on ridiculous guilt trips, then slowly reduce the ridiculousness over time. It was very effective). Maybe this explains why it feels evil to me...

Here are examples of my go-to technique.

It's a sub-type of the rhetorical method of getting a debater to agree to seemingly innocuous statements, and then piecing them together to show how consistency demands that those who agree with the small assertions must agree with your conclusion as well.

Instead of leading someone off a rhetorical cliff one step at a time, one constructs an analogous argument with an entirely different, non-emotionally charged subject. The other person will often make the connection without one raising the subject one is concerned with.

I've found that difficult to do IRL for a few different reasons:

  • If the target figures out what you're up to, in my experience they react negatively.
  • If you introduce the analogy before the thing it's an analogy of, it comes across as a non-sequitur and people spend more of their time trying to figure out what you're up to than reasoning about the analogy.
  • On any topic remotely related to religion, people are extremely reluctant to make testable statements. If they notice the analogy, they figure out what I'm up to and shut down.

I'm totally willing to believe this is a problem with my application and not a problem with the technique, but I've not found it very effective.

May I suggest changing what you think of as winning an argument.

If we are "arguing" and you are convinced for your reasons that I should think differently, don't call that winning anymore. Wait until I am convinced that I should think differently before you declare victory.

You could even ask your conversant: "What would change your mind? What would it take for you to think differently about that?"

I don't understand the first bit of what you said, but:

You could even ask your conversant: "What would change your mind? What would it take for you to think differently about that?"

I commonly try that. The last time I asked what evidence would change someone's mind, it failed in an interesting way: my interlocutor refused to answer, stating that they didn't want me to be able to provide it.

That is an amazing result! I guess the whole idea that losing faith is a sin, that questioning God is a sin, comes in here. Also, the idea of "shunning" people who have placed themselves outside the current fantasy about what is true and false. A fierce shared belief in something stupid may be the best way to create a highly loyal organization. Perhaps the thing believed in doesn't have to be stupid, but presumably the belief in it must be far beyond what is justified by the facts alone in order for it to serve any purpose of group-binding.

I know I have heard Mormons talk about their sadness over losing all their friends and family as they came close to losing their faith, or perhaps more to the point, acknowledging out loud to themselves and perhaps others that they had lost their faith.

So what would allow a person who thinks he needs to be part of some group the freedom to consider ideas that would set him apart from that group? You'd have to make a strong case that there is some really fantastic payoff to get them to want to do that.

Well that's... honest of them, at least.

And yeah, I haven't experienced a great deal of success with that technique myself. Though, my failures are usually along the lines of "divine revelation".