
Comment author: gjm 16 July 2013 09:59:49PM 5 points

In addition to CronoDAS's point that it depends on the issue, I suggest that it also depends on how much you sway the 200, how firmly you convince the 10, and what sort of people (with what sort of connections) the 10 and the 200 are. It's hard to see what could usefully be said in general.

Comment author: JDM 16 July 2013 10:10:18PM 0 points

I would assume that both groups have similar influence overall, but that you can hand-select the ten from among the most influential members of the group you are convincing.

I would also assume those converted to a rational view would be relatively difficult to change back, while those swayed would be subject to the same biases you used to sway them in the first place.

Perhaps this was a foolish question, but even having my question picked apart is providing more for me to think about.

Comment author: CronoDAS 16 July 2013 09:43:21PM 5 points

Honestly, it probably depends a bit on the issue.

Comment author: JDM 16 July 2013 09:51:21PM 0 points

That is a fair point. I would assume that it is an issue that will make a noticeable difference to those involved, but not a catastrophic one if lost (no apocalypse, for example).

One issue: teach 10 or sway 200?

-4 JDM 16 July 2013 09:33PM

This is my first discussion topic, and I expect there is a reasonable chance I am doing something wrong and will be blasted for it, but shit happens. I fully intend to stay out of the discussion and try to understand others' insights before I add my own.

My question is this:

If you have one issue that you have decided is most important, is it better to teach 10 people to think about the issue the correct way or sway 200 to vote correctly, using some of your knowledge of their biases?

To answer this, I would suggest the following assumptions:

1. There is no third option of teaching them to be rational in all things; however, your rational teachings on this issue may have some small effect on their rationality in general.

2. Either group may spread your influence to others. The success rate may differ between those who think rationally about the issue and those who don't, and there is also some possibility of their minds being changed back.

3. There may be some risk that thinking in a way to sway the larger group has some "poisoning effect" on your own biases.

4. You may consider any other effects (guilt?) on yourself and your emotional state as a result of what you decide.

5. You are nearly certain that your side is correct. It is unlikely that there is much new information yet to become available to you.

6. The issue is of moderate importance. "Winning" will result in a noticeable positive impact, but losing is not catastrophic.

7. You may make any other reasonable assumptions, or disagree with the assumptions provided, if you can support them.

I am not considering this question for any practical purposes. It is merely an interesting question that crossed my mind, and I would like to hear some rationalist opinions on it.

Comment author: jdinkum 14 February 2012 04:22:40PM -2 points

My point still holds. Most people, myself included, don't believe that an egg will spontaneously reform according to any laws of physics. To use it as an example of the difference between certainty and likelihood is ineffective.

Comment author: JDM 17 June 2013 01:32:39AM 0 points

If it were something too open to debate, it would take away from the point.

The point is as stated. There is a non-zero probability it will happen, so you shouldn't use "certain", but any reasonable person will act on the belief it isn't going to happen.

If he used religion, which is also extremely unlikely to be correct, it would distract from the point.

Comment author: Decius 09 June 2013 03:17:47AM 2 points

No person may contribute to more than one entry.

I'm pretty sure that incorporating code written by someone else into your entry qualifies. I think the highest-scoring single entry might be one that cheats: make everyone think you are in that tribe, but defect anyway. Or it might be dominant to defect against any program displaying tribal affiliations (other than this new tribe, of course). The dominant tribe is the tribe with the most members and the best tribal identification, not the tribe with the best way of judging an opponent's intentions.
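
For concreteness, a toy illustration of why the fake-signal defector outscores a loyal tribe member; the one-shot payoff matrix and the tribe size here are my own simplifying assumptions, not anything from the actual contest:

    # Toy model only: standard one-shot Prisoner's Dilemma payoffs, not the
    # actual contest interface. The tribe plays "C" against anyone showing
    # the tribal signal, so a defector who fakes the signal gets the
    # temptation payoff against every tribe member.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def total_score(my_move, opponent_moves):
        # Sum my payoff over one round against each opponent.
        return sum(PAYOFF[(my_move, theirs)] for theirs in opponent_moves)

    n_tribe = 100                    # hypothetical tribe size
    tribe_moves = ["C"] * n_tribe    # tribe members cooperate with the signal

    print(total_score("C", tribe_moves))  # loyal member: 3 * 100 = 300
    print(total_score("D", tribe_moves))  # fake-signal defector: 5 * 100 = 500

Of course, once enough entries fake the signal, the tribe stops being worth exploiting, which is why the quality of the tribal identification matters as much as the tribe's size.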

Comment author: JDM 09 June 2013 03:55:49AM 0 points

There is a difference between a "tribe system" as you describe and one person winning by submitting 1000 entries. The goal as I understand it is simply to maximize your score by whatever means possible, not to accurately guess your opponent's intentions.

Comment author: Doug_S. 23 November 2007 08:53:28AM 2 points

Hey, we stole this land fair and square! ;)

Anyway, on "The ends don't justify the means"...

I think, in some cases, the ends clearly do justify the means. For example, killing someone is generally considered wrong, but it's generally considered to be morally permissible to kill someone in self-defense or in defense of others. If you use some "evil" means to achieve a "good" end - and you do achieve that end - then, if the magnitude of the good achieved is greater than the magnitude of the evil, the use of the evil means can often be justified. (Of course, there is always the obligation to try to find a third alternative, but that's a complication beyond the scope of my argument.)

There is a catch, though. Justifying bad means through good ends is dangerous, because people often fail to achieve the ends they were hoping for. In the infamous trolley problem, if you push the fat man onto the tracks hoping to stop the runaway trolley, but the trolley still doesn't stop, you just killed the fat man for nothing. History is filled with examples of people who resorted to evil means to achieve good ends, and failed. When you resort to evil means, you have a greater obligation to verify that you really are going to achieve a net good, because if you screw up, the consequences are much, much worse than if you refused to employ evil means in the first place. As a practical matter, "the ends don't justify the means," although not strictly true, is still a very useful heuristic for making moral decisions, because it puts a floor on the amount of damage you end up doing when you make mistakes.

Does this make any sense?

Comment author: JDM 09 June 2013 03:49:04AM 0 points

I think the statement "the end doesn't justify the means" is somewhat silly in its own right. It would typically be invoked to argue, for example, that killing someone to improve someone else's life is not OK; but is the person dying not just as much a part of the end as the other person's life improving? Trying to separate an action into end and means in the first place seems likely to result in double counting or a similar fallacy, since everything already has an impact on the end in some way.

That said, the understood meaning is not the same as the literal one, and the sense in which the phrase is usually meant, "consider all the consequences of your actions," does have value.

Comment author: Eliezer_Yudkowsky 30 September 2007 09:51:16PM 11 points

So the prior that you're updating for each point the clever arguer makes starts out low. It crosses 0.5 at the point where his argument is about as strong as you would expect given a 50/50 chance of A or B.

I don't believe this is exactly correct. After all, when you're just about to start listening to the clever arguer, do you really believe that box B is almost certain not to contain the diamond? Why would you listen to him, then? Rather, when you start out, you have a spectrum of expectations for how long the clever arguer might go on - to the extent you believe box A contains the diamond, you expect box B not to have many positive portents, so you expect the clever arguer to shut up soon; to the extent you believe box B contains the diamond, you expect him to go on for a while.

The key event is when the clever arguer stops talking; until then you have a probability distribution over how long he might go on.

The quantity that slowly goes from 0.1 to 0.9 is the estimate you would have if the clever arguer suddenly stopped talking at that moment; it is not your actual probability that box B contains the diamond.

Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points), and then suddenly drops precipitously as soon as he says "Therefore..." (because that excludes the possibility he has more points).
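
To make the two quantities concrete, here is a minimal sketch under an assumption added purely for illustration (nothing above specifies it): the number of points the arguer has is Poisson-distributed, with a higher mean when box B really holds the diamond.

    # Minimal sketch, assuming (hypothetically) Poisson-distributed point
    # counts: mean 4 if the diamond is in box B, mean 2 if it is in box A.
    from math import exp, factorial

    LAM_B, LAM_A = 4.0, 2.0

    def pois(k, lam):
        # P(N = k) for a Poisson(lam) number of points N
        return exp(-lam) * lam ** k / factorial(k)

    def tail(k, lam):
        # P(N >= k)
        return 1.0 - sum(pois(i, lam) for i in range(k))

    def p_b_still_talking(k):
        # He has made k points and is still going, so N >= k + 1.
        num, den = tail(k + 1, LAM_B), tail(k + 1, LAM_A)
        return num / (num + den)  # the 50/50 prior cancels

    def p_b_stopped_at(k):
        # He said "Therefore..." after exactly k points, so N = k.
        num, den = pois(k, LAM_B), pois(k, LAM_A)
        return num / (num + den)

    for k in range(1, 7):
        print(k, round(p_b_still_talking(k), 3), round(p_b_stopped_at(k), 3))

In this toy model P(diamond in B) climbs with every point while he keeps talking and drops the moment he stops, which is exactly the pair of curves described above.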

Comment author: JDM 09 June 2013 02:45:00AM 0 points

It is very possible I don't understand this properly, but assuming you have knowledge of what strength of evidence is possible, could you start at 0.5 and treat strong arguments (relative to the possible strength) as increasing the probability and weak arguments as decreasing it instead? With each piece of evidence you could lower the threshold above which an argument counts in favor, so numerous weak arguments could still add up to a decently high probability of the box containing the diamond.

For example, if arguments are rated in strength from 0 to 1, and most arguments would not be stronger than .5, my approach would be as follows for each piece of evidence:

Piece 1: probability += (strength - .25)

Piece 2: probability += (strength - .22)

Piece 3: probability += (strength - .20)

etc.
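
In code, a minimal sketch of the rule above; the steady threshold decay and the floor are hypothetical stand-ins for the ".25, .22, .20" pattern, and the clamp is added because a raw additive rule can leave [0, 1]:

    def update(prob, strengths):
        # strengths: each argument rated 0..1. Arguments above the current
        # threshold raise the probability; those below it lower it.
        threshold = 0.25
        for s in strengths:
            prob = min(max(prob + (s - threshold), 0.0), 1.0)
            # Lower the threshold with each piece heard, down to a floor
            # (this decay schedule is a guess, not from the comment).
            threshold = max(threshold - 0.03, 0.05)
        return prob

    # Ten individually weak (0.15) arguments still add up once the
    # threshold decays below their strength:
    print(update(0.5, [0.15] * 10))  # -> about 0.73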

I am of course oversimplifying the math, and looking at how you are approaching stoppage, perhaps this isn't actually much different from your approach in effect. But this approach is more intuitive to me than treating stopping as a separate event in its own right. If he is struck by lightning, as mentioned several times throughout this discussion, it is hard to view that in the same light as if he had stopped on his own, but I am not sure the difference is enough that the probability of the diamond being in the box should be substantially different in the two cases.

Can someone clear up what issues there are with my approach? It makes more sense to me, and if it is wrong, I would like to know where.

Comment author: timtyler 07 June 2013 10:18:48AM 2 points

Winning is a conventional dictionary word, though. You can't easily just redefine it without causing confusion. "Winning" and "maximising" have different definitions and connotations.

Comment author: JDM 07 June 2013 07:34:20PM 0 points

The first definition from Google: "be successful or victorious in (a contest or conflict)."

This is no different from how I or most people would define it, and I don't think it contradicts how I used it.

Comment author: timtyler 04 April 2009 08:05:34AM 2 points

Indeed. Forget about "winning". It is not sexy if it is wrong.

Comment author: JDM 07 June 2013 01:47:10AM 0 points

I think you're defining "winning" too strictly. Sometimes a minor loss is still a win, if the alternative was a large one.

Comment author: Jamesofengland 27 June 2008 08:20:00AM 14 points

"Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.

--60 Minutes (5/12/96) Lesley Stahl on U.S. sanctions against Iraq: We have heard that a half million children have died. I mean, that's more children than died in Hiroshima. And, you know, is the price worth it?

Secretary of State Madeleine Albright: I think this is a very hard choice, but the price--we think the price is worth it.

She later expressed regret for it, after taking an awful lot of flak at the time, but this does sometimes happen.

Comment author: JDM 04 June 2013 11:33:28PM 7 points

I think your point that she took a lot of flak for it is evidence for the original point. The only other reasonable responses would have been to change her mind on the spot or to dispute the data, and neither of those would have brought similar backlash on her. Conceding weak points in your arguments is often looked upon in politics as a weakness when it shouldn't be.
