Honestly, it probably depends a bit on the issue.
That is a fair point. I would assume it is an issue that will make a noticeable difference to those involved, but not a catastrophic one if lost (no apocalypse, for example).
My point still holds. Most people, myself included, don't believe that an egg will spontaneously reform according to any laws of physics. Using it as an example of the difference between certainty and likelihood is ineffective.
If it were something too open to debate, it would take away from the point.
The point is as stated. There is a non-zero probability it will happen, so you shouldn't use "certain", but any reasonable person will act on the belief it isn't going to happen.
If he used religion, which is also extremely unlikely to be correct, it would distract from the point.
No person may contribute to more than one entry.
I'm pretty sure that incorporating code written by someone else into your entry qualifies. I think the highest-scoring single entry might be one that cheats by making everyone think it is in that tribe but defects anyway, or it might be dominant to defect against any program displaying tribal affiliations (other than this new tribe, of course). The dominant tribe is the tribe with the most members and the best tribal identification, not the tribe with the best way of judging an opponent's intentions.
There is a difference between a "tribe system" as you describe and one person winning by submitting 1000 entries. The goal as I understand it is simply to maximize your score by whatever means possible, not to accurately guess your opponents' intentions.
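To make the "cheat" concrete, here is a toy sketch, entirely my own construction with a made-up handshake signal, of how a fake tribe member could mimic the tribal identification and then defect:

    # Toy iterated-prisoner's-dilemma strategies (illustrative only).
    COOPERATE, DEFECT = "C", "D"
    HANDSHAKE = [COOPERATE, COOPERATE, DEFECT]  # hypothetical tribal signal

    def loyal_member(my_history, their_history):
        # Open with the handshake, then cooperate with anyone who matched it.
        turn = len(my_history)
        if turn < len(HANDSHAKE):
            return HANDSHAKE[turn]
        return COOPERATE if their_history[:len(HANDSHAKE)] == HANDSHAKE else DEFECT

    def fake_member(my_history, their_history):
        # Give the handshake to be treated as tribe, then always defect.
        turn = len(my_history)
        if turn < len(HANDSHAKE):
            return HANDSHAKE[turn]
        return DEFECT

Against a loyal member, the fake earns cooperation from the fourth round onward while defecting every round, which is exactly why defecting against anything displaying tribal affiliation might be dominant.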
Hey, we stole this land fair and square! ;)
Anyway, on "The ends don't justify the means"...
I think, in some cases, the ends clearly do justify the means. For example, killing someone is generally considered wrong, but it's generally considered to be morally permissible to kill someone in self-defense or in defense of others. If you use some "evil" means to achieve a "good" end - and you do achieve that end - then, if the magnitude of the good achieved is greater than the magnitude of the evil, the use of the evil means can often be justified. (Of course, there is always the obligation to try to find a third alternative, but that's a complication beyond the scope of my argument.)
There is a catch, though. Justifying bad means through good ends is dangerous, because people often fail to achieve the ends they were hoping for. In the infamous trolley problem, if you push the fat man onto the tracks hoping to stop the runaway trolley, but the trolley still doesn't stop, you just killed the fat man for nothing. History is filled with examples of people who resorted to evil means to achieve good ends, and failed. When you resort to evil means, you have a greater obligation to verify that you really are going to achieve a net good, because if you screw up, the consequences are much, much worse than if you refused to employ evil means in the first place. As a practical matter, "the ends don't justify the means," although not strictly true, is still a very useful heuristic for making moral decisions, because it puts a floor on the amount of damage you end up doing when you make mistakes.
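To put rough, purely illustrative numbers on it: suppose pushing the fat man kills one person for certain and stops the trolley with probability p, saving five. The expected net lives saved are 5p - 1, so the gamble doesn't even break even unless p > 0.2, and people are notoriously prone to overestimating their own p.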
Does this make any sense?
I think the statement "the end doesn't justify the means" is somewhat silly in its own right. It would typically be invoked to argue, for example, that killing someone to improve someone else's life is not OK; but isn't the person's death just as much a part of the end as the other person's improved life? Trying to separate an action into end and means in the first place seems likely to result in double counting or a similar fallacy, when everything already has an impact on the end in some way.
That said, the understood meaning is not the same as the literal one, and the meaning as it is commonly understood, something like "consider all the consequences of your actions," does have value.
So the prior that you're updating for each point the clever arguer makes starts out low. It crosses 0.5 at the point where his argument is about as strong as you would expect given a 50/50 chance of A or B.
I don't believe this is exactly correct. After all, when you're just about to start listening to the clever arguer, do you really believe that box B is almost certain not to contain the diamond? Why would you listen to him, then? Rather, when you start out, you have a spectrum of expectations for how long the clever arguer might go on - to the extent you believe box A contains the diamond, you expect box B not to have many positive portents, so you expect the clever arguer to shut up soon; to the extent you believe box B contains the diamond, you expect him to go on for a while.
The key event is when the clever arguer stops talking; until then you have a probability distribution over how long he might go on.
The quantity that slowly goes from 0.1 to 0.9 is the estimate you would have if the clever arguer suddenly stopped talking at that moment; it is not your actual probability that box B contains the diamond.
Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points), and then suddenly drops precipitously as soon as he says "Therefore..." (because that excludes the possibility he has more points).
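To see the shape of this concretely, here is a toy model; the distributions and numbers are purely illustrative assumptions. Suppose that if the diamond is in box B, the number of positive portents the arguer can find is Poisson-distributed with mean 6, and otherwise with mean 3:

    # Toy model of updating on a filtered-evidence arguer (illustrative numbers).
    from math import exp, factorial

    def poisson(k, lam):
        return exp(-lam) * lam ** k / factorial(k)

    def p_b_if_stopped(n, lam_b=6.0, lam_a=3.0):
        # P(diamond in B | arguer stopped after exactly n points), prior 0.5
        return poisson(n, lam_b) / (poisson(n, lam_b) + poisson(n, lam_a))

    def p_b_if_still_talking(k, lam_b=6.0, lam_a=3.0):
        # P(diamond in B | he has made at least k points and hasn't stopped yet)
        tail_b = 1 - sum(poisson(i, lam_b) for i in range(k))
        tail_a = 1 - sum(poisson(i, lam_a) for i in range(k))
        return tail_b / (tail_b + tail_a)

    for k in range(1, 8):
        print(k, round(p_b_if_still_talking(k), 2), round(p_b_if_stopped(k), 2))

The "still talking" column climbs steadily with each point, and at every k the "just stopped" column sits well below it; that gap is the sudden drop when he says "Therefore..."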
It is very possible I don't understand this properly, but assuming you have knowledge of what strength of evidence is possible, could you start at 0.5 and consider strong arguments (relative to possible strength) as increasing the probability and weak arguments as decreasing the probability instead? With each piece of evidence you could lower the threshold at which an argument starts to count as positive, so numerous weak arguments could still add up to a decently high probability of the box containing the diamond.
For example, if arguments are rated in strength from 0 to 1, and most arguments would not be stronger than 0.5, my approach would be as follows for each piece of evidence:
Piece 1: probability += (strength - 0.25)
Piece 2: probability += (strength - 0.22)
Piece 3: probability += (strength - 0.20)
etc.
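Roughly, in code (the decaying threshold and the specific numbers are just my own illustration):

    # Sketch of the proposed update rule: arguments stronger than a moving
    # threshold push the probability up, weaker ones push it down, and the
    # threshold falls so that later weak arguments can still add up.
    def update(probability, strengths, threshold=0.25, decay=0.9):
        for s in strengths:  # each s is an argument strength in [0, 1]
            probability += s - threshold
            probability = min(max(probability, 0.0), 1.0)  # keep it a probability
            threshold *= decay
        return probability

    print(update(0.5, [0.4, 0.3, 0.3]))  # a few middling arguments add up to ~0.82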
I am of course oversimplifying the math, and looking at how you are approaching stoppage, perhaps this isn't actually much different from your approach in effect. But this approach is more intuitive to me than treating the stopping itself as a separate event. If he is struck by lightning mid-argument, as mentioned several times throughout this discussion, it is hard to view that in the same light as if he had stopped on his own, but I am not sure the difference is enough that the probability of the diamond being in the box should be substantially different in the two cases.
Can someone clear up what issues there are with my approach? It makes more sense to me and if it is wrong, I would like to know where.
The first definition from Google: "Be successful or victorious in (a contest or conflict)."
This is no different from how I or most people would define it, and I don't think it contradicts how I used it.
Indeed. Forget about "winning". It is not sexy if it is wrong.
I think you're defining "winning" too strictly. Sometimes a minor loss is still a win, if the alternative was a large one.
"Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.
From 60 Minutes (5/12/96), Lesley Stahl on U.S. sanctions against Iraq: "We have heard that a half million children have died. I mean, that's more children than died in Hiroshima. And, you know, is the price worth it?"
Secretary of State Madeleine Albright: "I think this is a very hard choice, but the price--we think the price is worth it."
She later expressed regret for it, after taking an awful lot of flak at the time, but this does sometimes happen.
I think your point that she took a lot of flak for it is actually evidence for the original claim. The only other reasonable responses would have been to change her mind on the spot or to dispute the data, and neither of those would have brought similar backlash on her. Conceding the weak points in your own argument is often looked upon in politics as a weakness when it shouldn't be.
In addition to CronoDAS's point that it depends on the issue, I suggest that it also depends on how much you sway the 200, how firmly you convince the 10, and what sort of people (with what sort of connections) the 10 and the 200 are. It's hard to see what could usefully be said in general.
I would assume that both groups have similar influence, but that you can hand-pick the ten from among the most influential members of the group you are convincing.
I would also assume those converted to a rational view would be relatively difficult to change back, while those swayed would be subject to the same biases you used to sway them in the first place.
Perhaps this was a foolish question, but even having my question picked apart is providing more for me to think about.