Salutator
Salutator has not written any posts yet.

It gets very interesting if there actually are no stocks available to buy back in the market. For details on how interesting it gets, google "short squeeze".
Other than that exceptional situation, it's not that asymmetrical:
- Typically you have to post some collateral for shorting, and there will be a well-understood maximum loss before your broker buys back the stock and seizes your collateral to cover that loss. So short (haha) of a short squeeze, there actually is a maximum loss in short selling.
- You can take similar risks on the long side by buying stocks on credit ("on margin" in financial slang) with collateral, which the bank will use to close your position if the stock drops too far. So basically long risks can also be made as big as your borrowing ability. (A rough numerical sketch of both cases follows below.)
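Here is a minimal sketch of the payoff mechanics described above; all prices, position sizes, and collateral amounts are made up for illustration, and real-world margin rules (maintenance margins, interest, slippage) are ignored.

```python
# Hypothetical numbers only: a short position with posted collateral and a
# broker-enforced buy-back level, vs. a long position financed on margin.

def short_pnl(entry_price, current_price, shares, collateral):
    """P&L of a short sale; the broker buys the stock back once the running
    loss eats through the posted collateral, capping the loss there."""
    pnl = (entry_price - current_price) * shares
    return max(pnl, -collateral)  # loss capped at the collateral (short squeezes aside)

def margin_long_pnl(entry_price, current_price, shares, collateral):
    """P&L of a long position bought on credit; the bank closes the position
    once the loss eats through the collateral."""
    pnl = (current_price - entry_price) * shares
    return max(pnl, -collateral)

if __name__ == "__main__":
    # Short or long 100 shares at 50 with 2000 of collateral: in both cases
    # the realised loss is capped at 2000, however far the price moves.
    for price in (40, 60, 80, 150):
        print(price, short_pnl(50, price, 100, 2000), margin_long_pnl(50, price, 100, 2000))
```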
Let me be a bit trollish so as to establish an actual counter-position (though I actually believe everything I say):
This is where the sequences first turn dumb.
For low-hanging fruit, we first see modern mythology misinterpreted as actual history. In reality, phlogiston was a useful theory at the time, which was rationally arrived at and rationally discarded when the evidence turned against it (with some attempts at "adding epicycles", but no more than other scientific theories). And the NOMA thing was made up by Gould when he misunderstood actual religious claims, i.e. it is mostly a straw man.
On a higher level of abstraction, the whole approach of this sequence is discussing other people's...
You're treating looking for weak points in your own beliefs and looking for weak points in your interlocutor's beliefs as basically the same thing. That's almost the opposite of the truth, because there is a trade-off between those two things. If you're totally focused on the second one, the first one is psychologically near impossible.
This was based on a math error; it actually is a prisoner's dilemma.
I threw a D30, came up with 20 and cooperated.
Point being that cooperation in a prisoner's dilemma sense means choosing the strategy that would maximize my expected payout if everyone chose it, and in this game that is not equivalent to cooperating with probability 1. If it was supposed to measure strategies, the question would have been better if it had asked us for a cooperation probability and Yvain had then drawn the numbers for us.
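To illustrate the distinction, here is a minimal sketch with an entirely made-up expected-payoff curve for the case where every player cooperates independently with the same probability p; it is not the payoff structure of the actual game, only a demonstration that the best common strategy need not be p = 1.

```python
# Made-up numbers: expected_payoff(p) is a hypothetical expected payout when
# everyone cooperates with the same probability p. The only point is that the
# maximizing p can be interior, i.e. a die-roll strategy rather than "always cooperate".
import numpy as np

def expected_payoff(p):
    # Hypothetical curve: gains from cooperation that saturate plus a cost
    # that keeps growing, giving an interior optimum.
    return 10 * p - 8 * p ** 2

ps = np.linspace(0, 1, 1001)
best = ps[np.argmax(expected_payoff(ps))]
print(f"best common cooperation probability: {best:.3f}")  # 0.625 here, not 1.0
```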
I'm a bit out of my depth here. I understood an "ordered group" as a group with an order on its elements. That clearly can be finite. If it's more than that, the question would be why we should assume whatever further axioms characterize it.
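For reference, and as my own gloss rather than anything from the thread: the standard textbook definition is stronger than just "a group with an order on its elements", because the order has to be compatible with the group operation.

```latex
% Standard definition (my addition, not from the thread): a totally ordered
% group is a group $(G, \cdot)$ together with a total order $\le$ that is
% translation-invariant on both sides:
\[
  a \le b \;\Longrightarrow\; ca \le cb \ \text{ and } \ ac \le bc
  \qquad \text{for all } a, b, c \in G.
\]
% Under this definition any $a > e$ gives $e < a < a^{2} < a^{3} < \dots$,
% so a nontrivial totally ordered group is necessarily infinite.
```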
Two points:
I don't know the Hölder theorem, but if it actually depends on the lattice being a group, that includes the extra assumptions of a neutral element and of inverse elements. The neutral element would have to be a life of exactly zero value, so that killing that person off wouldn't matter at all, either positively or negatively. The inverse elements would mean that for every happy life you can imagine an exactly opposite unhappy life, so that killing off both leaves the world exactly as good as before.
Proving this might be hard for infinite cases, but it would be trivial for finitely generated groups. Most Less Wrong...
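For reference, the version of Hölder's theorem that usually comes up in this context is the following; this is the standard statement, not something taken from the thread.

```latex
% Hölder's theorem (standard statement): every Archimedean totally ordered
% group embeds, as an ordered group, into $(\mathbb{R}, +)$, and is in
% particular abelian.
\[
  (G, \cdot, \le)\ \text{Archimedean and totally ordered}
  \;\Longrightarrow\;
  \exists\, \varphi \colon G \hookrightarrow (\mathbb{R}, +)
  \ \text{an injective order-preserving homomorphism,}
\]
% where ``Archimedean'' means: for all $a, b \in G$ with $e < a$ and $e < b$,
% there is an $n \in \mathbb{N}$ such that $b < a^{n}$.
```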
I think it's just elliptical rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest, but not really close, i.e. it explains more than any other factor but not most of the phenomenon.)
Paul Graham...
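A toy simulation of the ranking claim above, with made-up numbers: each "picture" gets several latent features, every ranker weights those features similarly but not identically and adds personal noise, and the resulting rankings still correlate strongly pairwise even though no single feature drives them.

```python
# Hypothetical simulation: shared-but-not-identical taste plus personal noise
# still produces highly correlated rankings without any single property
# explaining them.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pictures, n_rankers, n_features = 100, 100, 5

features = rng.normal(size=(n_pictures, n_features))
base_weights = rng.uniform(0.5, 1.5, size=n_features)        # shared taste
scores_per_ranker = []
for _ in range(n_rankers):
    weights = base_weights + rng.normal(scale=0.2, size=n_features)  # idiosyncrasy
    scores = features @ weights + rng.normal(scale=0.5, size=n_pictures)
    scores_per_ranker.append(scores)

corrs = [spearmanr(scores_per_ranker[i], scores_per_ranker[j])[0]
         for i in range(n_rankers) for j in range(i + 1, n_rankers)]
print(f"mean pairwise Spearman rank correlation: {np.mean(corrs):.2f}")
```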
I think another thing to remember here is sampling bias. The actual conversion/deconversion probably is mostly the end point of a lengthy intellectual process. People far along that process probably aren't very representative of people not going through it, and it would be much more interesting to know what gets the process started.
To add some more anecdata, my reaction to that style of argumentation was almost diametrically opposed. I suspect this is fairly common on both sides of the divide, but not being convinced by some specific argument just isn't such a catchy story, so you would hear it less.
If the reporter estimates every node of the human's Bayes net, then it can assign a node a probability distribution different from the one that would be calculated from the distributions simultaneously assigned to its parent nodes. I don't know if there is a name for that, so for now I will pompously call it inferential inconsistency. Considered as a boolean bright-line concept, the human simulator is clearly the only inferentially consistent reporter. But one could consider some kind of metric of how different two probability distributions are and turn it into a more gradual thing.
Being a reporter basically means being inferentially consistent on the training set. On the other hand, being...
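Here is a minimal sketch of the gradual version, under my own assumptions about the setup: a hypothetical three-node binary net with a made-up conditional probability table, a reporter that assigns a marginal to every node, and KL divergence between the reported marginal of the child and the marginal implied by the reported parent marginals (treating the parents as independent under the reporter's answers, which is itself a simplification).

```python
# Toy sketch with made-up numbers: score how far a reporter's marginal for a
# child node deviates from the marginal implied by its reported parent
# marginals and the net's conditional probability table.
import itertools
import math

# Hypothetical net: binary nodes A, B -> C, with an invented CPT for P(C=1 | A, B).
cpt_c = {(0, 0): 0.1, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.95}

def implied_child_marginal(p_a, p_b):
    """P(C=1) implied by the reported parent marginals and the CPT,
    treating the parents as independent."""
    p_c1 = 0.0
    for a, b in itertools.product((0, 1), repeat=2):
        weight = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        p_c1 += weight * cpt_c[(a, b)]
    return p_c1

def kl_bernoulli(p, q):
    """KL divergence between two Bernoulli distributions, used as the graded
    'inferential inconsistency' score."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def inconsistency(reported):
    implied = implied_child_marginal(reported["A"], reported["B"])
    return kl_bernoulli(reported["C"], implied)

# A consistent set of answers (what a human simulator would give) vs. a set
# where the answer at C does not follow from the answers at A and B.
print(inconsistency({"A": 0.5, "B": 0.5, "C": implied_child_marginal(0.5, 0.5)}))  # ~0
print(inconsistency({"A": 0.5, "B": 0.5, "C": 0.05}))                              # clearly > 0
```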