Comment author: philh 02 December 2015 10:15:15AM 2 points [-]

Repeating my question from late in the previous thread:

It seems to me that if you buy a stock, you could come out arbitrarily well-off, but your losses are limited to the amount you put in. But if you short, your payoffs are limited to the current price, and your losses could be arbitrarily big, until you run out of money.

Is this accurate? If so, it feels like an important asymmetry that I haven't absorbed from the "stock markets 101" type things that I've occasionally read. What effects does it have on markets, if any? (Running my mouth off, I'd speculate that it makes people less inclined to bet on a bubble popping, which in turn would prolong bubbles.) Are there symmetrical ways to bet a stock will rise/fall?

Comment author: Salutator 02 December 2015 01:49:15PM *  7 points [-]

It gets very interesting if there actually are no stocks to buy back in the market. For details on how it gets interesting google "short squeeze".

Other than that exceptional situation it's not that asymmetrical:

-Typically you have to post some collateral for shorting, and there is a well-understood maximum loss before your broker buys back the stock and seizes your collateral to cover that loss. So short (haha) of a short squeeze there actually is a maximum loss in short selling.

-You can take similar risks on the long side by buying stocks on credit ("on margin" in financial slang) with collateral, which the bank will use to close your position if the stock drops too far. So basically long risks can also be made as big as your borrowing ability allows.
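To make the asymmetry (and how collateral caps it in practice) concrete, here is a toy per-share P&L sketch. All numbers are made up, and `short_pnl_with_collateral` is a hypothetical illustration of the forced buy-back, not any broker's actual rule:

```python
def long_pnl(entry, exit_price):
    # Long position: gains unbounded above, loss capped at the stake,
    # since the price cannot fall below zero.
    return exit_price - entry

def short_pnl(entry, exit_price):
    # Short position: gain capped at the entry price (price floor is 0),
    # loss grows without bound as the price rises.
    return entry - exit_price

def short_pnl_with_collateral(entry, collateral, price_path):
    # The broker force-closes the short once the mark-to-market loss
    # reaches the posted collateral, capping the realized loss.
    for price in price_path:
        if price - entry >= collateral:
            return -collateral
    return entry - price_path[-1]
```

For example, at an entry price of 100 a long can lose at most 100 per share, while an unprotected short shorting at 100 loses 300 per share if the price runs to 400; with 50 posted as collateral the loss is cut off at 50.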

Comment author: Salutator 13 May 2015 09:18:37PM 4 points [-]

Let me be a bit trollish so as to establish an actual counter-position (though I actually believe everything I say):

This is where the sequences first turn dumb.

For low-hanging fruit, we first see modern mythology misinterpreted as actual history. In reality, phlogiston was a useful theory at the time, which was rationally arrived at and rationally discarded when evidence turned against it (with some attempts at "adding epicycles", but no more than other scientific theories). And the NOMA thing was made up by Gould when he misunderstood actual religious claims, i.e. it is mostly a straw man.

On a higher level of abstraction, the whole approach of this sequence is discussing other people's alleged rationalizations. This is almost always a terrible idea. For comparison, other examples would include Marxist talk about false consciousness, Christian allegations that atheists are angry at God or want a license to sin, or the Randian portrayal of irrational death-loving leeches. [Aware of meta-irony following:] Arguments of this type almost always serve to feed the ingroup's sense of security, safely portraying the scariest kinds of irrationality as a purely outgroup thing. And that is the simplest sufficient causal explanation of this entire sequence.

Comment author: Kawoomba 08 July 2014 07:24:00PM 4 points [-]

Depending on how close and dear someone's beliefs are to their own identity, a context of warmth and growth could work against challenging a wrong belief full-bore. Especially when you notice how essential such a belief is to someone's identity. Like telling a child there's no Santa, to their face.

rather than just looking for weak points in mine

Probably the crux of our disagreement: Looking for weak points in both your own and your interlocutor's beliefs is what you should be doing, with as few distractions as possible. (If correct beliefs were the overriding goal. Which, all protestations to the contrary aside, they mostly aren't.)

However, I totally get that there are often more important things than correcting someone else's wrong beliefs. Such as building shared experiences, creating a sense of community, et cetera. Singing Kumbaya ;-).

Comment author: Salutator 09 July 2014 01:36:03AM 0 points [-]

You're treating looking for weak points in your own beliefs and in your interlocutor's as basically the same thing. That's almost the opposite of the truth, because there's a trade-off between those two things. If you're totally focused on the second, the first becomes psychologically near-impossible.

Comment author: Salutator 23 November 2013 10:20:22AM 1 point [-]

I threw a D30, it came up 20, and I cooperated.

Point being that cooperation in a prisoner's-dilemma sense means choosing the strategy that would maximize my expected payout if everyone chose it, and in this game that is not equivalent to cooperating with probability 1. If it was supposed to measure strategies, the question would have been better if it had asked us for a cooperation probability and Yvain had then drawn the numbers for us.
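The die-roll procedure amounts to cooperating with probability 20/30; a minimal sketch of the mixed strategy (only the probability threshold is taken from the comment, the payoffs of the survey game are not modelled):

```python
import random

def d30_cooperate(rng):
    # Cooperate iff a 30-sided die shows 20 or less,
    # i.e. cooperate with probability 20/30 = 2/3.
    return rng.randint(1, 30) <= 20

rng = random.Random(0)  # fixed seed so the sketch is reproducible
trials = 10_000
rate = sum(d30_cooperate(rng) for _ in range(trials)) / trials
# over many trials the cooperation rate approaches 20/30
```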

Comment author: Salutator 23 November 2013 10:37:12AM 3 points [-]

This was based on a math error; it actually is a prisoner's dilemma.

Comment author: Xodarap 02 November 2013 08:10:06PM *  1 point [-]

it would be trivial for finite generating groups... That would mean only finitely many utility levels and then the result is obvious

Z^2 lexically ordered is finitely generated, and can't be embedded in (R,+). [EDIT: I'm now not sure if you meant "finitely generated" or "finite" here. If it's the latter, note that any ordered group must be torsion-free, which obviously excludes finite groups.]
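A quick sketch of the standard argument for why lexicographically ordered $\mathbb{Z}^2$ admits no order-embedding into $(\mathbb{R},+)$ (essentially the failure of the Archimedean property):

```latex
Suppose $\varphi\colon \mathbb{Z}^2 \to (\mathbb{R},+)$ were an order-preserving
homomorphism. Lexicographically $(0,n) < (1,0)$ for every $n \in \mathbb{N}$, so
\[
  n \cdot \varphi(0,1) \;=\; \varphi(0,n) \;<\; \varphi(1,0)
  \quad \text{for all } n,
\]
which forces $\varphi(0,1) \le 0$. But $(0,1) > (0,0)$ requires
$\varphi(0,1) > 0$, a contradiction. So the lexicographic order is
non-Archimedean and cannot be represented by real-valued utilities.
```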

But your implicit point is valid (+1) - I should've spent more time explaining why this result is surprising. Just about every comment on this article is "this is obvious because <some proof which is invalid>", which I guess is an indication that LWers are so immersed in utilitarianism that counter-examples don't even come to mind.

Comment author: Salutator 04 November 2013 10:30:52PM 0 points [-]

I'm a bit out of my depth here. I understood an "ordered group" to be a group with an order on its elements; that clearly can be finite. If it's more than that, the question would be why we should assume whatever further axioms characterize it.

Comment author: Salutator 28 October 2013 02:55:56PM *  1 point [-]

Two points:

  1. I don't know the Hölder theorem, but if it actually depends on the lattice being a group, that includes an extra assumption of the existence of a neutral element and inverse elements. The neutral element would have to be a life of exactly zero value, so that killing that person off wouldn't matter at all, either positively or negatively. The inverse elements would mean that for every happy life you can imagine an exactly opposite unhappy life, so that killing off both leaves the world exactly as good as before.

  2. Proving this might be hard for infinite cases, but it would be trivial for finite generating groups. Most Less Wrong utilitarians would believe there are only finitely many brain states (otherwise simulations are impossible!) and utility is a function of brain states. That would mean only finitely many utility levels and then the result is obvious. The mathematically interesting part is that it still works if we go infinite on some things but not on others, but that's not relevant to the general Less Wrong belief system.
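The finite case in point 2 really is immediate: any finite total preference order embeds in the reals just by rank. A minimal sketch (outcome names are made up for illustration):

```python
def utilities_from_order(outcomes_ranked):
    # With only finitely many distinguishable states, a total preference
    # order is trivially representable by real utilities: assign each
    # outcome its rank in the ordering (worst first).
    return {outcome: rank for rank, outcome in enumerate(outcomes_ranked)}

u = utilities_from_order(["misery", "neutral", "bliss"])
assert u["misery"] < u["neutral"] < u["bliss"]
```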

(Also, here I'm discussing the details of utilitarian systems arguendo, but I'm sticking with the general claim that all of them are mathematically inconsistent or horrible under Arrow's theorem.)

Comment author: passive_fist 21 October 2013 07:41:45AM 4 points [-]

There are some interesting points in there, especially about the fact that most people make themselves like what seems 'cultured' (I've definitely seen this type of appeal to majority among my friends - I was nearly roasted alive when I mentioned I honestly don't enjoy a particular classical composer).

There are some fallacies in there, too.

Anyway, the part where he talks about trickery is interesting:

What counts as a trick? Roughly, it's something done with contempt for the audience. For example, the guys designing Ferraris in the 1950s were probably designing cars that they themselves admired. Whereas I suspect over at General Motors the marketing people are telling the designers, "Most people who buy SUVs do it to seem manly, not to drive off-road. So don't worry about the suspension; just make that sucker as big and tough-looking as you can."

I question this premise. It seems to imply that the purpose behind the art determines its quality, not the art itself. For instance, if you have two identical paintings, but one was painted with the intention of making money and the other for true artistic merit, the latter somehow has more value (and is thus of 'better taste') than the former.

At any rate, in the end that paragraph was the closest I got to his definition of 'taste' - the ability to recognize trickery in artistic works.

And especially this paragraph about people with good taste:

Or to put it more prosaically, they're the people who (a) are hard to trick, and (b) don't just like whatever they grew up with.

Finally,

I wrote this essay because I was tired of hearing "taste is subjective" and wanted to kill it once and for all.

While the insights presented are interesting (in providing a window into the author's mind, at least), it has not actually succeeded in this purpose.

Comment author: Salutator 24 October 2013 09:07:48PM 1 point [-]

I think it's just elliptical rather than fallacious.

Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest, but not really close; i.e. it explains more than any other single factor but not most of the phenomenon.)

Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.

That's his basic argument for taste being a thing, and it doesn't need a precise definition; in fact it suggests that giving a precise definition is probably AI-complete.

Now the contempt thing is not a definition, it is a suggested heuristic for identifying confounders. To look at my metaphor again: if I wanted to learn about beauty confounders, the tricks people use to make people they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.

This really isn't about the thing itself (beauty/artistic quality) so much as about the delta between the thing and the average person's perception of it. And that delta actually is quite dependent on how much respect the artist/"artist" has for his audience.

Comment author: jaibot 02 October 2013 11:52:57PM *  9 points [-]

I've heard several stories in the last few months of former theists becoming atheists after reading The God Delusion or a similar Four-Horsemen tract. This conflicts with my prior model of those books as mostly paper applause lights that couldn't possibly change anyone's mind.

Insofar as atheism seems like super-low-hanging fruit on the tree of increased sanity, having an accurate model for what gets people to take a bite might be useful.

Has anyone done any research on what makes former believers drop religion? More generally, any common triggers that lead people to try to get more sane?

Edit: Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?

Comment author: Salutator 10 October 2013 12:58:49PM 0 points [-]

I think another thing to remember here is sampling bias. The actual conversion/deconversion is probably mostly the end point of a lengthy intellectual process. People far along in that process probably aren't very representative of people not going through it, and it would be much more interesting to know what gets the process started.

To add some more anecdata, my reaction to that style of argumentation was almost diametrically opposed. I suspect this is fairly common on both sides of the divide, but not being convinced by some specific argument just isn't such a catchy story, so you would hear it less.

Comment author: gjm 03 February 2013 05:40:27PM *  0 points [-]

last night [...] because it's pegged to the actual holiday of Twelfth Night

which is in early January. Is that just because arranging parties takes time, or did someone get Twelfth Night mixed up with Candlemas?

[EDITED to add: great idea, though.]

Comment author: Salutator 03 February 2013 09:24:56PM 1 point [-]

But if you missed Twelfth Night, Candlemas would be a Schelling point for rescheduling, because it's the other "Christmas now definitely over" holiday.
