If the reporter estimates every node of the human's Bayes net, then it can assign a node a probability distribution different from the one that would be calculated from the distributions simultaneously assigned to its parent nodes. I don't know if there is a name for that, so for now I will pompously call it inferential inconsistency. Considering this as a boolean bright-line concept, the human simulator is clearly the only inferentially consistent reporter. But one could also use some metric on how different the probability distributions are and turn it into a more gradual thing.
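To make the gradual version concrete, something like the following toy sketch is what I have in mind. Everything here is hypothetical and simplified: nodes are discrete, the parents' assigned distributions are treated as independent, and KL divergence stands in for whatever metric one would actually use.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as arrays."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def implied_distribution(cpt, parent_dists):
    """Distribution over a node implied by its conditional probability table
    (indexed [parent_1, ..., parent_m, node]) and the parents' marginals,
    treating the parents as independent (a simplifying assumption)."""
    joint_parents = parent_dists[0]
    for d in parent_dists[1:]:
        joint_parents = np.multiply.outer(joint_parents, d)
    # Contract away all parent axes, leaving only the node's axis.
    return np.tensordot(joint_parents, cpt, axes=joint_parents.ndim)

def inferential_inconsistency(assigned_dist, cpt, parent_dists):
    """How far the distribution the reporter assigned to a node is from the one
    implied by the distributions it simultaneously assigned to the parents."""
    return kl_divergence(assigned_dist, implied_distribution(cpt, parent_dists))
```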

Being a reporter basically means being inferentially consistent on the training set. On the other hand, being inferentially consistent everywhere means being the human simulator. So a direct translator would differ from a human simulator by being inferentially inconsistent for some inputs outside of the training set. This could in principle be checked by sampling random possible inputs. The human could then try to distinguish a direct translator from a randomly overfitted model by trying to understand a small sample of the inferential inconsistencies.
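A rough sketch of that sampling check, reusing the inferential_inconsistency function from the sketch above. The reporter, bayes_net and sample_input interfaces are stand-ins I made up, not anything from the ELK report:

```python
def find_inconsistencies(reporter, bayes_net, sample_input, n_samples=1000, top_k=5):
    """Draw random inputs, score every non-root node, and return the worst
    offenders for the human to inspect. All three arguments are hypothetical:
    reporter(x) returns a dict mapping node -> assigned distribution,
    bayes_net.structure maps node -> list of parent nodes, and
    bayes_net.cpt(node) returns the node's conditional probability table."""
    flagged = []
    for _ in range(n_samples):
        x = sample_input()
        assigned = reporter(x)
        for node, parents in bayes_net.structure.items():
            if not parents:
                continue  # root nodes have nothing to be inconsistent with
            score = inferential_inconsistency(
                assigned[node],
                bayes_net.cpt(node),
                [assigned[p] for p in parents],
            )
            flagged.append((score, node, x))
    flagged.sort(key=lambda t: t[0], reverse=True)
    return flagged[:top_k]
```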

So much for my thoughts inside the paradigm; now on to snottily rejecting it. The intuition that the direct translator should exist seems implausible, and the idea that it would be so strong an attractor that a training strategy avoiding the human simulator would quasi-automatically land on it borders on the absurd. Satisfying a constraint on the training set but not outside of it is basically what overfitting is, and overfitted solutions with many specialised degrees of freedom are usually highly degenerate. In other words, penalizing the human simulator would almost certainly lead to something closer to a pseudorandomizer than to a direct translator. And looking at it a different way, the direct translator is supposed to be helpful in situations the human would perceive as contradictory. Or to put it differently, not bad model fits but rather models strongly misspecified and then extrapolated far out of the sample space. Those are basically the situations where statistical inference and machine learning have a strong track record of not working.

It gets very interesting if there actually are no shares to buy back in the market. For details on how it gets interesting, google "short squeeze".

Other than that exceptional situation, it's not that asymmetrical:

-Typically you have to post some collateral for shorting, and there will be a well-understood maximum loss before your broker buys back the stock and seizes your collateral to cover that loss. So short (haha) of a short squeeze, there actually is a maximum loss in short selling.

-You can take similar risks on the long side by buying stocks on credit ("on margin" in financial slang) with collateral, which the bank will use to close your position if the stock drops too far. So basically long-side risks can also be made as big as your borrowing ability (toy numbers below).
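To put made-up numbers on the symmetry (a toy sketch, not how any particular broker actually computes margin):

```python
# Toy example: a $100 stock, 10 shares, $300 of posted collateral in both cases.
entry_price = 100.0
shares = 10
collateral = 300.0

# Short: the loss grows as the price rises; the broker buys the shares back
# once the loss eats the collateral, so (short of a squeeze) the loss is capped.
short_liquidation_price = entry_price + collateral / shares        # 130.0
max_short_loss = (short_liquidation_price - entry_price) * shares  # 300.0

# Long on margin: the loss grows as the price falls; the position is closed
# the same way on the downside.
long_liquidation_price = entry_price - collateral / shares         # 70.0
max_long_loss = (entry_price - long_liquidation_price) * shares    # 300.0

print(max_short_loss, max_long_loss)  # both capped at the posted collateral
```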

Let me be a bit trollish so as to establish an actual counter-position (though I actually believe everything I say):

This is where the sequences first turn dumb.

For low-hanging fruit, we first see modern mythology misinterpreted as actual history. In reality, phlogiston was a useful theory at the time, which was rationally arrived at and rationally discarded when evidence turned against it (with some attempts at "adding epicycles", but no more than other scientific theories). And the NOMA thing was made up by Gould when he misunderstood actual religious claims, i.e. it is mostly a straw man.

On a higher level of abstraction, the whole approach of this sequence is discussing other people's alleged rationalizations. This is almost always a terrible idea. For comparison, other examples would include Marxist talk about false consciousness, Christian allegations that atheists are angry at God or want a license to sin, or the Randian portrayal of irrational death-loving leeches. [Aware of meta-irony following:] Arguments of this type almost always serve to feed the ingroup's sense of security, safely portraying the scariest kinds of irrationality as a purely outgroup thing. And that is the simplest sufficient causal explanation of this entire sequence.

You're treating looking for weak points in your own beliefs and in your interlocutor's as basically the same thing. That's almost the opposite of the truth, because there's a trade-off between those two things. If you're totally focused on the second one, the first becomes psychologically near impossible.

This was based on a math error; it actually is a prisoner's dilemma.

I rolled a d30, it came up 20, and I cooperated.

Point being that cooperation in a prisoner's dilemma sense means choosing the strategy that would maximize my expected payout if everyone chose it, and in this game that is not equivalent to cooperating with probability 1. If it was supposed to measure strategies, the question would have been better if it had asked us for a cooperation probability, with Yvain then drawing the numbers for us.
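To spell out the expected-payout calculation (the payoff numbers here are invented just to show a case where the best symmetric strategy is mixed; they are not the actual payoffs of the game):

```python
import numpy as np

# Hypothetical payoffs with the usual prisoner's dilemma ordering T > R > P > S.
# They are chosen so that S + T > 2 * R, which is exactly the condition under
# which the best "if everyone played it" cooperation probability is less than 1.
R, S, T, P = 3.0, 0.0, 10.0, 1.0  # reward, sucker, temptation, punishment

def expected_payoff(p):
    """Expected payoff when both players independently cooperate with probability p."""
    return p * p * R + p * (1 - p) * (S + T) + (1 - p) ** 2 * P

ps = np.linspace(0.0, 1.0, 1001)
best_p = ps[np.argmax(expected_payoff(ps))]
print(best_p)  # ~0.667 with these made-up numbers, i.e. not "always cooperate"
```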

[This comment is no longer endorsed by its author]

I think it's just elliptical rather than fallacious.

Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest, but not really close, i.e. it explains more than any other factor but not most of the phenomenon.)
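For what it's worth, the "will correlate very highly" claim is the kind of thing you could check with average pairwise rank correlation. The sketch below uses simulated ratings (a shared latent factor plus per-ranker noise), purely to show the measurement, not real data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_rankers, n_pictures = 100, 100

# Simulated ratings: a shared latent factor plus independent per-ranker noise.
latent = rng.normal(size=n_pictures)
ratings = latent + 0.5 * rng.normal(size=(n_rankers, n_pictures))

# Average Spearman rank correlation over all pairs of rankers.
corrs = [spearmanr(ratings[i], ratings[j])[0]
         for i in range(n_rankers) for j in range(i + 1, n_rankers)]
print(np.mean(corrs))  # high agreement despite the rankers never communicating
```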

Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.

That's his basic argument for taste being a thing, and it doesn't need a precise definition; in fact it suggests that giving a precise definition is probably AI-complete.

Now the contempt thing is not a definition; it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty confounders, tricks people use to make people they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.

This really isn't about the thing (beauty/artistic quality) per se, more about the delta between the thing and the average person's perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.

I think another thing to remember here is sampling bias. The actual conversion/deconversion probably is mostly the end point of a lengthy intellectual process. People far along that process probably aren't very representative of people not going through it, and it would be much more interesting to know what gets the process started.

To add some more anecdata, my reaction to that style of argumentation was almost diametrically opposed. I suspect this is fairly common on both sides of the divide, but not being convinced by some specific argument just isn't such a catchy story, so you would hear it less.
