
Comment author: Good_Burning_Plastic 27 March 2017 08:21:43PM 1 point [-]

let's say theta is modeled by a Gaussian

The conjugate prior of the binomial distribution is the beta distribution, so if you use a beta distribution for theta, the posterior is also a beta distribution, and the expected value of the posterior predictive is just (u0 + u)/(u0 + u + d0 + d) where u and d are the number of up- and downvotes and u0 and d0 are the parameters of the prior distribution, or pseudocounts.
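For concreteness, a minimal sketch of that update in Python (the default pseudocounts below encode a uniform Beta(1, 1) prior; that choice is illustrative, not something fixed by the thread):

    def expected_upvote_prob(u, d, u0=1.0, d0=1.0):
        """Posterior predictive probability that the next reader upvotes.

        Prior: theta ~ Beta(u0, d0). After observing u upvotes and d
        downvotes the posterior is Beta(u0 + u, d0 + d), and its mean is
        the probability of one more upvote.
        """
        return (u0 + u) / (u0 + u + d0 + d)

    # Example: 40 upvotes, 10 downvotes under the uniform prior.
    print(expected_upvote_prob(40, 10))  # 41/52 = 0.788...

With the uniform prior this reduces to Laplace's rule of succession, (u + 1) / (u + d + 2).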

Comment author: tristanm 27 March 2017 08:45:40PM 1 point [-]

You're right, that's in the second chapter of Gelman too. I'll edit that.

Comment author: tristanm 27 March 2017 07:59:55PM *  1 point [-]

Would a Bayesian notion of "upvotes / downvotes" work better than simple upvoting / downvoting? Suppose that instead of a simple sum of ups and downs, there is some unknown latent "goodness" variable theta, which is the parameter of a binomial distribution. Roughly, theta is the probability that a random reader of your post would upvote it. The sum of upvotes, or upvotes minus downvotes, is not a very informative statistic (a highly upvoted or downvoted post could simply be controversial and have a huge number of voters). If instead you calculate the posterior distribution over theta (let's say theta is modeled by a Beta distribution), then you have information about what theta is likely to be, along with the degree of confidence in that estimate. Would calculating that every time someone votes be a huge strain on the backend?
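A rough sketch of what that could look like, using scipy's Beta distribution for the posterior (the uniform Beta(1, 1) prior and the example vote counts here are made up for illustration):

    from scipy import stats

    def vote_posterior(u, d, u0=1.0, d0=1.0):
        """Posterior over the latent 'goodness' theta given u upvotes
        and d downvotes, under a conjugate Beta(u0, d0) prior."""
        return stats.beta(u0 + u, d0 + d)

    # Two posts with the same net score (+5) but very different evidence:
    quiet = vote_posterior(u=6, d=1)
    controversial = vote_posterior(u=105, d=100)

    for name, post in [("quiet", quiet), ("controversial", controversial)]:
        lo, hi = post.interval(0.95)  # central 95% credible interval
        print(f"{name}: mean={post.mean():.2f}, 95% interval=({lo:.2f}, {hi:.2f})")

The quiet post's posterior centers near 0.78 while the controversial one centers near 0.51, even though a raw up-minus-down score treats them identically. As for backend strain: the posterior depends only on the two running vote counts, so each vote is an O(1) counter update, and the distribution only needs to be evaluated when the score is displayed.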

Comment author: Lumifer 24 March 2017 08:04:52PM 0 points [-]

I sort of doubt that there are any niche markets in AI

Hold on. Are you talking about niche markets, or about the capability to do some sort of AI at small-to-medium scale (say, startup to university size)?

You really just don't see "disruption" (in the sense that Peter Thiel defines it) in the AI vertical. And you don't see niches.

Um. I don't think the AI vertical exists. And what do you mean about niches? Wouldn't, I dunno, analysis of X-rays be a niche? High-frequency trading another niche? Forecasting of fashion trends another niche? Etc., etc.

Comment author: tristanm 24 March 2017 11:35:50PM 0 points [-]

Well, niche markets in AI aren't usually referred to as such; they're usually just companies that do task X with the help of statistics and machine learning. In that sense nearly all technology and finance companies could be considered AI companies.

AI in the generalist sense is rare (Numenta, Vicarious, DeepMind), and such companies usually get absorbed by the bigger players. In the specialist sense, if task X is already well known or identified, you still have to go up against established players who have more data and who have had people working on only that problem for decades.

Thinking more about what YC meant in their "democratize AI" article, it seems they were referring to startups that want to use ML to solve problems that haven't traditionally been solved with ML. Or, more generally, they want to help tech companies enter markets that usually aren't served by tech companies. That's fine. But I also get the feeling they really mean helping market certain companies on the AI / ML hype train, even when those companies don't, strictly speaking, use AI to solve a given task. A lot of "AI" startups just do basic statistical analysis but put a really fancy GUI on top of it.

Comment author: Lumifer 20 March 2017 09:16:56PM 0 points [-]

we should expect that the firms with the most resources should have significant advantages over small startups

So how is this different from, say, manufacturing? Or pretty much any business for the last few centuries?

Comment author: tristanm 24 March 2017 07:54:28PM 0 points [-]

I think I would update my position here to say that AI is different from manufacturing in that you can have small-scale manufacturing operations (like 3D printing, as username2 mentioned) that satisfy some niche market, whereas I sort of doubt that there are any niche markets in AI.

I've noticed this a lot with "data science" and AI startups: in what way is their product unique? Usually it's not. It's usually a team of highly talented AI researchers and engineers who need to showcase their skills until they get acqui-hired, or they develop a tool that gets really popular for a while and then it also gets bought. You really just don't see "disruption" (in the sense that Peter Thiel defines it) in the AI vertical. And you don't see niches.

Comment author: bogus 22 March 2017 11:58:46PM *  1 point [-]

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

The actual ground-level stance is more like: "If you think that you know some sort of objective reality, etc., it is overwhelmingly likely that you're in fact wrong in some way, and being deluded by cached thoughts." This is an eminently rational attitude to take - 'it's not what you don't know that really gets you into trouble, it's what you know for sure that just ain't so.' The rest of your comment has similar problems, so I'm not going to discuss it in depth. Suffice it to say, postmodern thought is far more subtle than you give it credit for.

Comment author: tristanm 23 March 2017 12:18:32AM 1 point [-]

If someone claims to hold a belief with absolute 100% certainty, refuting that doesn't require a gigantic modern philosophical edifice. It seems like that's setting a very low bar for what postmodernism actually hopes to accomplish.

Comment author: TheAncientGeek 22 March 2017 08:58:54PM *  1 point [-]

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

Citation needed.

Well yeah, being able to unequivocally define anything is difficult, no argument there

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things.

Engineers use an intuitive and pragmatic definition of truth that allows them to actually do things. Rationalists are more in the philosophy business.

It happens to be an attitude that works really well in practice,

For some values of "work". It's possible to argue in detail that predictive power actually doesn't entail correspondence to ultimate reality, for instance.

I mean, what would it mean to actually hold two beliefs to be completely true but also contradictory?

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

Except that you can't demonstrate superiority of anything within the framework of postmodernism

That's not what I said.

but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

There's no such thing as postmodernism and I'm not particularly in favour of it. My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

Comment author: tristanm 22 March 2017 11:04:11PM 0 points [-]

Citation needed.

Citing it is going to be difficult; even the Stanford Encyclopedia of Philosophy says "That postmodernism is indefinable is a truism." I'm forced to cite philosophers who are opposed to it, because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

There's no such thing as postmodernism

You know exactly what I mean when I use that term; otherwise there would be no discussion. It seems that you can't even name it without someone saying that's not what it's called, that it actually doesn't have a definition, that every philosopher who is labeled a postmodernist called it something else, etc.

Maybe if I can't define it, there's no point in discussing it. But that doesn't change the fact that the mainstream left has absorbed the philosophy in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense. This is mostly the sense in which I refer to it in my original post.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs, like the likelihood of an intelligence explosion? Or are you talking about the Bayesian methodology in general?

Comment author: TheAncientGeek 22 March 2017 11:56:51AM *  0 points [-]

Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme.

Rationalists (Bay Area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches.

And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought.

And what's worse is that Bay Area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarski's theory, and a few others without settling on a single correct theory.)

Bay Area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof. What's worse still is that sceptical and relativistic claims can be supported using the toolkit of rationality. "Postmodernists" tend to be sceptics and relativists, but you don't have to be a "postmodernist" to be a relativist or sceptic, as non-Bay-Area, mainstream rationalists understand well. If rationalism is to win over "postmodernism", it must win rationally, by being able to demonstrate its superiority.

[*] "Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Comment author: tristanm 22 March 2017 06:28:32PM 0 points [-]

Rationalists (Bay Area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches.

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth. And the 'goal' of postmodernism is to break apart and criticize everything that claims to be able to do those things. You would be hard-pressed to find a better example of something diametrically opposed to rationalism. (I'm going to guess that, with high likelihood, I'll be accused of not understanding postmodernism for saying that.)

And what's worse is that Bay Area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarski's theory, and a few others without settling on a single correct theory.)

Well yeah, being able to unequivocally define anything is difficult, no argument there. But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things. Then what happens is they get accused by postmodernists of claiming to have the One and Only True and Correct Definition of Truth and Correctness, and of claiming that we have access to the Objective Reality. The point is that as soon as you allow for any leeway here at all (some in-between ground between having 100% access to a true objective reality and having 0% access to it), you basically obtain rationalism. Not because its principles say that there is an objective reality that is possible to Truly Know, or that there are facts we know to be 100% true, but only that there are sets of claims we have some degree of confidence in, and other sets of claims we might want to calculate a degree of confidence in based on the first set.

Bay Area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof.

It happens to be an attitude that works really well in practice, whereas the other two attitudes can't actually be used in practice if you adhere to them fully; they would only be useful for denying anything that someone else believes. I mean, what would it mean to actually hold two beliefs to be completely true but also contradictory? In probability theory you can have non-zero degrees of confidence that add up to one, but it's unclear whether this is the same thing as relativism in the sense of "multiple truths". I would guess that it isn't, and that "multiple truths" really means holding two incompatible beliefs to both be true.

If rationalism is to win over "postmodernism", it must win rationally, by being able to demonstrate its superiority.

Except that you can't demonstrate superiority of anything within the framework of postmodernism. Within rationalism it's very easy and straightforward.

I imagine the reason that some rationalists might find postmodernism to be useful is in the spirit of overcoming biases. This in and of itself I have no problem with - but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

Comment author: username2 21 March 2017 03:08:48AM 2 points [-]

(I thought the post was reasonably written.)

Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this force has always been present and has often led to "problems" (e.g., the Luddite movement), but that usually (though not always) the general population eventually came around to believing the same things as "the elites".

Comment author: tristanm 21 March 2017 08:48:54PM 0 points [-]

The process is not different from what occurred in the past, and I think it was basically the catalyst for anti-Semitism in the post-Industrial Revolution era. You observe a characteristic of a group of people who seem to be doing a lot better than you - in that case a lot of them happened to be Jewish - and so you associate their Jewishness with your lack of success and unhappiness.

The main difference is that society continues to modernize and technology improves. Bad ideas about why some people are better off than others become unpopular. Actual biases and unfairness in the system gradually disappear. But despite that, inequality remains and in fact seems to be rising. What happens then is that the only thing left to blame is instrumental rationality itself. I imagine that people will look as hard as they can for bias and unfairness for as long as possible, and will want to see it in people who are instrumentally rational.

In a free society (and even more so as a society becomes freer and true bigotry disappears), some people will be better off just because they are better at making themselves better off, and the degree to which people vary in that ability is quite staggering. But psychologically it is too difficult for many to accept this, because no one wants to believe in inherent differences. So it's sort of a paradoxical result of our society actually improving.

Comment author: Viliam 21 March 2017 01:31:28PM 0 points [-]

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players have all turned against rationality, fighting it will become less important to them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Comment author: tristanm 21 March 2017 08:27:51PM 0 points [-]

I don't think it's necessary for 'rationality' to be used as an applause light for this to happen. The only things needed, in my mind, are:

  • A group of people who adopt rationality and are instrumentally rationalist become very successful, wealthy and powerful because of it.
  • This group makes up an increasing share of the wealthy and powerful, because they are better at becoming wealthy and powerful than the old elite.
  • The remaining people, who aren't as wealthy or successful or powerful and who haven't adopted rationality, observe what the successful group does and associate whatever it does and says with the tribal characteristics and culture of that group. The fact that they haven't adopted rationality makes them more likely to do this.

And because the final bullet point is always what occurs throughout history, the only difference - and really the only thing necessary for this to happen - is that rationalists make up a greater share of the elite over time.

Comment author: bogus 21 March 2017 05:54:48PM *  0 points [-]

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party.

Somewhat ironically, this is exactly the sort of cargo-cultish "rationality" that originally led to the emergence of postmodernism, which arose in opposition to it and called for some much-needed re-evaluation of, and skepticism about, all "cached thoughts". The moral, I suppose, is that you just can't escape idiocy.

Comment author: tristanm 21 March 2017 08:09:21PM *  0 points [-]

Not exactly. What happened at first was that Marxism - which, in the early 20th century, became the dominant mode of thought for Western intellectuals - was based on rationalist materialism, until it was empirically shown to be wrong by some of the largest social experiments mankind is capable of running. The question for intellectuals who were unwilling to give up Marx after that was how to save Marxism from empirical reality. The answer was postmodernism. You'll find that in most academic departments today, those who identify as Marxists are almost always postmodernists (you won't find them in economics or political science, but rather in English, literary criticism, and social science departments). Marxists of the rationalist type are pretty much extinct at this point.
