... of LW: a while ago, a former boss and friend of mine said that rationality is irrational because you never have sufficient computational power to evaluate everything rationally. I thought he was missing the point - but after two posts on LW, I am inclined to agree with him.

It's kind of funny - every post gets broken down into its tiniest constituents, and these get overanalysed and then people go on tangents only marginally relevant to the intent of the original article.

This would be fine if the original questions of the post were answered; but when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics, and one subthread went off to discuss the appropriateness of the term "gender equality".

I am new here, and I don't want to be overly critical of a culture I do not yet understand. But I just want to point out: rationality is a great tool for solving problems; if it becomes overly abstract, it kind of misses its point, I think.

"but when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics, and one subthread went off to discuss the appropriateness of the term "gender equality".

Pretend doctors are attempting to come up with a metric for how well patients with a certain disease are doing. Wouldn't the first step be to discuss whether the classification of the disease is correct, and then to discuss what it means to measure progress with this disease? Your post on measuring how Trump is doing wasn't important in and of itself for improving the art of rationality, but it was useful in a meta way, as an example of how you can measure progress.

And just in case you think I was being deliberately silly in my response to your Trump post, I teach economics at a women's college and when I discuss gender inequality in my microeconomics class I bring up some of the points I was trying to make with my response to your post.

I teach economics at a women's college and when I discuss gender inequality in my microeconomics class

My condolences.

Lumifer being Lumifer at everyone

My condolences.

You in particular did provide metrics, so I am not complaining! Although, to be perfectly honest, I do think your delivery is sort of passive-aggressive or disingenuous... you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged. You provide metrics to evaluate improvement in areas where men are disadvantaged - i.e. your underlying assumption/hypothesis is the opposite of everyone else's, but you don't acknowledge it.

you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged.

Not on LessWrong, but in general yes. But this is in part because most people assume that on almost all important metrics women are disadvantaged.

And I am not saying that I agree with that majority view. All I am saying is that since you know that, to sort of pretend that it's not the case is a bit strange.

We mostly don't do politics here. You'll probably get better outcomes posting about other stuff.

At the risk of being ironically guilty of not addressing your actual argument here, I'll point out that flaws of LW, valid or otherwise, aren't flaws of rationality. Rationality just means avoiding biases/fallacies. Failure can only be in the community.

Yeah, this is particularly important. LW right now is a bit of a disgrace to rationality if I'm honest.

I was being facetious; of course I still believe in rationality. But you know, I had been reading Slate Star Codex, which basically represents the rationalist community as an amazing group of people committed to truth, honesty and the scientific approach - and though I appreciate how open these discussions are, I am a bit disappointed at how pedantic some of the comments are.

It seems important to be extremely clear about the criticism's target, though. I agree overanalysis is a failure mode of certain rationalists, and statistically more so for those who comment more on LW and SSC (because of the selection effect specifically for those who nitpick). But rationality itself is not the target here, merely naive misapplication of it. The best rationalists tend to cut through the pedantry and focus on the important points, empirically.

What metrics did the SSC commentariat propose, and was your question received better there?

I haven't posted the question there.

I didn't read the original thread because politics. I'm somewhere between resentful and sad that you're judging a group of people for failing to solve your problem, which was under-specified and likely impossible.

All problems, if the right answer matters, SHOULD be broken down and analyzed - there is no "overanalyzed". Those questions that can't survive this are usually a mix of other problems, and resistance to analysis is a sign that the asker isn't seeking truth, but some other goal (see: politics).

There's not always time to fully break down and understand things you're interested in - understood. But acknowledging that you're looking for heuristics or shortcuts, and doing the first level or two of breakdown into components, is still the best approach if you're looking to understand the universe.

In summary: please keep political topics away from this site. If you're interested in decision theory or practical shortcuts and tricks to train your instincts to match your conscious beliefs, stick around.

and then people go on tangents only marginally relevant to the intent of the original article

I don't think I've ever seen any forum in which (1) informal unstructured discussions are commonplace and (2) what you just described doesn't happen. It may or may not be a bad thing, but it isn't the fault of rationality.

It's kind of funny - every post gets broken down into its tiniest constituents,

If you have a 1000-line programme that crashes because of a bug in one line, do you focus on the one line or on the other 999?

Regardless of what I do, I expect the program to provide a response at the end. Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".

Even worse, imagine your compsci professor asks you to write code to simulate objects falling from a skyscraper. What you are doing here, then, is telling me "aaah, but you are trying to simulate this using gravity! That is, of course, not a universal solution, so you should try relativity instead".

Regardless of what I do, I expect the program to provide a response at the end.

I don't think that is literally true.

Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".

What is that supposed to be analogous to? Which ethics is uniquely picked out by your criteria? I don't think any are. I think there are obviously a countable infinity of consistent ethical systems.

but when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics

This is actually the correct response.

It's trivially easy to generate tons of metrics -- hundreds of them. What's difficult is choosing the right ones. And which ones are the right ones? That depends. That depends on what you want.

Without specifying what it is that you want to measure, the talk about metrics is premature. "Success" is not a specification.

This is actually the correct response.

And this is what I mean when I say rationalists often seem to be missing the point. Fair enough if you want to say "here is the right way to think about it... and here are the metrics this method produces, I think".

But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.

But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.

Imagine going to a Trump forum and asking them for advice on how to get Trump impeached, and getting the answer: "Trump shouldn't be impeached." Did they give you the answer that you were looking for? No, they didn't.

They disagree with you on principles. Here, too, there's disagreement on principles.

Let's say you went to a homeopath and afterwards you got cured. You go to a friend and ask him for metrics for the treatment you received. You suggest possible things to measure:

  • Improvement in my well-being.
  • Fewer sick days.
  • Whether the homeopath felt warm and empathetic.
  • The cost of the treatment.

But you have a problem. Measuring sick days and cost is easy, but you really want help with proper metrics for well-being and for the homeopath being warm and empathetic.

That's roughly the quality of your original post, and you don't want to hear that n=1 evidence is not enough to make a good judgment.

and no metrics are given

Unfortunately, I cannot read minds.

I said that it depends on what you want, and I actually do not know what you want.

Unfortunately, I cannot read minds.

But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]". I didn't say "I want suggestions on how to go about deciding the suitability of a metric".

But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]".

I guess I can read, kinda-sorta. How about you? I answered:

It's trivially easy to generate tons of metrics -- hundreds of them

and y'know, I'm a bit lazy to type it all up...

It's easy to generate tons of metrics, what's hard is generating a relatively small list that does the job. If you are too lazy to contribute to the discussion, fine. But contributing just pedantic remarks is a waste of everyone's time.

what's hard is generating a relatively small list that does the job

And since, as I've pointed out, you failed to specify the job, the task changes from hard to impossible.

But I don't know if it was a waste of everyone's time. Your responses were... illuminating.

The job was: evaluate a presidency. What metrics would you, as an intelligent person, use to evaluate a presidency? How much simpler can I make it? I didn't ask you to read my mind or anything like that.

What metrics would you, as an intelligent person, use to evaluate a presidency?

My metrics are likely to be quite different from yours, since I expect to have axes of evaluation which do not match yours.

A good starting point is recalling that POTUS is not a king and his power is quite constrained. For example, he doesn't get to control the budget. Judging a POTUS on, say, unemployment, is silly because he just doesn't have levers to move it. In a similar way, attributing shifts in culture wars to POTUS isn't all that wise either.

My metrics are likely to be quite different from yours

And that's fine! If everyone here gave me a list of 5-10 metrics instead of pedantic responses, I'd be able to choose a few I like, and boom, problem solved.

A problem? Which problem? I don't have a problem.

Are you, by any chance, upset that people didn't hop to solving your problem?

For the love of... problem solved = the problem I asked for people to help me solve, i.e. finding metrics. If you don't want to help, fine. But as I said, being inane in an attempt to appear smart is just stupid, counterproductive and frankly annoying.

Look, someone asks for your help with something. There are two legitimate responses: a) you actually help them achieve their goal or b) you say, "sorry, not my problem". Your response is to be pedantic about the question itself. What good does that do?

There are two legitimate responses

Nope. There are more, e.g.

(c) You misunderstand your problem, it's actually this

(d) Your problem is not solvable because of that

(e) Solving this problem will not help you (achieve a more terminal goal)

(f) the problem was not correctly conveyed, leading to someone trying to solve the one you conveyed not the one you wanted them to solve.

(g) get out of the car

I think you are conflating "is overly rational and insufficiently pragmatic" with "doesn't do what ArisC wants, on demand, in the way they want it done".

LW is a forum for discussing the art of rationality. It's not primarily a forum for discussing politics. If someone writes a post about politics that implies misconceptions about how we can determine causality, it makes sense to focus on the misconception about determining causality instead of ignoring it and stepping into the more political issues.

If you want the usual way to discuss politics there are plenty of other places on the internet.

If LW simply debated politics on the terms of every outsider who wants to discuss politics on LW, that would let a lot of bad reasoning about politics into LW.

... of LW: a while ago, a former boss and friend of mine said that rationality is irrational because you never have sufficient computational power to evaluate everything rationally. I thought he was missing the point - but after two posts on LW, I am inclined to agree with him.

He's technically correct on the first part, but what really bothers me is that while that statement is resource-aware, it totally disregards time. What can you do in 1, 2, 5, 10 minutes/hours/days/weeks/months/years (remind me to edit this to include decades/centuries/millennia at some point) that will help you achieve your goals?

It's kind of funny - every post gets broken down into its tiniest constituents, and these get overanalysed and then people go on tangents only marginally relevant to the intent of the original article.

Typical online discussions... okay, I have no data to back that up.

This would be fine if the original questions of the post were answered; but when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics, and one subthread went off to discuss the appropriateness of the term "gender equality".

You've just had a taste of why discussing politics can get difficult and annoying.

I am new here, and I don't want to be overly critical of a culture I do not yet understand. But I just want to point out: rationality is a great tool for solving problems; if it becomes overly abstract, it kind of misses its point, I think.

You're not the first to bring it up. Just so you won't feel lonely: https://lesswrong.com/lw/2po/selfimprovement_or_shiny_distraction_why_less/

He's technically correct on the first part, but what really bothers me is that while that statement is resource-aware, it totally disregards time. What can you do in 1, 2, 5, 10 minutes/hours/days/weeks/months/years (remind me to edit this to include decades/centuries/millennia at some point) that will help you achieve your goals?

There is a definition of rationality implicit in that, which seems to be along the lines of "use System 2, use it for everything, and keep on using it till you get an answer".

I've been around LW for years, and I'd say it's tended more towards refining the art of pragmatism than rationality (though there's a good bit of intersection there).

I think that one failure mode is to think that "rationality" actually exists as an object in the outside world. But that is akin to the mind projection fallacy. It is not an object. It is not even a mathematical object, like the digits.

Defining rationality through winning also does not explain what rationality is. Other things could also produce winning: luck, force, power, rules manipulation, personal effectiveness, genetics, risk-taking, or the number of trials. Or simply the interpretation of what counts as winning.

If someone wins, it is not strong evidence of his rationality. Many politicians, billionaires and sportsmen are winners from the point of view of their peers, but they are not the best rationalists.

So rationality is not a physical object, not a mathematical object, and not something we could extract from solving game theory.

Rationality also is not intelligence, as nobody knows what intelligence is (or they would be able to build AI).

So it may be better to think of rationality as an idealised way of thinking, and not just any way, but a way of thinking that can be presented as a finite number of finite rules. The last distinction is important, as there is another possibility: that the best way of thinking is a petabyte-sized neural network which works great but nobody knows how.

By defining rationality as the best way of thinking that can be presented as a finite set of rules, we hope that this definition converges on one and only one finite object, the best set of rules; in that case we would be able to say that rationality actually exists. It may not be true. It could produce several contradicting sets of rules, or a set of rules for which we can't mathematically prove that it is actually the best possible set.

Rationality is like communism: a grand project which does not yet exist, but some steps can be taken in its direction. Actual rationality will probably be created only with AI.

By defining rationality as the best way of thinking that can be presented as a finite set of rules, we hope that this definition converges on one and only one finite object, the best set of rules; in that case we would be able to say that rationality actually exists.

I think the average LW user would consider that to be Bayesian probability, which has indeed been used as the basis for an idealized AI (called AIXI).
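
For concreteness, here is a minimal Python sketch of the Bayesian update rule referred to here; the hypothesis, the test and all of the numbers are hypothetical, chosen only to illustrate the mechanics:

    # Minimal sketch of Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
    # The prior, hit rate and false-positive rate below are made up.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Return the posterior P(H|E)."""
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # P(E)
        return p_e_given_h * prior / p_e

    # A test that detects a condition 90% of the time, with a 5%
    # false-positive rate, applied to a hypothesis given 10% prior credence:
    posterior = bayes_update(prior=0.10, p_e_given_h=0.90, p_e_given_not_h=0.05)
    print(round(posterior, 3))  # 0.667: credence rises, but far from certainty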

Pretty much everyone is saying that this is a case of mismatched expectations: obviously from your point of view, it's others who are wrong.
I would object that rationality requires you to stop your immediate reaction and look at the thing from a meta-level: you've come to a place expecting one thing and found another. Are you going to yell at reality and try to shame it into being something else?
You should simply re-evaluate your model and decide from there your course of action.

Of course they are wrong. If you examine everything at the meta-level and forget about being pragmatic, you will starve.

I agree. I am a blogger and am active on several forums. Once, while discussing UGC NET answer keys, people suddenly went gaga debating which of the recorded answers were right and wrong, and I was left wondering what we had been discussing and how we had arrived at this abrupt tangent. It is always the small chunks that catch your attention and end up resonating in your verdict.

Aris, in some ways I agree with you, though conditionally.

It is my understanding that the next best step is to find a solution to our computational power problem, hence all the talk about inventing friendly AI and keeping it in check for our uses.

As far as abstraction is concerned, I don't really know in what sense you're using the term. When I see that word, the first thing that comes to mind where it can be problematic is the case of failing to boil points of disagreement down to the 5-second level. Does this help?

Perhaps you've seen a lot of people not taking this advice, or maybe you've seen various examples of the ample amounts of disguised sealioning in the discussion sections. Or maybe the culture here merely requires some practice.

Nevertheless, you get used to it. I doubt what you've seen is representative of rationality as a whole. I base this solely on your claim of being new. And unfortunately we don't have a fantastic, more powerful form of computation at our disposal to do what we want yet. So we work with what we've got.

You might be right that we tend to focus on details too much, but I don't think your example shows this.

when I asked for metrics to evaluate a presidency, few people actually provided any - most started debating the validity of metrics, and one subthread went off to discuss the appropriateness of the term "gender equality".

All this shows is that we're bad at solving the problem you asked us to solve. But it's not like you're paying us to solve it. We can choose to talk about whatever we find most interesting. That doesn't mean we couldn't solve the problem if we wanted to.

I completely agree. Almost all of us here have jobs/pursuits/studies that we are good at, and that require a lot of object-level problem solving. LW is a quiet corner where we come in our free time to discuss meta-level philosophical questions of rationality and have a good time. For these two goals, LW has also acquired a norm of not talking about object-level politics too much, because politics makes it hard to stay meta-level rational and isn't always a good time.

Now with that said, you're of course welcome to post an object-level political question on the forum. It's an open community. But if people here don't want to play along, you should take it as a sign that you missed something about LW, not that we're missing something about answering questions practically.

Of course, you have the right to do whatever you want. But, if someone new to a group of rationalists asks a question with a clear expectation for a response, and gets philosophising as an answer, don't be surprised if people get a perhaps unflattering view of rationalists.

What websites are you using where pedantry, sophistry, tangents, and oblique criticism aren't the default? Are you using the same Internet as me?

My parents always told me "we only compare ourselves to the best". I am only making these criticisms because rationalists self-define as, well, rational. And to me, rationality also has to do with achieving something. Pedantry, sophistry &c are unwelcome distractions.

I actually agree. I think one issue is that the kind of mind that is attracted to "rationality" as a topic also tends to be highly sensitive to perceived errors, and to be fond of taking things to the meta-level. These combine to lead to threads where nobody talks about the object-level questions. I frankly don't even try to bring up object-level problems on Less Wrong.