
Lumifer comments on Open Thread, Dec. 28 - Jan. 3, 2016 - Less Wrong Discussion

Post author: Clarity 27 December 2015 02:21PM




Comment author: Lumifer 28 December 2015 04:49:28PM 6 points

I'm not buying your elevator pitch. Primarily because lots of data is not nearly enough. You need smart people and, occasionally, very smart people. This means that

companies had access to tons of data that they could use to ACTUALLY make better decisions

is not true because they lack people smart enough to correctly process the data, interpret it, and arrive at the correct conclusions. And

the management consulting companies would come in as outsiders, charge a bunch of money, and use their clout to use the data to make big decisions

is also not quite true because companies like McKinsey and Bain actually look for and hire very smart people -- again, it's not just data. Besides, in a lot of cases external consultants are used as hatchet men to do things that are politically impossible for the insiders to do, that is, what matters is not their access to data but their status as outsiders.

there's no objective way to tell which companies are actually good at making decisions

Sure there is -- money. It's not "pure" capitalism around here, but it is capitalism.

An objective metric (Bayesian scoring rule) that shows how good an organization or individual is at predicting the future.

So, what's wrong with the stock price as the metric?

Besides, evaluating forecasting capability is... difficult. Both theoretically (out of many possible futures only one gets realized) and practically (there is no incentive for people to give you hard predictions they make).
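For concreteness, the scoring rule itself is the trivial part -- a Brier score for binary forecasts is a few lines (a sketch with made-up numbers, not anyone's production code):

```python
# Brier score for binary forecasts: mean squared error between the
# stated probability and the 0/1 outcome. Lower is better; always
# answering 0.5 scores 0.25.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A confident, correct forecaster beats a hedger:
confident = brier_score([0.9, 0.1, 0.8], [1, 0, 1])   # 0.02
hedger    = brier_score([0.5, 0.5, 0.5], [1, 0, 1])   # 0.25
```

The hard part is everything the score can't see: which questions got asked in the first place, and whether anyone was willing to put hard predictions on the record.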

I don't think that McKinsey's and Bain's business is crunching data. I think it is renting out smart people.

Comment author: lusername 28 December 2015 05:23:14PM 8 points

(using a throwaway account to post this)

Very true.

I was recently involved in a reasonably huge data mining & business intelligence task (that I probably should not disclose). I could say it was an eye-opener, but I am old enough to be cynical and disillusioned, so it was not a surprise.

First, we had some smart people on the team (shamelessly including myself :-), where "smart" almost by definition means "expert in programming, software development, and enough mathematics and statistics") doing the software implementation, data extraction, and statistics. Then there were slightly less smart people, but experts in the domain being studied, who were supposed to make sense of the results and write the report. These people were pulled off the team because they were very urgently needed for other projects.

Second, the company bought a very expensive tool for data mining and statistical analysis, and subcontracted another company to extend it with the necessary functionality. The tool did not work as expected, and the subcontracted extension was two months late (it was finished at the time the final report should have been delivered!), buggy, and incompatible with the new version of the tool.

Third, it was quite clear that the report should be bent towards what the customer wanted to hear (that is not to say it would contain fabricated data -- just that the interpretations should be more favourable).

So, those smart people spent their time 1) working around bugs in the software we were supposed to use, 2) writing ad-hoc statistical analysis software to be able to do at least something, 3) analysing data in a domain they were not experts in, and 4) writing the report.

After all this, the report was stellar, the customer extremely satisfied, the results solid, the reasoning compelling.

Had I not been involved, and had I not known how much of the potential had been wasted and on how small a fraction of the data the analysis had been performed, I would consider the final report a nice example of a clever, honest, top-level business intelligence job.

Comment author: [deleted] 28 December 2015 08:05:58PM 1 point

So, those smart people spent their time 1) working around bugs in the software we were supposed to use, 2) writing ad-hoc statistical analysis software to be able to do at least something, 3) analysing data in a domain they were not experts in, and 4) writing the report.

After all this, the report was stellar, the customer extremely satisfied, the results solid, the reasoning compelling.

Had I not been involved, and had I not known how much of the potential had been wasted and on how small a fraction of the data the analysis had been performed, I would consider the final report a nice example of a clever, honest, top-level business intelligence job.

So, this problem is NOT one I'm tackling directly (I'm more asking: how can they get smart people like you to build that kludge much more cheaply?), but the model does indirectly incentivize better BI tools by creating competition directly on forecasting ability, not just signaling ability.

Comment author: [deleted] 28 December 2015 07:51:34PM * 2 points

I'm not buying your elevator pitch.

To be frank, I didn't expect you to based on our previous conversations on forecasting. You are too skeptical of it, and haven't read some of the recent research on how effective it can be in a variety of situations.

is not true because they lack people smart enough to correctly process the data, interpret it, and arrive at the correct conclusions.

Exactly, this is the problem I'm solving.

So, what's wrong with the stock price as the metric?

As I said, the signaling problem. Using previous performance as a metric means that there are lots of good forecasters out there who simply can't get discovered - right now, it's signaling all the way down (top companies hire from top colleges, which take from top high schools). Basically, I'm betting that there are lots of organizations and people out there who are good forecasters but don't have the right signals to prove it.

Besides, evaluating forecasting capability is... difficult. Both theoretically (out of many possible futures only one gets realized) and practically (there is no incentive for people to give you hard predictions they make).

You should read the linked article on prediction polls - they weren't even paying people in Tetlock's study (only giving gift cards not at all commensurate with the work people were putting in), and they still solved the problem to the point where they could beat prediction markets.

Comment author: Lumifer 04 January 2016 05:06:50PM * 2 points

You are too skeptical of it, and haven't read some of the recent research on how effective it can be in a variety of situations.

From my internal view I'm sceptical of it because I'm familiar with it :-/

it's signaling all the way down (Top companies hire from top colleges, take from top highschools)

Um, hiring from top colleges is not quite all signaling. There is quite a gap between, say, an average Stanford undergrad and an average undergrad of some small backwater college.

You should read the linked article on prediction polls - they weren't even paying people in Tetlock's study

Um, I was one of Tetlock's forecasters for a year. I wasn't terribly impressed, though. I think it's a bit premature to declare that they "solved the problem".

With people who claim to have awesome forecasting power or techniques, I tend to point at financial markets and ask why they aren't filthy rich.

Comment author: [deleted] 04 January 2016 05:43:07PM * 0 points

From my internal view I'm sceptical of it because I'm familiar with it :-/

You're right, I was assuming things about you I shouldn't have.

Um, hiring from top colleges is not quite all signaling. There is quite a gap between, say, an average Stanford undergrad and an average undergrad of some small backwater college.

Fair point. But the point is that they're going on something like "the average undergrad" and discounting all the outliers. Especially problematic in this case because forecasting is an orthogonal skillset to what it takes to get into a top college.

With people who claim to have awesome forecasting power or techniques, I tend to point at financial markets and ask why aren't they filthy rich.

Markets are one of the best forecasting tools we have, so beating them is hard. But using the market to get these types of questions answered is also hard (liquidity issues in prediction markets), so another technique is needed.

Um, I was one of Tetlock's forecasters for a year. I wasn't terribly impressed, though. I think it's a bit premature to declare that they "solved the problem".

What part specifically of that paper do you think was unimpressive?

Comment author: Lumifer 04 January 2016 05:51:29PM * 1 point

discounting all the outliers

Not necessarily. Recall that a slight shift in the mean of a normal distribution (e.g. IQ scores) results in strong domination in the tails.
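The tail effect is easy to put numbers on (a quick sketch using the standard normal survival function; the half-SD shift is an illustrative assumption, not a measurement of any particular pair of schools):

```python
import math

def normal_sf(x):
    """P(Z > x) for a standard normal -- the survival function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Population A: mean 0; population B: mean shifted up by 0.5 SD.
# Fraction of each above +3 SD (the "outlier" region):
tail_a = normal_sf(3.0)        # ~0.00135
tail_b = normal_sf(3.0 - 0.5)  # ~0.00621
print(tail_b / tail_a)         # ~4.6
```

A half-SD shift in the mean more than quadruples the fraction above +3 SD, so fishing for outliers at the shifted school pays off disproportionately.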

Besides, searching for talent has costs. You're much better off searching for talent at top tier schools than at no-name colleges hoping for a hidden gem.

using the market to get these types of questions answered is hard

What "types of questions" do you have in mind? And wouldn't liquidity issues be fixed just by popularity?

forecasting is an orthogonal skillset to what it takes to get into a top college.

Let me propose IQ as a common cause leading to correlation. I don't think the skillsets are orthogonal.

What part specifically of that paper do you think was unimpressive?

I read it a while ago and don't remember enough to do a critique off the top of my head, sorry...

Comment author: [deleted] 04 January 2016 06:04:19PM 1 point

Besides, searching for talent has costs. You're much better off searching for talent at top tier schools than at no-name colleges hoping for a hidden gem.

That's the signalling issue - I'm trying to create a better signal so you don't have to make that tradeoff.

What "types of questions" do you have in mind? And wouldn't liquidity issues be fixed just by popularity?

Question Example: "How many units will this product sell in Q1 2016?" (Where this product is something boring, like a brand of toilet paper)

This is a question that I don't ever see being popular with the general public. If you only have a few experts in a prediction market, you don't have enough liquidity to update your predictions. With prediction polls, that isn't a problem.

Comment author: Lumifer 04 January 2016 06:11:30PM 0 points

That's the signalling issue

Why do you call that "signaling"? A top-tier school has a real, actual, territory-level advantage over a backwater college. The undergrads there are different.

If you only have a few experts in a prediction market, you don't have enough liquidity to update your predictions. With prediction polls, that isn't a problem.

I don't know about that not being a problem. Lack of information is lack of information. Pooling forecasts is not magical.

Comment author: [deleted] 04 January 2016 06:16:51PM 0 points

Why do you call that "signaling"? A top-tier school has a real, actual, territory-level advantage over a backwater college. The undergrads there are different.

Because you're going by the signal (the college name), not the actual thing you're measuring for (forecasting ability).

I don't know about that not being a problem. Lack of information is lack of information. Pooling forecasts is not magical.

I meant a problem for frequent updates. Obviously, fewer participants will lead to less accurate forecasts - but with Brier weighting and extremizing you can still get fairly decent results.
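For what it's worth, here's a minimal sketch of what "Brier weighting and extremizing" could look like (the 1 - Brier weighting and the extremizing exponent a = 2 are illustrative assumptions; the actual studies tune these empirically):

```python
def pool_forecasts(probs, brier_scores, a=2.0):
    """Weight each forecaster by past accuracy, then extremize the
    pooled probability (a > 1 pushes the result away from 0.5)."""
    # Lower Brier score = better forecaster = higher weight.
    weights = [1.0 - b for b in brier_scores]
    pooled = sum(w * p for w, p in zip(weights, probs)) / sum(weights)
    # Extremize: p^a / (p^a + (1-p)^a) stays in (0, 1).
    return pooled ** a / (pooled ** a + (1 - pooled) ** a)

# Three forecasters leaning the same way pool to a more confident estimate:
print(pool_forecasts([0.7, 0.65, 0.75], [0.1, 0.2, 0.15]))  # ~0.85
```

Even with only three participants, accuracy-weighting plus extremizing turns a set of mildly confident, agreeing forecasts into a single more confident one.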