Comment author: MatthewW 05 February 2011 11:24:00AM 3 points [-]

I don't think there's much need for heuristics like "rate of effectiveness change times donation must be much smaller - say, a few percent of - effectiveness itself."

If you're really using a Landsburg-style calculation to decide where to donate, you've already estimated the effectiveness of the second-most effective charity, so you can simply require that the effectiveness drop be no greater than the corresponding difference.
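A toy sketch of that criterion (every number here is invented for illustration, and the linear drop-off is an assumption):

```python
# Landsburg-style "give everything to the best charity" reasoning:
# keep donating to the top charity as long as the effectiveness drop
# caused by your donation is no greater than the gap to the runner-up.

def effectiveness_after(e0, rate_of_change, donation):
    """Linear approximation of effectiveness after donating."""
    return e0 - rate_of_change * donation

best = 10.0        # e.g. QALYs per $1000 for the top charity (made up)
second = 7.0       # runner-up charity (made up)
rate = 0.002       # how fast the top charity's effectiveness falls per $1000

donation = 1000.0  # in $1000 units
dropped = effectiveness_after(best, rate, donation)

# The criterion from the comment: the drop must be no greater than
# the difference to the second-best charity.
assert best - dropped <= best - second  # whole donation goes to `best`
```

If the drop exceeded the gap, you would split the donation: give to the top charity until its marginal effectiveness falls to the runner-up's level, then give the remainder to the runner-up.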

Comment author: pete22 04 February 2011 12:49:37PM 3 points [-]

Thanks for all the replies. As I said in the post, I also don't think Adams is completely serious. Here is the weaker version of his argument that I find interesting: if someone can make you (or maybe other rational/informed people) laugh at your beliefs, should that cause you to reassess your level of certainty in those beliefs?

In other words, I don't think Adams really believes that someone "successfully" mocking your opinions automatically makes them false -- but he's asserting at least some connection between this kind of humor and truth. Which feels right to me, though I can't really articulate it any better than he did.

Or maybe it's more of a connection to self-deception -- the easier it is to laugh at your own beliefs, the more likely they are to be somehow insincere, regardless of their truth or falsehood.

Comment author: MatthewW 05 February 2011 10:47:31AM 0 points [-]

If it's a belief you've previously thought of as obvious and left unexamined, then this is probably a useful heuristic. Otherwise, no.

Comment author: SilasBarta 21 January 2011 07:21:09PM *  16 points [-]

Besides, it's not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can't get to. There aren't any legal restrictions on that.

You joke, but the world [1] really is choking with inefficient, kludgey workarounds for the legal prohibition of effective employment screening. For example, the entire higher education market has become, basically, a case of employers passing off tests to universities that they can't legally administer themselves. You're a terrorist if you give an IQ test to applicants, but not if you require a completely irrelevant college degree that requires taking the SAT (or the military's ASVAB or whatever they call it now).

It feels so good to ban discrimination, as long as you don't have to directly face the tradeoff you're making.

[1] Per MatthewW's correction, this should read "Western developed economies" instead of "the world" -- though I'm sure the phenomenon I've described is more general than the form it takes in the West.

Comment author: MatthewW 21 January 2011 07:25:47PM 3 points [-]

You say 'the world', but it seems to me you're talking about a region which is a little smaller.

Comment author: Student_UK 18 January 2011 11:19:19AM *  7 points [-]

I have two concerns about the practical implementation of this sort of thing:

  1. It seems like there are cases where, if a rule is being used, people could game it: in job applications, say, or admissions to medical schools. A better understanding of how the rule relates to what it predicts would be needed.

If X+Y predicts Z, does that mean enhancing X and Y will raise the probability of Z? Not necessarily; consider the example of happy marriages. Will having more sex make your relationship happier? Or does the rule work because happy couples tend to have more sex?

  2. It is not true in every case that we equally value all true beliefs, and equally value all false beliefs. Certain rules might work better if we take into consideration a person's race, sex, religion and nationality. But most people find this sort of thing unpalatable because it can lead to the systematic persecution of sub groups, even if it results in more true, and fewer false, beliefs overall. It also might be the case that some of these rules discriminate against groups of people in more subtle ways that won't be immediately obvious.

Of course neither of these problems mean that there won't be perfectly good cases where these rules would improve decision making a lot.

Comment author: MatthewW 18 January 2011 06:43:36PM 6 points [-]

Yes, several of these models look like they're likely to run into trouble of the Goodhart's law type ("Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes").

Comment author: [deleted] 23 October 2010 09:30:42PM *  3 points [-]

In part 2, I sort of glossed over the technical stuff, but I was not talking about making up political dimensions like "more anti-war" and rating the answers to poll questions by hand. That is way too arbitrary for my taste. I'm talking about plain old dimensionality reduction. I had something like PCA in mind (if we were careful we might use a different method, but this is just illustrative.)

If you don't know about Principal Components Analysis, it's an important notion.

Wiki

Tutorial with a practical example

Another intro

The principle is that if you decide a priori what the "coordinates" are, you might pick wrong. You might not explain the variability in the data very well. Amazon.com doesn't keep a pre-set category called "horror" and recommend you horror movies based on the fact that you've watched other horror movies. Amazon gauges "similarity" based on coordinates that arise naturally from the data (and maybe don't easily correspond to a property that can be given an English name.)
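A minimal sketch of the idea, using PCA via the SVD on a made-up ratings matrix (the numbers and the user/item framing are invented for illustration):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items (all numbers invented).
X = np.array([[5.0, 4.0, 1.0, 0.0],
              [4.0, 5.0, 0.0, 1.0],
              [1.0, 0.0, 5.0, 4.0],
              [0.0, 1.0, 4.0, 5.0]])

# Centre each column, then take the SVD; the right singular vectors
# are the principal axes, which arise from the data rather than from
# any pre-set category like "horror".
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project users onto the first principal component.
scores = Xc @ Vt[0]

# Users 0 and 1 land on one side of the axis, users 2 and 3 on the
# other (the overall sign is arbitrary): the dominant direction of
# variation "discovers" the two taste clusters by itself, without
# anyone naming a genre in advance.
same_cluster = scores[0] * scores[1] > 0 and scores[2] * scores[3] > 0
opposite_clusters = scores[0] * scores[2] < 0
```

The first component here is a coordinate no one chose ahead of time; in real data it often has no clean English name, which is exactly the point.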

Maybe I'll write a top-level post explaining this sometime, if it isn't common knowledge.

In response to comment by [deleted] on Three kinds of political similarity
Comment author: MatthewW 23 October 2010 09:51:31PM 1 point [-]

Principal component analysis of UK political views, from a few years back: http://politicalsurvey2005.com/themap.pdf

Comment author: MatthewW 23 October 2010 12:42:00PM 6 points [-]

I think trolley problems suffer from a different type of oversimplification.

Suppose in your system of ethics the correct action in this sort of situation depends on why the various different people got tied to the various bits of track, or on why 'you' ended up being in the situation where you get to control the direction of the trolley.

In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.

(Or if you have a formulation which explicitly mentions the 'mad philosopher' and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)

Comment author: CronoDAS 21 October 2010 04:36:39AM *  9 points [-]

Actually, something like this exists:

Reading the Mind in the Eyes

Apparently, people with high-functioning autism or Asperger's do much worse than control subjects.

ETA: I took the test myself and scored below normal:

Your score: 20
A typical score is in the range 22-30. If you scored over 30, you are very accurate at decoding a person's facial expressions around their eyes. A score under 22 indicates you find this quite difficult.

I did, indeed, find the test extremely difficult. I usually look at the mouth more than the eyes when trying to read faces...

Comment author: MatthewW 21 October 2010 08:51:52PM 4 points [-]

I wonder whether the 'right answers' are what the subject of the photograph was actually feeling, what an expert intended the photograph to represent, or what most people respond.

Comment author: MatthewW 12 October 2010 06:50:39PM 7 points [-]

I think it's quite normal that if someone is acknowledged by their peers to be among the very best at what they do, they won't waste much time with status games.

There's an exception if doing what they do requires publicity to bring in sales or votes.

Comment author: cousin_it 29 September 2010 09:19:16PM *  0 points [-]

The analysis if all you know is that Omega is right 60% of the time would look different.

How exactly different?

Comment author: MatthewW 29 September 2010 10:25:56PM 2 points [-]

It would become a mind game: you'd have to explicitly model how you think Omega is making the decision.

The problem you're facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the 'all your behaviour' part, because Omega is always right. But in the 'imperfect Omega' case you can't.

Comment author: cousin_it 29 September 2010 12:01:45PM *  5 points [-]

All these thought experiments are realizable as simple computer programs, not only PD. In fact the post I linked to shows how to implement Newcomb's Problem.

The 99% case is not very different from the 100% case, it's continuous. If you're facing a 99% Omega (or even a 60% Omega) in Newcomb's Problem, you're still better off being a one-boxer. That's true even if both boxes are transparent and you can see what's in them before choosing whether to take one or two - a fact that should make any intellectually honest CDT-er stop and scratch their head.
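A back-of-envelope check of the 60% claim, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and the usual assumption that Omega's accuracy is independent of how you decide:

```python
BIG, SMALL, p = 1_000_000, 1_000, 0.60  # standard payoffs, 60% Omega

# One-box: you get $1M iff Omega correctly predicted one-boxing.
ev_one_box = p * BIG
# Two-box: you always get $1k, plus $1M iff Omega got you wrong.
ev_two_box = SMALL + (1 - p) * BIG

assert ev_one_box > ev_two_box  # 600_000 > 401_000
```

The gap only closes as Omega's accuracy approaches 50%, which is why the 99% and 100% cases look continuous with each other.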

No offense, but I think you should try to understand what's already been done (and why) before criticizing it.

Comment author: MatthewW 29 September 2010 05:39:01PM 3 points [-]

To get to the conclusion that against a 60% Omega you're better off one-boxing, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.

I think that's really the original problem in disguise (it's a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.
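The "perfect Omega who rolls dice" reading can be simulated directly (the flip-probability noise model and the payoffs are assumptions for illustration):

```python
import random

# A 60% Omega modelled as a perfect predictor that then flips its
# prediction with probability 0.4, independently of how the player
# reasons. This is exactly the independence assumption at issue:
# the dice roll cannot depend on the player's decision procedure.

BIG, SMALL, FLIP = 1_000_000, 1_000, 0.40

def play(one_boxes, rng):
    prediction = one_boxes              # perfect prediction...
    if rng.random() < FLIP:             # ...then an independent dice roll
        prediction = not prediction
    payoff = BIG if prediction else 0   # $1M iff one-boxing was predicted
    if not one_boxes:
        payoff += SMALL                 # two-boxers also take the $1k
    return payoff

rng = random.Random(0)
n = 100_000
mean_one = sum(play(True, rng) for _ in range(n)) / n
mean_two = sum(play(False, rng) for _ in range(n)) / n

# Under independence, one-boxing still wins (roughly 600k vs 401k).
assert mean_one > mean_two
```

If instead Omega's error rate were allowed to depend on the player's reasoning, say, an Omega that is reliably fooled by a particular decision procedure, the analysis really would look different, which is the point above.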
