Comment author: Qiaochu_Yuan 23 January 2013 09:59:50AM *  1 point [-]

Good point! But having too many charity effectiveness evaluators might be bad ("who evaluates the charity evaluators?"). Not that I think this is likely to be a problem.

Comment author: EricHerboso 24 January 2013 07:15:03PM 2 points [-]

Case in point: Charity Navigator, which places unreasonable importance on irrelevant statistics like administrative overhead. There are already charity effectiveness evaluators out there that are doing counter-productive work.

Personally, I think adding another good charity evaluator to the mix as competition to GiveWell/Giving What We Can is important to the overall health of the optimal philanthropy movement.

Comment author: ygert 23 January 2013 09:07:21AM *  3 points [-]

It is good that there are more organizations in this important area. However, it seems very strange which outcomes they list.

Let's put it this way: I don't care about how many people are dying of malaria. I just don't. What I do care about is people dying, or suffering, of anything. That is why I find the attitude of AidGrade to be (almost) completely useless. The outcome I care about is maximizing QALYs, or maybe some other similar measure, and I actually don't care about the listed outcomes at all, except for as much as optimizing on them may help people not suffer and die. Basically, AidGrade tries to help with our instrumental goals, and that is well and fine, but in the end what we are trying to optimize are our terminal goals, and AidGrade doesn't help at that at all.

Comment author: EricHerboso 24 January 2013 05:33:15PM 4 points [-]

I agree with the spirit of this comment, but I think you are perhaps undervaluing the usefulness of helping with instrumental goals.

I am a huge fan of GiveWell/Giving What We Can, but one of the problems that many outsiders have with them is that they seem to have already made subjective value judgments on which things are more important. Remember that not everyone is into consequentialist ethics, and some find problems just with the concept of using QALYs.

Such people, when they first decide to start comparing charities, will not look at GiveWell/GWWC. They will look at something atrocious, like Charity Navigator. They will actually prefer Charity Navigator, since CN doesn't introduce subjective value judgments, but just ranks by unimportant yet objective stuff like overhead costs.

Though I've only just browsed their site, I view AidGrade as a potential way to reach those people. The people who want straight numbers. People who maybe aren't utilitarians, but recognize anyway that saving more is better than saving less, and so would use AidGrade to direct their funding to a better charity within whatever category they were going to donate to anyway. These people may not be swayed by traditional optimal philanthropy groups' arguments on mosquito nets over HIV drugs. But by listening to AidGrade, perhaps they will at least redirect their funding from bad charities to better charities within whatever category they choose.

Comment author: Raemon 22 January 2013 07:22:35PM 4 points [-]

They also provide a useful function, but so far, for the most part they rely upon GiveWell recommendations.

Comment author: EricHerboso 23 January 2013 12:55:43AM 6 points [-]

That speaks in GWWC's favor, I think. It would be odd for them not to take into account research done by GiveWell.

Remember that they don't agree on everything (e.g., cash transfers). When they do agree, I take it as evidence that GWWC has looked into GiveWell's recommendation and found it to be a good analysis. I don't really view it as parroting, which your comment might unintentionally imply.

Comment author: somervta 20 January 2013 08:48:39AM 1 point [-]

Would it be possible for you to send me the original data with the comments/justifications attached? I'm interested in doing a side-by-side comparison with Kurzweil's own analysis of his predictions.

Comment author: EricHerboso 22 January 2013 07:03:12PM 0 points [-]

I am only one of the contributors, but you're welcome to view my comments. I doubt it will be helpful for your purpose, though.

Comment author: MichaelAnissimov 16 January 2013 06:33:43PM 0 points [-]

Which predictions are very obvious?

Comment author: EricHerboso 22 January 2013 06:59:32PM 4 points [-]

As a (perhaps) trivial example, consider the pair of predictions:

  • "Intelligent roads are in use, primarily for long-distance travel."
  • "Local roads, though, are still predominantly conventional."

As one of the people who participated in this study, I marked the first as false and the second as true. Yet the second "true" prediction seems like it is only trivially true. (Or perhaps not; I might be suffering from hindsight bias here.)

Comment author: Friendly-HI 21 January 2013 08:01:45PM *  3 points [-]

@ Everyone:

What are the most interesting and useful conclusions we can reasonably draw from this?

I'm not being facetious; it's just that after I've read this and most of the top rated comments, I'm not sure what to draw from all of this. We have a rough estimate of how K. is doing in absolute terms, but not in relative terms, because we're left without a baseline to compare him to. Chance or the "average predictions of the average human" can't be a meaningful baseline (for me) because I'm not going to use them as potential sources for my personal predictions/beliefs anyway. What actually interests me is how seriously I can take Kurzweil's predictions for the upcoming future (!) in comparison to other competent predictors. But how K. is doing in comparison to other predictors is very hard to judge, because we simply can't standardise relatively vague predictions by different people in order to reasonably compare them.

So what I am left with is only that a bunch of random better-than-average informed people (regarding current technology) estimated that slightly less than one third of K's predictions came true, one third was hard to judge and consists of shades of grey somewhere between definitely true and definitely false, and one third was judged as plain false. So the only thing I really take away from this is that K. seems like a reasonably competent predictor in absolute terms, since any given prediction of his had roughly the same chance of leaning towards "true" as towards "false". Assuming he keeps this rate up for the upcoming decade(s) my ultimate takeaway for now is that he's at least worth reading for inspiration.

Also, I take away that his self-assessment of accuracy is probably either iffy at best or plain dishonest at worst. But to judge this point further I'd have to read his personal accounts on each of his predictions and the specific reasons why he apparently counted many of them as "essentially true", while most technophiles didn't.

Comment author: EricHerboso 22 January 2013 06:53:44PM *  1 point [-]

As one of the people who contributed to this project by assessing his predictions, I do want to point out that several of the predictions marked as "True" seemed very obvious to me. Of course, this might be the result of hindsight bias, and in fact it is actually very impressive for him to have predicted something like the following examples:

  • "[Among portable computers,] Memory is completely electronic, and most portable computers do not have keyboards."
  • "However, nanoengineering is not yet considered a practical technology."
  • "China has also emerged as a powerful economic player."

Note also that some of the statements marked "True" are only vacuously true. For example, one of his wrong predictions was that "intelligent roads are in use...for long-distance travel". But he follows this up with the following prediction which got marked as "True":

"Local roads, though, are still predominantly conventional."

As you can see, I do not think that looking just at the percentage of true predictive statements he made is enough. Some of those predictions seem almost trivial. And yet we can't just dismiss them out of hand, because the reason I think they are trivial might just be that I'm looking at them after the fact. Counterfactually, if intelligent roads had come about, but local roads were still conventional, would I still call the prediction trivial? What if local roads weren't conventional? Would I then still call it a trivial prediction?

We had no choice but to just mark such statements as true and count them in the percentage he got correct, because there's just no way I know of to disregard such "trivial" predictions. And this means we shouldn't really be looking at the percentage marked as true except to compare it with Kurzweil's own self-assessment of accuracy. Using the percentage marked as true for other reasons, like "should I trust Kurzweil's predictive power more than others'", seems like a misuse of this data.

Comment author: EricHerboso 16 January 2013 05:37:23PM 10 points [-]

While I don't agree with much of the linked post, the line portraying civil disobedience as an application of might makes right really hits hard for me. I need to do more thinking on this to see if there is justification for me to update my current beliefs.

Comment author: gwern 16 January 2013 05:27:54AM 5 points [-]

18 people initially volunteered to do varying amounts of assessment of Kurzweil's predictions; 9 ultimately did so.

Not uncommon.

Comment author: EricHerboso 16 January 2013 01:53:44PM 4 points [-]

My initial impression was that the volunteer completion rate would be higher among a group like LW members. But now I realize that was a naive assumption to make.

Comment author: JMiller 15 January 2013 09:51:25PM *  0 points [-]

I guess what I meant is, what happens if what is right is not doable. This has been addressed below though. Thank you!

Comment author: EricHerboso 16 January 2013 12:18:14AM 1 point [-]

Whether something is doable is irrelevant when it comes to determining whether it is right.

A separate question is what we should do, which is different from what is right. We should definitely do the most right thing we possibly can, but just because we can't do something does not mean that it is any less right.

A real example: There's nothing we can realistically do to stop much of the suffering undergone by wild animals through the predatory instinct. Yet the suffering of prey is very real and has ethical implications. Here we see something which has moral standing even though there appears to be nothing we can do to help the situation (beyond some trivial amount).

Comment author: James_Miller 08 January 2013 03:37:33AM *  0 points [-]

Everybody Loves Raymond Season 4, Ep.22 "Bad Moon Rising." Available on Netflix's Watch Instantly.

Sitcom episode that brilliantly explores the relationship between reflective irrationality and empathy when a wife is exceptionally irritable because of PMS.

Comment author: EricHerboso 10 January 2013 03:03:36PM 0 points [-]

While I appreciate the recommendation and understand why you recommended it after just now watching it on Netflix, I honestly can't get over the laugh track. How do people watch shows with laughs in the background like this? I find it not only extremely distracting but also a bit insulting to have the show cue me on when I should find things funny.
