Posts and comments containing predictions by Eliezer:
best track record of any top pundit in the US
The study linked to meant next to nothing in my eyes. It studied political predictions made on TV by political hacks during an election year, 2007-2008. Guess what? In an election cycle in which liberals beat conservatives, liberal predictions more often came true than conservative ones.
Reminds me of the reported models of mortgage securities, created using data from boom times only.
Krugman was competing with a bunch of other political hacks and columnists. I doubt that accuracy is the highest motivation for any of them. The political hacks want to curry support, and the columnists want to be invited on tv and have their articles read. I'd put at least 3 motivations above accuracy for that crowd: manipulate attitudes, throw red meat to their natural markets, and entertain. It's Dark Arts, all the way, all the time.
This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.
From their executive summary:
According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.
From the paper:
Krugman...primarily discussed economics...
That is awesome! They actually put out their data.
In short, Krugman successfully predicted that the downturn would last a while (2-6, 8, 15), made some obvious statements (7, 9, 12, 16, 17, 18), was questionably supported on one (11), was unfairly said to miss another (14), hit on one political prediction (10), and missed on another (13).
He was 50-50 or said nothing, except for successfully predicting that the downturn would take at least a couple of years, which wasn't going out too far on a limb itself.
So I can't say that I'm impressed much with the authors of the study, as their conclusions about Krugman seem like a gross distortion to me, but I am very impressed that they released their data. That was civilized.
Yes, and the paper had several other big problems. For example, it didn't treat mild belief and certainty differently; someone who suspected Hillary might be the Democratic nominee was treated as harshly as someone who was 100% sure the Danish were going to invade.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied! And then they have the audacity to claim that they've discovered that making conditional predictions predicts low accuracy.
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
it didn't treat mild belief and certainty differently;
It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt it was "accurate enough" without that, but they did use it in their regressions.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!
They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
I agree here, mostly. Looking through the predictions they've marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to properly score them they should have just left them out.
If you think you can improve on their methodology, the full dataset is here: .xls.
Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." This is actually logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," barring some very unlikely events, so I posted that instead, and so I won't have to withdraw the prediction when Romney wins the primary.
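To spell out the equivalence, here is a minimal sketch in Python that enumerates the possible outcomes; it assumes, per the "barring some very unlikely events" caveat above, that the general-election winner is always either the Republican nominee or Obama:

```python
# Enumerate the outcomes to check the equivalence. Assumption (the
# "barring some very unlikely events" caveat): the general-election
# winner is always either the Republican nominee or Obama.
from itertools import product

for nominee, result in product(["Romney", "OtherRepublican"], ["nominee", "Obama"]):
    general_winner = nominee if result == "nominee" else "Obama"
    # Material conditional: "Romney loses the primary -> Obama wins the general"
    conditional = (nominee == "Romney") or (general_winner == "Obama")
    disjunction = general_winner in ("Romney", "Obama")
    assert conditional == disjunction, (nominee, general_winner)
print("The conditional and the disjunction agree on every outcome.")
```

In all four cases the material conditional and the disjunction have the same truth value, which is why posting the disjunction loses nothing while avoiding the withdrawn-antecedent problem.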
I suggest looking at his implicit beliefs, not just explicit predictions. For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular if he was to write it. On the other hand, his decision to delay publishing his TDT document seems to imply some rather wrong beliefs.
I think at least a large part of the slowness on publishing TDT is due to his procrastination habits with publishable papers, which he openly acknowledges.
Although this may be a personal problem of his, it doesn't say anything against the ideas he has.
For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular if he was to write it.
Isn't the simpler explanation that he just enjoys writing fiction?
Relevant: Don't Revere The Bearer Of Good Info, Argument Screens Off Authority. Sequences are to a significant extent a presentation of a certain selection of standard ideas, which can be judged on their own merit.
I'm going through all of Eliezer's comments looking for predictions by using search terms like "predict" and "bet". Just letting people know so that we don't duplicate effort.
I think it is very important to keep in mind that you cannot judge whether someone is good at predicting simply by dividing the number of predictions that turned out correct by the total number made.
I predict that tomorrow will be Friday. Given that today is Thursday, that's not so impressive. So my point is that it is more important to look at how difficult a prediction was to make and what the reasoning behind it was.
I would also look for improvement over time: does the person making the predictions actually learn from past mistakes and apply those lessons to the new predictions he or she makes from then on?
Here are two recent bets he made, both successful: a prediction against the propagation of any information faster than c, and, in a nearby thread, a similar bet against the so-called proof of the inconsistency of Peano Arithmetic.
And here's a prediction against the discovery of Higgs Boson, as yet undetermined.
He also got Amanda Knox right. As others said, he made bad predictions about transhumanist technologies as a teen, although he would say this was before he benefited from learning about rationality, and he disavowed them (in advance).
Although I agree that he got the Amanda Knox thing right, I don't think it actually counts as a prediction - he wasn't saying whether or not the jury would find her innocent the second time round, and as far as I can tell no new information came out after Eliezer made his call.
As I pointed out in the postmortem, there were at least 3 distinct forms of information that one could update on well after that survey was done.
Except for the Higgs prediction, which has a good chance of being proven wrong this year or next, the other two are in line with what experts in the area had predicted, so they can hardly be used to justify EY's mad prediction skillz.
Suppose it is falsified. What conclusions would you draw from that? I.e., what subset of his teachings would be proven wrong? Obviously none.
His justification, "I don't think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass," is basically the personal opinion of a non-expert. While it would make for an entertaining discussion, a discovery of the Higgs boson should not affect your judgement of his work in any way.
I'll update by putting more trust in mainstream modern physics - my probability that something like string theory is true would go way up after the detection of a Higgs boson, as would my current moderate credence in dark matter and dark energy. It's not clear how much I should generalize beyond this to other academic fields, but I probably ought to generalize at least a little.
I, non-programmer, have a question my programmer friends asked me: why can't they find any of Eliezer's papers that have any hard math in them at all? According to them, it's all words words words. Excellent words, to be sure, but they'd like to see some hard equations, code, and the like, stuff that would qualify him as a researcher rather than a "mere" philosopher. What should I link them to?
AFAIK, nothing of the kind is publicly available. The closest thing to it is probably his Intuitive Explanation of Bayes' Theorem; however, Bayes' Theorem is high-school math. (His Cartoon Guide to Löb's Theorem might also be relevant, although they may think it's just more words.) Two relevant quotes by Eliezer:
On some gut level I’m also just embarrassed by the number of compliments I get for my math ability (because I’m a good explainer and can make math things that I do understand seem obvious to other people) as compared to the actual amount of advanced math knowledge that I have (practically none by any real mathematician’s standard).
My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof. (Robin Hanson spends a lot of time usefully discussing which activities are most prestigious in academia, and it would be a Hansonian observation, even though he didn’t say it AFAIK, that complicated proofs are prestigious but it’s much more important to figure out which theorem to prove.)
The paper on TDT has some words that mean math, but the hard parts are mostly not done.
Eliezer is deliberately working on developing the basics of FAI theory rather than producing code, but even then, either he spends little time writing it down or he's not making much progress.
Rather than talking about Eliezer as if he's not here and trying to make complicated inferences based on his writings, it would be much easier to come up with a list of topics we would like Eliezer to make predictions about, and then just ask him.
Or, alternatively: Eliezer, are there any particularly important predictions you got right/wrong in the past? And what important predictions of yours are still pending?
There's a lot wrong with it, but I think it's better than trying to track implicit predictions based on things Eliezer wrote/did.
Regardless, the much better test of accuracy is asking for current predictions, which I also proposed in the parent comment.
I know it's not much as far as verifiable evidence goes, but this one is pretty interesting.
When Eliezer writes about QM he's not trying to do novel research or make new predictions. He's just explaining QM as it is currently understood with little to no math. Actual physicists have looked over his QM sequence and said that it's pretty good. Source, see the top comment in particular.
I'm pretty interested in the AI part of the question though.
That being said, I'm shocked Eliezer was able to convince them.
Humans are much, much dumber and weaker than they generally think they are. (LessWrong teaches this very well, with references.)
On quantum mechanics: the many-worlds interpretation probably shouldn't be referred to as "his" interpretation (it was around before he was born, etc.), and there's been some experimental work defending it (for example, entangling 10^4 particles and not seeing a certain kind of collapse), but it's not very strong evidence.
He also made a lot of super-wrong plans/predictions over a decade ago. But on the other hand, that was a while ago.
The cognitive bias stuff is pretty good popular science writing. That's worth quite a bit to the world.
"Has his interpretation of quantum physics predicted any subsequently-observed phenomena?"
Interpretations of quantum mechanics are untestable, including the MWI. MWI is also not "his," it's fairly old.
I think the test for EY's articles isn't the predictions he makes, of which I don't recall many, but what you can do with the insight he offers. For example, does it allow you to deal with your own problems better? Has it improved your instrumental rationality, including your epistemic rationality?
For example, does it allow you to deal with your own problems better?
I don't agree with this at all. I could become a Christian, and then believe that all of my problems are gone because I have an eternity in heaven waiting for me simply because I accepted Jesus Christ as my savior. Christianity makes few falsifiable predictions. I want to hold EY up to a higher standard.
My understanding is that for the most part SI prefers not to publish the results of their AI research, for reasons akin to those discussed here. However they have published on decision theory, presumably because it seems safer than publishing on other stuff and they're interested in attracting people with technical chops to work on FAI:
http://singinst.org/blog/2010/11/12/timeless-decision-theory-paper-released/
I would guess EY sees himself as more of a researcher than a forecaster, so you shouldn't be surprised if he doesn't make as many predictions as Pau...
If AI is actually impossible, then trying to design a friendly AI is a waste of time
No, if our current evidence suggests that AI is impossible, and does so sufficiently strongly to outweigh the large downside of a negative singularity, then trying to design a friendly AI is a waste of time.
Even if it turns out that your house doesn't burn down, buying insurance wasn't necessarily a bad idea. What is important is how likely it looked beforehand, and the relative costs of the outcomes.
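To make the insurance analogy concrete, here is a minimal expected-value sketch in Python; every number in it (fire probability, loss, premium) is made up purely for illustration:

```python
# Hypothetical numbers, purely for illustration.
p_fire = 0.005           # assumed annual probability the house burns down
loss = 300_000           # assumed cost of losing the house
premium = 900            # assumed annual insurance premium

expected_loss_uninsured = p_fire * loss   # 1500.0
# The purchase is judged on these ex-ante quantities, not on whether
# the fire actually happens; with risk aversion, insurance can be
# worth buying even when the premium exceeds the expected loss.
print(f"expected uninsured loss: {expected_loss_uninsured}, premium: {premium}")
```

The same logic applies to FAI work: the decision is evaluated on the probabilities and relative costs as they look beforehand, not on the outcome that actually obtains.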
Claiming that AI constructed in a world of physics is impossible is equivalent to saying that intelligence in a world of physics is impossible; since humans are intelligent physical systems, this would require humans to work by dualism.
Of course, this is entirely separate from feasibility.
I thought about it some more, and the relevant question is: how do we guess what his abilities are? And what is his aptitude at those abilities? Are there statistical methods we can use (e.g. SPR)? What would the outcome be? How can we deduce his utility function?
Normally, when one has e.g. high mathematical aptitude, or programming aptitude, or the like as a teenager, one still has to work on it and train (the brain undergoes significant synaptic pruning at about 20 years of age, limiting your opportunity to improve afterwards), and regardless of the final...
Meanwhile, I know that Thomas Friedman is an idiot.
It might well be that he's hopelessly biased and unreflective on every significant issue, but this neither says a lot about his purely intellectual capabilities, nor does it make every claim he makes automatically wrong; he might turn out to be biased in just the right way.
(I haven't ever heard of him before, just applying the general principles of evaluating authors.)
Threads like that make me want to apply Bayes' theorem to something.
You start with probability 0.03 that Eliezer is a sociopath - the baseline. Then you do Bayesian updates on answers to questions like: Does he imagine himself to be of grandiose importance, or is his self-image generally modest and in line with his actual accomplishments? Are his plans in line with his qualifications and prior accomplishments, or are they grandiose? Is he talking people into giving him money as a source of income? Is he known to do very expensive altruistic stuff that is larger than s...
It seems you are talking about high-functioning psychopaths, rather than psychopaths according to the diagnostic DSM-IV criteria. Thus the prior should be different from 0.03. Assuming a high-functioning psychopath is necessarily a psychopath, it seems the prior should be far lower than 0.03, at least from looking at the criteria:
A) There is a pervasive pattern of disregard for and violation of the rights of others occurring since age 15 years, as indicated by three or more of the following:
1. failure to conform to social norms with respect to lawful behaviors, as indicated by repeatedly performing acts that are grounds for arrest;
2. deception, as indicated by repeatedly lying, use of aliases, or conning others for personal profit or pleasure;
3. impulsiveness or failure to plan ahead;
4. irritability and aggressiveness, as indicated by repeated physical fights or assaults;
5. reckless disregard for safety of self or others;
6. consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations;
7. lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another.
B) The individual is at least age 18 years.
C) There is evidence of conduct disorder with onset before age 15 years.
D) The occurrence of antisocial behavior is not exclusively during the course of schizophrenia or a manic episode.
I am talking about conditional independence. Let us assume that the answer to your first two questions is yes, and now you have a posterior of 0.1 that he is a sociopath. Next you want to update on the third claim, "Is he talking people into giving him money as a source of income?" You have to estimate the ratio of people for whom the third claim is true, and you have to do it for two groups. But the two groups are not sociopaths versus non-sociopaths. Rather, they are sociopaths for whom the first two claims are true versus non-sociopaths for whom the first two claims are true. You don't have any data that would help you estimate these numbers.
To explain the issue here in intuitive terms: let's say we have the hypothesis that Alice owns a cat, and we start with the prior probability of a person owning a cat (let's say 1 in 20), and then update on the evidence: she recently moved from an apartment building that doesn't allow cats to one that does (3 times more likely if she has a cat than if she doesn't), she regularly goes to a pet store now (7 times more likely if she has a cat than if she doesn't), and when she goes out there's white hair on her jacket sleeves (5 times more likely if she has a cat than if she doesn't). Putting all of these together by Bayes' Rule, we end up 85% confident she has a cat, but in fact we're wrong: she has a dog. And thinking about it in retrospect, we shouldn't have gotten 85% certainty of cat ownership. How did we get so confident in a wrong conclusion?
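Here is that computation in odds form, a minimal Python sketch reproducing the numbers above; its bug, multiplying the likelihood ratios as if the observations were independent, is exactly what the next paragraph diagnoses:

```python
# Naive Bayesian update in odds form, reproducing the numbers above.
prior_odds = (1 / 20) / (19 / 20)   # P(cat) = 1/20
likelihood_ratios = [3, 7, 5]       # apartment move, pet store, white hair

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr            # only valid if the observations are
                                    # independent given cat / no-cat

posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.0%}")           # ~85%, the overconfident answer
```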
It's because, while each of those likelihoods is valid in isolation, they're not independent: there is a big chunk of people who move to pet-friendly apartments and go to pet stores regularly and have pet hair on their sleeves, and not all of them are cat owners. Those people are called pet owners in general, but even if we didn't know tha...
I'm 45, started this donation cycle at 44. Limit in the UK is 40-45 depending on clinic. I went to KCH, that link has all the tl;dr you could ever use on the general subject.
I thought I said this in email before ... the UK typically has ~500 people a year wanting sperm, but only ~300 donors' worth of sperm. So donate and it will be used if they can use it.
They don't notify, but I can inquire about it later and find out if it's been used. This will definitely not be for at least six months. The sperm may be kept and used up to about 10 years, I think.
My incentive for this was that I wanted more children but the loved one doesn't (having had two others before). The process is sort of laborious and long-winded, and I didn't get paid. (Some reimbursement is possible, but it's strictly limited to travel expenses, and I have a monthly train ticket anyway, so I didn't bother asking.) Basically it's me doing something that feels to me like I've spread my genes and is a small social good - and when I said this was my reason for donating, they said that's the usual case amongst donors (many of whom are gay men who want children but are, obviously, quite unlikely to have them in the usual sort...
First, please avoid even a well-intentioned discussion of politics here (Krugman is as political as it gets), as you will likely alienate a large chunk of your readers.
As for predictions, you are probably out of luck for anything convincing. Note that AI and singularity research are not expected to provide any useful predictions for decades, the quantum sequence was meant as background for other topics and not as original research, and cognitive science and philosophy are the two areas where one can conceivably claim that EY made original contributions ...
We may need to update just how much of a mind-killer politics is here. The Krugman discussion stayed civil and on-topic.
(I had a longish comment here defending the politics taboo, then decided to remove it, because I've found that in the past the responses to defenses of the politics taboo, and the responses to those responses, have done too much damage to justify making such defenses. Please, though, don't interpret my silence from now on as assent to what looks to me like the continuing erosion or maybe insufficiently rapid strengthening of site quality norms.)
Thomblake and I both noted that "politics is the mindkiller" is the mindkiller a few months ago. It would be nice if we could ease off a bit on behaving quite so phobically about actually practical matters that people would be interested in applying rationality to, provided we can stop it turning the site into a sea of blue and green.
One reason not to use political examples to illustrate a (nonpolitical) point is that it invites a lot of distracting nitpicking from those who identify with the targeted political group.
But another is that if you're trying to make a normative point to a broad audience, then alienating one subset and elevating another — for no good reason — is a losing strategy.
For instance, if you want to talk to people about improving rationality, and you use an example that revolves around some Marxists being irrational and some Georgists being rational, then a lot of the Marxists in the audience are just going to stop listening or get pissed off. But also, a lot of the Georgists are going to feel that they get "rationality points" just for being Georgists.
I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been? Has his interpretation of quantum physics predicted any subsequently-observed phenomena? Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?
Why is this important: when I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions. Meanwhile, I know that Thomas Friedman is an idiot. Based on this track record, I believe things written by Krugman much more than I believe things written by Friedman. But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.
Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening. He was wrong about nearly everything. So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.
Has Eliezer offered anything falsifiable, or put his reputation on the line in any way? "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," etc etc.