New Year's Predictions Thread
I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, when practical with probabilities attached. I'll probably make some in the comments.
Comments (426)
Next 10 years:
Nativism discredited (80%)
Traditional economics discredited (80%)
Cognitivism/computationalism discredited (70%)
Generative linguistics discredited (60%)
To elaborate somewhat: By #1 I mean that in the fields of biology, psychology and neuroscience the idea that behaviours or ideas or patterns of thought can be "innate" will be marginalised and not accepted by mainstream researchers.
By #2 I mean that, not only will behavioural economics provide accounts of deviations from traditional economic models, but mainstream economists will accept that these models need to be discarded completely and replaced from the ground-up with psychologically-plausible models.
By #3 I mean the idea that the brain can be thought of as a computer and the "mind" as its algorithms will be marginalised. I give this lower odds than nativism being discredited only because the cognitivist tradition has managed to sustain itself through belligerence rather than evidence and is therefore likely to be more persistent and pernicious. Nativism, on the other hand, has persisted because of the difficulty of experimentally demonstrating that certain behaviours are learned rather than innate (as well as belligerence).
By #4 I mean that traditional linguistics, and especially generative grammar, will be marginalised. This one has long puzzled me since the generative grammarians based their ideas on intuition and explicitly deny a role for data or experiment (or the need to reconcile their beliefs with biology). The main problem has been the absence of a viable alternative research program. This is beginning to change.
I'm not entering any of these into PredictionBook because all 4 strike me as hopelessly argumentative and subjective. (Take #1 - what, you mean stigmatised even more than it already is as the province of racists/sexists/-ists?)
A few thoughts:
It would be valuable to do an outside view sanity check: historically, how frequently have research programs of similar prestige been discredited?
There are all the standard problems with authority---lots of folks insist that they're in the mainstream and that opposing views have been discredited. Clearly nativism &c. have been discredited in your mind; when do they get canonically discredited? Sometimes I almost think that everyone would be better off if everyone just directly talked about how the world really is rather than swiping at the integrity of each other's research programs, but I'm probably just being naive.
Re 3, my domain knowledge is somewhat weak, so everyone ignore me if my very terms are confused, but I'm not sure what would count as a refutation of the mind being an algorithm. Surely (surely?) most would agree that the brain is not literally a computer as we ordinarily think of computers, but I understand algorithm in the broadest sense to refer to some systematic mechanism for accomplishing a task. Thought isn't ontologically fundamental; the brain systematically accomplishes something; why shouldn't we speak of abstracting away an algorithm from that? Maybe I've just made computationalism an empty tautology, but I don't ... think so.
I don't think the innate/learned dichotomy is fundamental; it's both, everyone knows that it's both, everyone knows that everyone knows that it's both. Like that old analogy: a rectangle's area is a product of length and width. What specific questions of fact are people confused about?
I think these research programs represent something without a clear historical precedent. Traditional economics and generative linguistics, for example, could be compared to pre-scientific disciplines that were overthrown by scientific disciplines. But both exhibit a high degree of formal and institutional sophistication. I don't think pre-Copernican astronomy had the same level of sophistication. Economics also has data (although so did geocentric astronomy) whereas the generative tradition in linguistics considers data misleading and prefers intuitive judgement. What neither has is a systematic experimental research program or a desire to integrate with the natural sciences.
Cognitivism is essentially Cartesian philosophy with a computer analogy and experiments. In practice it just becomes experimental psychology with some extra jargon. Nativism, too, comes from Cartesian philosophy (Chomsky was quite explicit about this). While cognitivism has experiments it has an interpretation that isn't founded in experiment (the type of computer the brain is supposed to be and the algorithms it could be said to run is not addressed) and an opposition to integration with the natural sciences (the so-called "autonomy of psychology" thesis).
These research programs are similar to pre-scientific research programs, but they have managed to persist in a world where you have to attempt to "look scientific" in order to secure research grants, and they reflect this fact.
You point to many problems and I wouldn't take any bets because of these. It would be too difficult to judge who had won. On the nature/nurture debate: Empiricism evolved into constructivism/interactionism (i.e., the developing organism interacting with the environment, with genes driving development), which is the dominant view in biology, and it's not obvious what, precisely, modern Nativists believe. But it is obvious that they still exist, since naive nativist talk persists almost everywhere else. It's similarly difficult to figure out what computationalists mean by their analogies and the degree to which they intend them to be analogies vs. literal propositions. This is probably why the natural sciences tend not to base research programs on analogies. What is clear is that they have a particular style of interpreting their results in terms of representations and sequential processing that is clearly at odds with biology, and they display no interest in addressing the issue.
First, this is the genetic fallacy. Secondly, I don't take Chomsky's authority seriously.
The experimental evidence that, say, Steven Pinker presents in How the Mind Works for innate mental traits and for the computational perspective is sound, and has nothing to do with Cartesian dualism.
The point is that the views have their origins in philosophy rather than experiment. We're not dealing with a research program developed from a set of compelling experimental results but a research program that has inherited a set of assumptions from a non-empirical source. This is more obviously the case with computationalism, where advocates have shown almost no interest in establishing the foundational assumptions of their discipline experimentally, and some claim that to do so would be irrelevant. But it's also true for nativism where almost no thought is given to how nativist mechanisms would be realised biologically.
If we could agree on a suitable judging mechanism, I would bet up to $10,000 against you on #1 and on #3 at those odds (or even at substantially different odds). I also disagree on the latter claim in #2, but that's not as much of a slam dunk for me as the others.
Can you unpack what you mean by innate? I think babies would have a hard time surviving if sucking things wasn't a behaviour they were born with.
And more generally, the distinction between innate and learned is overly simplistic in a lot of contexts; rather, there are adaptations that determine the way an organism develops depending on its environment. The standard reference I know of is
J. Tooby & L. Cosmides (1992). `The psychological foundations of culture'. In J. Barkow, L. Cosmides, & J. Tooby (eds.), The adapted mind: Evolutionary psychology and the generation of culture. Oxford University Press, New York.
For the next decade: Videoconferencing.
Videoconferencing what, exactly?
I've been using it for years. I'm not sure how to correctly expand your sentence, and it shouldn't be subject to interpretation.
Eliezer seems to be predicting that videoconferencing will become common in the next decade. Yes, some use it now, but it is still not common. I predict that it will not become common until someone uses a utility to modify your appearance so that when you look at the eyes of the person on the screen, your image on the remote end will look like it is looking at the eyes of the person on the other end. This might well be developed in much less than 10 years, however.
I suspect Eliezer is making broad predictions about what is important in the next 10 years. As if someone said smartphone for the next decade in 2000. Not giving too much detail makes it more likely to be true...
coughmakingbeliefspayrentcough
Carry-on luggage on US airlines will be reduced to a single handbag that inspectors can search thoroughly, in 2010 or 2011.
I expect that Brain-Computer Interfaces will make their way into consumer devices by the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions into these devices.
Call it 75% - I would be more than mildly surprised if it hadn't happened by 2020.
For what I have in mind, what counts as BCI is the ability to interact with a smartphone-like device in an inconspicuous manner, without using your hands.
My reasoning is similar to Michael Vassar's AR prediction, and based on the iPhone's success. That doesn't seem owed to any particular technological innovation; rather, Apple made things usable that were only previously feasible in the technical sense. A mobile device for searching the Web, finding out your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than previously has revealed a hidden demand for unsuspected usages, often combining old features in unexpected ways.
However, in many ways these interfaces are still primitive and awkward. "Sixth Sense" type interfaces are interesting, but still strike me as overly intrusive on others' personal space.
It would make sense to me to be able, say, to subvocalize a command such as "Show me the way to metro station X", then have my smartphone gently "tug" me in the right direction as I turn left and right, using a combination of compass and vibrations. This is only one scenario that strikes me as already easy to implement, requiring only some slightly greater integration of functionality.
I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities, and those provided by versatile external devices connected to the global network, the more we will effectively turn into "augmented humans".
When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (and already essentially able, some would say, to achieve the same results).
I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.
ETA: I don't mind this comment being downvoted, even as shorthand for "I disagree", but I'd be genuinely curious to know what flaws you're seeing in my thinking, or what facts you're aware of that make my degree of confidence seem way off.
I'm not thrilled about your vagueness about what technologies count as a BCI. Little electrodes? The gaming device that came out last year or so got a lot of hype, but the gamers I've talked to who have actually used it were all deeply unimpressed. Voice recognition? Already here in niches, but not really popular.
If you can't say which interfaces specifically*, then maybe you should phrase your prediction as a negative: 'by 2020, >50% of the smart cellphone market will use a non-gestural, non-keyboard-based interface' etc.
* and you really should be able to - just 9 years means that any possible tech has to have already been demonstrated in the lab and have a feasible route to commercialization; R&D isn't that fast a process, and neither is being good & cheap enough to take over the global market to the point of 'pervasive'
Given the feasibility that currently exists for gadgets that you envision... and Apple's uncanny ability to bring those ideas to market... I say 2015 is a 75% target for the iThought side-processor device. :)
At least one Asian movie will exceed $400 million in worldwide box office gross before the end of the decade.
It will most probably not be a wuxia movie. My guess of its genre is urban action or speculative fiction.
I am 99% confident that AGI comparable to or better than a human, friendly or otherwise, will not be developed in the next ten years.
I am 75% confident that within ten years, the Bayesian paradigm of AGI will be just another more or less useful spinoff of the otherwise failed attempt to build AGI.
Shane Legg gives a 10% probability of that here:
http://www.churchofvirus.org/bbs/attachments/agi-prediction.png
My estimate here is a bit bigger - maybe around 15%:
http://alife.co.uk/essays/how_long_before_superintelligence/graphics/pdf_no_xp.png
You seem to be about ten times more confident than us. Is that down to greater knowledge - or overconfidence?
You seem to be about ten times less confident than me. Is that down to greater knowledge - or underconfidence?
I'm not very confident - primarily because we are talking ten years out - and the future fairly rapidly turns into a fog of possibilities which makes it difficult to predict.
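For what it's worth, the "ten times" comparison in this exchange is just the ratio of probabilities each side assigns to AGI arriving within ten years: a 99% prediction that it won't happen implies a 1% probability that it will, versus the linked 10% and 15% estimates. A minimal sketch of that arithmetic (the variable names are mine, not from the thread):

```python
# Ratio of probabilities assigned to human-level AGI within ten years.
# "99% confident it will NOT happen" implies a 1% probability that it will.
p_agi_skeptic = 1 - 0.99  # from the 99% "will not happen" prediction
p_agi_legg = 0.10         # Shane Legg's linked estimate
p_agi_other = 0.15        # the ~15% estimate from the second chart

print(round(p_agi_legg / p_agi_skeptic))   # ratio: 10
print(round(p_agi_other / p_agi_skeptic))  # ratio: 15
```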
Which brings us back to why you seem so confident. What facts or observations provide the most compelling evidence that intelligent machines are at least ten years off? Indeed, how do you know that the NSA doesn't have such a machine chained up in its basement right now?
The NSA does have some scary machines chained in their "Basement," yet I doubt any of them approach AGI. All of them (that I am aware of - so, that would be 2) are geared toward some pretty straightforward real-time data mining, and I am told that the other important gizmos do pretty much the same thing (except with crypto).
I doubt that they have anything in the NSA (or other spooky agencies) that significantly outstrips many of the big names in Enterprise. After all, the Government does go to the same names to buy its supercomputers that everyone else does. It's just the code that would differ.
So: you have a hotline to the NSA, and they tell you about all their secret technology?!? This is one of the most secretive organisations ever! If you genuinely think you know what they are doing, that is probably because they have you totally hoodwinked.
Hardly a hotline... A long, long time ago, when I was very young, I wound up working with the NSA for about six months. I was supposed to have finished school and gone to work for them full time... But, I flaked when I discovered that I could get laid pretty easily (women seemed much more important than an education at the time).
I still keep in touch, and I have found that an awful lot of their work is not hard to find out about. They may have me hoodwinked, as my job was hoodwinking others. However, I don't usually spend my time with any of my former co-workers talking about stuff that they shouldn't be talking about. Most of it is about stuff that is out in the open, yet that most people don't care about, or don't know about (usually because it's dead boring to most people).
And, I am not aware that I have stumbled onto any secret technology. Just two machines that I found to be freakishly smart. One of them did stuff that Google can probably now do (image recognition), and I am pretty sure that the other used something very similar to Mathematica. I was really impressed by them, but then I also did not know that things like Mathematica existed at the time. At the time I saw them, I was told by my handler that they were "Nothing compared to the monsters in the garage."
Edit: Anyone may feel free to think that I am a nut-job if they wish. At this point, I have little to no proof of anything at all about my life due to the loss of everything I ever owned when my wife ran off. So, you may take my comments with a grain of salt until I am better known.
It hasn't worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on -- those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.
I don't think any of these people are stupid or crazy (which is why I don't mention Mentifex in the same breath as them), and I wouldn't try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don't believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).
My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.
Besides, the current state of the world is not suggestive of the presence of AIs in it.
ETA: But this is becoming a digression from the purpose of the thread.
This is my sense as well. I also think there is a substantial limit on what we're likely to learn about the brain, given that obvious ethical constraints prevent us from studying brain functionality at large scale, with neuron-level resolution, in real time. Does anyone know of any technologies on the horizon that could change this in the next ten years?
http://lesswrong.com/lw/vx/failure_by_analogy/
From quote in that post:
There's no reason to spread such myths about medieval history.
The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. Being in a reference class of times and places with such characteristics, it would be a miracle if any significant progress happened during Early Middle Ages.
High and Late Middle Ages on the other hand had plenty of technological and intellectual progress.
I'm much more puzzled by why the dense, urbanized, and highly literate Roman Empire was so stagnant.
China also springs to mind. I once listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time my explanation was authoritarianism.
All that is fine.
But 1) I'm not sure anyone has a good grasp of what the properties we're trying to duplicate are. I'm sure some people think they do, and it is possible someone has stumbled onto the answer, but I'm not sure there is enough evidence to justify any claims of this sort. How exactly would someone figure out what general intelligence is without ever seeing it in action? The interior experience of being intelligent? Socialization with other intelligences? An analogy to computers?
2) Let's say we do have or can come up with a clear conception of what the AGI project is trying to accomplish without better neuroscience. It isn't then obvious to me that the way to create intelligence will be easy to derive without more neuroscience. Sure, just from a conception of what flight is, it is possible to come up with solutions to the problem of heavier-than-air flight. But for the most part humans are not this smart. Despite the ridiculous attempts at flight with flapping wings, I suspect having birds to study (weigh, measure, and see in action) sped up the process significantly. Same goes for creating intelligence.
(Prediction: .9 probability you have considered both these objections and rejected them for good reason. And .6 you've published something that rebuts at least one of the above. :-)
Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success in the next ten years.
However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest - or that there will be at some time over the next ten years. For example, Voss's estimate (from a year ago) was "8 years" - see: http://www.vimeo.com/3461663
We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.
Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden - on national security grounds. After all, if China's agent found out for sure that America had an agent too, who knows what might happen?
I would guess that the NSA is more interested in quantum computing than in AI.
http://predictionbook.com/predictions/1670
I don't know how one would judge this and so haven't made a prediction for this one.
Thanks for putting that up. I hadn't been aware of PredictionBook, so I've just made an account and posted a more precise prediction there myself.
Hopefully my comments and importation of predictions will lead to more PB awareness on LW.
Can you be more specific about what you mean by the Bayesian paradigm of AGI? Is it necessarily a subset of good-old-fashioned symbolic AI? In that case, it's been dead for years. But if not, I can't easily imagine how you're going to enforce Bayes' theorem; or what you're going to enforce it on.
Within ten years either genetic manipulation or embryo selection will have been used on at least 10,000 babies in China to increase the babies’ expected intelligence- 75%.
Within ten years either genetic manipulation or embryo selection will have been used on at least 50% of Chinese babies to increase the babies’ expected intelligence- 15%.
Within ten years the SAT testing service will require students to take a blood test to prove they are not on cognitive enhancing drugs. – 40%
All of the major candidates for the 2016 presidential election will have had samples of their DNA taken and analyzed (perhaps without the candidates’ permission.) The results of the analysis for each candidate will be widely disseminated and will influence many peoples' voting decisions - 70%
While president, Obama will announce support for a VAT tax - 70%.
While president, Obama will announce support for means testing Social Security - 70%
Within ten years the U.S. repudiates its debt either officially or with an inflation rate of over 100% for one year - 20%.
Within five years the Israeli economy will have been devastated because many believe there is a high probability that an atomic bomb will someday be used against Israel – 30%
Within ten years there will be another $200 billion+ Wall Street Bailout - 80%
I think you are on crack for this one. 15%?! You seriously think there's a 15% chance that embryo selection and/or genetic manipulation for IQ will be developed, commercialized, and turned into an infrastructure capable of modifying roughly 9 million pregnancies a year? Where the hell are all the technicians and doctors going to come from, for one thing? There's a long lead time for that sort of thing.
Ditto - America doesn't have that many phlebotomists, and would go batshit over a Collegeboard requirement like that. There would have to be an enormous national outcry over nootropics and tremendous takeup of drugs like modafinil, and there's zero sign of either. Even a urine or spit test would encounter tremendous opposition, and the Collegeboard has no incentive for such testing. (Cost, blame for false positives, and possibly dragging down scores, which would earn it even more criticism. To name just the very most obvious negatives.)
I think you forgot the part of your prediction where all the candidates went insane and agreed to such an incredibly status-lowering procedure, gave up all privacy, and completely forgot about how past candidates got away with not releasing all sorts of germane records.
I recently read a book on old age public policy; amidst the endless details and financial minutia, I was deeply impressed how many ways there were to effectively means-test, even inadvertently, without obviously being means-testing or having that name. Judging could be very difficult.
With a probability that high, shouldn't you be desperately diversifying your personal finances overseas? Either fork of your prediction means major pain for US debt, equity, or cash holders.
The odds of an Iranian bomb aren't that terribly high, much less such an outcome happening.
Definitions here are an issue. Some forecasts are for $200-500 billion in defaults on student loans, which likely would provoke another bailout. Would that count? Does a 0% Fed rate and >0% Treasury rate constitute an ongoing bailout? etc.
All in all, this is a set of predictions that makes me think that I really should go on Intrade. I did manage to double my money at the IEM; at the time I assumed it was because I got lucky on picking McCain and Obama for the nominations, but if this is the best a random LWer can do, even aware of biases, basic data, and the basics of probability...
"While president, Obama will announce support for means testing Social Security - 70%"
I'd be willing to take those odds, with some refinements.
How about this - I win if before he leaves office I can point to a speech Obama gave in which he advocates means testing Social Security. Otherwise you win. The speech has to be given after today, so you don't fear this is some kind of trick.
If I win I get $100 from you. If you win I give you $233. But with these odds I'm indifferent to making the bet. So for me to be willing to bet I want you to agree that if Obama makes such a speech you have to pay me right away.
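As a side note, the "indifferent" odds above can be checked directly: a bettor who wins W with probability p and loses L otherwise breaks even when p * W = (1 - p) * L. A minimal sketch of that arithmetic (the helper name is mine, not from the thread):

```python
# Implied probability at which the $100-vs-$233 offer above has zero
# expected value for the proposer.

def implied_probability(amount_won: float, amount_lost: float) -> float:
    # Solve p * amount_won == (1 - p) * amount_lost for p.
    return amount_lost / (amount_won + amount_lost)

print(round(implied_probability(100, 233), 2))  # 0.7, matching the stated 70%
```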
That works for me, with one little change. The end of his term needs to be counted as the end of a presidential election he doesn't win, rather than the inauguration of his successor. This is because the reason I don't think it's very likely is that the political effects on him would be dire, so if he does it as a lame duck president he has nothing to lose. I'm still willing to take the risk on his second term since even a second-term president is subject to some political forces.
And as a clarification, I take "means testing" to mean increasing or decreasing social security payouts based on a person's assets or income. It also has to apply to US citizens to count.
And since I'm not an American, I'd just like to confirm that the bet is in US dollars. That works for me, and I assume it works for you too.
OK, I accept - and yes the bet should be in U.S. dollars.
Please contact me at
EconomicProf@Yahoo.com so we can exchange addresses.
I'd take the other side on any of these if we can find a way to make it precise.
One word: subcultures.
I think we'll see an expansion to most of the First World of the trend we see in cities like San Francisco, where the Internet has allowed people to organize niche cultures (steampunk, furries, pyromaniacs, etc.) like never before. I think that, by and large, people would prefer to seek out a smaller culture based on a common idiosyncratic interest if it were an option, not least because rising in status there is often easier than getting noticed in the local mainstream culture. I think that the main reason the mainstream culture is presently so large, therefore, is because it's hard for a juggling enthusiast in Des Moines to find like-minded people.
I expect that over the next 10 years, more and more niche cultures will arise and begin to sprout their own characteristics, with the measurable effect that cultural products will have to be targeted more narrowly. I expect that the most popular books, music, etc. of the late 2010s will sell fewer copies in the US than the most popular books, music, etc. of the Aughts, but that total consumption of media will go up substantially as a thousand niche bands, niche fiction markets, etc. become the norm. I expect that high schoolers in 2020 will spend less social time with their classmates and more time with the groups they met through the Internet.
And I expect that the next generation of hipsters will find a way to be irritatingly disdainful of a thousand cultures at once.
You forgot us!
What do you make of criticism that sales currently show the exact opposite trend?
Thanks for the link! I didn't know there was already a version of this theory out there, and I didn't know the actual figures.
So what do I make of this data (assuming the veracity of the Wikipedia summary, since I'm not dedicated enough to read the papers)? Well, I'm surprised by it.
I'm not especially surprised. Aside from possible confounding factors like the rise of Free & free stuff (strongest in subcultures), which obviously wouldn't get counted in commercial metrics, technological and economic development means that mass media can spread even further than Internet-borne stuff can. (Cue anecdotes about Mickey Mouse posters in African huts, etc.)
The subcultures seem to me to appeal mostly to the restricted 1st World wealthier demographics that powered the mass media you are thinking of; one might caricature it as 'white' stuff. It makes sense that a subculture like anime/manga or FLOSS, which primarily is cannibalizing the 'white' market, can shrink ever more in percentage terms as the old 'white' stuff like Disney expand overseas into South America, Africa, Southeast Asia and so on.
If you had formulated your thesis in absolute numbers ('there will be more FLOSS enthusiasts in 2020 than 2010'), then I think you would be absolutely right. You might be able to get away with restricted areas too ('there will be more otaku in Japan in 2020 than 2010, despite a ~static population'). But nothing more.
So it's possible that, if we had a really huge, dense, wired city with excellent transportation, we would find a significant subculture of steampunk furries, or vampire gothic lolita hip-hop dance squads? Actually, this sounds a lot like Tokyo.
It's easy, really. Practice this phrase: "Man, what weirdos." You just have to selectively overlook the weirdness of your own subculture while recognizing and stigmatizing it in others. It's an elegant approach.
I would say better-than-even chances that sites like Intrade gain prestige in the next decade,
and that betting on predictions will become common (90% that in 2020 there is a student at 75% or so of high schools who will take bets on future predictions on any subject; 40% that >5% of the US middle class will have made a bet about a future prediction)
naive guesses based largely on http://www.fivethirtyeight.com/2009/11/case-for-climate-futures-markets-ctd.html
I predict further that I will continue to post on LW at least once a month next year (90%) and in 2020 (50%)
http://predictionbook.com/predictions/1710
Kind of vague, but I suppose it's not too hard to do a search and note that the NYT only mentioned Intrade a few times in the 2000s and more in the 2010s.
http://predictionbook.com/predictions/1709
I have no idea how one would measure this one. I'm sure that at any high school you could find a student willing to wager with you on any damn topic you please.
http://predictionbook.com/predictions/1712
Agree with orthonormal that this is seriously over-optimistic. The only site I even use today that I did in 2000 would be Slashdot, and I haven't commented there in a dog's age.
Is there any comparable website that you were posting on in 2000 and continue to post on today? I agree that LW is awesome, but web communities have a short shelf life (and a tendency to be superseded as web technology improves).
Probably a good reason to adjust the estimate down. On the other hand, I was 11 in 2000, so I wouldn't have been on this kind of site anyway, and conditional on the prediction that news-betting becomes more prestigious, rationality almost certainly will.
Point taken, with the real point being that I have no sense of how long a decade is, so I'll adjust that down to 20%.
I have stayed in touch with a different web community for five years, though only barely at the level of once a month. So my odds of awesomeness overcoming shelf-lives may be higher than for most.
By the end of 2013: Either the Iranian regime is overthrown by popular revolution, or there is an overt airstrike against Iran by either the US or Israel, or Israel is attacked by an Iranian nuclear weapon (70%).
Essentially seconding mattnewport: the price of gold reaches $3,000 USD, or inflation of the US dollar exceeds 12% in one year (65%).
The current lull in the growth of sequential CPU performance comes to an end, yielding a consumer CPU that performs sequential integer arithmetic operations 4x as quickly as a modern 3GHz Xeon (80%).
Android-descended smartphones outnumber iPhone-descended smartphones (60%).
The number of IMAX theaters in the US triples (40%).
I'm 90% confident that the cinematic uncanny valley will be crossed in the next decade. That number applies to movies only; for humanoid robots I give 1%, and for video game characters 5%.
Edit: After posting this, I thought that my 90% estimate was underconfident, but then I remembered that we started the decade with Jar-Jar Binks and Gollum, and it took us almost ten years to reach the level of Emily and Jake Sully.
How would you verify a crossing of the uncanny valley? A movie critic invoking it by name and saying a movie doesn't trigger it?
An ideal indicator would be a regular movie or trailer screening where the audience failed to detect a synthetic actor who played a lead role, or at least had significant screen time during the screening.
There isn't much financial incentive to CGI a human - if they are just acting like a regular human. That's what actors are for.
I suppose Avatar is a case in point - it's worth CGIfying human actors because otherwise they would be totally out of place in the SF environment which is completely CGI.
''There are a number of shots of CGI humans,'' James Cameron says. ''The shots of [Stephen Lang] in an AMP suit, for instance — those are completely CG. But there's a threshold of proximity to the camera that we didn't feel comfortable going beyond. We didn't get too close.''
Interesting; it seems that image synthesis is currently further ahead than voice/speech synthesis.
You don't think that the Valley will be crossed for video games in the next ten years?
Considering how rapidly digital technologies make it from the big screen to the small, I'm guessing that we will see the uncanny valley crossed for video games within 2 years of its crossing in films (the vast majority of digital films having crossed it).
Part of the reason is that the software packages that do things like Digital Emily (mentioned below) are so easy to buy now. They no longer cost hundreds of thousands, as they did in the early days of CGI, and even huge packages like AutoDesk, which used to sell for $25,000, can now be had for only $5,000. That is peanuts compared to the cost of the people who run that software.
I agree with you. The uncanny valley refers to rendering human actors only. It is not necessary to render a whole movie from scratch. It is much more work, but only work.
IMO, The Curious Case of Benjamin Button was the first movie that managed to cross the valley.
In a way, the uncanny valley has already been crossed-- video game characters in some games are sufficiently humanlike that I hesitate to kill them.
I once watched a video of an Iraqi sniper at work, and it was disturbingly similar to what I see in realistic military video games (I don't play them myself, but I've seen a couple.)
Is there a reason Avatar doesn't count as crossing the threshold already?
Avatar and Digital Emily are the reasons why I'm so confident. Digital actors in Avatar are very impressive, and as a (former) CG nerd I do think that Avatar has crossed the valley -- or at least found the way across it -- I just don't think that this is proof enough for general audience and critics.
I think before the critics will be satisfied, one would have to make an entirely CGI film that wasn't sci-fi or fantastical in its setting or characters.
Something like a Western that had Clint Eastwood and Lee Van Cleef from their Sergio Leone glory days, alongside modern-day Western stars like Christian Bale, or... that Australian guy who was in 3:10 to Yuma. If we were to see CGI movies like that, made with the Avatar tech (or Digital Emily), then I am sure the critics and public would sit up and take notice (and immediately launch into how it was really not CGI at all, but a conspiracy to hide immortality technology from the greater public).
Because the giant blue Na'vi people are not human.
You mean you didn't notice the shots with the simulated humans in Avatar? ;-)
Why such a big gulf between your confidence for cinema and your confidence for video games?
Movies are 'pre-computed' so you can use a real human actor as a data source for animations, plus you have enough editing time to spot and iron out any glitches, but in a video game facial animations are generated on-the-fly, so all you can use is a model that perfectly captures human facial behavior. I don't think that it can be realistically imitated by blending between pre-recorded animations like it's done today with mo-cap animations -- e.g. you can't pre-record eye movement for a game character.
As for the robots, they are also real-time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.
The second estimation in each paragraph is conditional on the first.
By 2020 some kind of CO2 emissions regulation (cap and trade) will be in place in the US (.85). But total CO2 emissions in the US for 2019 will be no less than 95% of total CO2 emissions for 2008 (.9).
Obama wins reelection (.7). The result will be widely attributed to an improving economy (in the media and in polls, whether or not the economy actually improves) (.85)
By 2020 open elections are held for the Iranian presidency (no significant factions excluded from participation) (.5). The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)
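Since the second estimate in each pair is conditional on the first, the joint probabilities follow from P(A and B) = P(A) × P(B | A). A quick sketch of the arithmetic (my own illustration, using the numbers from the predictions above):

```python
# Joint probability of each two-part prediction, where the second
# estimate is conditional on the first: P(A and B) = P(A) * P(B | A).
predictions = {
    "CO2 regulation AND 2019 emissions >= 95% of 2008": (0.85, 0.90),
    "Obama reelected AND win attributed to the economy": (0.70, 0.85),
    "Open Iranian elections AND president is top office": (0.50, 0.50),
}

for name, (p_a, p_b_given_a) in predictions.items():
    joint = p_a * p_b_given_a
    print(f"{name}: {joint:.3f}")
# The three joint probabilities come out to 0.765, 0.595, and 0.250.
```

So the full conjunctions are considerably less likely than either number alone suggests at a glance.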
"The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)"
Qualify this. Formally, the highest position in the British state is unelected. In terms of political power, the highest position in the British state is elected.
In terms of political power.
I predict a 10% chance that I win my bet with Eliezer in the next decade (the one about a transhuman intelligence being created not by Eliezer, not being deliberately created for Friendliness, and not destroying the world.)
I'll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that this is in a nonrecoverable way.
I kinda hope you're right, but I just don't see how.
By 2020, an Earth-like habitable extrasolar planet is detected. I would take a wager on this one but doubt anyone would give me even odds.
Will anyone give me even odds if the bet is by 2015?
I think I'd give better-than-even odds for either date, and would be shocked if no one else would. How are you defining "Earth-like" and "habitable"?
At even odds I would take a loan to make the bet.
I think he just meant with liquid water, some type of atmosphere, and approximately Earth-sized. Given this, my guess is that they find one within the next three years. If he meant "habitable" to human beings without protection, i.e. an oxygen atmosphere etc., then it is extremely unlikely (less than a 2% chance) that they will find such a thing by 2020.
Next Year
Next Decade
US states aren't allowed to secede. Not even Texas. The US government would lose so much prestige from the loss of a state, that they would never allow it. So it would require some kind of armed conflict that no one state could ever win.
Are you really certain that the federal government would send the military in to prevent a state seceding if secession was clearly the democratic will of the people of the state? I wouldn't rule out the possibility but I think it would be an unlikely outcome.
http://en.wikipedia.org/wiki/2010_European_sovereign_debt_crisis
Not bad.
I guess now is a good time for a 6 month review of how the predictions in this thread are panning out.
Retail sales were a bit worse than expected but despite a bit of a dip in the stock market in late Jan / early Feb it took longer than I expected for the recovery in the US to be seriously questioned. It's only in the last few weeks that talk of a double dip recession has become really widespread. The problems in Europe and more recently in China brought the global recovery into question a bit earlier but overall the jury is still out. I think I could argue that this prediction was correct as written but I was expecting more problems earlier in the year.
I think the problems in Greece (and to a lesser extent Spain and Portugal) and the resulting turmoil in the Euro are sufficient to say this prediction was correct. The UK pound has also had a rough time, though in both cases whether it amounts to a 'currency crisis' could still be debated. I expect further problems before the year is out.
Hasn't happened yet. Option ARM resets will be picking up through the second half of the year so I still expect problems from that. A little less confident that it will mean a major bank failure - that is somewhat dependent on the political climate as well.
The attempted bombing in Times Square appears to have had a Pakistan link. It can't really be called a 'major' attack however. I still think there is a fair chance of this happening before the year is out but odds are a little lower (my estimate of how incompetent most terrorists are has increased a little).
The iPad and iBooks launch bear this out I think.
Won't know until November. I think the prediction is still reasonable.
The riots and strikes in Greece and strikes in Spain arguably confirm this. The prediction is a little vague however and I was expecting somewhat more serious civil unrest than we've seen so far. It remains to be seen what will happen as the rest of the year unfolds.
No change here.
Some early signs of this with retirement age increases and other austerity measures in Greece and elsewhere in Europe. I still expect to see a lot more of this before the decade is out.
Odds on this down slightly I think - there's some evidence that the new government is serious about addressing the problems. Less evidence that they will succeed.
I think the problems here have been more widely recognized than when I wrote the prediction. My odds haven't changed much though.
No change here. And it's secession week.
I think this prediction has failed utterly. In the Euro zone, there are/were debt crises in Greece and Ireland, but the currency itself, the Euro, did fine. A graph of the variation of the Euro against the US dollar shows no special variation in 2010 compared to its "typical" variations over the last decade. The pound merely maintained in 2010 the value it had already fallen to in 2009, hardly even slightly adhering to a prediction about 2010.
Those were exciting predictions. Had you predicted a sovereign debt crisis in a developed country, you would have been right, and it would have been a much less exciting prediction than a currency crisis.
Shouldn't the odds go down by about half, just because half the year is used up?
The failed Times Square attack raised my probability for attempts at attacks this year but lowered my probability that any attempted attacks would be effective enough to classify as 'major'. On balance I think the odds of a major attack in the remaining 6 months are lower than 50% at this point but events since my original prediction weigh into my estimate now and so it's not a simple matter of adjusting the odds based on elapsed time.
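For what it's worth, the "halve the odds at mid-year" intuition is only approximately right. Under a constant-hazard model (my own illustrative assumption, not anything claimed in the thread), a 50% annual prediction with half the year elapsed and no event yet leaves about 29%, not 25%:

```python
import math

def remaining_probability(p_annual, fraction_elapsed):
    """Probability the event still occurs in the rest of the year,
    given a constant hazard rate and that it hasn't happened yet."""
    hazard = -math.log(1.0 - p_annual)  # annual hazard implied by p_annual
    return 1.0 - math.exp(-hazard * (1.0 - fraction_elapsed))

# A 50% annual prediction, half the year gone, no event so far:
print(round(remaining_probability(0.50, 0.5), 3))  # ~0.293, not 0.25
```

And as the comment above notes, this mechanical adjustment only applies if nothing else has shifted your estimate: evidence like the failed Times Square attempt moves the hazard rate itself.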
I don't see that happening -- which one or ones do you think are most likely to leave?
Scotland may well leave the UK (10%), or the UK leave the EU (15%).
Texas is probably the most likely but I can imagine a number of other possibilities. MatthewB's post above outlines a plausible case for California for example.
Being from Texas (I was born in Texas, but moved to CA in my mid-20s), I agree with you.
I noticed, when I went to school in Europe in the mid 80s, that people there acted as if Texas were almost a different country from the rest of the USA. It was also easy for Europeans to recognize. When asked where they were from, Texans would usually answer "Texas", yet a person from Louisiana, Alabama, Montana, Idaho, or some other more obscure state, attempting to explain where they were from in terms of their home state, would usually fall back to "I am from the Southern USA" or "I am from the Northwest/Midwest USA".
Only New York and California seemed to enjoy this same recognition in Europe.
But, for Texans, they would consider themselves from Texas, first, and the USA second. Whereas most of the other US citizens from other states seemed to identify as USA citizens first, and then by their state.
Texas has a really strong sense of independence from the USA, and it is pretty much the only state with an active independence movement (a movement to have the state recognized as its own nation). California also has one, but it is not nearly as diverse nor as active as the one in TX.
However, despite its citizens' strong state identification, I think that there are other states that might lead the pack in an attempt to secede. Most of the former Confederate states still seem to have very deep grudges against the federal gov't, and when I lived in GA for a few years back in 91/92, I was stunned at how many people I encountered who really believed that the Civil War was still not finished, and that The South Shall Rise Again!
Many Republicans seem to be fomenting this sort of thinking as well, with things like the Tea Baggers, or trying to force the recognition of the USA as a Christian Nation
Referring to a (presumably) disfavored political group by a crude sexual dysphemism earned you a vote down. This is not how discourse is done here; please make a note of it.
Relevant wikipedia link
I will take a bet on this, if you like. Also, did you perhaps mean "attempt to secede", or are you predicting actual success? I'll take the bet either way.
You'll have to define what constitutes an attempt.
Perhaps a vote goes through the state legislature in favor of secession?
I would be very happy to accept a bet with you on those odds if there's a way to sort it out. I'd define major as any attack with more than ten deaths.
I voted all the betting comments up because I think this is awesome. Does this kind of thing happen often here?
I occasionally offer people bets, but I think this has been the first time for me that the subject of contention is the right shape for betting to be a real possibility.
I've added this prediction to PredictionBook: http://predictionbook.com/predictions/1565 based on the description at http://wiki.lesswrong.com/wiki/Bets_registry
So now that 2010 is more than half over with no attack that I know of, have you or mattnewport's opinions changed?
(I notice that domestic terrorism seems kind of spiky - quite a few in one year, and none the next: http://en.wikipedia.org/wiki/Category:Islamist_terrorism_in_the_United_States omits entire years but has several in one year, like 2007 or 2009.)
Do you have a PayPal account? I'd be willing to wager $50 USD to be paid within 2 weeks of Jan 1st 2011 if you're interested. I can provide my email address. That would rely on mutual trust but I don't know of any websites that can act as trusted intermediaries. Do you know of anything like that?
What makes your think 2010 is the year? I mean, this has even been floating around lately. And at 99%^h^h^h50% confidence!
For $50, trust-based is OK with me.
How about this wording? "10 or more people will be killed on US soil during 2010 as the result of a deliberate attack by a party with a political goal, not overtly the act of any state". And if we hit an edge case where we disagree on whether this has been met, we'll do a poll here on LW and accept the results of the poll. Sound good?