
[LINK] Hyperloop officially announced — predictions, anyone?

4 malcolmmcc 12 August 2013 09:30PM

I was studying in the LW Study Hall, and during our break someone posted this link to the official hyperloop announcement:

http://www.spacex.com/sites/spacex/files/hyperloop_alpha-20130812.pdf

One member was doubtful it would get past regulations, and another said "tentative p>0.05 that a hyperloop gets made by 2100", which was met with "p>0.05 that uploading people and moving them between bodies will be available by 2100".

It struck me that people might be interested in betting on things like this, or at least having a conversation about it.

A few predictions to start:

More predictions, based on comments:

AI prediction case study 5: Omohundro's AI drives

5 Stuart_Armstrong 15 March 2013 09:09AM

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

What drives an AI?

  • Classification: issues and metastatements, using philosophical arguments and expert judgement.

Steve Omohundro, in his paper on 'AI drives', presented arguments aiming to show that generic AI designs would develop 'drives' that would cause them to behave in specific and potentially dangerous ways, even if these drives were not programmed in initially (Omo08). One of his examples was a superintelligent chess computer that was programmed purely to perform well at chess, but that was nevertheless driven by that goal to self-improve, to replace its goal with a utility function, to defend this utility function, to protect itself, and ultimately to acquire more resources and power.

This is a metastatement: generic AI designs would have this unexpected and convergent behaviour. This relies on philosophical and mathematical arguments, and though the author has expertise in mathematics and machine learning, he has none directly in philosophy. It also makes implicit use of the outside view: utility maximising agents are grouped together into one category and similar types of behaviours are expected from all agents in this category.

In order to clarify and reveal assumptions, it helps to divide Omohundro's thesis into two claims. The weaker one is that a generic AI design could end up having these AI drives; the stronger one that it would very likely have them.

Omohundro's paper provides strong evidence for the weak claim. It demonstrates how an AI motivated only to achieve a particular goal could nevertheless improve itself, become a utility maximising agent, reach out for resources and so on. Every step of the way, the AI becomes better at achieving its goal, so all these changes are consistent with its initial programming. This behaviour is very generic: only specifically tailored or unusual goals would safely preclude such drives.

The claim that AIs generically would have these drives needs more assumptions. There are no counterfactual resiliency tests for philosophical arguments, but something similar can be attempted: one can use humans as potential counterexamples to the thesis. It has been argued that AIs could have any motivation a human has (Arm, Bos13). Thus, according to the thesis, it would seem that humans should be subject to the same drives and behaviours. This does not fit the evidence, however. Humans are certainly not expected utility maximisers (the closest are probably financial traders, who try to approximate expected money maximisers, but only in their professional work). They don't often try to improve their rationality; some specifically avoid doing so (many examples of this are religious, such as the Puritan John Cotton, who wrote 'the more learned and witty you bee, the more fit to act for Satan will you bee' (Hof62)), and some sacrifice cognitive ability to other pleasures (BBJ+03). Many turn their backs on high-powered careers. Some humans do desire self-improvement (in the sense of the paper), and Omohundro cites this as evidence for his thesis. Some humans don't desire it, though, and this should be taken as contrary evidence (or as evidence that Omohundro's model of what constitutes self-improvement is overly narrow). Thus one hidden assumption of the model is:

  • Generic superintelligent AIs would have different motivations to a significant subset of the human race, OR
  • Generic humans raised to superintelligence would develop AI drives.
continue reading »

AI prediction case study 4: Kurzweil's spiritual machines

2 Stuart_Armstrong 14 March 2013 10:48AM

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

Note this is very similar to this post, and is mainly reposted for completeness.

How well have the "Spiritual Machines" aged?

  • Classification: timelines and scenarios, using expert judgement, causal models, non-causal models and (indirect) philosophical arguments.

Ray Kurzweil is a prominent and often quoted AI predictor. One of his most important books was the 1999 "The Age of Spiritual Machines" (Kur99), which presented his futurist ideas in more detail and made several predictions for the years 2009, 2019, 2029 and 2099. That book will be the focus of this case study, ignoring his more recent work (a correct prediction in 1999 for 2009 is much more impressive than a correct 2008 reinterpretation or clarification of that prediction). There are five main points relevant to judging "The Age of Spiritual Machines": Kurzweil's expertise, his 'Law of Accelerating Returns', his extension of Moore's law, his predictive track record, and his use of fictional imagery to argue philosophical points.

Kurzweil has had a lot of experience in the modern computer industry. He's an inventor, computer engineer, and entrepreneur, and as such can claim insider experience in the development of new computer technology. He has been directly involved in narrow AI projects covering voice recognition, text recognition and electronic trading. His fame and prominence are further indications of the allure (though not necessarily the accuracy) of his ideas. In total, Kurzweil can be regarded as an AI expert.

Kurzweil is not, however, a cosmologist or an evolutionary biologist. In his book, he proposed a 'Law of Accelerating Returns'. This law claimed to explain many disparate phenomena, such as the speed and trends of evolution of life forms, the evolution of technology, the creation of computers, and Moore's law in computing. His slightly more general 'Law of Time and Chaos' extended his model to explain the history of the universe or the development of an organism. It is a causal model, as it aims to explain these phenomena, not simply note the trends. Hence it is a timeline prediction, based on a causal model that makes use of the outside view to group the categories together, and is backed by non-expert opinion.

A literature search failed to find any evolutionary biologist or cosmologist stating their agreement with these laws. Indeed there has been little academic work on them at all, and what work there is tends to be critical.

The laws are ideal candidates for counterfactual resiliency checks, however. It is not hard to create counterfactuals that shift the timelines underlying the laws (see this for a more detailed version of the counterfactual resiliency check). Many standard phenomena could have delayed the evolution of life on Earth for millions or billions of years (meteor impacts, solar energy fluctuations or nearby gamma-ray bursts). The evolution of technology can similarly be accelerated or slowed down by changes in human society and in the availability of raw materials - it is perfectly conceivable that, for instance, the ancient Greeks could have started a small industrial revolution, or that the European nations could have collapsed before the Renaissance due to a second and more virulent Black Death (or even a slightly different political structure in Italy). Population fragmentation and decrease can lead to technology loss (such as the 'Tasmanian technology trap' (Riv12)). Hence accepting that a Law of Accelerating Returns determines the pace of technological and evolutionary change means rejecting many generally accepted theories of planetary dynamics, evolution and societal development. Since Kurzweil is the non-expert here, his law is almost certainly in error, and best seen as a literary device rather than a valid scientific theory.

continue reading »

AI prediction case study 3: Searle's Chinese room

6 Stuart_Armstrong 13 March 2013 12:44PM

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

Locked up in Searle's Chinese room

  • Classification: issues and metastatements and a scenario, using philosophical arguments and expert judgement.

Searle's Chinese room thought experiment is a famous critique of some of the assumptions of 'strong AI' (which Searle defines as the belief that 'the appropriately programmed computer literally has cognitive states'). There has been a lot of further discussion on the subject (see for instance (Sea90,Har01)), but, as in previous case studies, this section will focus exclusively on his original 1980 publication (Sea80).

In the key thought experiment, Searle imagined that AI research had progressed to the point where a computer program had been created that could demonstrate the same input-output performance as a human - for instance, it could pass the Turing test. Nevertheless, Searle argued, this program would not demonstrate true understanding. He supposed that the program's inputs and outputs were in Chinese, a language Searle couldn't understand. Instead of a standard computer program, the required instructions were given on paper, and Searle himself was locked in a room somewhere, slavishly following the instructions and therefore causing the same input-output behaviour as the AI. Since it was functionally equivalent to the AI, the setup should, from the 'strong AI' perspective, demonstrate understanding if and only if the AI did. Searle then argued that there would be no understanding at all: he himself couldn't understand Chinese, and there was no-one else in the room to understand it either.

The whole argument depends on strong appeals to intuition (indeed D. Dennett went as far as accusing it of being an 'intuition pump' (Den91)). The required assumptions are:

continue reading »

AI prediction case study 2: Dreyfus's Artificial Alchemy

11 Stuart_Armstrong 12 March 2013 11:07AM

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

 

Dreyfus's Artificial Alchemy

  • Classification: issues and metastatements, using the outside view, non-expert judgement and philosophical arguments.

Hubert Dreyfus was a prominent early critic of Artificial Intelligence. He published a series of papers and books attacking the claims and assumptions of the AI field, starting in 1965 with a paper for the Rand Corporation entitled 'Alchemy and AI' (Dre65). The paper was famously combative, analogising AI research to alchemy and ridiculing AI claims. Later, D. Crevier would claim "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier" (Cre93). Ignoring the formulation issues, were Dreyfus's criticisms actually correct, and what can be learned from them?

Was Dreyfus an expert? Though he was a reasonably prominent philosopher, there is nothing in his background to suggest specific expertise with theories of minds and consciousness, and absolutely nothing to suggest familiarity with artificial intelligence and the problems of the field. Thus Dreyfus cannot be considered anything more than an intelligent outsider.

This makes the pertinence and accuracy of his criticisms that much more impressive. Dreyfus highlighted several over-optimistic claims for the power of AI, predicting - correctly - that the 1965 optimism would also fade (with, for instance, decent chess computers still a long way off). He used the outside view to claim this as a near universal pattern in AI: initial successes, followed by lofty claims, followed by unexpected difficulties and subsequent disappointment. He highlighted the inherent ambiguity in human language and syntax, and claimed that computers could not deal with these. He noted the importance of unconscious processes in recognising objects, the importance of context and the fact that humans and computers operated in very different ways. He also criticised the use of computational paradigms for analysing human behaviour, and claimed that philosophical ideas in linguistics and classification were relevant to AI research. In all, his paper is full of interesting ideas and intelligent deconstructions of how humans and machines operate.

continue reading »

AI prediction case study 1: The original Dartmouth Conference

7 Stuart_Armstrong 11 March 2013 06:09PM

Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

As this is the first case study, it will also introduce the paper's prediction classification schemas.

 

Taxonomy of predictions

Prediction types

There will never be a bigger plane built.

Boeing engineer on the 247, a twin engine plane that held ten people.

A fortune teller talking about celebrity couples, a scientist predicting the outcome of an experiment, an economist pronouncing on next year's GDP figures - these are canonical examples of predictions. There are other types of predictions, though. Conditional statements - if X happens, then so will Y - are also valid, narrower, predictions. Impossibility results are also a form of prediction. For instance, the law of conservation of energy gives a very broad prediction about every single perpetual machine ever made: to wit, that they will never work.

continue reading »

Generalizing from One Trend

14 katydee 18 January 2013 01:21AM

Related: Reference Class of the Unclassreferenceable, Generalizing From One Example

Many people try to predict the future. Few succeed.

One common mistake made in predicting the future is to simply take a current trend and extrapolate it forward, as if it was the only thing that mattered-- think, for instance, of the future described by cyberpunk fiction, with sinister (and often Japanese) multinational corporations ruling the world. Where does this vision of the future stem from?

Bad or lazy predictions from the 1980s, when sinister multinational corporations (and often Japanese ones) looked to be taking over the world.[1]

Similar errors have been committed by writers throughout history. George Orwell thought 1984 was an accurate prediction of the future, seeing World War II as inevitably bringing socialist revolution to the United Kingdom and predicting that the revolutionary ideals would then be betrayed in England as they were in Russia. Aldous Huxley agreed with Orwell but thought that the advent of hypnosis and psychoconditioning would cause the dystopia portrayed in 1984 to evolve into the one he described in Brave New World. In today's high school English classes, these books are taught as literature, as well-written stories-- the fact that the authors took their ideas seriously would come as a surprise to many high school students, and their predictions would look laughably wrong.

Were such mistakes confined solely to the realm of fiction, they would perhaps be considered amusing errors at best, reflective of the sorts of mishaps that befall unstudied predictions. Unfortunately, they are not. Purported "experts" make just the same sort of error regularly, and failed predictions of this sort often have negative consequences in reality.

For instance, in 1999 two economists published the book Dow 36,000, predicting that stocks were about to reach record levels; the authors of the book were so wrapped up in recent gains to the stock market that they assumed that such gains were in fact the new normal state of affairs, that the market hadn't corrected for this yet, and that once stocks were correctly perceived as safe investments the market would skyrocket. This not only did not happen, but the dot-com bubble burst shortly after the book was published.[2] Anyone following the market advice from this book lost big.

In 1968, the biologist Paul R. Ehrlich, seeing disturbing trends in world population growth, wrote a book called The Population Bomb, in which he forecast (among other things) that "The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now." Later, Ehrlich doubled down on this prediction with claims such as  "By the year 2000 the United Kingdom will be simply a small group of impoverished islands, inhabited by some 70 million hungry people ... If I were a gambler, I would take even money that England will not exist in the year 2000."

Based on these predictions, Ehrlich advocated cutting off food aid to India and Egypt in favor of preserving food supplies for nations that were not "lost causes"; luckily, his policies were not adopted, as they would have resulted in mass starvation in the countries suddenly deprived of aid. Instead, food aid continued, and as population grew, food production did as well. Contrary to the increase in starvation and global death rates predicted by Ehrlich, global death rates decreased, the population increased by more than Ehrlich had predicted would lead to disaster, and the average amount of calories consumed per person increased as well.[3]

 

So what, then, is the weakness that causes these analysts to make such errors?

Well, just as you can generalize from one example when evaluating others and hence fail to understand those around you, you can generalize from one trend or set of trends when making predictions and hence fail to understand the broader world. This is a special case of the classic problem where "to a man with a hammer, everything looks like a nail;" if you are very familiar with one trend, and that's all you take into account with your future forecasts, you're bound to be wrong if that trend ends up not eating the world.

On the other hand, the trend sometimes does eat the world. It's very easy to find long lists of buffoonish predictions where someone woefully underestimated the impact of a new technology.[4] Further, determining exactly when and where a trend is going to stop is quite difficult, and most people are incompetent at it, even at a professional level-- if this were easy, the stock market would look extraordinarily different!

So my advice to those who would predict the future is simple. Don't generalize from one trend or even one group of trends. Especially beware of viewing evidence that seems to support your predictions as evidence that other people's predictions must be wrong-- the notebook of rationality cares not for what "side" things are on, but rather for what is true. Even if the trend you're relying on does end up being the "next big thing," the rest of the world will have a voice as well.[5]


[1] I predict that the work of Cory Doctorow and those like him will seem similarly dated a decade down the line, as the trends they're riding die down. If you're reading this during or after December 2022, please let me know what you think of this prediction.

[2] The authors are, of course, still employed in cushy think-tank positions.

[3]  Ehrlich has doubled down on his statements, now claiming that he was "way too optimistic" in The Population Bomb and that the world is obviously doomed.

[4] I personally enjoy the Bad Opinion Generator (warning: potentially addictive)

[5] Technically, this isn't always true. But you should assume it is unless you have extremely good reasons to believe otherwise, and even still I would be very careful before assuming that your thing is the thing.

Assessing Kurzweil: the gory details

14 Stuart_Armstrong 15 January 2013 02:29PM

This post goes along with this one, which was merely summarising the results of the volunteer assessment. Here we present the further details of the methodology and results.

Kurzweil's predictions were decomposed into 172 separate statements, taken from the book "The Age of Spiritual Machines" (published in 1999). Volunteers were requested on Less Wrong and on reddit.com/r/futurology. 18 people initially volunteered to do varying amounts of assessment of Kurzweil's predictions; 9 ultimately did so.

Each volunteer was given a separate randomised list of the numbers 1 to 172, with instructions to go through the statements in the order given by the list and give their assessment of the correctness of the prediction (the exact instructions are at the end of this post). They were to assess the predictions on the following five-point scale:

  • 1=True, 2=Weakly True, 3=Cannot decide, 4=Weakly False, 5=False

They assessed a varying number of predictions, giving 531 assessments in total, for an average of 59 assessments per volunteer (the maximum attempted was all 172 predictions, the minimum was 10). They generally followed the randomised order correctly - there were three out-of-order assessments (assessing prediction 36 instead of 38, 162 instead of 172, and missing out 75). Since the number of errors was very low, and seemed accidental, I decided that this would not affect the randomisation and kept those answers in.
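For anyone who wants to reproduce the setup, here is a minimal Python sketch of the randomisation step; it is not the script actually used, the variable names and seeds are purely illustrative, and the 1-5 scores follow the scale above.

```python
import random

NUM_PREDICTIONS = 172
SCALE = {1: "True", 2: "Weakly True", 3: "Cannot decide",
         4: "Weakly False", 5: "False"}

def assessment_order(seed):
    """Return one volunteer's random ordering of the prediction numbers 1..172."""
    rng = random.Random(seed)
    order = list(range(1, NUM_PREDICTIONS + 1))
    rng.shuffle(order)
    return order

# Each of the 9 volunteers gets an independent ordering and works through it
# from the front, assessing as many predictions as they have time for.
orders = {f"volunteer_{i}": assessment_order(seed=i) for i in range(1, 10)}

# Recording an assessment: prediction number -> score on the 1-5 scale above.
assessments = {name: {} for name in orders}
first = orders["volunteer_1"][0]
assessments["volunteer_1"][first] = 2   # i.e. "Weakly True"
```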

The assessments (anonymised) can be found here.

continue reading »

Prediction Sources

6 lukeprog 04 December 2012 05:48AM

I'd like to become better calibrated via PredictionBook and other tools, but coming up with well-specified predictions can be very time-consuming. It's handy to be provided with a stock of specific claims to make predictions (or post-dictions) about, as with CFAR's Credence Game.

Therefore, I asked Jake Miller and Gwern to put together a list of prediction sources. Feel free to suggest others!

Prediction Sites

"How We're Predicting AI — or Failing to"

11 lukeprog 18 November 2012 10:52AM

The new paper by Stuart Armstrong (FHI) and Kaj Sotala (SI) has now been published (PDF) as part of the Beyond AI conference proceedings. Some of these results were previously discussed here. The original predictions data are available here.

Abstract:

This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analysing them. It will propose a variety of theoretical tools for analysing, judging and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.
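To make the abstract's central comparison concrete, here is a rough sketch of the kind of test one could run on such a database. This is not the authors' actual analysis, and the two lists of "years until AI" below are invented numbers, not the 95 predictions from the paper.

```python
# Illustrative only: the figures below are made up, not the paper's database.
from scipy.stats import mannwhitneyu

expert_years     = [15, 20, 25, 20, 30, 18, 22, 50, 16, 25]   # hypothetical
non_expert_years = [20, 15, 25, 40, 22, 18, 20, 25, 17, 30]   # hypothetical

stat, p_value = mannwhitneyu(expert_years, non_expert_years,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p_value:.2f}")
# A large p-value is consistent with (though not proof of) the claim that
# expert and non-expert timeline predictions are indistinguishable.
```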

Analyzing FF.net reviews of 'Harry Potter and the Methods of Rationality'

25 gwern 03 November 2012 11:47PM

The unprecedented gap in Methods of Rationality updates prompts musing about whether readership is increasing enough & what statistics one would use; I write code to download FF.net reviews, clean it, parse it, load into R, summarize the data & depict it graphically, run linear regression on a subset & all reviews, note the poor fit, develop a quadratic fit instead, and use it to predict future review quantities.

Then, I run a similar analysis on a competing fanfiction to find out when they will have equal total review-counts. A try at logarithmic fits fails; fitting a linear model to the previous 100 days of MoR and the competitor works much better, and they predict a convergence in <5 years.

Master version: http://www.gwern.net/hpmor#analysis
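For readers curious about the mechanics, here is a minimal Python sketch of the core fit-and-extrapolate step. gwern's actual analysis is in R at the link above; the input file reviews.csv (columns: day, total_reviews) is a hypothetical stand-in for the scraped FF.net data.

```python
# Sketch only: fit a quadratic trend to cumulative review counts and
# extrapolate it forward. "reviews.csv" is an assumed input file with one
# row per day: day (days since posting) and total_reviews (cumulative count).
import numpy as np
import pandas as pd

data = pd.read_csv("reviews.csv")
days = data["day"].to_numpy(dtype=float)
totals = data["total_reviews"].to_numpy(dtype=float)

# Quadratic fit: total ~ a*day^2 + b*day + c
a, b, c = np.polyfit(days, totals, deg=2)

def predicted_total(day):
    return a * day**2 + b * day + c

# Extrapolate one year past the last observed day.
future_day = days.max() + 365
print(f"Predicted total reviews one year out: {predicted_total(future_day):.0f}")
```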

Competence in experts: summary

9 Stuart_Armstrong 16 August 2012 02:53PM

Just giving a short table-summary of an article by James Shanteau on which areas and tasks experts developed a good intuition - and which ones they didn't. Though the article is old, the results seem to be in agreement with more recent summaries, such as Kahneman and Klein's. The heart of the article was a decomposition of characteristics (for professions and for tasks within those professions) where we would expect experts to develop good performance:


Good performance                  Poor performance
Static stimuli                    Dynamic (changeable) stimuli
Decisions about things            Decisions about behavior
Experts agree on stimuli          Experts disagree on stimuli
More predictable problems         Less predictable problems
Some errors expected              Few errors expected
Repetitive tasks                  Unique tasks
Feedback available                Feedback unavailable
Objective analysis available      Subjective analysis only
Problem decomposable              Problem not decomposable
Decision aids common              Decision aids rare

I do feel that this may go some way to explaining the expert's performance here.

The weakest arguments for and against human level AI

14 Stuart_Armstrong 15 August 2012 11:04AM

While going through the list of arguments for why to expect human level AI to happen or be impossible, I was struck by the same tremendously weak arguments that kept on coming up again and again. The weakest argument in favour of AI was the perennial:

  • Moore's Law hence AI!

Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:

  • Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Vinge, 1993)
  • Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of todays' computers. [...] Since computer capacity doubles every two years or so, we expect that in about 40 years, the computers will be as powerful as human brains. (Eder 1994)
  • Suppose my projections are correct, and the hardware requirements for human equivalence are available in 10 years for about the current price of a medium large computer.  Suppose further that software development keeps pace (and it should be increasingly easy, because big computers are great programming aids), and machines able to think as well as humans begin to appear in 10 years. (Moravec, 1977)

At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!

But very rarely do any of these predictors try and show why having computers with, say, the memory capacity or the FLOPS of a human brain will suddenly cause an AI to emerge.
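To spell out the bare arithmetic these extrapolations rest on, here is the calculation implicit in quotes like Eder's. The million-fold gap and two-year doubling time are his assumptions, not established figures, and the point of the criticism above is that the calculation says nothing about software.

```python
import math

# Eder's assumptions (not established figures): the brain has about a million
# times the raw power of a mid-1990s computer, and capacity doubles every ~2 years.
power_gap = 1_000_000
doubling_time_years = 2

doublings_needed = math.log2(power_gap)                    # about 19.9 doublings
years_to_parity = doublings_needed * doubling_time_years
print(f"~{years_to_parity:.0f} years to hardware parity")  # roughly 40 years

# The extrapolation stops here: it gives no reason to expect that
# brain-equivalent hardware comes with brain-equivalent software.
```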

The weakest argument against AI was the standard:

  • Free will (or creativity) hence no AI!

Some of the more sophisticated go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and spend the rest of your paper talking in great detail about other things.

A question about Eliezer

32 perpetualpeace1 19 April 2012 05:27PM

I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been?  Has his interpretation of quantum physics predicted any subsequently-observed phenomena?  Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?   

Why is this important: when I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions.  Meanwhile, I know that Thomas Friedman is an idiot.  Based on this track record, I believe things written by Krugman much more than I believe things written by Friedman.  But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.  

Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening.  He was wrong about nearly everything.  So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.  

Has Eliezer offered anything falsifiable, or put his reputation on the line in any way?  "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," etc etc.

Harry Potter and the Methods of Rationality predictions

6 gwern 09 April 2012 09:49PM

The recent spate of updates has reminded me that while each chapter is enjoyable, the approaching end of MoR, as awesome as it no doubt will be, also means the end of our ability to learn from predicting the truth of the MoR-verse and its future.

With that in mind, I have compiled a page of predictions on sundry topics, much like my other page on predictions for Neon Genesis Evangelion; I encourage people to suggest plausible predictions that I've omitted, register their probabilities on PredictionBook.com, and come up with their own predictions. Then we can all look back when MoR finishes and reflect on what we (or Eliezer) did poorly or well.  

The page is currently up to >182 predictions.

Pooling resources for valuable actuarial calculations

12 michaelcurzi 15 February 2012 05:01PM

It occurred to me this morning that, if it's actually valuable, generating true beliefs about the world must be someone's comparative advantage. If truth is instrumentally important, important people must be finding ways to pay to access it. I can think of several examples of this, but the one that caught my attention was actuarial science.

I know next to nothing about what actuaries actually do, but Wikipedia says:

"Actuaries mathematically evaluate the likelihood of events and quantify the contingent outcomes in order to minimize losses, both emotional and financial, associated with uncertain undesirable events."

Why, that sounds right up our alley. 

So what I'm wondering is: for those who can afford it, wouldn't it be worth contracting with actuaries to make important personal decisions? Not merely with regards to business, but everything else as well? My preliminary ideas include:

  • Lifestyle choices to reduce personal risk of death
  • Health and wellness decisions
  • Vehicle choice for economic and safety considerations
  • Where to send your kid to college and otherwise improve life success

Lastly, if consulting actuaries is worth doing as a wealthy individual, shouldn't it also be worth doing as a group? Couldn't we pool money to get excellent information about questions that haven't yielded answers to our research attempts?

If I am not misunderstanding the work that actuaries do, there may indeed be low-hanging fruit here.

[Link] "It'll never work": a collection of failed predictions

7 Alexandros 19 February 2011 06:02PM

http://www.lhup.edu/~dsimanek/neverwrk.htm

(cross-posted from Hacker News)

Reliably wrong

2 NancyLebovitz 09 December 2010 02:46PM

Discussion of a book by "Dow Jones 36,000" Glassman. I'm wondering whether there are pundits who are so often wrong that their predictions are reliable indicators that something else (ideally the opposite) will happen.