The Irrationality Game
Please read the post before voting on the comments, as this is a game where voting works differently.
Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.
Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.
Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.
Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."
If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
That's the spirit of the game, but some more qualifications and rules follow.
If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.
The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.
Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.
Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?
Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.
Additional rules:
- Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
- If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post.
- Don't post propositions as comment replies to other comments. That'll make it disorganized.
- You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
- Debate and discussion is great, but keep it civil. Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
- No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
- Multiple propositions are fine, so long as they're moderately interesting.
- You are encouraged to reply to comments with your own probability estimates, but comment voting works normally for comment replies to other comments. That is, upvote for good discussion, not agreement or disagreement.
- In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!
Comments (910)
The gaming industry is going to be a major source of funding* for AGI research projects in the next 20 years. (85%)
*By "major" I mean contributing enough to have good odds of causing actual progress. By gaming industry I include joint ventures, so long as the game company invested a nontrivial portion of the funding for the project.
EDIT: I am referring to video game companies, not casinos.
Upvoted for overconfidence, but I'd downvote at 40%.
Between (edit:) 10% and 0.1% of college students understand any mathematics beyond elementary arithmetic above the level of rote calculation. ~95%
I believe that virtually perfect gender egalitarianism will not be achieved within my lifetime in the United States with certainty of 90%.
This depends on the assumption that I will only live at most about eighty more years, i.e. that the transhumanist revolution will not occur within that time and that I am either not frozen or fail to thaw. My belief in that assumption is 75%.
Upvoted for drastic underconfidence.
Define "virtually perfect gender egalitarianism".
I have to admit that I knew in my heart I should define it but didn't, mostly because I know that the tenets are purely subjective and there's no way I can cover everything that would be involved. Here are a couple points:
I hope this doesn't fall into a semantics controversy.
"Considered" by whom? Can I have, say, an aesthetic preference about these things (suppose I think that women look better in aprons than men do, can I prefer on this obviously trivial basis that women do more of the cooking?), or is any preference about the division of traits amongst sexes a problem for this criterion?
"Potential utility" meaning the utility that the person under consideration might experience/get, or might produce? Also, does this lack of preconception thing seem to you to be compatible with Bayesianism? If I have no reason to suspect that John and Jane are anything other than average, on what epistemic basis do I not guess that he is likelier (by the hypothetical proofs you suppose) to be better at math and more likely to cause scandal?
So what gender should the default human be, or should we somehow have two defaults, or should the default human be one with a set of sex/gender characteristics that rarely appear together in the species, or should there be no default at all (in which case what will serve the purposes currently served by having a default)?
I'm totally in favor of gender egalitarianism as I understand it, but it seems a little wooly the way you've written it up here. I'm sincerely trying to figure out what you mean and I'll back off if you want me to stop.
Perhaps an aesthetic preference isn't a problem (obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences). Note that I used the word "personality traits" - some division of other traits is inevitable. Things that upset me with the current state of affairs are where one boy fights with another and it is dismissed as boys being boys, while any other combination of genders would probably result in disciplinary action. Or how the general social trends (in Western cultures, at least) think that women wearing suits is commendable and becoming ordinary, but a man in a dress is practically lynched.
Potential utility produced, for your company or project. I think I phrased this one a little wonkily earlier - you're right, under the proofs I laid out, if all you know about John and Jane are their genders, then of course the Bayesian thing to do is assume John will be better at math. What I mean is more that, if you do know more about John and Jane, having had an interview or read a resume, the assumption that they necessarily reflect the averages of their gender is like not considering whether a woman's positive mammogram could be false. For an extreme example, the majority of homicides in many countries are committed by men. Should the employer therefore assume that Jane is less likely than John to commit such a crime, even if she has a criminal record?
I don't see why having an ungendered default is so difficult, besides for the linguistic dance associated with it in our language (and several others, but far from all of them), which is probably not going to be a problem for many more generations due to the increasing use of "they" as a singular pronoun. For instance, having a raceless or creedless default has proven not to be that hard, even if members of different races or creeds would react differently in such a situation. If one of the things I'm talking about actually happens in a cishuman lifetime, my bet would go on this one. Now, in situations where you need a more specific everyman, who goes to church every Sunday and has two children and a dog, there might be more use in a gendered, race-bearing, creed-bearing individual.
Maybe I should just go back and say "where virtually perfect acknowledges that there are some immutable differences between the sexes but that all others with detrimental effect have been eradicated".
This is why it surprises me so much that the levels of communication post had so little focus on the level of values or potential misunderstandings that can occur on the level of facts due to the ambiguity of language. The value that I am trying to express, and which I assume that you are as well or something close to it, is that men and women should be treated equally, but completely equal treatment would be impractical and not equal in the terms of benefit conferred. (For example, growth of breasts in men should be taken as a health concern, not a sign of attractiveness.) So we are forced to add specifics to our definitions that make them less clear.
Unless you still think something is wrong or missing in my definition to the point that we're talking about significantly different things, I would appreciate it if we moved on from this aspect of the issue.
Some personality traits are considered attractive in one sex and not another.
What are those purposes, anyway?
Literary "everyman" types, not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults, and probably something I'm not remembering.
How do you do that in English as it is now?
People say things like "Take your average human. He's thus and such." If you want to start a paragraph with "Take your average human" and not use gendered language, you have to say things like "They're thus and such" (sometimes awkward, especially if you're also talking about plural people or objects in the same paragraph) or "Ey's thus and such", which many people don't understand and others don't like.
I don't have an average human, and I don't think the universe does either. I think there's a lot to be said for not having a mental image of an average human.
Furthermore, since there are nearly equal numbers of male and female humans, gender is a trait where the idea of an average human is especially inaccurate.
I think the best substitute is "Take typical humans. They're thus and such." Your average alert listener will be ready to check on just how typical (modal?) those humans are.
Exactly. People make a fuss about a lack of singular nongendered pronouns. The plural nongendered pronouns are right there.
How is "they" any more ambiguous than "you"? Both can easily qualified with "all".
It's not always grammatically feasible or elegant to do so. Also, the singular "you" is much more common than the singular "they," so your readers are more likely to expect it and are prepared for the potential ambiguity.
Alicorn:
I find these invented pronouns awful, not only aesthetically, but also because they destroy the fluency of reading. When I read a text that uses them, it suddenly feels like I'm reading some language in which I'm not fully fluent so that every so often, I have to stop and think how to parse the sentence. It's the linguistic equivalent of bumps and potholes on the road.
After reading one story that used these pronouns, I was sufficiently used to them that they no longer impacted my reading fluency.
Hmm. It's true, people do, but I think it's getting less common already. Were you asking, then, which of those alternatives the original commenter preferred?
Not really, I'm just pointing out that gendered language isn't a one-sided policy debate. (I favor a combination of "they" and "ey", personally, or creating specific example imaginary people who have genders).
When it is technologically feasible for our descendants to simulate our world, they will not because it will seem cruel (conditional on friendly descendants, such as FAI or successful uploads with gradual adjustments to architecture.) I would be surprised if it were different, but not THAT surprised. (~70%)
As:
formal complexity [http://en.wikipedia.org/wiki/Complexity#Specific_meanings_of_complexity] is inherent in many real-world systems that are apparently significantly simpler than the human brain,
and the human brain is perhaps the third most complex phenomenon yet encountered by humans [the brain is a subset of the ecosystem, which is a subset of the universe],
and a characteristic of complexity is that prediction of outcomes requires greater computational resource than is required to simply let the system provide its own answer,
any attempt to predict the outcome of a successful AI implementation is speculative. 80% confident
1 THz semiconductor-based computing will prove to be impossible. ~50%
(Note for the optimistic: I expect multiplying cores will continue to increase consumer computer performance for some years after length-scale limitations on clock rate are reached.)
The natural world is only different from other mathematically describable worlds in content not in type. Any universe that is described by some mathematical system has the same ontological status as the one that we experience directly. (90% about)
What's with all this 'infinite utility/disutility' nonsense? Utility is a measure of preference, and 'preference' itself is a theoretical construct used to predict future decisions and actions. No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it, which (barring hyperinflation so cataclysmic that some government starts issuing banknotes with aleph numbers on them, and further market conditions so inconceivably bizarre that such notes are widely accepted at face value) isn't even remotely possible. Protestations of willingness in the absence of demonstrated ability don't count; talk is cheap, if you really cared that much you'd be finding a way instead of whining.
I've had a funny feeling about this subject for a while, but the logic finally clicked just recently. Still, there could be some flaw I missed. ~98%
There will be a net positive to society by measures of overall health, wealth and quality of life if the government capped reproduction at a sustainable level and distributed tradeable reproductive credits for that amount to all fertile young women. (~85% confident)
Talent is mostly a result of hard work, passion and sheer dumb luck. It's more nurture than nature (genes). People who are called born-geniuses more often than not had better access to facilities at the right age while their neural connections were still forming. (~90%)
Update: OK. It seems I've to substantiate. Take the case of Barack Obama. Nobody would've expected a black guy to become the US President 50 years ago. Or take the case of Bill Gates, Bill Joy or Steve Jobs. They just happened to have the right kind of technological exposure at an early age and were ready when the technology boom arrived. Or take the case of mathematicians like Fibonacci, Cardano, the Bernoulli brothers. They were smart. But there were other smart mathematicians as well. What separates them is the passion and the hard work and the time when they lived and did the work. A century earlier, they would've died in obscurity after being tried and tortured for blasphemy. Take Mozart. He didn't start making beautiful original music until he was twenty-one, by which time he had enough musical exposure that there was no one to match him. Take Darwin and think what he would have become if he hadn't boarded the Beagle. He would have been some pastor studying bugs and would've died in obscurity.
In short a genius is made not born. I'm not denying that good genes would help you with memory and learning, but it takes more than genes to be a genius.
This comment currently (at the time of reading) has at least 10 net upvotes.
Confidence: 99%.
Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many enigmas and religious belief in general. (25%)
You will downvote this comment (Not confident at all - 0%).
Most vertebrates have at least some moral worth; even most of the ones that lack self-concepts sufficiently strong to have any real preference to exist (beyond any instinctive non-conceptualized self-preservation) nevertheless are capable of experiencing something enough like suffering that they impinge upon moral calculations at least a little bit. (85%)
Objection: Why is the line drawn between vertebrates and invertebrates? True, the nature of spinal cords means vertebrates are generally capable of higher mental processing and therefore have a greater ability to formulate suffering, but you're counting "ones that lack self-concepts sufficiently strong to have any real preference to exist". Are you saying the presence of a notochord gives a fish higher moral worth than a crab?
That's a good point - there are almost certainly invertebrate species on the same side of the line. Squid, for example.
"At least a little bit" is too unclear. Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.
Are we only supposed to upvote this post if we think it is irrational?
Many-world interpretation of quantum physics is wrong. Reasonably certain (80%).
I suppose the MWI is an artifact of our formulation of physics, where we suppose systems can be in specific states that are indexed by several sets of observables. I think there is no such thing as a state of the physical system.
Richard Dawkins' genocentric ("Selfish Gene") view is a bad metaphor for most of what happens with sufficiently advanced life forms. Organism-centered view is a much better metaphor. New body forms and behaviors first appear in phenotype, in response to changing environment. Later, they get "written" into the genotype if the new environment persists for enough time. Baldwin effect is ubiquitous. (60%)
The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)
Apprentice:
Downvoted for agreement.
However, I must add that it would be extremely fallacious to conclude from this fact that the country is being run competently and not declining or even headed for disaster. This fallacy would be based on the false assumption that the country is actually run by the politicians in practice. (I am not arguing for these pessimistic conclusions, at least not in this context, but merely that given the present structure of the political system, optimistic conclusions from the above fact are generally unwarranted.)
Far too confident.
The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.
I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.
Predicated on MWI being correct, and Quantum Immortality being true:
It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%
Which way do I vote things that aren't so much wrong as they are fundamentally confused?
Thinking about QI as something about which to ask 'true or false?' implies not having fully grasped the implications of (MWI) quantum mechanics on preference functions. At very least the question would need to be changed to 'desired or undesired'.
Not sure how I should vote this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with.
However, a different assertion that says that in case MWI is correct, you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun to your head would make for an interesting rationality game point.
Phrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high reward activity, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out.
This seems like a great reason not to trust quantum immortality.
Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)
Um, so when Nate Silver tells us he's calculated odds of 2 in 3 that Republicans will control the house after the election, this number should be discarded as noise because it's a common-sense belief that the Republicans will gain that many seats?
Boy did I hit a hornets' nest with this one!
No, of course I didn't mean anything like that. Here is how I see this situation. Silver has a model, which is ultimately a piece of mathematics telling us that some p=0.667, and for reasons of common sense, Silver believes (assuming he's being upfront with all this) that this model closely approximates reality in such a way that p can be interpreted, with reasonable accuracy, as the probability of Republicans winning a House majority this November.
Now, when you ask someone which party is likely to win this election, this person's brain will activate some algorithm that will produce an answer along with some rough level of confidence. Someone completely ignorant about politics might answer that he has no idea, and cannot say anything with any certainty. Other people will predict different results with varying (informally expressed) confidence. Silver himself, or someone else who agrees with his model, might reply that the best answer is whatever the model says (i.e. Republicans win with p=0.667), since it is completely superior to the opaque common-sense algorithms used by the brains of non-mathy political analysts. Others will have greater or lesser confidence in the accuracy of the model, and might take its results into account, with varying weight, alongside other common-sense considerations.
Ultimately, the status of this number depends on the relation between Silver's model and reality. If you believe that the model is a vast improvement over any informal common-sense considerations in predicting election results, just like Newton's theory is a vast improvement over any common-sense considerations in predicting the motions of planets, then we're not talking about a common-sense conclusion any more. On the other hand, if you believe that the model is completely out of touch with reality, then you would discard its result as noise. Finally, if you believe that it's somewhat accurate, but still not reliably superior to common sense, you might revise its conclusion using common sense.
What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.
Want to make a bet on that?
I have read most of the responses and still am not sure whether to upvote or not. I doubt among several (possibly overlapping) interpretations of your statement. Could you tell to what extent the following interpretations really reflect what you think?
That’s an excellent list of questions! It will help me greatly to systematize my thinking on the topic.
Before replying to the specific items you list, perhaps I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudoscience is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight. Therefore, I believe that whenever one encounters people talking about numbers of any sort that look even slightly suspicious, they should be considered guilty until proven otherwise -- and this entire business with subjective probability estimates for common-sense beliefs doesn’t come even close to clearing that bar for me.
Now to reply to your list.
My answer to (1) follows from my opinion about (2).
In my view, a number that gives any information about the real world must ultimately refer, either directly or via some calculation, to something that can be measured or counted (at least in principle, perhaps using a thought-experiment). This doesn’t mean that all sensible numbers have to be derived from concrete empirical measurements; they can also follow from common-sense insight and generalization. For example, reading about Newton’s theory leads to the common-sense insight that it’s a very close approximation of reality under certain assumptions. Now, if we look at the gravity formula F=m1*m2/r^2 (in units set so that G=1), the number 2 in the denominator is not a product of any concrete measurement, but a generalization from common sense. Yet what makes it sensible is that it ultimately refers to measurable reality via a well-defined formula: measure the force between two bodies of known masses at distance r, and you’ll get log(m1*m2/F)/log(r) = 2.
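The arithmetic in that example can be checked directly. This is a hypothetical sketch (the masses and distance are invented for illustration) showing that the exponent 2 is recoverable from a single "measurement" of the force, exactly as the comment describes:

```python
import math

# Invented example values for two masses and a separation distance.
m1, m2, r = 3.0, 5.0, 4.0

# "Measure" the force, using units chosen so that G = 1.
F = m1 * m2 / r**2

# Back out the exponent from the formula given in the comment:
# log(m1*m2/F) / log(r) should equal 2.
exponent = math.log(m1 * m2 / F) / math.log(r)
print(exponent)  # 2.0
```

The point of the exercise stands regardless of the specific numbers: the 2 in the denominator, though never directly measured, refers to measurable reality via a well-defined formula.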
Now, what can we make out of probabilities from this viewpoint? I honestly can’t think of any sensible non-frequentist answer to this question. Subjectivist Bayesian phrases such as “the degree of belief” sound to me entirely ghostlike unless this “degree” is verifiable via some frequentist practical test, at least in principle. In this sense, I do confess frequentism. (Though I don’t wish to subscribe to all the related baggage from various controversies in statistics, much of which is frankly over my head.)
That depends on the concrete problem under consideration, and on the thinker who is considering it. The thinker’s brain produces an answer alongside a more or less fuzzy feeling of confidence, and the human language has the capacity to express these feelings with about the same level of fuzziness as that signal. It can be sensible to compare intuitive confidence levels, if such comparison can be put to a practical (i.e. frequentist) test. Eight ordered intuitive levels of certainty might perhaps be too much, but with, say, four levels, I could produce four lists of predictions labeled “almost impossible,” “unlikely,” “likely,” and “almost certain,” such that common sense would tell us that, with near-certainty, those in each subsequent list would turn out to be true in ever greater proportion.
If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified in the sense discussed above under (1) and (2). This requires justification both in the sense of defining what aspect of reality they refer to (where frequentism seems like the only answer), and guaranteeing that they will be accurate under empirical tests. If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice, and there are only two major exceptions.
The first possible path towards accurate calibration is when the same person performs essentially the same judgment many times, and from the past performance we extract the frequency with which their brain tends to produce the right answer. If this level of accuracy remains roughly constant in time, then it makes sense to attach it as the probability to that person’s future judgments on the topic. This approach treats the relevant operations in the brain as a black box whose behavior, being roughly constant, can be subjected to such extrapolation.
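This first path amounts to a simple frequency extraction, which can be sketched as follows (a hypothetical illustration; the track-record data are invented):

```python
# Hypothetical sketch of the first calibration path: treat the judge's
# brain as a black box and extract the frequency with which it has
# produced correct answers on past judgments of the same kind.
past_judgments = [True, True, False, True, True,
                  True, False, True, True, True]

# Empirical accuracy over the past record.
accuracy = sum(past_judgments) / len(past_judgments)
print(accuracy)  # 0.8

# If this rate stays roughly constant over time, it gets attached as
# the probability that the person's next such judgment is correct.
```

Note that this only works under the stated conditions: the judgments must be of essentially the same kind, and the accuracy must be stable enough to extrapolate.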
The second possible path is reached when someone has a sufficient level of insight about some problem to cross the fuzzy limit between common-sense thinking and an actual scientific model. Increasingly subtle and accurate thinking about a problem can result in the construction of a mathematical model that approximates reality well enough that when applied in a shut-up-and-calculate way, it yields probability estimates that will be subsequently vindicated empirically.
(Still, deciding whether the model is applicable in some particular situation remains a common-sense problem, and the probabilities yielded by the model do not capture this uncertainty. If a well-established physical theory, applied by competent people, says that p=0.9999 for some event, common sense tells me that I should treat this event as near-certain -- and, if repeated many times, that it will come out the unlikely way very close to one in 10,000 times. On the other hand, if p=0.9999 is produced by some suspicious model that looks like it might be a product of data-dredging rather than real insight about reality, common sense tells me that the event is not at all certain. But there is no way to capture this intuitive uncertainty with a sensible number. The probabilities coming from calibration of repeated judgment are subject to analogous unquantifiable uncertainty.)
There is also a third logical possibility, namely that some people in some situations have precise enough intuitions of certainty that they can quantify them in an accurate way, just like some people can guess what time it is with remarkable precision without looking at the clock. But I see little evidence of this occurring in reality, and even if it does, these are very rare special cases.
I disagree with this, as explained above. Calibration can be done successfully in the special cases I mentioned. However, in cases where it cannot be done, which includes the great majority of the actual beliefs and conclusions made by human brains, devising numerical probabilities makes no sense.
This should be clear from the answer to (3).
[Continued in a separate comment below due to excessive length.]
[Continued from the parent comment.]
I have revised my view about this somewhat thanks to a shrewd comment by xv15. The use of unjustified numerical probabilities can sometimes be a useful figure of speech that will convey an intuitive feeling of certainty to other people more faithfully than verbal expressions. But the important thing to note here is that the numbers in such situations are mere figures of speech, i.e. expressions that exploit various idiosyncrasies of human language and thinking to transmit hard-to-convey intuitive points via non-literal meanings. It is not legitimate to use these numbers for any other purpose.
Otherwise, I agree. Except in the above-discussed cases, subjective probabilities extracted from common-sense reasoning are at best an unnecessary addition to arguments that would be just as valid and rigorous without them. At worst, they can lead to muddled and incorrect thinking based on a false impression of accuracy, rigor, and insight where there is none, and ultimately to numerological pseudoscience.
Also, we still don’t know whether and to what extent various parts of our brains involved in common-sense reasoning approximate Bayesian networks. It may well be that some, or even all of them do, but the problem is that we cannot look at them and calculate the exact probabilities involved, and these are not available to introspection. The fallacy of radical Bayesianism that is often seen on LW is in the assumption that one can somehow work around this problem so as to meaningfully attach an explicit Bayesian procedure and a numerical probability to each judgment one makes.
Note also that even if my case turns out to be significantly weaker under scrutiny, it may still be a valid counterargument to the frequently voiced position that one can, and should, attach a numerical probability to every judgment one makes.
So, that would be a statement of my position; I’m looking forward to any comments.
It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.
The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', a number which you can test empirically.
Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian probability.
So, if you want to deprecate as "meaningless" my estimate that the Democrats have a 40% chance to maintain their House majority in the next election, go ahead. But you cannot then also deprecate my estimate that the Republicans have a 70% chance of reaching a House majority. Because the conjunction of those two probability estimates is not meaningless. It is quite respectably false.
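The incoherence of that pair can be checked mechanically. A minimal sketch (the helper name is mine; the two numbers are from the comment), assuming the two outcomes are mutually exclusive:

```python
# For mutually exclusive outcomes, a single coherent belief state must
# assign probabilities summing to at most 1.

def coherent_exclusive(p_a, p_b):
    """True if p_a and p_b could coexist as probabilities of mutually
    exclusive events in one coherent belief state."""
    return 0 <= p_a and 0 <= p_b and p_a + p_b <= 1

# 40% Democrats keep the House, 70% Republicans take it: 1.10 total.
print(coherent_exclusive(0.40, 0.70))  # False
```

Each estimate alone is unfalsifiable in isolation, but the pair jointly violates the probability axioms, which is exactly the sense in which it is "respectably false."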
Eliezer Yudkowsky is evil. He trains rationalists and involves them in FAI and x-risk for some hidden egoistic goal, other than saving the world and making people happy. Most people would not want him to reach that goal if they knew what it was. There is a grand masterplan. The money we're giving to CFAR and MIRI isn't going into AI research so much as into that masterplan. You should study rationality via means different from LW, OB and everything nearby, or not study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?
The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)
Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.
Conditional on this universe being a simulation, the universe doing the simulating has laws vastly different from our own. For example, it might contain more than 3 extended spatial dimensions, or bear a similar relation to our universe as our universe does to Second Life. 99.999%
Upvoted for disagreement. The most detailed simulations our current technology is used to create (namely, large networks of computers operating in parallel) are created for research purposes, to understand our own universe better. Galaxy/star formation, protein folding, etc. are fields where we understand enough to make a simulation but not enough that such a simulation is without value. A lot of our video games have three spatial dimensions, one temporal one, and roughly Newtonian physics. Even Second Life (which you named in your post) is designed to resemble our universe in certain aspects.
Basically, I fail to see why anyone would create such a detailed simulation if it bore absolutely no resemblance to reality. Some small differences, yes (I bet quantum mechanics works differently), but I would give a ~50% chance that, conditional on our universe being a simulation, the parent universe has 3 spatial dimensions, one temporal dimension, matter and antimatter, and something that approximates to General Relativity.
I have seen simulators of Conway’s Game of Life (or similar) that contain very complex things, including an actual Turing machine.
I could see someone creating a simulator for CGL that simulates a Turing machine that simulates a universe like ours, at least as a proof of concept. With ridiculous amounts of computation available I’m quite sure they’d run the inner universe for a few billion of years.
If by accident a civilization arises in the bottom universe and they found some way of “looking above” they’d find a CGL universe before finding the one similar to theirs.
I disagree with this one more than any other comment by far. Have you looked into Tegmark level 4 cosmology? It's really important to take into account concepts like measure and the utility functions of likely simulating agents when reasoning about this kind of thing. Upvoted.
I'm supposed to downvote if I think the probability of that is >= 99.999% and upvote otherwise? I'm upvoting, but I still think the probability of that is > 90%.
Army1987: Not sure what the rules are for comments replying to the original, but hell. Voted down for agreement.
(I think we just vote normally in these replies. I agree with army too.)
Why in the world would the parent be downvoted? I'm having difficulty unraveling the paradox.
Well, someone might agree with wedrifid (that second-order comments are to be voted on normally) but still disapprove of his comment for reasons other than disagreement (for example, think it clarifies what would otherwise have been a valuable point of confusion), and downvote (normally) on that basis.
Religion is a net positive force in society. Or to put it another way religious memes, (particularly ones that have survived for a long time) are more symbiotic than parasitic. Probably true (70%).
Nobody has ever come up with the correct solution to how Eliezer Yudkowsky won the AI-Box experiment in less than 15 minutes of effort. (This includes Eliezer himself). (75%)
Previous survey on this topic: http://lesswrong.com/lw/2l/closet_survey_1/
I think that there are better-than-placebo methods for causing significant fat loss. (60%)
ETA: apparently I need to clarify.
It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.
voted up because 60% seems WAAAAAYYYY underconfident to me.
Upvoted, because I say diet and exercise work at 85% (for a significant fraction of people; there may be some with unlucky genes who can't lose weight that way).
Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.
Furthermore: if the above is false, it will be proven such within thirty years. If the above is true, it will become the majority position among both natural scientists and academic philosophers within thirty years. Barring AI singularity in both cases. Confidence level 70%.
Confidence level?
Let's say 65%.
Unless you are familiar with the work of a German patent attorney named Gunter Wachtershauser, just about everything you have read about the origin of life on earth is wrong. More specifically, there was no "prebiotic soup" providing organic nutrient molecules to the first cells or proto-cells, there was no RNA world in which self-replicating molecules evolved into cells, and the Miller experiment is a red herring: the chemical processes it deals with never happened on earth until Miller came along. Life didn't invent proteins until a long time after it first originated, 500 million years or so, about as long as the time from the "Cambrian explosion" to us.
I'm not saying Wachtershauser got it all right. But I am saying that everyone else except people inspired by Wachtershauser definitely got it all wrong. (70%)
I have no idea whether to disagree with this or not (the Wiki god barely has any info on the guy!) but I'm tempted to downvote this anyway for being so provocative! ;)
This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.
We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.
(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)
So: you think there's a god who created the universe?!?
Care to lay out the evidence? Or is this not the place for that?
Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.
Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.
Pope Francis will do more good than harm in the world. (80%)
Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.
Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.
There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.
I don't understand..By what plausible mechanism could such a disastrous loss of knowledge happen specifically NOW?
The good news is that some version of this knowledge keeps getting rediscovered.
The bad news is that the knowledge seems to be mostly tacit and (so far) unteachable.
Down voted because I think this is very plausible.
Life on earth was seeded, accidentally or on purpose, from outer space.
No probability estimate. I assign this hypothesis some probability, but unless you list yours I can only guess as to whether it is similar to mine.
Mine is quite low, however, so upvoted.
There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)
It does not all add up to normality. We are living in a weird universe. (75%)
Metadiscussion: Reply to this comment to discuss the game itself, or anything else that's not a proposition for upvotes/downvotes.
This post makes the recent comments thread look seriously messed up!
the joint stock corporation is the best* system of peacefully organizing humans to achieve goals. the closer governmental structure conforms to a joint-stock system the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)
*that humans have invented so far
Open source.
The pinnacle of cryonics technology will be a time machine that can at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)
There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)
Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.
I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.
(I agree with your premise, but not your conclusion.)
To directly address your point - what I mean is if you have 1 computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feeling of low status and worthlessness.
I probably stole from The Economist too.
Did you have this in mind? Cognitive Surplus.
Yes - thank you for the cite.
Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)
Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.
Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.
The many worlds interpretation of Quantum Mechanics is false in the strong sense that the correct theory of everything will incorporate wave-function collapse as a natural part of itself. ~40%
A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).
Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).
Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).
I want to upvote each of these points a dozen times. Then another few for the first.
It's the most stable equilibrium I can conceive of, i.e. more stable than if all evidence of life was obliterated from the universe.
The hard problem of consciousness will be solved within the next decade (60%).
All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)
Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)
This prediction isn't falsifiable -- the word "crazy" is not precise enough, and the word "sufficient" is a loophole you can drive the planet Jupiter through.
I'm trying to figure out what this statement means. What would the universe look like if it were false?
You can't. We live in an intrinsically meaningless universe, where all statements are intrinsically meaningless. :-)
In context, I took it to predict something like "Above a certain limit, as a system becomes more intelligent and thus more able to discern the true nature of existence, it will become less able to motivate itself to achieve goals."
I'm not sure it's a bug if "all existence is meaningless" turns out to be meaningless.
Aren't you supposed to separate distinct predictions? Edit: don't see it in the rules, so remainder of post changed to reflect.
I upvote the second prediction - the existence of self-aware humans seems evidence of overconfidence, at the very least.
But humans are crazy! Aren't they?
Eating lots of bacon fat and sour cream can reverse heart disease. Very confident (>95%).
I doubt you are following this rule.
Downvoted. I've seen the evidence, too.
Downvoted means you agree (on this thread), correct? If so, I've wanted to see a post on rationality and nutrition for a while (on the benefits of high-animal fat diet for health and the rationality lessons behind why so many demonize that and so few know it).
The most advanced computer that it is possible to build with the matter and energy budget of Earth, would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI.(80%, which is obscenely high for an assertion of impossibility about which we have only speculation). Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life.(Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)
NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.
I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.
I disagree because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for underconfidence.
I believe that the universe exists tautologically as a mathematical entity, and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a top-level article at some point. Virtual certainty (99.9%).
This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.
In more detail:
Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.
Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.
So the map territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.
What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)
Upvoted for 'not even being wrong'.
There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).
How do the votes work in this game again? "Upvote for insane", right?
There's no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.
Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.
Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our/other's behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.
Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic-qualia that adds up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%
Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.
Can you rephrase this statement tabooing the words experience and qualia.
If he could, he wouldn't be making that mistake in the first place.
75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.
At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.
(Edited for clarity.)
(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)
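For concreteness, the two claimed threshold formulas work out like this (the population figure is my own example, and the function names are mine):

```python
import math

def tm_threshold(pop):
    """Group size claimed to suffice for TM: 1% of the population."""
    return 0.01 * pop

def tm_sidhis_threshold(pop):
    """Group size claimed to suffice for TM-Sidhis: sqrt(1% of population)."""
    return math.sqrt(0.01 * pop)

pop = 1_000_000
print(tm_threshold(pop))         # 10000.0
print(tm_sidhis_threshold(pop))  # 100.0
```

The square-root version makes the claimed required group dramatically smaller: for a city of a million, 100 meditators rather than 10,000.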
There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domains. As a corollary, AI will not go FOOM. (80% confident)
EDIT: Quote from here
Sure there is - see:
The only assumption about the environment is that Occam's razor applies to it.
Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed", this was already discussed in other comments.
IMO, it is best to think of power and breadth being two orthogonal dimensions - like this.
The idea of general intelligence not being practical for resource-limited agents is apparently one that mixes up these two dimensions, whereas it is best to see them as being orthogonal. Or maybe there's the idea that if you are broad, you can't be very deep, and be able to be computed quickly. I don't think that idea is correct.
I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can.
I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.
That is a very good point, with wideness orthogonal to power.
Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.
Do you behave intelligently in domains you were not specifically designed(/selected) for?
Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%
God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)
You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.
Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!
It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.
I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?
We should learn to present this argument correctly, since the complexity of a hypothesis doesn't by itself imply its improbability. Furthermore, the prior argument drives probability through the floor, making 99% no more surprising than 1%, and is thus an incorrect argument if you wouldn't use it for 1% as well (would you?).
I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.
Wow, telepathy is a pretty big thing to discuss. Sure there isn't a simpler hypothesis? Upvoted.
Flying saucers are real. They are likely not nuts-and-bolts spacecrafts, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)
Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.
I upvoted you because 95% is way high, but I agree with you that it's non-negligible. There's way too much weirdness in some of the cases to be easily explainable by mass hysteria or hoaxes or any of that stuff - and I'm glad you pointed out Fatima, because that was the one that got me thinking, too.
That having been said, I don't know what they are. Best guess is easter eggs in the program that's simulating the universe.
And do you believe in Santa Claus, too? :P
I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for awhile (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.
Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)
The resemblance is shallow at best.
Don't joke posts ruin the point of the Irrationality Games?
In any case you are taking the wrong approach. Clearly it is ultimately the fault of the Jews because they run everything, no further thought required.
I'm truly not joking!!! You know perfectly well that I don't share much of what's commonly known as "sanity". So to me it's worthy of totally non-ironic consideration..
I'm sorry for the misunderstanding. I think my brain misfired because the theory involved a video game.
Can you elaborate on it? Also this probably isn't the only such incident you think is plausible, can you name others?
If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)
What reason do you have for assigning such high probability to time travel being possible?
nick voted up, robin voted down... This feels pretty weird.
And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation?
;)
Edit: I meant what reason do you (nic12000) have? Not you (RobinZ). Sorry for the confusion.
I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability.
Edit: Of course, evidence for that 95%+ would be appreciated.
If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?
The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.
The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)
Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."
Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.
As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.
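A rough figure for how much drop there is to see (the formula is the standard small-angle approximation; the distances are my own examples): over a horizontal distance d, a sphere of radius R falls below a level sightline by about d²/(2R), so a couple of hundred metres gives only millimetres of drop while a few kilometres gives something a telescope and a marked sign can resolve.

```python
# Approximate drop of the Earth's surface below a level sightline,
# h ~= d**2 / (2 * R). All distances in metres.

R_EARTH = 6_371_000  # mean radius of the Earth, metres

def curvature_drop(d):
    """Approximate drop (metres) over horizontal distance d (metres)."""
    return d ** 2 / (2 * R_EARTH)

for d in (200, 1_000, 5_000):
    print(d, round(curvature_drop(d), 3))
```

At 5 km the drop is close to two metres, which is why the experiment wants a long stretch of calm water and a distance scale rather than the naked eye at short range.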
Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.
Upvoted.
Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95% That something was conscious. 90%