Flying saucers are real. They are likely not nuts-and-bolts spacecraft, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)
Please note that this comment has been upvoted because the members of Less Wrong widely DISAGREE with it. See here for details.
Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers that click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.
If there are multiple witnesses who can see each other's reactions, it's a good candidate for mass hysteria.
To be fair to the aliens, the actions of Westerners probably seem equally weird to Sentinel Islanders. Coming every couple of years in giant ships or helicopters to watch them from afar, and then occasionally sneaking into abandoned houses and leaving gifts?
Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%
I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.
If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A modern approach, and he has a link to the SIAI on his home page.
Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.
Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic-qualia that adds up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%
Please note that this comment has been upvoted because the members of Less Wrong widely DISAGREE with it. See here for details.
This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.
We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.
(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)
I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!
The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.
The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)
Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."
Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.
As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.
Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.
Upvoted.
If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the use of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time-travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)
If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?
God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)
I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:
Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3 the claim is overconfident, and that was what I meant when I gave my response.
But yes - I'm not good at arguing.
There's no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.
Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.
Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our own or others' behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.
"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.
How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)
A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).
Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).
Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).
75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.
At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis (sketched below).
(Edited for clarity.)
(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)
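For concreteness, here is a minimal sketch in Python of the threshold rule claimed above (the function names and the example population are illustrative, not from the comment):

```python
import math

def tm_threshold(population):
    # Claimed threshold for a plain TM group: 1% of the total population.
    return 0.01 * population

def tm_sidhi_threshold(population):
    # Claimed threshold for a TM-Sidhis group: sqrt of 1% of the population.
    return math.sqrt(0.01 * population)

# For a city of 1,000,000 people, the claim predicts the effect appears
# with ~10,000 TM practitioners but only ~100 TM-Sidhis practitioners.
print(tm_threshold(1_000_000))        # 10000.0
print(tm_sidhi_threshold(1_000_000))  # 100.0
```

For large populations the TM-Sidhis threshold is dramatically smaller, which is what makes the secondary claim distinctive.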
Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)
I think 40% is about right for China to do something about that unlikely-sounding in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.
There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)
The pinnacle of cryonics technology will be a time machine that can, at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)
No. I intend to revive one. Possibly all four, if necessary. Consider it thawing technology so advanced it can revive even the pyronics crowd.
What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)
What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)
What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true and saying that they're meaningless side effects?
I think that there are better-than-placebo methods for causing significant fat loss. (60%)
ETA: apparently I need to clarify.
It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.
The joint stock corporation is the best* system of peacefully organizing humans to achieve goals. The closer governmental structure conforms to a joint-stock system, the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)
*that humans have invented so far
Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)
(Absolutely certain.)
I'm not sure whether to chide you or giggle at the self-reference. I suspect, though, that "absolutely certain" is not a confidence level.
assigning numerical probabilities to common-sense conclusions and beliefs is meaningless
It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.
The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', a number which you can test empirically.
Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian prob...
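A minimal sketch of the kind of combination being gestured at (Python; the prior and the likelihoods are illustrative assumptions, not numbers from the comment):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Combine a subjective prior with empirically measured likelihoods.
    # The isolated number `prior` becomes meaningful exactly when it is
    # plugged into a calculation with other numbers, as argued above.
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A 0.1 subjective prior, combined with evidence that occurs 80% of the
# time when the hypothesis is true and 20% of the time when it is false:
print(bayes_update(0.1, 0.8, 0.2))  # ~0.31
```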
Translate your vague feeling of certainty into a number in some arbitrary manner. This, however, makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.
Disagree here. Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.
Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.
Disagree here also. The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.
For example, consider my current probability estimate of 10^(-3) that Aman...
Let's see if we can try to hug the query here. What exactly is the mistake I'm making when I say that I believe such-and-such is true with probability 0.001?
Is it that I'm not likely to actually be right 999 times out of 1000 occasions when I say this? If so, then you're (merely) worried about my calibration, not about the fundamental correspondence between beliefs and probabilities.
Or is it, as you seem now to be suggesting, a question of attire: no one has any business speaking "numerically" unless they're (metaphorically speaking) "wearing a lab coat"? That is, using numbers is a privilege reserved for scientists who've done specific kinds of calculations?
It seems to me that the contrast you are positing between "numerical" statements and other indications of degree is illusory. The only difference is that numbers permit an arbitrarily high level of precision; their use doesn't automatically imply a particular level. Even in the context of scientific calculations, the numbers involved are subject to some particular level of uncertainty. When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing betwe...
I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.
If we replaced "mystical experiences" with something of less religious connotations like "raging hard-ons", you wouldn't think that 'souls brushing up against each other' is the most natural explanation -- you'd instead conclude that some aspect of psychology/biochemistry/pheromones is causing you to have a more intense reaction towards certain people and vice-versa.
From a physicalist perspective the brain is as much an organ as the penis, and "mystical experiences" as much a physical event in the brain as erections are a physical event in the penis.
The many worlds interpretation of Quantum Mechanics is false in the strong sense that the correct theory of everything will incorporate wave-function collapse as a natural part of itself. ~40%
Religion is a net positive force in society. Or to put it another way, religious memes (particularly ones that have survived for a long time) are more symbiotic than parasitic. Probably true (70%).
Around the time of J. S. Mill, I think. The Industrial Revolution helped crystallize an elite political and academic movement which had the germs of scientific and quantitative thinking; but this movement has been far too busy fighting for its life each time it conflicts with religious mores, instead of being able to examine and improve itself. It should have developed far more productively by now if atheism had really caught on in Victorian England.
Anyway, I'm not as confident of the above as I am that we've passed the crossover point now. (Aside from the obvious political effects, the persistence of religion creates mental antibodies in atheists that make them extremely wary of anything reminiscent of some aspect of religion; this too is a source of bias that wouldn't exist were it not for religion's ubiquity.)
Note that it is in general very hard to tell if the artistic and cultural contributions associated with religion are actually due to religion. In highly religious cultures that's often the only form of expression that one is able to get funding for. Dan Barker wrote an essay about this showing how a lot of classical composers were agnostics, atheists or deists who wrote music with religious overtones mainly because that was their only option.
There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).
Unless you are familiar with the work of a German patent attorney named Gunter Wachtershauser, just about everything you have read about the origin of life on earth is wrong. More specifically, there was no "prebiotic soup" providing organic nutrient molecules to the first cells or proto-cells, there was no RNA world in which self-replicating molecules evolved into cells, the Miller experiment is a red herring and the chemical processes it deals with never happened on earth until Miller came along. Life didn't invent proteins for a long time after life first originated. 500 million years or so. About as long as the time from the "Cambrian explosion" to us.
I'm not saying Wachtershauser got it all right. But I am saying that everyone else except people inspired by Wachtershauser definitely got it all wrong. (70%)
Meh. What are the chances of some Germanic guy sitting around looking at patents all day coming up with a theory that revolutionizes some field of science?
Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)
There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)
Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.
I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.
(I agree with your premise, but not your conclusion.)
Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.
Conditional on this universe being a simulation, the universe doing the simulating has laws vastly different from our own. For example, it might contain more than 3 extended spatial dimensions, or bear a similar relation to our universe as our universe does to Second Life. 99.999%
I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a top-level article at one point. Virtual certainty (99.9%).
Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.
Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.
There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.
Predicated on MWI being correct, and Quantum Immortality being true:
It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%
Talent is mostly a result of hard work, passion and sheer dumb luck. It's more nurture than nature (genes). People who are called born-geniuses more often than not had better access to facilities at the right age while their neural connections were still forming. (~90%)
Update: OK. It seems I have to substantiate. Take the case of Barack Obama. Nobody would've expected a black guy to become the US President 50 years ago. Or take the case of Bill Gates, Bill Joy or Steve Jobs. They just happened to have the right kind of technological exposure at an early age and were ready when the technology boom arrived. Or take the case of mathematicians like Fibonacci, Cardano, the Bernoulli brothers. They were smart. But there were other smart mathematicians as well. What separates them is the passion and the hard work and the time when they lived and did the work. A century earlier, they would've died in obscurity after being tried and tortured for blasphemy. Take Mozart. He didn't start making beautiful original music until he was twenty-one, by which time he had enough musical exposure that there was no one to match him. Take Darwin and think what he would have become if he hadn't boarded the Beagle. He would have been some pastor studying bugs and would've died in obscurity.
In short a genius is made not born. I'm not denying that good genes would help you with memory and learning, but it takes more than genes to be a genius.
I was with you right up until that second sentence. And then I thought about my sister who was speaking in full sentences by 1 and had taught herself to read by 3.
Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95% That something was conscious. 90%
The most advanced computer that it is possible to build with the matter and energy budget of Earth, would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI.(80%, which is obscenely high for an assertion of impossibility about which we have only speculation). Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life.(Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)
NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.
I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.
The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)
Far too confident.
The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.
I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.
All right, I'll try to mount a defence.
I would be modestly surprised if any member of Congress has an IQ below 100. You just need to have a bit of smarts to get elected. Even if the seat you want is safe, i.e. repeatedly won by the same party, you likely have to win a competitive primary. To win elections you need to make speeches, answer questions, participate in debates and so on. It's hard. And you'll have opponents that are ready to pounce on every mistake you make and try to make a big deal out of it. Even smart people make lots of mistakes and say stupid things when put on the spot. I doubt a person of below average intelligence even has a chance.
Even George W. Bush, who's said and done a lot of stupid things and is often considered dim for a politician, likely has an IQ above 120.
As for decency and honesty, a useful rule of thumb is that most people are good. Crooked people are certainly a significant minority but most of them don't hide their crookedness very well. And you can't be visibly crooked and still win elections. Your opponents are motivated to dig up the dirt on you.
As for honestly trying to serve their country I admit that this is a bit tricky. Congresspeople certa...
All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)
Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)
The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)
Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.
The many-worlds interpretation of quantum physics is wrong. Reasonably certain (80%).
I suppose the MWI is an artifact of our formulation of physics, where we suppose systems can be in specific states that are indexed by several sets of observables. I think there is no such thing as a state of the physical system.
You realize, of course, that your confidence level is too high. Eventually, the score should cycle between +9 and +10. Which means that the correct confidence level should be 50%.
Nonetheless, it is very cute. So, I'll upvote it for overconfidence, to say nothing of currently being wrong.
The gaming industry is going to be a major source of funding* for AGI research projects in the next 20 years. (85%)
*By "major" I mean contributing enough to have good odds of causing actual progress. By gaming industry I include joint ventures, so long as the game company invested a nontrivial portion of the funding for the project.
EDIT: I am referring to video game companies, not casinos.
Opponents can be done reasonably well with even the simple AI we have now. The killer app for gaming would be AI characters who can respond meaningfully to the player talking to them, at the level of actually generating new responses of prewritten-game-plot quality based on the stuff the player comes up with during the game.
This is quite different from chatbots and their ilk, I'm thinking of complex, multiagent player-instigated plots such as the player convincing AI NPC A to disguise itself as AI NPC B to fool AI NPC C who is expecting to interact with B, all without the game developer having anticipated that this can be done and without the player feeling like they have gone from playing a story game to hacking AI code.
So I do see a case here. The game industry has thus far been very conservative about weird AI techniques, but since cutting-edge visuals seem to be approaching diminishing returns, there could be room for a gamedev enterprise going for something very different. The big problem is that while sorta-there visuals can be pretty impressive, sorta-there general NPC AI will probably look quite weird and stupid in a game plot.
Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many enigmas and religious belief in general. (25%)
There will be a net positive to society by measures of overall health, wealth and quality of life if the government capped reproduction at a sustainable level and distributed tradeable reproductive credits for that amount to all fertile young women. (~85% confident)
Eliezer Yudkowsky is evil. He trains rationalists and involves them in FAI and x-risk for some hidden egoistic goal, other than saving the world and making people happy. Most people would not want him to reach that goal, if they knew what it is. There is a grand masterplan. The money we're giving to CFAR and MIRI isn't going into AI research as much as into that masterplan. You should study rationality via means different from LW, OB and everything nearby, or not study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?
Between (edit:) 10% and 0.1% of college students understand any mathematics beyond elementary arithmetic above the level of rote calculation. ~95%
I think that "personal identity" and "consciousness" are fundamentally incoherent concepts. Reasonably confident (~80%)
The amount of consciousness that a neural network S has is given by phi=MI(A^H_max;B)+MI(A;B^H_max), where {A,B} is the bipartition of S which minimises the right hand side, A^H_max is what A would be if all its inputs were replaced with maximum-entropy noise generators and MI(A,B)=H(A)+H(B)-H(AB) is the mutual information between A and B and H(A) is the entropy of A. 99.9%
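For reference, a minimal sketch of the mutual-information building block MI(A;B) = H(A) + H(B) - H(AB) that the formula is assembled from (Python; the joint distribution is a toy assumption, and the H_max noise injection and the minimization over bipartitions are deliberately not implemented):

```python
import math

def entropy(probs):
    # Shannon entropy in bits of a discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    # MI(A;B) = H(A) + H(B) - H(AB), where joint[a][b] = P(A=a, B=b).
    p_a = [sum(row) for row in joint]        # marginal over A
    p_b = [sum(col) for col in zip(*joint)]  # marginal over B
    p_ab = [p for row in joint for p in row] # flattened joint
    return entropy(p_a) + entropy(p_b) - entropy(p_ab)

# Toy joint distribution over two correlated binary subsystems A and B:
joint = [[0.4, 0.1],
         [0.1, 0.4]]
print(mutual_information(joint))  # ~0.278 bits
```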
The Big Bang is not how our universe was created. Our universe was created by a naturalistic event that we have not yet seriously theorised, due to a lack of scientific knowledge. (15%)
Richard Dawkins' genocentric ("Selfish Gene") view is a bad metaphor for most of what happens with sufficiently advanced life forms. The organism-centered view is a much better metaphor. New body forms and behaviors first appear in the phenotype, in response to a changing environment. Later, they get "written" into the genotype if the new environment persists for enough time. The Baldwin effect is ubiquitous. (60%)
"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)
More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)
NOTE: This comment is a re-post. I initially posted it in the "Comments on Irrationality Game" thread because I'm a moron. Sorry about that.
What's with all this 'infinite utility/disutility' nonsense? Utility is a measure of preference, and 'preference' itself is a theoretical construct used to predict future decisions and actions. No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it, which (barring hyperinflation so cataclysmic that some government starts issuing banknotes with aleph numbers on them, and further market condit...
As:
formal complexity [http://en.wikipedia.org/wiki/Complexity#Specific_meanings_of_complexity] is inherent in many real-world systems that are apparently significantly simpler than the human brain,
and the human brain is perhaps the third most complex phenomenon yet encountered by humans [the brain is a subset of the ecosystem, which is a subset of the universe],
and a characteristic of complexity is that predicting outcomes requires greater computational resources than simply letting the system provide its own answer,
any attempt to predict the outcome of a successful AI implementation is speculative. 80% confident
The natural world is only different from other mathematically describable worlds in content not in type. Any universe that is described by some mathematical system has the same ontological status as the one that we experience directly. (90% about)
Most vertebrates have at least some moral worth; even most of the ones that lack self-concepts sufficiently strong to have any real preference to exist (beyond any instinctive non-conceptualized self-preservation) nevertheless are capable of experiencing something enough like suffering that they impinge upon moral calculations at least a little bit. (85%)
1 THz semiconductor-based computing will prove to be impossible. ~50%
(Note for the optimistic: I expect multiplying cores will continue to increase consumer computer performance for some years after length-scale limitations on clock rate are reached.)
Metadiscussion: Reply to this comment to discuss the game itself, or anything else that's not a proposition for upvotes/downvotes.
Nobody has ever come up with the correct solution to how Eliezer Yudkowsky won the AI-Box experiment in less than 15 minutes of effort. (This includes Eliezer himself). (75%)
I believe that virtually perfect gender egalitarianism will not be achieved within my lifetime in the United States with certainty of 90%.
This depends on the assumption that I will only live at most about eighty more years, i.e. that the transhumanist revolution will not occur within that time and that I am either not frozen or fail to thaw. My belief in that assumption is 75%.
The corporation, as such entities are legally defined in most countries at the present time, is a major contributor to a kind of "astronomical waste". Alternate forms for organizing trade exist that would require only human-level intelligence to find and would yield much greater total prosperity than does having the corporation as the unit of organization.
(Strong hunch, >70%)
Utilitarianism is impossible to even formulate precisely in a logically coherent way. (Almost certain.)
Even if some coherent formulation of utilitarianism can be found, applying it in practice requires belief in fictional metaphysical entities. (Absolutely certain.)
Finally, as a practical philosophy, utilitarianism is pernicious because it represents exactly the sort of quasi-rational thinking that is apt to mislead otherwise very smart people into terrible folly. (Absolutely certain.)
Cryonics does not maximize expected utility. (approx. 65%)
Edit: wording changed for clarity
Edit #2: Correct wording should be "Cryonics does not maximize your (the reader's) expected utility. (approx. 65%)"
Please read the post before voting on the comments, as this is a game where voting works differently.
Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.
Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.
Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.
Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."
If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
That's the spirit of the game, but some more qualifications and rules follow.
If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.
The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.
Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.
Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?
Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.
Additional rules: