Anthropics and a cosmic immune system
Some people like to assume that the cosmos is ours for the taking, even though this could make us special to the order of 1 in 10^80. The argument is that the cosmos could be transformed by technology - engineered on astronomical scales - but hasn't been thus transformed.
The most common alternative hypothesis is that "we are in a simulation". Perhaps we are. But there are other possibilities too.
One is that technological life usually destroys, not just its homeworld, but its whole bubble of space-time, by using high-energy physics to cause a "vacuum decay", in which physics changes in a way that makes space uninhabitable. For example, the mass of an elementary fermion is essentially the vacuum value of the Higgs field times a quantity called a "Yukawa coupling". If the Higgs field's vacuum value increased by orders of magnitude, but the Yukawa couplings stayed the same, matter as we know it would be destroyed everywhere the change spread.
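For reference, a sketch of the standard relation (with v the Higgs vacuum expectation value, about 246 GeV, and y_f the Yukawa coupling of fermion f):

```latex
m_f = \frac{y_f \, v}{\sqrt{2}}
```

Scaling v up by orders of magnitude while holding each y_f fixed scales every fermion mass by the same factor, which is the disaster scenario sketched above.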
Here I want to highlight a different possibility. The idea is that the universe contains very large lifeforms and very small lifeforms. We are among the small. The large ones are, let's say, mostly dark matter, galactic in scale, and stars and planets for them are like biomolecules for us; tiny functional elements which go together to make up the whole. And - the crucial part - they have immune systems which automatically crush anything which interferes with the natural celestial order.
This is why the skies are full of untamed stars rather than Dyson spheres - any small life which begins to act on that scale is destroyed by dark-matter antibodies. And it explains anthropically why you're human-size rather than galactic-size: small life is more numerous than large life, just not so numerous as cosmic colonization would imply.
Two questions arise - how did large life evolve, and, shouldn't anthropics favor universes which have no large life, just space-colonizing small life? I could spin a story about cosmological natural selection, and large life which uses small life to reproduce, but it doesn't really answer the second question, in particular. Still, I feel that this is a huge unexplored topic - the anthropic consequences of "biocosmic" ecology and evolution - and who knows what else is lurking here, waiting to be discovered?
Optimal rudeness
On LessWrong, we often get cross, and then rude, with each other. Sometimes, someone then observes that this rudeness is counterproductive.
Is it?
As a general rule, emotional responses are winning strategies (at least for your genes). That's why you have those emotions.
Granted, insulting someone during your rebuttal of their argument makes it less likely that they will see your point. But it appears to be an effective tactic when carrying on an argument in public.
It's my impression that on LessWrong, a comment or a post written with a certain amount of disdain is more likely to get voted up than a completely objective comment. A good way to obtain upvotes, if that is your goal, is to make other readers wish to identify with you and disassociate themselves from whomever you're arguing against. A great many up-voted comments, including some of my own, suggest, subtly or not subtly, with or without evidence, that the person being responded to is ignorant or stupid.
The correct amount of derision appears to be slight, and to depend on status. Someone with more status should be more rude. Retaliations against rudeness may really be retaliations for an attempt to claim high status.
What's the optimal response if someone says something especially rude to you? Is a polite or a rude response to a rude comment more likely to be upvoted/downvoted? Not ideally, but in reality. I think, in general, when dealing with humans, responding to skillful rudeness, and especially humorous rudeness, with politeness, is a losing strategy.
My expectation is that rudeness is a better strategy for poor and unpopular arguments than for good or popular ones, because rudeness adds noise. The lower a comment's expected karma, the ruder it should be.
You jerk.
That Thing That Happened
I am emotionally excited and/or deeply hurt by what st_rev wrote recently. You better take me seriously because you've spent a lot of time reading my posts already and feel invested in our common tribe. Anecdote about how people are tribal thinkers.
That thing that happened shows that everything I was already advocating for is correct and necessary. Indeed it is time for everyone to put their differences aside and come together to carry out my recommended course of action. If you continue to deny what both you and I know in our hearts to be correct, you want everyone to die and I am defriending you.
I don't even know where to begin. This is what blueist ideology has been working towards for decades if not millennia, but to see it written here is hard to stomach even for one as used to the depravity caused by such delusions as I am. The lack of socially admired virtues among its adherents is frightening. Here I introduce an elaborate explanation of how blueist domination is not just completely obvious and a constant thorn in the side of all who wish more goodness but is achieved by the most questionable means, often citing a particular blogger or public intellectual who I read in order to show how smart I am and because people I admire read him too. Followed by an appeal to the plot of a movie. Anecdote from my personal life. If you are familiar with the obscure work of an academic taken out of context and this does not convince you, then you are clearly an intolerant sexual deviant engaging in motivated cognition.
Consider well: do you want to be on the wrong side of history? If you persist, millions or billions of people you will never meet will be simultaneously mystified and appalled that an issue so obvious caused such needless contention. They will argue whether you were motivated more by stupidity, malice, raw interest, or if you were a helpless victim of the times in which you lived. Characters in fiction set in your era will inevitably be on (or at worst, join) the right side unless they are unredeemable villains. (Including historical figures who were on the other side, lest they lose all audience sympathy.)
Remember: it's much more important what hypothetical future people will consider right than what you or current people you respect do. And you and I both know they'll agree with me.
While sympathetic to this criticism I must signal my world-weariness and sophistication by writing several long paragraphs about how this is much too optimistic and we are in grave danger of an imminent and eternal takeover by our opponents. The only solution is to begin work on an organization dedicated to preventing this which happens to give me access to material resources and attractive females.
Ciphergoth proves to be the lone voice of reason by encouraging us to recall what we all learned on 9/11:
However, we must also consider if this is not also a lesson to us all; a lesson that my political views are correct.
http://www.adequacy.org/stories/2001.9.12.102423.271.html
Seeking advice on using evolutionary methods to solve the 3-body problem
NOTE - I mean the 3-body problem in orbital mechanics, not in atomic physics.
Hi there,
Some recent discussions here on LW have led me to ponder the 3-body problem again.
http://en.wikipedia.org/wiki/N-body_problem
http://en.wikipedia.org/wiki/N-body_problem#General_considerations:_solving_the_n-body_problem
I wonder if novel methods available today might be applied to solving the "unsolvable" 3-body problem.
Specifically I'm wondering: "Can I create an evolutionarily derived algorithm to solve the equations of motion, then continue its evolution to solve the 3-body problem at the level of Sundman's slowly-converging series, and then continue its evolution to come up with a closed-form solution for the positions of all the bodies in our solar system?"
Another question is "What level of hyper-accurate model of the entire solar system would be needed?"
I think that chaos theory says this isn't possible, since small errors in the initial conditions grow exponentially over time. Let's suppose for the moment that chaos only appears unpredictable because our models of the universe aren't accurate enough to be used to predict far into the future.
Here's why I'm posting this to LW. I don't really even know where to start with answering these questions, but I bet the LWers can point me in the right direction.
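One place to start: before evolving anything, you need a ground-truth numerical integrator to score candidate solutions against. Below is a minimal sketch of direct numerical integration of the three-body equations of motion in 2D, using a symplectic leapfrog scheme; all names and parameter values are illustrative, not a recommendation of any particular library or method.

```python
import math

G = 1.0  # gravitational constant in natural units (illustrative choice)

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational accelerations on each body."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / r**3
            acc[i][0] += a * dx
            acc[i][1] += a * dy
    return acc

def leapfrog(positions, velocities, masses, dt, steps):
    """Kick-drift-kick leapfrog; symplectic, so energy error stays bounded."""
    acc = accelerations(positions, masses)
    for _ in range(steps):
        for i in range(len(positions)):
            velocities[i][0] += 0.5 * dt * acc[i][0]
            velocities[i][1] += 0.5 * dt * acc[i][1]
            positions[i][0] += dt * velocities[i][0]
            positions[i][1] += dt * velocities[i][1]
        acc = accelerations(positions, masses)
        for i in range(len(positions)):
            velocities[i][0] += 0.5 * dt * acc[i][0]
            velocities[i][1] += 0.5 * dt * acc[i][1]
    return positions, velocities

def total_energy(positions, velocities, masses):
    """Kinetic plus pairwise potential energy; a conserved quantity to check."""
    ke = sum(0.5 * m * (v[0]**2 + v[1]**2)
             for m, v in zip(masses, velocities))
    pe = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.hypot(positions[j][0] - positions[i][0],
                           positions[j][1] - positions[i][1])
            pe -= G * masses[i] * masses[j] / r
    return ke + pe

# Example: the Chenciner-Montgomery "figure eight" three-body choreography.
masses = [1.0, 1.0, 1.0]
positions = [[0.97000436, -0.24308753],
             [-0.97000436, 0.24308753],
             [0.0, 0.0]]
velocities = [[0.466203685, 0.43236573],
              [0.466203685, 0.43236573],
              [-0.93240737, -0.86473146]]
e0 = total_energy(positions, velocities, masses)
leapfrog(positions, velocities, masses, dt=0.001, steps=2000)
e1 = total_energy(positions, velocities, masses)  # should nearly equal e0
```

Any evolved closed-form or series approximation could then be scored by its deviation from such an integration - which also makes the chaos concern concrete: the score horizon is limited by how fast nearby trajectories diverge.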
Adding up to normality
I think that the idea of ‘adding up to normality’ is incoherent, but maybe I don’t understand it. There is a rule of thumb that, in general, a theory or explanation should ‘save the phenomena’ as much as possible. But Egan’s law is presented in the sequences as something stricter than a rule of thumb that admits exceptions. I’m going to try to explain and formalize Egan’s law as I understand it, so that once it’s been made clear, we can talk about how we would argue for it.
If a theory adds up to normality in the strict sense, then there are no true sentences in normal language which do not have true counterparts in a theory. Thus, if it is true to say that the apple is green, a theory which adds up to normality will contain a sentence which describes the same phenomenon as the normal language sentence, and is true (and false if the normal language sentence is false). For example: if an apple is green, then light of such and such wavelength is predominantly reflected from its surface while other visible wavelengths are predominantly absorbed. Let’s call this the Egan property of a theory. A theory would fail to add up to normality either if it denied the truth of true sentences in normal language (e.g. ‘the apple isn’t really green’) or if it could make nothing of the phenomenon of normal language at all (e.g. nothing really has color).
t has the property E iff: for all a in n, there is an α in t such that (a if and only if α)
t is a theoretical language and ‘α’ is a sentence within it; n is the normal language and ‘a’ is a sentence within it. E is the Egan property. Now that we’ve defined the Egan property of a theory, we can move on to Egan’s law.
The way Egan’s law is articulated in the sequences, it seems to be an a priori necessary but insufficient condition on the truth of a theory. So it is necessary that, if a theory is true, it has the Egan property.
If α₁, α₂, α₃, …, then Et.
Or alternatively: If t is true, then Et.
That’s Egan’s law, so far as I understand it. Now, how do we argue for it? There’s an inviting, but I think troublesome, Tarskian way to argue for Egan’s law. Tarski’s semantic definition of truth is such that some sentence β is true in language L if and only if b, where b is a sentence in a metalanguage. Following this, we could say that for any theory t to be true, all its sentences α must be true, and what it means for any α to be true is that a, where a is a sentence in the metalanguage we call normal language. But this would mean that a and α are strictly translations of one another in two different languages. If a theory is going to be explanatory of phenomena, then sentences like “light of such and such wavelength is predominantly reflected from the apple’s surface while other visible wavelengths are predominantly absorbed” have to have more content than “the apple is green”. If they mean the same thing, as sentences in Tarski’s definition of truth must, then theories can’t do any explaining.
So how else can we argue for Egan’s law?
Hypothetical scenario
One day, someone who is not a member of the Singularity Institute (and who has publicly stated that they don't believe in the necessity of ensuring all AI is Friendly) manages to build an AI. It promptly undergoes an intelligence explosion and sends kill-bots to massacre the vast majority of the upper echelons of the US Federal Government, both civilian and military. Or maybe to forcibly upload them; it's sort of difficult for untrained meat-bags like the people running the media to tell. It claims, in a press release, that its calculations indicate that the optimal outcome for humanity is achieved by removing corruption from the US Government, and that this was the best way to do so.
What do you do?
What does it take?
You unexpectedly find yourself sitting in a windowless room across from a gray-haired gentleman. You didn't wake up there; you were walking down the street and cut to camera two, a white windowless room with a table and two chairs. After a moment, the gentleman speaks:
"You are dead, killed instantly by a small meteorite. Incidentally," he smirks, "you have lost Pascal's Wager. You may 'cross-over' once you can accept that you are dead. I am here to help in that endeavor and can present any evidence you desire."
You, being a stone-cold rationalist, will only reach this conclusion on the basis of solid evidence. He, being extremely ethical, will neither present false evidence nor attempt to undermine your rationality. What can he do to convince you that you have died?
I suspect there is nothing he could say or do to convince you of this. Rather, for any sufficiently "final" definition of physical death, there's no way he can demonstrate that you have somehow come out the other side. That's my wager: there is no sound way to convince someone, even someone actually in the afterlife, that there is such a thing; thus, we should never believe in an afterlife, knowing that we could never accept it even if we were actually there.
Am I wrong? Has this been proposed before? Is there anything which, while actually true, could never be demonstrated in this manner?
I think that, if correct, this may point to a special class of untruths. Sort of... Bayesian contradictions, things which could never be sufficiently demonstrated.
Naturally, lukeprog's earlier post has me thinking on religious lines.
Cryptography
Breaking encrypted messages offers a unique challenge, at least in its pure form. In most cryptography puzzles, the method of encryption is known, and so the challenge is finding the key. The most common form of encryption used for this is a simple substitution cipher. This is not a very difficult challenge. Depending on the puzzle, it can be tough, but it isn't something that will really strain your intellect to its maximum.
True cryptanalysis occurs when you just get a message, and no other information. Then, the codebreaker has to find a way to determine the type of code, and then they have to find the key. This type of challenge is good for a rationalist, since you have to make sense of something confusing by running experimental tests, usually by analyzing the text. I've always found codebreaking in this sense to be very enjoyable and useful for training your mind. The main obstacle to doing so is the lack of any system designed for it. There is no website (that I know of) that provides this sort of cryptography challenge. Typically, if you want to do this, you have to find other people who also have an interest in it.
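As a taste of the "experimental tests" involved, here is a minimal sketch of breaking the simplest cipher of all, a Caesar shift, by frequency analysis. All the names here are illustrative, not from any particular library:

```python
import string

# Approximate relative frequencies of letters in typical English text (%).
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def shift(text, k):
    """Shift each letter back by k positions (decryption for Caesar key k)."""
    out = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') - k) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

def crack_caesar(ciphertext):
    """Try all 26 keys; return the decryption that looks most English-like."""
    def score(text):
        return sum(ENGLISH_FREQ.get(ch, 0.0) for ch in text)
    return max((shift(ciphertext, k) for k in range(26)), key=score)

# Usage: encrypt with key 7, then recover the plaintext without the key.
ciphertext = shift("the quick brown fox jumps over the lazy dog", -7)
recovered = crack_caesar(ciphertext)
```

The same idea - hypothesize, score against a statistical model of the plaintext language, pick the best - scales up to the classical ciphers, which is where the real rationality training begins.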
It occurred to me that people on Less Wrong might have an interest in doing something of this nature. Now, obviously, we probably won't be trying to break RSA ciphers, but there are a bunch of methods of encryption that were developed over the years before the invention of computer cryptography that could be used, without us requiring any participants to know how to program computers or do anything like that.
Is there any interest in something like this? I personally don't care how much you know already about cryptography. If you don't know anything I'd be happy to give you some book recommendations.
PS There is a difference between "cipher" and "code", but in practical language the two are sometimes interchangeable. For instance, "codebreaker" vs "cipherbreaker" isn't often a very important distinction to make, so I used the more common term. As long as the correct message gets across, you can use either term. Just make sure, if you are saying something specific to one particular type of encryption, that you use the right vocabulary.
Rationalist Clue
(not by Parker Bros., or, for that matter, Waddingtons)
A response to: 3 Levels of Rationality Verification
Related to: Diplomacy as a Game Theory Laboratory
It's a classic who-dun-it…only instead of using an all-or-nothing process of elimination driven by dice rolls and lucky guesses, players must piece together Bayesian clues while strategically dividing their time between gathering evidence, performing experiments, and interrogating their fellow players!
Bayesian approach: UFO vs. AI hypotheses
The goal of this post is not to prove or disprove the existence of so-called UFOs or the feasibility of AI, but to study the limits of the Bayesian approach to complex problems.
Here we will test two hypotheses:
1) UFOs exist. For simplicity we will take the following form of this thesis: an unknown nonhuman intelligence exists on Earth and manifests itself through unknown laws of physics.
2) AI will be created. In the 21st century, a computer program will be created which surpasses humans in every kind of intellectual activity by many orders of magnitude.
From the point of view of a layman, both hypotheses are bizarre and so belong to the reference class of “strange ideas”, most of which are false.
But both hypotheses have large communities which have accumulated much evidence to support these ideas. (Here we may see confirmation bias at work.)
At the outset we should point out the isomorphism of the two hypotheses: in both cases the question is the existence of nonhuman intelligence. The first says that nonhuman intelligence already exists on Earth; the second, that nonhuman intelligence will soon be created on Earth.
For Bayesian estimation we need an a priori probability, which we then update with evidence.
Supporters of the AI hypothesis usually say that the a priori probability is quite high: if the human mind exists, then AI is possible, and in addition, it is typical for humans to repeat the achievements of nature. Therefore, a priori, we can assume that the creation of AI is possible and highly likely.
The situation with evidence in the field of AI is worse, because the creation of AI is a future event and direct empirical evidence is impossible. Moreover, many failed attempts to create AI in the past are used as evidence against the possibility of its creation.
Therefore, information about successes in "helping" disciplines is used as evidence of the possibility of AI: the performance of computers and its continued growth, successes in brain scanning, and the successes of various computer programs at recognizing images and playing games. Such circumstantial evidence cannot be directly substituted into the formula for calculating the probability; its credibility will always involve taking something for granted.
In the case of UFOs, the a priori hypothesis is less convincing, since it argues not only that nonhuman intelligence exists on Earth, but also that it uses unknown physical laws (for flying). This hypothesis is more complex and therefore less probable. It is also not clear how a nonhuman intelligence could have evolved on Earth without consuming all other kinds of living beings. Here the alien theory of the origin of UFOs comes into play as an a priori hypothesis.
The proponents of the alien UFO hypothesis say that if human intelligence exists on Earth, then some kind of intelligence could also have appeared on other planets of our Galaxy long before, and could have come to our planet with some more or less rational goals (exploration, play, etc.). In saying this they believe they establish a high a priori probability for the UFO hypothesis. (This is not true, because they must also assume that the aliens have very strange goal systems – for example, that they fly many light years to drink cattle blood, as in so-called cattle mutilation cases. This improbable goal system completely neutralizes the high probability of an alien origin of UFOs.)
We can note immediately that the a priori hypothesis about UFOs uses the same premise as the hypothesis about AI: namely, the possibility of nonhuman intelligence is justified by the existence of the human mind!
However, the UFO hypothesis requires the existence of new physical laws, whereas the AI hypothesis requires their absence (in the sense that creating AI requires that the brain be describable as an algorithmic computer, without any Penrose-style effects).
The history of science shows that the list of physical laws will never be complete – every so often we discover something new (e.g. dark energy recently) – but on the other hand, there are no physical effects in our everyday environment that are inexplicable within the framework of known physical laws (except perhaps ball lightning). So, due to its need for new laws of physics, the a priori probability of the existence of UFOs is lower.
In terms of evidence, the UFO hypothesis contrasts sharply with the AI hypothesis. There are thousands of empirical reports of UFO sightings. However, the Bayesian inference (increase in credibility) from each piece of evidence is very small. That is, most of these reports are about equally likely to be true or false, and so carry almost no information. Note, though, that if we have 20 pieces of evidence each with a probability of truth greater than 50%, say 60%, then Bayes' formula gives a very substantial combined odds ratio of about 3000 to 1 – that is, it would multiply the odds on the a priori hypothesis by a factor of about 3000.
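The 3000-to-1 figure checks out, assuming the 20 pieces of evidence are independent and each is 60% likely given the hypothesis versus 40% given its negation (a strong assumption, since UFO reports plausibly share correlated error sources). A minimal sketch with illustrative names:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes: multiply prior odds by each evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# 20 independent reports, each with P(report | UFO) = 0.6 vs P(report | no UFO) = 0.4,
# give a combined likelihood ratio of (0.6/0.4)**20 ≈ 3325 -- roughly "3000 to 1".
combined = posterior_odds(1.0, [0.6 / 0.4] * 20)
```

Note that this multiplies odds, not probabilities: a low prior can still leave the posterior small even after a 3000x update.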
Thus, the UFO hypothesis has a lower a priori probability but more empirical evidence (the truth of which we will not discuss here; see Don Berliner, "UFO Briefing Document: The Best Available Evidence". My position is that I am not a convinced UFO believer, but I grant that they could exist).
Discussions about AI always tend to turn into discussions about rationality, while most of the UFO crowd is a bastion of irrationality. In fact, both can be described in terms of Bayesian logic. The belief that some topics are inherently more rational than others is itself irrational.