Polarized gamma rays and manifest infinity
Most people (not all, but most) are reasonably comfortable with infinity as an ultimate (lack of) limit. For example, cosmological theories that suggest the universe is infinitely large and/or infinitely old are not strongly disbelieved a priori.
By contrast, most people are fairly uncomfortable with manifest infinity, actual infinite quantities showing up in physical objects. For example, we tend to be skeptical of theories that would allow infinite amounts of matter, energy or computation in a finite volume of spacetime.
Rational entertainment industry?
By "the industry" in this post, I refer to that part of the entertainment industry which:
1. Produces movies, TV and video games (as opposed to books, comics etc.)
2. Is motivated by profit (as opposed to fun, politics etc.)
3. Consists of companies (as opposed to lone developers, student teams etc.)
It seems to me that the industry has two characteristics:
Formulaic
Most products follow some formula which is known to be workable.
Under what circumstances is this rational? (I'm not commenting on whether it's artistically good or bad; again, I'm only discussing entertainment as a commercial enterprise motivated by profit.) It seems to me that following a proven formula is rational if your priority is not to lose, to go for the sure thing, i.e. if the chance of a big hit is not worth the risk of a complete flop.
Hit driven
It's the accepted wisdom that entertainment is a hit driven industry: almost all the profits are generated by a handful of the most successful products, with the rest losing money or barely covering costs.
Now my question: isn't there a contradiction here? If you're selling insurance, following a proven formula may well be the rational thing to do. If you're the owner of one of the handful of franchises that is pulling in big profits, of course you shouldn't mess with a winner. But if you're one of the many also-rans, how is it rational to stick with an almost sure loser? In a hit driven industry, wouldn't it be more rational to concentrate on maximizing your chance of winning big, instead of trying to minimize the risk of a flop?
But I've never worked in the entertainment industry; perhaps my layman's impression of it is inaccurate. Is there something I'm missing, or is a substantial amount of expected profit really being left on the table?
Compatibilism in action
A practical albeit fictional application of the philosophical conclusion that free will is compatible with determinism came up today in a discussion about a setting element from the role-playing game Exalted:
(5:31:44 PM) Nekira Sudacne: So during the primordial war, one Yozi got his fetch killed and he reincarnated as Sachervell, He Who Knows The Shape of Things To Come. And he reincarnated asleep. and he has remained asleep. And the other primordials do all in their power to keep him asleep. and he wants to be asleep.
For you see, for as long as he sleeps, he dreams only of the present. should he awaken, he will see the totality of existence, all things past and future exactly as they will happen. quantumly speaking he will lock the universe into a single shape. All things that happen will happen as he sees them happen and there will be no chance for anyone to change it. effectively nullifying chance for change. Even he cannot alter his vision for his vision takes into account all attempts to alter it.
And there's a big debate over whether or not this is a game-ending thing. Essentially, does predestination negate free will or not
(5:32:17 PM) Nekira Sudacne: and this is important, because one of the requirements for Exaltation to function is free will. if Sachervell is able to negate free will, then Exaltations will cease to function
(5:32:44 PM) Nekira Sudacne: and maddeningly enough the game authors are also on the thread arguing because THEY don't agree where to go with it either :)
(5:38:02 PM) rw271828: ah, well I happen to know the answer :-)
(5:39:23 PM) rw271828: one of the most important discoveries of 20th-century mathematics is that in general the behavior of a complex system cannot be predicted -- or rather, there is no easier way to predict it than to run it and see what happens. Note in particular:
(5:39:41 PM) rw271828: 1. This is a mathematical fact, so it applies in all possible universes, including Exalted
(5:40:01 PM) rw271828: 2. Humans and other sentient lifeforms are complex systems in the relevant sense
(5:41:33 PM) rw271828: so if you postulate an entity that can actually see the future (as opposed to just extrapolate what is likely to happen unless something intervenes), the only way to do that is for that entity to run a perfect simulation, a complete copy of the universe
(5:42:50 PM) rw271828: if you're willing to postulate that, well fine, continue the game, and just note that you are running it in the copy the entity is using to make the prediction - the people in the setting still have free will, it is their actions that determine the future, and thus the result of the prediction ^.^
(5:43:04 PM) Nekira Sudacne: Hah. nice one
The Curve of Capability
or: Why our universe has already had its one and only foom
In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.
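The arithmetic behind "each doubling feels the same" is just logarithms. A minimal sketch (the added amounts are from the text; the starting sizes are assumptions, inferred from the claim that each upgrade was a factor of two increase):

```python
import math

# RAM upgrades, sizes in megabytes. Added amounts are from the text;
# starting sizes are ASSUMED equal to the added amounts, as implied by
# "each was a factor of two increase".
amiga_start, amiga_added = 0.5, 0.5    # Amiga 500: 0.5 MB -> 1 MB
pc_start, pc_added = 2048, 2048        # Dell PC: 2048 MB -> 4096 MB

# In absolute terms the later upgrade is ~4000x larger...
print(pc_added / amiga_added)  # 4096.0

# ...but on a log scale each upgrade is exactly one doubling,
# which is why they conferred about the same benefit.
print(math.log2((amiga_start + amiga_added) / amiga_start))  # 1.0
print(math.log2((pc_start + pc_added) / pc_start))           # 1.0
```

Measured in doublings rather than megabytes, the two upgrades are identical.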
That's a pretty important rule, so let's test it by looking at some more examples.
The I-Less Eye
or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
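The gap between the two intuitions is easy to quantify; a quick check of the arithmetic above:

```python
# Chance of still being the original after 99 copy steps, under the
# step-by-step halving argument, versus the naive 1-in-100 hope.
p_halving = 0.5 ** 99
p_uniform = 1 / 100

print(p_halving)              # about 1.6e-30
print(p_uniform)              # 0.01
print(p_uniform / p_halving)  # the hoped-for answer is ~6e27 times larger
```

The two ways of counting don't disagree by a little; they disagree by twenty-seven orders of magnitude.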
Two probabilities
Consider the following statements:
1. The result of this coin flip is heads.
2. There is life on Mars.
3. The millionth digit of pi is odd.
What is the probability of each statement?
A frequentist might say, "P1 = 0.5. P2 is either epsilon or 1-epsilon, we don't know which. P3 is either 0 or 1, we don't know which."
A Bayesian might reply, "P1 = P2 = P3 = 0.5. By the way, there's no such thing as a probability of exactly 0 or 1."
Which is right? As with many such long-unresolved debates, the problem is that two different concepts are being labeled with the word 'probability'. Let's separate them and replace P with:
F = the fraction of possible worlds in which a statement is true. F can be exactly 0 or 1.
B = the Bayesian probability that a statement is true. B cannot be exactly 0 or 1.
Clearly there must be a relationship between the two concepts, or the confusion wouldn't have arisen in the first place, and there is: apart from both obeying various laws of probability, in the case where we know F but don't know which world we are in, B = F. That's what's going on in case 1. In the other cases, we know F != 0.5, but our ignorance of its actual value makes it reasonable to assign B = 0.5.
When does the difference matter?
Suppose I offer to bet my $200 that the millionth digit of pi is odd, versus your $100 that it's even. With B3 = 0.5, that looks like a good bet from your viewpoint. But you also know F3 = either 0 or 1. You can also infer that I wouldn't have offered that bet unless I knew F3 = 1, from which inference you are likely to update your B3 to more than 2/3, and decline.
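The update threshold can be made concrete with a little expected-value arithmetic (a sketch, using the stakes from the bet above):

```python
# Your side of the bet: stake $100 on "even", win my $200 if it is even.
# Expected value as a function of your Bayesian probability b_odd that
# the millionth digit is odd.
def bet_expected_value(b_odd):
    return (1 - b_odd) * 200 - b_odd * 100

print(bet_expected_value(0.5))    # 50.0 -- at B3 = 0.5 the bet looks good
print(bet_expected_value(2 / 3))  # ~0 -- the break-even point
print(bet_expected_value(0.9))    # about -70 -- decline if you infer I know F3 = 1
```

B3 = 2/3 is exactly where the 2:1 stakes stop compensating you for the risk, which is why updating past it makes you decline.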
On a larger scale, suppose we search Mars thoroughly enough to be confident there is no life there. Now we know F2 = epsilon. Our Bayesian estimate of the probability of life on Europa will also decline toward 0.
Once we understand F and B are different functions, there is no contradiction.
Privileged Snuff
So one is asked, "What is your probability estimate that the LHC will destroy the world?"
Leaving aside the issue of calling brown numbers probabilities, there is a more subtle rhetorical trap at work here.
If one makes up a small number, say one in a million, the answer will be, "Could you make a million such statements and not be wrong even once?" (Of course this is a misleading image -- doing anything a million times in a row would make you tired and distracted enough to make trivial mistakes. At some level we know this argument is misleading, because nobody calls the non-buyer of lottery tickets irrational for assigning an even lower probability to a win.)
If one makes up a larger number, say one in a thousand, then one is considered a bad person for wanting to take even one chance in a thousand of destroying the world.
The fallacy here is privileging the hypothesis: http://wiki.lesswrong.com/wiki/Privileging_the_hypothesis
Why safety is not safe
June 14, 3009
Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.
"We can stop here for a few minutes," remarked the librarian as he fumbled to light the lamp. "There's a stream just ahead."
The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.
It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?
The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy, a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave unmistakable record both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.
Forty years later he died in bed, his question still unanswered.
The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.
Fire and Motion
Related to: Extreme Rationality: It's Not That Great
On the recent topics of "rationality is all very well but how do we translate understanding into winning?" and "isn't akrasia the most common limiting factor?", one of the best (non-recent) articles on practical rationality that I've come across is:
http://www.joelonsoftware.com/articles/fog0000000339.html
Interestingly, it uses a different kind of martial art as a metaphor. I conjecture it to be the sort of metaphor that just works well for humans.
(Most of Spolsky's posts are good reading even if you're not a programmer. I'm not in the New York real estate market but I still enjoyed his posts on that topic. He's just that good a writer.)