Open Thread, August 2010
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (676)
I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter - and some of my messages survive despite the context being lost.
Some of the comments were SIAI-critical - and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things - so I can refer back to them, and so others can comment - if they feel so inclined:
[In the context of SIAI folks thinking an unpleasant AI was likely]
Re: "The justification is that uFAI is a lot easier to make."
That seems like naive reasoning. It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.
Software companies manage to ship products that do sort of what they want, that they can patch to more closely do what they want. This is generally after rounds of internal testing, in which they try to figure out if it does what they want by running it and observing the result.
But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.
In fiction, yes. Fictional technology appears overnight, works the first time without requiring continuing human effort for debugging and maintenance, and can do all sorts of wondrous things.
In real life, the picture is very different. Real life technology has a small fraction of the capabilities of its fictional counterpart, and is developed incrementally, decade by painfully slow decade. If intelligent machines ever actually come into existence, not only will there be plenty of time to issue patches, but patching will be precisely the process by which they are developed in the first place.
I agree somewhat with this as a set of conclusions, but your argument deserves to get downvoted because you've made statements that are highly controversial. The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control. There are arguments against such a possibility being likely, but this is not a trivial matter. Moreover, comparing the situation to fiction is unhelpful - just because something is common in fiction, that's not an argument that such a situation can't actually happen in practice. Reversed stupidity is not intelligence.
Re: "The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control."
If its creators are incompetent. Those who think this are essentially betting on the incompetence of the creators.
There are numerous counter-arguments - the shifting moral zeitgeist, the downward trend in deliberate death, the safety record of previous risky tech enterprises.
A stop button seems like a relatively simple and effective safely feature. If you can get the machine to do anything at all, then you can probably get it to turn itself off.
See: http://alife.co.uk/essays/stopping_superintelligence/
The creators will likely be very smart humans assisted by very smart machines. Betting on their incompetence is not a particularly obvious thing to do.
To be sure, the fact that something is commonplace in fiction doesn't prove it false. What it does show is that we should distrust our intuition on it, because it's clearly an idea to which we are positively disposed regardless of its truth value -- in the Bayesian sense, that is evidence against it.
The stronger argument against something is of course its consistent failure to occur in real life. The entire history of technological development says that technology in the real world does not work the way it would need to for the 'AI go foom' scenario. If 100% evidence against and 0% evidence for a proposition should not be enough to get us to disbelieve it, then what should?
Not to mention that when you look at the structure of the notion of recursive self-improvement, it doesn't even make sense. A machine is not going to be able to completely replace human programmers until it is smarter than even the smartest humans in every relevant sense, which given the differences in architecture, is an extraordinarily stringent criterion, and one far beyond anything unaided humans could ever possibly build. If such an event ever comes about in the very distant future, it will necessarily follow a long path of development in which AI is used to create generation after generation of improved tools in an extended bootstrapping process that has yet to even get started.
And indeed this is not a trivial matter -- if people start basing decisions on the 'AI go foom' belief, that's exactly the kind of thing that could snuff out whatever chance of survival and success we might have had.
Most programmers are supervised. So, this claim is hard to parse.
Machine intelligence has been under development for decades - and there have been plenty of patches so far.
One way of thinking about the process is in terms of increasing the "level" of programming languages. Computers already write most machine code today. Eventually humans will be able to tell machines what they want in ordinary English - and then a "patch" will just be some new instructions.
By other humans. If we program an AGI, then it will supervise all future programming.
Machine intelligence does not yet approach human intelligence. We are talking about applying patches on a superintelligence.
The difficulty is not in specifying the patch, but in applying it to a powerful superintelligence that does not want it.
All computer programming will be performed and supervised by engineered agents eventually. But so what? That is right, natural and desirable.
It seems as though you are presuming a superintelligence which doesn't want to do what humans tell it to. I am sure that will be true for some humans - not everyone can apply patches to Google today. However, for other humans, the superintelligence will probably be keen to do whatever they ask of it - since it will have been built to do just that.
A computer which understands human languages without problems will have achieved general intelligence. We won't necessarily be able to give it "some new instructions", or at least it might not be inclined to follow them.
Those software companies test their products for crashes and loops. There is a word for testing an AI of unknown Friendliness and that word is "suicide".
That just seems to be another confusion to me :-(
The argument - to the extent that I can make sense of it - is that you can't restrain an super-intelligent machine - since it will simply use its superior brainpower to escape from the constraints.
We successfully restrain intelligent agents all the time - in prisons. The prisoners may be smarter than the guards, and they often outnumber them - and yet still the restraints are usually successful.
Some of the key observations to my mind are:
Discarding the standard testing-based methodology would be very silly, IMO.
Indeed, it would sabotage your project to the point that it would almost inevitably be beaten - and there is very little point in aiming to lose.
Are you familiar with the AI-Box experiment? We can restrain human-intelligence level agents in prisons, most of the time. But the question to ask is: how effective was the first prison? Because that's the equivalent case.
None of the safety measures you propose are safe enough. You're underestimating the power of a recursively self-improving AI by a factor I can't begin to estimate--which is kind of the point.
It won't be the first prison - or anything like it.
If we have powerful intelligence that needs testing, then we can have powerful guards too.
The AI-Box experiment has human guards. Consequently, it has very low relevance to the actual problem. Programmers don't build their test harnesses out of human beings.
Safety is usually an economic trade off. You can usually have a lot of it - if you are prepared to pay for it.
A much stronger argument than all-powerful AIs suddenly escaping (which is still not without merit) is that AI will have an incentive to behave as we expect it to behave, until at some point we no longer control it. It'll try its best to pass all tests.
So: while it believes it is under evaluation it does its very best to behave itself?
Can we wire that belief in as a prior with p=1.0?
I suppose I was mentally classifying that kind of behavior as an escape; you're right that it should be called out as a separate point of failure.
They used to have a "commitment" that:
"Technology developed by SIAI will not be used to harm human life."
...on their web site. I probably missed the memo about that being taken down.
[In the context of SIAI folks thinking an unpleasant AI was likely]
The SIAI derives its funding from convincing people that the end is probably nigh - and that they are working on a potential solution. This is not the type of organisation you should trust to be objective on such an issue - they have obvious vested interests.
I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss
They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html
This is all well above my paygrade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word, huh?
I don't want this kind of item to be discussed on LW. It's either off-topic or crackpottery, irrelevant whatever the case.
Considering the source was Nature, I doubt your analysis is correct. The researchers are from Ludwig-Maximilians-University and ETH Zürich, which appear to be respectable institutions. I found a write-up at Science Daily (http://www.sciencedaily.com/releases/2010/07/100727082652.htm) that provides some more details on the research. From that link:
"The teams at LMU and the ETH Zurich have now shown that the result of a measurement on a quantum particle can be predicted with greater accuracy if information about the particle is available in a quantum memory. Atoms or ions can form the basis for such a quantum memory.
The researchers have, for the first time, derived a formula for Heisenberg's Principle, which takes account of the effect of a quantum memory. In the case of so-called entangled particles, whose states are very highly correlated (i.e. to a degree that is greater than that allowed by the laws of classical physics), the uncertainty can disappear.
According to Christandl, this can be roughly understood as follows "One might say that the disorder or uncertainty in the state of a particle depends on the information stored in the quantum memory. Imagine having a pile of papers on a table. Often these will appear to be completely disordered -- except to the person who put them there in the first place."
This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant.
The discussion of quantum mechanics Eliezer Yudkowsky did was not because quantum mechanics is relevant to the interests of this community, but because the counterintuitive nature of quantum mechanics offered good case studies to use in discussing rationality.
P ≠ NP : http://news.ycombinator.com/item?id=1585850
I know. Does any human mathematician really doubt that?
Yes, there are human mathematicians who doubt that P is not equal to NP.
See "Guest Column: The P=?NP Poll" http://www.cs.umd.edu/~gasarch/papers/poll.pdf by William Gasarch where a poll was taken of 100 experts, 9 of whom ventured the guess that P = NP and 22 of whom offered no opinion on how the P vs. NP question will be resolved. The document has quotes from various of the people polled elaborating on what their beliefs are on this matter.
I've been becoming more and more convinced that Kevin and Clippy are the same person. Besides Clippy's attempt to get money for Kevin, one reason is that both of them refer to people with labels like "User:Kevin". More evidence just came in here, namely these comments within 5 minutes of each other.
How do you know you know?
Because if it were otherwise -- if verifying a solution were of the same order of computational difficulty of finding it -- it would be a lot harder to account for my observations than if it weren't so.
For example, verifying a proof would be of similar difficulty to finding the proof, which would mean nature would stumble upon representations isomorphic to either with similar probability, which we do not see.
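The asymmetry between checking and searching is easy to see concretely. Here is a minimal sketch using subset-sum as a stand-in NP-complete problem; the example and function names are my own illustration, not from the thread:

```python
# Toy illustration: for subset-sum, *verifying* a proposed solution
# (a "certificate") is cheap, while the best known general *search*
# tries exponentially many candidates.
from itertools import combinations

def verify(numbers, target, certificate):
    # Checking a proposed subset takes time linear in its size.
    return all(n in numbers for n in certificate) and sum(certificate) == target

def find(numbers, target):
    # Brute-force search: try every subset, smallest first.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = find(nums, 9)           # exponential-time search
print(verify(nums, 9, cert))   # linear-time check -> True
```

If P = NP, the gap between these two functions would collapse for every problem in the class, which is the intuition the comment above is leaning on.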
The possibility that P = NP but with a "large polynomial degree" or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist.
I can't present a formal proof, but I have sufficient reason to predicate future actions on P ≠ NP, for the same reason I have sufficient reason to predicate future actions on any belief I hold, including beliefs about the provability or truth of mathematical theorems.
Most human mathematicians think along similar lines. It will still be a big deal when P ≠ NP is proven, if for no other reason that it pays a million dollars. That's a lot of paperclips.
Let me know if you think you can solve any of these! http://www.claymath.org/millennium/
There's a very good summary by Scott Aaronson describing why we believe that P is very likely to be not equal to NP. However, Clippy's confidence seems unjustified. In particular, there was a poll a few years ago that showed that a majority of computer scientists believe that P ≠ NP, but a substantial fraction do not. (The link was here but seems to be not functioning at the moment; according to umd.edu's main page they have a scheduled outage of most Web services for maintenance, so I'll check again later. I don't remember the exact numbers, so I can't cite them right now.)
This isn't precisely my area, but speaking as a mathematician whose work touches on complexity issues, I'd estimate around a 1/100 chance that P=NP.
URL is repeated twice in link?
As people are probably aware, Hitchens has cancer, which is likely to kill him in the not-too-distant future. There does not seem to be much to be done about this; but I wonder if it's possible to pass the hat to pay for cryonics for him? Apart from the fuzzies of saving a life with X percent probability, which can be had much cheaper by sending food to Africa, it might serve as marketing for cryonics, causing others to sign up. Of course, this assumes that he would accept, and also that there wouldn't be a perception that he was just grasping at any straw available.
I'd love to persuade him, but no way am I passing a hat.
Would Hitchens not be able to afford cryonics without donations?
What's wrong with this? Isn't that exactly what cryonics is: grasping the only available straw?
(Hm, how do I get a sentence inside the numbering indentation but outside the quotation?)
Perhaps so, but would he consider it the best use of his resources? While if he gets it for free, take it or lose it, that's a different matter.
For marketing purposes it would be an epic fail. In interviews he has made the point that no, he will not be doing any deathbed conversions unless he goes mad from pain. If cryonics is seen as only a deathbed conversion to a different religion (easy pattern completions: "Rapture of the Nerds", "weird beliefs = cults") it'll merely reinforce the perception of cryonics as something rather kooky which serious people needn't spend time on. Your point is correct, but will only work as PR if that's how it gets across to the public: This is a straw with an actual chance of working.
Ah, I see. Certainly it would be better if he made the choice well before he's at death's door/in terrible pain/etc..
Has there ever been a practical proof-of-concept system, even a toy one, for futarchy? Not just a "bare" prediction market, but actually tying the thing directly to policy.
If not, I suggest a programming nomic (aka codenomic) for this purpose.
If you're not familiar with the concept of nomic, it's a little tricky to explain, but there's a live one here in ECMAScript/Javascript, and an old copy of the PerlNomic codebase here. (There's also a scholarly article [PDF] on PerlNomic, for those interested.)
Also, if you're not familiar with the concept of nomic, you don't read enough Hofstadter.
I've heard many times here that Gargoyles involved some interesting multilevel plots, but the first few episodes had nothing like it, just standard Disneyishness. Any recommendations for which episodes are best of the series, so I can check them out without going through the boring parts?
Some, if not most, people on LW do not subscribe to the idea that what has come to be known as AI FOOM is a certainty. This is even more common off LW. I would like to know why. I think that, given a sufficiently smart AI, it would be beyond easy for this AI to gain power. Even if it could barely scrape by in a Turing test against a five-year-old, it would still have all the powers that all computers inherently have, so it would already be superhuman in some respects, giving it enormous self-improving ability. And the most important such inherent power is the one that makes Folding@home work so well - the ability to simply copy the algorithm into more hardware, if all else fails, and have the copies cooperate on a problem.
So what could POSSIBLY slow this down, besides the AI's keepers intentionally keeping it offline?
Unless I'm really misinterpreting you, "simply copy the algorithm into more hardware" sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable. Rarely have I ever wanted to run a serial algorithm in parallel and had it be a matter of "simply run the same old thing on each one and put the results together." The more complicated the algorithm in question, the more work it takes to efficiently and correctly split up the work; and at really large, Google-esque scales, you need to start worrying about latency and hardware reliability.
I tend to agree that recursive self-improvement will lead to big gains fast, but I don't buy that it's going to be immediately trivial for the AI to just throw more hardware at the problem and gain huge chunks of performance for free. It depends on the initial design.
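The contrast can be sketched in a few lines. This is a toy illustration under my own assumptions - a thread pool standing in for real extra hardware, and made-up function names:

```python
# Independent chunks of work can be farmed out to more workers;
# a dependency chain cannot, no matter how much hardware you copy
# the algorithm onto.
from concurrent.futures import ThreadPoolExecutor

def independent_piece(chunk):
    # Embarrassingly parallel: each chunk is self-contained.
    return sum(x * x for x in chunk)

def serial_step(state, x):
    # Step n needs step n-1's result, so extra machines buy nothing here.
    return state * 31 + x

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]  # interleaved split into 4 parts
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(independent_piece, chunks))

serial_total = 0
for x in data:
    serial_total = serial_step(serial_total, x)

print(parallel_total == sum(x * x for x in data))  # True
```

The first computation splits cleanly because the pieces don't interact; whether an AI's core algorithms look more like the first function or the second is exactly the open question.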
Are you a programmer yourself?
A prerequisite for an AI FOOMing is the ability to apply its intelligence to improving its source code so that the resulting program is more intelligent still.
We have an existence proof that human-level intelligence does not automatically give a mind the ability to understand source code and make changes to that source code which reliably have the intended effect. Perhaps some higher level of intelligence automatically grants that ability, but proving that would be non-trivial.
If your unpacking of "sufficiently smart" is such that any sufficiently smart AI has not only the ability to think at the same level as a human, but also to reliably and safely make changes to its own source code, such that these changes improve its intelligence, then a FOOM appears inevitable, and we have (via the AI Box experiments) an existence proof that human-level intelligence is sufficient for an AI to manipulate humans into giving it unrestricted access to computing resources.
But that meaning of "sufficiently smart" begs the question of what it would take for an AI to have these abilities.
One of the insights developed by Eliezer is the notion of a "codic cortex", a sensory modality designed to equip an AI with the means to make reliable inferences about source code in much the same way that humans make reliable inferences about the properties of visible objects, sounds, and so on.
I am prepared to accept that an AI equipped with a "codic cortex" would inevitably go FOOM, but (going on what I've read so far) that notion is at present more of a metaphor than a fully-developed plan.
India Asks, Should Food Be a Right for the Poor?
http://www.nytimes.com/2010/08/09/world/asia/09food.html?hp
Why don't more male celebrities sell their semen?
A comic about the lancet liver fluke.
If you have many different (and conflicting, in that they demand undivided attention) interests: if it were possible, would copying yourself in order to pursue them more efficiently satisfy you?
One copy gets to learn drawing, another one immerses itself in mathematics & physics, etc. In time, they can grow very different.
(Is this scenario much different to you than simply having children?)
Waste of processing power. Having dozens of focuses of attention and corresponding body/brain construction is more efficient.
Because basic functions are being repeated?
What's the difference between a copy of yourself and an extra "body/brain construction"?
Copying has at best zero utility (as regards interests): each copy only indulges in one interest, and I anticipate being only one copy, even if I don't know in advance which one.
How is having children at all similar? 1) children would have different interests; 2) I cannot control (precommit) future children; 3) raising children would be for me a huge negative utility - both emotionally and resource-wise.
I would like feedback on my recent blog post:
http://www.kmeme.com/2010/07/singularity-is-always-steep.html
It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.
Instead I now believe in many cases the log plot is closer to "the real thing" or at least how we perceive that thing. For example in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.
Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It's not building to some dramatic peak.
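The perceived-linear claim can be made concrete with a toy calculation - my own illustration, assuming utility grows as the log of capacity:

```python
# If perceived utility is log2(capacity), then exponential
# (Moore's-law style) growth in capacity is perceived as a
# constant increment per doubling - no dramatic peak anywhere.
import math

capacity = [2 ** n for n in range(10)]        # exponential growth
utility = [math.log2(c) for c in capacity]    # perceived utility
increments = [b - a for a, b in zip(utility, utility[1:])]
print(increments)  # every doubling adds the same 1.0 of utility
```

On this model, every point on the exponential curve feels locally the same, which is why "we're about to hit the steep part" is an illusion of the linear-scale plot.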
None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
This is easier to say when you're near the top of the current curve.
It doesn't affect me much that my computer can't handle hi-def youtube, because I'm just a couple of doubling times behind the state of the art.
But if you were using a computer ten doubling times back, you'd have trouble even just reading lesswrong. Even if you overcame the format and software issues, we'd be trading funny cat videos that are bigger than all your storage. You'd get nothing without a helper god to downsample them.
When the singularity approaches, the doubling time will decrease, for some people. Maybe not for all.
Maybe it will /feel/ like a linear increase in utility for the people whose abilities are being increased right along with it. For people who are 10 doublings behind and still falling, it will be obvious something is different.
My essay on the topic:
http://alife.co.uk/essays/the_singularity_is_nonsense/
See also:
"The Singularity" by Lyle Burkhead - see the section "Exponential functions don't have singularities!"
It's not exponential, it's sigmoidal
The Singularity Myth
Singularity Skepticism: Exposing Exponential Errors
IMO, those interested in computational limits should discuss per-kg figures.
The metric Moore's law uses is not much use really - since it would be relatively easy to make large asynchronous ICs with lots of faults - which would make a complete mess of the "law".
Wow good stuff. Especially liked yours not linked above:
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn't going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.
But you make a very good case that the whole thing is bunk. I especially like the "different levels of intelligence" point, had not heard that before re: AI.
But I still find it tempting to say there is just something special about machines that can design other machines. That, like pointing a camcorder at a TV screen, it leads to some kind of instant recursion. But maybe it is similar: a neat trick, but not something which changes everything all of a sudden.
I wonder if someone 50 years ago said "some day computers will display high quality video and everyone will watch computers instead of TV or film". Sure it is happening, but it's a rather long slow transition which in fact might never 100% complete. Maybe AI is more like that.
Machines designing machines will indeed be a massive change to the way phenotypes evolve. However it is already going on today - to some extent.
I expect machine intelligence won't surpass human intelligence rapidly - but rather gradually, one faculty at a time. Memory and much calculation have already gone.
The extent to which machines design and build other machines has been gradually increasing for decades - in a process known as "automation". That process may pick up speed, and perhaps by the time machines are doing more cognitive work than humans it might be going at a reasonable rate.
Automation takes over jobs gradually - partly because the skills needed for those jobs are not really human-level. Many cleaners and bank tellers were not using their brain to its full capacity in their work - and simple machines could do their jobs for them.
However, this bunches together the remaining human workers somewhat - likely increasing the rate at which their jobs will eventually go.
So: possibly relatively rapid and dramatic changes - but most of the ideas used to justify using the "singularity" term seem wrong. Here is some more orthodox terminology:
http://en.wikipedia.org/wiki/Digital_Revolution
http://en.wikipedia.org/wiki/Information_Revolution
I discussed this terminology in a recent video/essay:
http://alife.co.uk/essays/engineering_revolution/
There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.
I think this is bunk. Consider the following:
--
Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.
Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?
Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won't readjust the sync on an ongoing basis; it's just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)
Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation -- emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they're embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison.
What if the corresponding parts (which must be staying in synch with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?
--
No, of course not. And, on reflection, it's obvious that we never did: redundant computation is not additional computation.
So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not make much difference to them in the long term -- the ems would still be, for the most part, the same person. Do we have two distinct people, or two mostly redundant people -- perhaps one and a tiny fraction, on aggregate? I think a lot of people will be tempted to answer that we have two.
But consider, for a moment, if we were not talking about people but -- say -- works of literature. Two very similar stories, even if by a raw diff they share almost no words, are of not much more value than only one of them.
The attitude I've seen seems to treat people as a special case -- as a separate magisterium.
--
I wish to assert that this value system is best modeled as a belief in souls. Not immortal souls with an afterlife, you understand, but mortal souls, that are created and destroyed. And the world simply does not work that way.
If you really believed that, you'd try to cause global thermonuclear war, in order to prevent the birth of billions or more of people who will inevitably be killed. It might take the heat death of the universe, but they will die.
You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, like the idea of cutting a mechanical computer in half and doubling its moral worth.
I have for a while had a feeling that the moral value of a being's existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where "has something to do with" = it's somewhere in the formula, but not the whole formula.) If you have 100 identical copies of a mind, and you delete 99 of them, you have not lost any information. If you have two slightly divergent copies of a mind, and you delete one of them, then that's bad, but only as bad as destroying whatever information exists in it and not the other copy. Abortion doesn't seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus's brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similar with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being's value has something to do with the degree to which its mind's unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am A Strange Loop.)
I think... there's more to this wrongness-feeling I have than I've expressed. I would readily subject a million forks of myself to horrific suffering for the moderate benefit of just one of me. The main reason I'd have reservations about releasing myself on the internet for anyone to download would be because they could learn how to manipulate me. The main problem I have with slavery and starvation is that they're a waste of human resources, and that monolithic power structures are brittle against black swans. In short, I don't consider it a moral issue what algorithm is computed to produce a particular result.
I'm not sure how to formalize this properly.
Some hobby Bayesianism. A typical challenge for a rationalist: there is some claim X to be evaluated; it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of its features.
This is based on a continuing discussion with Unknowns, who I think disagrees with what I'm going to present, or with its relevance to the "typical challenge."
Summary: If you learn that a preposterous hypothesis X is believed by many people, you should not correct your prior probability P(X) by a factor larger than the reciprocal of P(Y), your prior probability for the hypothesis Y = "X is believed by many people." One can deduce an estimate of P(Y) from an estimate of the quantity "if I already knew that at least n people believed X, how likely it would be that n+1 people believed X" as a function of n. It is not clear how useful this method of estimating P(Y) is.
The right way to unpack "X seems preposterous, but many believe it" mathematically is as follows. We have a very low prior probability P(X), and then we have new evidence Y = "many people believe X". The problem is to evaluate P(X|Y).
One way to phrase the typical challenge is "How much larger than P(X) should P(X|Y) be?" In other words, how large is the ratio P(X|Y)/P(X)? Bayes formula immediately says something interesting about this:
P(X|Y)/P(X) = P(Y|X)/P(Y)
Moreover, since P(Y|X) < 1, the right-hand side of that equation is less than 1/P(Y). My interpretation of this: if you want to know how seriously to take the fact that many people believe something, you should consider how likely you find it that many people would believe it absent any evidence. Or a little more precisely, how likely you find it that many people would believe it if the amount of evidence available to them was unknown to you. You should not correct your prior for X by more than the reciprocal of this probability.
Comment: how much less than 1 P(Y|X) is depends on the nature of X. For instance, if X is the claim "the Riemann hypothesis is false," it is unclear to me how to estimate P(Y|X), but it might be quite small (since it is conceivable to me that RH is false and yet still widely believed to be true). If X is an everyday claim like "it's a full moon tomorrow," or a spectacular claim like "Jesus rose from the dead," it seems like P(Y|X) is very close to 1. So sometimes 1/P(Y) is a good approximation to P(X|Y)/P(X), but sometimes it may be a big overestimate.
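To make the bound concrete, here is a toy calculation. All three probabilities are invented for illustration, not estimates of anything:

```python
# Toy numbers illustrating P(X|Y)/P(X) = P(Y|X)/P(Y) <= 1/P(Y).
# All values below are invented for illustration.
p_x = 1e-6         # prior for the preposterous claim X
p_y = 0.01         # prior that many people would come to believe X
p_y_given_x = 0.9  # chance many would believe X if X were true

p_x_given_y = p_x * p_y_given_x / p_y  # Bayes' theorem
update_factor = p_x_given_y / p_x      # how much Y moved us: 90
bound = 1 / p_y                        # the ceiling: 100

assert update_factor <= bound
```

Even a very generous P(Y|X) cannot push the update factor past 1/P(Y); the gap between 90 and 100 here is exactly the P(Y|X) < 1 slack.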
What about P(Y)? Is there a way to estimate it, or at least approach its estimation? Let's give ourselves a little more to work with, by quantifying "many people" in "many people believe X". Let Y(n) be the assertion "at least n people believe X." Note that this model doesn't specify what "believe" means -- in particular it does not specify how strongly n people believe X, nor how smart or expert those n people are, nor where in the world they are located... if there is a serious weakness in this model it might be found here.
Another application of Bayes theorem gives us
P(Y(n+1))/P(Y(n)) = P(Y(n+1)|Y(n))
(Since P(Y(n)|Y(n+1)) = 1; i.e., if at least n+1 people believe X, then of course at least n people do.) Squinting a little, this gives us a formula for the derivative of the logarithm of P(Y(n)). Yudkowsky has suggested naming the log of a probability an "absurdity"; let's write A(Y(n)) for the absurdity of Y(n).
d/dn A(Y(n)) = A(Y(n+1)|Y(n))
So up to an additive constant, A(Y(n)) is the integral from 1 to n of A(Y(m+1)|Y(m)) dm. Thus an ansatz for P(Y(n+1)|Y(n)) = exp(A(Y(n+1)|Y(n))) will allow us to say something about P(Y(n)), up to a multiplicative constant.
The shape of P(Y(n+1)|Y(n)) seems like it could have a lot to do with what kind of statement X is, but there is one thing that seems likely to be true no matter what X is: if N is the total population of the world and n/N is close to zero, then P(Y(n+1)|Y(n)) is also close to zero, and if n/N is close to one then P(Y(n+1)|Y(n)) is also close to one. I might work out an example ansatz like this in a future comment, if this one stands up to scrutiny.
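For concreteness, a minimal numerical sketch of one such ansatz. The choice P(Y(n+1)|Y(n)) = n/N is mine and purely illustrative (it is near 0 when n/N is, and near 1 when n/N is):

```python
import math

N = 1000  # toy "world population" (my arbitrary choice)

def log_p_y(n):
    """log P(Y(n)) up to the additive constant log P(Y(1)),
    chaining log P(Y(m+1)|Y(m)) = log(m/N) for m = 1..n-1."""
    return sum(math.log(m / N) for m in range(1, n))

# The absurdity of "at least n people believe X" grows rapidly
# while n/N is small:
assert log_p_y(10) < log_p_y(5) < log_p_y(2)
```

Under this ansatz, each additional believer costs a lot of probability while believers are rare, which matches the intuition that a preposterous claim reaching "many believers" is itself surprising.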
An amusing case of rationality failure: Stockwell Day, a longstanding albatross around Canada's neck, says that more prisons need to be built because of an 'increase in unreported crime.'
As my brother-in-law amusingly noted on FB, quite apart from whether the actual claim is true (no evidence is forthcoming), unless these unreported crimes are leading to unreported trials and unreported incarcerations, it's not clear why we would need more prisons.
I think one of the other reasons many people are uncomfortable with cryonics is that they imagine their souls being stuck-- they aren't getting the advantages of being alive or of heaven.
In all honesty, I suspect another reason people are uncomfortable with cryonics is that they don't like being cold.
I’m not yet good enough at writing posts to actually post something properly, but I hoped that if I wrote something here, people might be able to help me improve. So obviously people can comment however they normally would, but it would be great if people were willing to give me the sort of advice that would help me write a better post next time. I know that normal comments do this to some extent, but I’m also just looking for the basics: is this a good enough topic to write a post on but not well enough executed (so I should work on my writing)? Is it not a good enough topic? Why not? Is it not in depth enough? And so on.
Is your graph complete?
The red gnomes are known to be the best arguers in the world. If you asked them whether the only creature that lived in the Graph Mountains was a Dwongle, they would say, “No, because Dwongles never live in mountains.”
And this is true, Dwongles never live in mountains.
But if you want to know the truth, you don’t talk to the red gnomes, you talk to the green gnomes who are the second best arguers in the world.
And they would say. “No, because Dwongles never live in mountains.”
But then they would say, “Both we and the red gnomes are so good at arguing that we can convince people that false things are true. Even worse though, we’re so good that we can convince ourselves that false things are true. So we always ask if we can argue for the opposite side just as convincingly.”
And then, after thinking, they would say, “We were wrong, they must be Dwongles, for only Dwongles ever live in places where no other creatures live. So we have a paradox, and paradoxes can never be resolved by giving counterexamples to one or the other claim. Instead of countering, you must invalidate one of the arguments.”
Eventually, they would say, “Ah. My magical fairy mushroom has informed me that Graph Mountain is in fact a hill, ironically named, and Dwongles often live in hills. So yes, the creature is a Dwongle.”
The point of all of that is best discussed after introducing a method of diagramming the reasoning made by the green gnomes. The following series of diagrams should be reasonably self-explanatory. A is a proposition that we want to know the truth of (the creature in the Graph Mountains is a Dwongle) and not-A is its negation (the creature in the Graph Mountains is not a Dwongle). If a path is drawn between a proposition and the “Truth” box, then the proposition is true. Paths are not direct but go through a proof (in this case P1 stands in for “Dwongles never live in mountains” and P2 stands in for “Only Dwongles live in a place where no other creatures live”). The diagrams connect to the argument made above by the green gnomes. First, we have the argument that it mustn’t be a Dwongle, because of P1. The second diagram shows the green gnomes realising that they have an argument that it must be a Dwongle too, due to P2. This middle type of diagram could be called a “Paradox Diagram.”
Figure 1. The green gnomes process of argument.
In his book Good and Real, Gary Drescher notes that paradoxes can’t be resolved by making more counterarguments but rather by invalidating one of the arguments. (Counterarguments relate to the approach shown in figure 2, which, when considered graphically, is obviously not helpful: we still have both propositions being shown to be true.) Invalidation is what the green gnomes did when they realised that Graph Mountain was actually a hill, and that’s what the final diagram in figure 1 shows the result of (when you remove a vertex, like P1, you remove all the lines connected to it as well).
Figure 2. Attempting to resolve a paradox via counter arguments rather than invalidation.
The interesting thing in all of this is that the first and third diagrams in figure 1 look very similar. In fact, they’re the same, simply with different propositions proven. And this raises an important point: it can be very difficult to tell the difference between an incomplete paradox diagram and a completed proof diagram. The difference between the two is whether you’ve tried to find an argument for the opposite of the proposition proven and, if you found one, whether you’ve managed to invalidate that argument.
What this means is, if you’re not confident that your proof for a proposition is true, you can’t be sure that you’ve taken all of the appropriate steps to establish its truth until you’ve asked: Is my graph complete?
I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.
Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanations of it really say much. Here's a representative snippet from Wikipedia:
Then we have pages like this one, talking more about the Akashic Records (because apparently it's a quantum field thingy and also an infinite library or something). The very first sentence sums it up: "The Akashic Records refer to the frequency gird programs that create our reality." Okay, actually that didn't sum up crap; but it sounded cool, didn't it? That page is full of references to the works of various people, cited very nicely, and the spelling and grammar suggest someone with education. There are a lot of pages like this floating around. The thing they all have in common is that they don't seem to consider evidence to be important. It's not even on their radar.
Scholarly writings from New Age people are a pretty breathtaking example of dark side epistemology, if anybody wants a case study in exactly what not to do. It's pretty intense.
Cryonics Lottery.
Would it be easier to sign up for cryonics if there was a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."
It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers - once you've won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".
It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely.
Has anyone encountered such a list? Does anyone know of previous attempts to create such lists?
PZ Myers' comments on Kurzweil generated some controversy here recently on LW--see here. Apparently PZ doesn't agree with some of Kurzweil's assumptions about the human mind. But that's beside the point--what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?
I would like a better source than a blog comment for the claim that Kurzweil has been selling bogus nutritional supplements. The obvious alternative possibility is that someone else, with less of a reputation to worry about, attached Kurzweil's name to their product without his knowledge.
Ok, I've found some better sources. See the first three links.
I would have preferred a more specific link than that, to save me the time of doing a detailed investigation of Kurzweil's company myself. But I ended up doing one anyways, so here are the results.
That "Ray and Terry's Longevity Products" company's front page screams low-credibility. It displays three things: an ad for a book, which I can't judge as I don't have a copy, an ad for snack bars, and a news box. Neutral, silly, and, ah, something amenable to a quality test!
The current top headline in their Healthy Headlines box looked to me like an obvious falsehood ("Dirty Electricity May Cause Type 3 Diabetes"), and on a topic important to me, so I followed it up. It links to a blog I don't recognize, which dug it out of a two year old study, which I found on PubMed. And I personally verified that the study was wrong - by the most generous interpretation, assuming no placebo effect or publication bias (both of which were obviously present), the study contains exactly 4 bits of evidence (4 case studies in which the observed outcome had a 50% chance of happening assuming the null hypothesis, and a 100% chance of happening assuming the conclusion). A review article confirmed that it was flawed.
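For the curious, the "4 bits" figure is just the log-likelihood ratio implied by the numbers described above (bits here are log base 2):

```python
import math

# Four case studies; in each, the observed outcome had probability
# 0.5 under the null hypothesis and 1.0 under the study's conclusion
# (the generous reading: no placebo effect, no publication bias).
p_null, p_conclusion, n_cases = 0.5, 1.0, 4

likelihood_ratio = (p_conclusion / p_null) ** n_cases  # 2**4 = 16
bits_of_evidence = math.log2(likelihood_ratio)
assert bits_of_evidence == 4.0
```

A 16:1 likelihood ratio is nowhere near enough to rescue a claim with a very low prior, which is the point being made about the study.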
That said, he probably just figured the news box was unimportant and delegated the job to someone who wasn't smart enough to keep the lies out. But it means I can't take anything else on the site seriously without a very time-consuming investigation, which is bad enough.
The bit about Kurzweil taking 250 nutritional supplements per day jumps out, too, since it's an obviously wrong thing to do; the risks associated with taking a supplement (adverse reaction, contamination, mislabeling) scale linearly with the number taken, while the upside has diminishing returns. You take the most valuable thing first, then the second-most, by the time you get to the 250th thing it's a duplicate or worthless. Which leads me to believe that he just fudged the number, by counting things that are properly considered duplicates like split doses of the same thing.
Kurzweil should be concerned that his name is associated with junk science, and with the overall result, but I think it's a little far-fetched to think the man is actually selling nutritional supplements that he thinks are bogus.
The state of medicine and nutrition today is such that we know there is so much we don't know. The human body is supremely complex, to make an understatement. The evidence is pretty strong that most supplements, and even most multi-vitamins, don't do much or even do harm.
However, that is certainly not true in every case, and there are particular supplements where we have strong evidence for net positive effect (vitamin D and fish oil have very strong evidence for net benefit at this point - everyone should be on them).
But if you are someone like Kurzweil, and you want to make it to the Singularity, you probably will do the research and believe you have some inside knowledge on optimizing the human body. I find it more likely that he actually does take a boatload of supplements.
I'm sure he does take a lot of them himself, but the problem is that Kurzweil taking supplements will still make people think he is delusional (because most people are instantly suspicious of people who do so, generally for good reasons).
On a related note, Ben Best also sells supplements on his website, and many of them look pretty questionable.
So I'm curious, do you believe that typical supplements have net negative effect, vs just neutral?
It was my understanding that the weight of evidence points to most having neutral overall effect, which to me wouldn't justify instant suspicion. I mean you may be wasting money, but you probably aren't hurting yourself.
And if you really do the research, you probably are going to get some net positive gain, statistically speaking. Don't you think? I know of at least 2 cases (vitamin D and fish oil, where the evidence for net benefit is strong - but mainly due to deficiency in the modern diet).
I think it is a mixed bag: Some supplements are potentially dangerous, but others (like the ones you mention) can be very helpful. The majority, however, probably have little to no effect whatsoever. As a result, I don't think people should mess around with what they eat without it being subjected to rigorous clinical trials first; though there might be a positive net gain, one dose of something bad can kill you.
In any case, though, believing that something is helpful when it has not yet been tested is clearly irrational. (This is more what I'm concerned about with Best and Kurzweil.) Selling or promoting something that isn't tested is even worse; it borders on fraud and charlatanry.
Edit: No, let me amend that: it is charlatanry.
Followup to: Making Beliefs Pay Rent in Anticipated Experiences
In the comments section of Making Beliefs Pay Rent, Eliezer wrote:
If I am interpreting this correctly, Eliezer is saying that there is a nearly infinite space of unfalsifiable hypotheses, and so our priors for each individual hypothesis should be very close to zero. I agree with this statement, but I think it raises a philosophical problem: doesn't this same reasoning apply to any factual question? Given a set of data D, there must be a nearly infinite space of hypotheses that (a) explain D and (b) make predictions (fulfilling the criteria discussed in Making Beliefs Pay Rent). Though Occam's Razor can help us to weed out a large number of these possible hypotheses, a mind-bogglingly large number would still remain, forcing us to have a low prior for each individual hypothesis. (In philosophy of science, this is known as "underdetermination.") Or is there a flaw in my reasoning somewhere?
Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypotheses that can be taken reasonably seriously, and the remainder being unlikely - and progressively more unlikely as n increases, so that when n is sufficiently large, we can, practically, dismiss such hypotheses.
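A sketch of why a complexity-penalizing prior keeps the total under control. The specific prior 2^(-2n) per n-bit hypothesis is my arbitrary choice - any weight that shrinks faster than the 2^n count grows would do:

```python
# At most 2**n hypotheses can be stated in n bits. If each gets prior
# mass 2**(-2*n) (arbitrary, but decaying faster than the count grows),
# the whole length-n class contributes only 2**(-n), and the total
# prior mass over all lengths stays bounded.
def class_mass(n):
    return (2 ** n) * (2 ** (-2 * n))  # == 2 ** (-n)

total = sum(class_mass(n) for n in range(1, 60))
assert abs(total - 1.0) < 1e-9  # geometric series: sum of 2^-n -> 1
```

So even though the number of hypotheses explodes with length, the long ones collectively carry negligible mass, which is what lets us "practically dismiss" them.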
I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin_it seems to be saying that priors should not always depend on Kolmogorov complexity.
And, even if we do decide to favor less information content, how much emphasis should we place on it?
In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.
Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put together world view, and then there is the issue of degree of matching.
Which theory has more information?
I didn't say you ignored previous correspondence with reality, though.
That isn't Perplexed's point. Let's say that as of this moment all crows that have been observed are black, so both of his hypotheses fit the data. Why should "all crows are black" be assigned a higher prior than "All crows are black except <270 pages specifying the exceptions>"? Based on cousin_it's post, I don't see any reason to do that.
Here's a thought experiment that's been confusing me for a long time, and I have no idea whether it is even possible to resolve the issues it raises. It assumes that a reality which was entirely simulated on a computer is indistinguishable from the "real" one, at least until some external force alters it. So... the question is, assuming that such a program exists, what happens to the simulated universe when it is executed?
In accordance with the arguments that Pavirta gives below me, redundant computation is not the same as additional computation. Executing the same program twice (with the same inputs each time) is equivalent to executing it once, which is equivalent to executing it five times, ten times, or a million. You are just simulating the same universe over and over, not a different one each time.
But is running the simulation once equivalent to running it ZERO times?
The obvious answer seems to be "no", but bear with me here. There is nothing special about the quarks and leptons that make up a physical computer. If you could make a Turing machine out of light, or more exotic matter, you would still be able to execute the same program on it. And if you could make such a computer in any other universe (whatever that might mean), you would still be able to run the program on it. But in such considerations, the computer used is immaterial. A physical computer is not a perfect Turing machine - it has finite memory space and is vulnerable to physical defects which introduce errors into the program. What matters is the program itself, which exists regardless of the computer it is on.

A program is a Platonic ideal, a mathematical object which cannot exist in this universe. We can make a representation of that program on a computer, but the representation is not perfect, and it is not the program itself. In the same way, a perfect equilateral triangle cannot actually be constructed in this universe; even if you use materials whose length is measured down to the atom, its sides will not be perfectly straight and its angles will not be perfectly equal. More importantly, if you then alter the representation to make one of the angles bigger, it does not change the fact that equilateral triangles have 60° angles; it simply makes your representation less accurate.

In the same way, executing a program on a computer will not alter the program itself. If there are conscious beings simulated on your computer, they existed before you ran the program, and they will exist even if you unplug the computer and throw it into a hole - because what you have in your computer is not the conscious beings, but a representation of them. And they will still exist even if you never run the program, or even if it never occurs to anyone on Earth that such a program could be made.
The problem is, this same argument could be used to justify the existence of literally everything, everywhere. So we are left with several possible conclusions: (1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definition of "everything" and "real". (2) The initial premise is false, and only physical objects are real: simulations, thoughts and constructs are not. I think there is a philosophical school of thought that believes this to be true, though I have no idea what its name is. Regardless, there are still a lot of holes in this answer. (3) I have made a logical mistake somewhere, or I am operating from an incorrect definition of "real". It happens.
It is also worth pointing out that both (1) and (2) invalidate every ethical truth in the book, since in (1) there is always a universe in which I just caused the death of a trillion people, and in (2) there is no such thing as "ethics" - ideas aren't real, and that includes philosophical ideas.
Anyway, just bear this in mind when you think about a universe being simulated on a computer.
I don't think it works like that. Math is a conceptual construct, not something that has its own reality separate from either the thing it approximates or the mind that approximates with it.
I'm reminded of the person who thought that using the equations for relativistic rather than classical mechanics to model cannonballs would give the wrong answer.
Only things that happen are real. There's no Math Heaven inhabited by angelic equations in a separate magisterium from the world of the merely real.
Indeed. I have a post making similar arguments, though I still haven't been able to resolve the ethical and anthropic problems it raises in any satisfactory way. At this point I've backtracked from the confidence I held when I wrote that post; what I'm still willing to say is that we're probably on the right track thinking of "Why does anything exist?" as a wrong question and thinking of reality as indexical (i.e. the true referent of the category "real" is the set of things instantiated by this universe; it is a category error to talk about other universes being real or not real), but the Mathematical Universe Hypothesis still leaves much to be confused about.
My own view is that (ignoring simulations for the time being) MWI ideas have no conflict with our usual ethical intuitions and reasonings. Yes, it is the case that when I choose between evil action A and good action B, there will be two branches of the universe - one in which I choose A and one in which I choose B. This will be the case regardless of which choice I make. But this does not make my choice morally insignificant, because I split too, along with the rest of the universe. The version of me that chose evil act A will have to live thereafter with the consequences of that choice. And the version of me that chose B must live with quite different consequences.
What, more than that, could a believer in the moral significance of actions want of his universe?
The situation with respect to simulations is a bit trickier. Suppose I am deciding whether to (A) pull the plug on a simulation which contains millions of sentient (simulated) beings, or (B) allow the simulation to continue. So, I choose, and the universe branches. If I chose A, I must live with the consequences. I don't have that simulation to kick around any more. But, if I were to worry about all the simulated lives that I have so ruthlessly terminated, I can easily reassure myself that I have only terminated a redundant copy of those lives. The (now) master copy of the simulation plays on, over in that parallel universe where I chose B.
Is it wrong to create a simulation and then torture the inhabitants? Well, that is an ethical question, whereas this is a meta-ethical analysis. But the meta-ethical answer to that ethical question is that if you torture simulated beings, then you must live with the consequences of that.
That's not how MWI works, unless human brains have a quantum randomness source that they use to make decisions (which does not appear to be the case).
I'm not sure it matters to the analysis. Whether we have a Tegmark multiverse, or Everett MWI with some decisions depending on quantum randomness and others classically determined, or whether the multiple worlds are purely subjective fictions created to have a model of Bayesianism; regardless of what you think is a possible reduction of "possibly"; it is still the case that you have to live in the reality which you helped to create by way of your past actions.
Yes, MWI ideas have no conflict with usual ethical intuitions. And they also help you make better sense of those intuitions. Counterfactuals really do exist, for example; they're not just some hypothetical that is in point of fact physically impossible.
I should add that it is impossible to erase your sin by deciding to terminate the simulation, so as to "euthanize" the victims of your torture. Because there is always a branch where you don't so decide, and the victims of your torture live on.
That's pretty much Tegmark's Multiverse, which seems pretty popular around here (I think it makes a lot of sense).
Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself' even when that would be less efficient than donating to an existing charity.
I am unable to find the post in question - I think it is distinct from 'the unit of caring'. It would be much appreciated if someone who knows the right keywords could throw me a link!
Your Price for Joining?
Alright, I've lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he's currently living with the last humans around... something like that. Can anybody help me out? Online story, 20 something chapters I think... this is driving me nuts.
After Life
Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.
Elderly atheist farmer dead; his friend the popular preacher's the suspect.
--JRM
The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/
Nice map–territory perspective.
John Baez's This Week's Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on the future, and has upcoming interviews with:
Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality. A Google search returns no matches for Fischbacher + site:lesswrong.com and no hits for Palmer +.
That link to Fischbacher that Baez gives has a presentation on cognitive distortions and public policy which I found quite good.
Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates; at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.
Now this is not necessarily a binary distinction; if shrimp have intrinsic moral value, it does not follow that they must have an equal value to humans or other 'higher' animals. As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that entities' value merely approaches zero in some limit: e.g. a C. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.
Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to tell, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many).
The instrumental value of most humans is enormously higher than the intrinsic value of the same persons - given that they do sufficiently good things.
Less absurd than that some organism is infinitely more valuable than its sibling that differs in lacking a single mutation (in the case of the first organism of a particular species to have evolved "high" enough to have minimal moral value)?
I've written a post for consolidating book recommendations, and the links don't have hidden urls. These are links which were cut and pasted from a comment-- the formatting worked there.
Posting (including to my drafts) mysteriously doubles the spaces between the words in one of my link texts, but not the others. I tried taking that link out in case it was making the whole thing weird, but it didn't help.
I've tried using the pop-up menu for links that's available for writing posts, but that didn't change the results.
What might be wrong with the formatting?
I don't know what's wrong, but a peek at the raw HTML editor (there's a button for it in the toolbar) might give a hint.
Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another entity.
Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequently not expanding without end - rather, a niche is filled), and then the population has some redundancy and recoverability, acting as a ratchet.
For example, driving on the right (left) in one region catalyzes driving on the right (left) in an adjacent region.
Designing circular or self-applicable entities is kind of tricky, but it's not as tricky as it might be. Often there's an attraction basin around a hypothesized circular entity, where X catalyzes Y which is very similar to X, and Y catalyzes Z which is very similar to Y; so focusing your search sufficiently, and then iterating or iterating-and-tweaking, can often get you through the last, trickiest steps.
Douglas Hofstadter catalyzed the creation (by Lee Sallows) of a "Pangram Machine" that exploits this attraction basin to create a self-describing sentence that starts "This Pangram contains four a's, [...]" - see http://en.wikipedia.org/wiki/Pangram
Has there been any work on measuring, studying attraction basins around autocatalytic entities?
With regard to the recent proof of P!=NP: http://predictionbook.com/predictions/1588
With no time limit, how can you ever win that one?
No time limit?
"Under Pressure: The Search for a Stress Vaccine" http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1
It was interesting that most of the commenters were opposed to the idea of a stress vaccine, though their reasons didn't seem very good.
I'm wondering whether the vaccine would mean that people would be more inclined to accept low status (it's less painful) or less inclined to accept low status (more energy, less pessimism.)
I also wonder how much of the stress from low status is from objectively worse conditions (less benign stimulus, worse schedules, more noise, etc.) as distinct from less control, and whether there's a physical basis for the inclination to crank up stress on subordinates.
Wired has unusually crappy commentators; YouTube quality. I wouldn't put much stock in their reactions.
/blatant speculation
Stress response evolved for fight-or-flight - baboons and chimps fight nasty. Not for thinking or health. Reduce that, and like mindfulness meditation, one can think better and solve one's problems better.
IIRC, the description made it sound like the study controlled for conditions - comparing clerical work with controlling bosses to clerical work sans controlling bosses.
Would people be interested in a place on LW for collecting book recommendations?
I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found Great Books of Failure, an article which hadn't crossed my path before.
There's a recent thread about books for a gifted young tween which might or might not get found by someone looking for good books..... and so on.
Would it make more sense to have a top level article for book recommendations or put it in the wiki? Or both?
Considering most of my favorite books are the result of mentions in comment threads here, I'd say a book recommendation thread is in order.
Tangential, but I remember The Logic of Failure as mostly covering mental phenomena I was already familiar with, plus generalizations from computer experiments that I didn't find particularly compelling. I'll have to give it another look.
I'd say new top-level thread. The wiki can get a curated version of that.
LW database download?
I was wondering if it would be a good idea to offer a download of LW or at least the sequences and Wiki. In the manner that Wikipedia is providing it.
The idea behind it is to have a redundant backup in case of some catastrophe, for example if the same happens to EY that happened to John C. Wright. It could also provide the option to read LW offline.
You can use the wget program like this: 'wget -m lesswrong.com'. A database download would be easier on the servers though.
Tricycle has the data. Also if an event of JCW magnitude happened to me I'm pretty sure I could beat it. I know at least one rationalist with intense religious experiences who successfully managed to ask questions like "So how come the divine spirit can't tell me the twentieth digit of pi?" and discount them.
What if you sustained hypoxic brain injury, as JCW may well have done during his cardiac event? (This might also explain why he thinks it's cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it's a pet suspicion of mine.)
Point of curiosity: Does anyone else still notice this sort of thing? I don't think my generation does anymore.
I've only read his Golden Age trilogy, so if it's there, then no, to this 50-something it didn't stand out from everything else that happened. If it's in something else, I doubt it would. I mean, I've read Richard Morgan's ultra-violent stuff, including the gay mediæval-style fantasy one, and, well, no.
[ETA: from Google the book in question appears to be Orphans of Chaos.]
I could be an outlier though.
It would seem he is just writing for Mature Audiences. In this case maturity means not just 'the age at which we let people read pornographic text' but the kind of maturity that allows people to look beyond their own cultural prejudices.
16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional, time-distant culture to have our particular moral or legal prescriptions. It wouldn't be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age taboo!
Mind you I do see how a hypoxic brain injury could alter someone's moral inhibitions and sensibilities in the kind of way you suggest. I just don't include loaded language in the speculation.
Interestingly, if the book in question is the one I think it is, it takes place in Britain, where the age of consent is, in fact, sixteen.
Come to think of it, 16 is the age of consent here (Australia - most states) too. I should have used 'your' instead of 'our' in the paragraph you quote! It seems I was just running with the assumption.
Although "18 years old" does seem to be a hard-and-fast rule for when you can legally appear in porn everywhere, as far as I know...
Eh, you see people trying to "push boundaries" in "respectable" literature all the time anyway.
Certainly there are other explanations. If you can show me that JCW openly wrote highly sexualized portrayals of people below the age of consent before his religious experience/heart attack, I will be happy to retract.
Actually, you have to be sure that you wouldn't convert if you had John Wright's experiences, otherwise Aumann's agreement theorem should cause you to convert already, simply because John Wright had the experiences himself-- assuming you wouldn't say he's lying. I actually know someone who converted to religion on account of a supposed miracle, who said afterward that since they in fact knew before converting that other people had seen such things happen, they should have converted in the first place.
Although I have to admit I don't see why the divine spirit would want to tell you the 20th digit of pi anyway, so hopefully there would be a better argument than that.
Here's a more detailed version (starting at "I know a transhumanist who has strong religious visions").
That's incredibly sad.
Every so often, people derisively say to me "Oh, and you assume you'd never convert to religion then?" I always reply "I absolutely do not assume that, it might happen to me; no-one is immune to mental illness."
I support this idea.
But what about copyright issues? What if posts and comments are owned by their writer?
I would argue that one cannot own the information stored on the computers of other, unrelated people.
I support this idea also. I actually intend to make a service for backing up the content of forums/blogs to an alternate server, but who knows when it will happen.
However, if EY converted to religion, he would (in that condition) assert that he had had good reasons for doing it, i.e. that it was rational. So he would have no reason to take down this website anyway.
I think I may have artificially induced an Ugh Field in myself.
A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.
Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."
Now that the week's over, I don't even want to think about X any more. It just feels too weird.
And maybe that's a good thing.
I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so... I just didn't. Now, when the thought occurs to me to do so, it auto-squelches.
I predict with 95% confidence that within six months you will have recreated your procrastination problem with some other means.
Your lack of confidence in me has raised my ire. I will prove you wrong!
Did you start procrastinating again?
Yep. Eventually I sought medical treatment.
To be settled by February 8, 2011!
What simple rationality techniques give the most bang for the buck? I'm talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.
Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." Now, I'm sure everybody here can talk about this very precisely, in terms of Bayesian updating and odds ratios, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
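Sagan's slogan has a compact form in odds-ratio Bayesian updating. Here is a minimal sketch (all the numbers are invented for illustration, not taken from anywhere):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The lower your prior odds for a claim, the stronger the evidence
# (likelihood ratio) must be before the claim becomes credible.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update odds in favor of a hypothesis given one piece of evidence."""
    return prior_odds * likelihood_ratio

# Ordinary claim ("I had a sandwich for lunch"): prior odds around 1:1.
# Weak evidence -- someone's say-so, likelihood ratio 10 -- is plenty.
print(posterior_odds(1.0, 10.0))   # now quite credible

# Extraordinary claim: prior odds of one in a million.
# The same weak evidence barely moves it.
print(posterior_odds(1e-6, 10.0))  # still very unlikely

# Only extraordinary evidence (likelihood ratio 10**7) makes it credible.
print(posterior_odds(1e-6, 1e7))   # credible at last
```

The asymmetry is entirely in the prior: the update rule is identical in all three cases.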
What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. I'll try a few candidates, to get the discussion started.
Candidate 1: Carl Sagan's concise explanation of how evidence works, as mentioned above.
Candidate 2: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.
Candidate 3: Admitting that you were wrong is a way of winning an argument. There's a saying that "It takes a big man to admit he's wrong," and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like victory. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. I know that it's possible to retrain yourself to feel this way, because I've done it. This wasn't even too difficult; it was more a matter of just realizing that feeling good about conceding an argument was even an option.
Anti-candidate: "Just because something feels good doesn't make it true." I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.
Any ideas? I know that the main goal of this community is to strive for rationality far beyond such low-hanging fruit, but if we can come up with simple and easy techniques that actually help people be more rational, there's a lot of value in that. You could use it as rationalist propaganda, or something.
EDIT: I've expanded this into a top-level post.
I'm going to be running a series of Rationality & AI seminars with Alex Flint in the Autumn, where we'll introduce aspiring rationalists to new concepts in both fields; standard cognitive biases, a bit of Bayesianism, some of the basic problems with both AI and Friendliness. As such, this could be a very helpful thread.
We were thinking of introducing Overconfidence Bias; ask people to give 90% confidence intervals, and then reveal (surprise surprise!) that they're wrong half the time.
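Scoring that exercise is simple. A rough sketch, with made-up stand-in questions and intervals (a real seminar would use trivia with known numeric answers):

```python
# Calibration check: collect 90% confidence intervals, then see how
# often the true value actually falls inside. A well-calibrated person
# should be right about 9 times in 10; overconfident people often
# manage closer to 50%.

def hit_rate(intervals, true_values):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= truth <= high
               for (low, high), truth in zip(intervals, true_values))
    return hits / len(true_values)

# Stand-in data: three 90% intervals and the corresponding true answers.
guesses = [(1900, 1950), (100, 200), (5, 15)]
truths = [1912, 350, 11]
print(hit_rate(guesses, truths))  # 2 of 3 intervals contain the truth
```

With enough questions, the gap between the stated 90% and the observed hit rate is the "surprise" the exercise is designed to produce.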
Since it seemed like this could be helpful, I expanded this into a top-level post.
That 90% confidence interval thing sounds like one hell of a dirty trick. A good one, though.
I think some of the statistical fallacies that most people fall for are quite high up the list.
One such is the "What a coincidence!" fallacy. People notice that some unlikely event has occurred, and wonder how many millions to one against this event must have been - and yet it actually happened! Surely this means that my life is influenced by some supernatural influence!
The typical mistake is to simply calculate the likelihood of the occurrence of the particular event that occurred. Nothing wrong with that, but one should also compare that number against the whole basket of other possible unlikely events that you would have noticed if they'd happened (of which there are surely millions), and all the possible occasions where all these unlikely events could have occurred. When you do that, you discover that the likelihood of some unlikely thing happening is quite high - which is in accordance with our experience that unlikely events do actually happen.
Another way of looking at it is that non-notable unlikely events happen all the time. Look, that particular car just passed me at exactly 2pm! Most are not noticeable. But sometimes we notice that a particular unlikely event just occurred, and of course it causes us to sit up and take notice. The question is how many other unlikely events you would also have noticed.
The key rational skill here is noticing the actual size of the set of unlikely things that might have happened, and would have caught our attention if they had.
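The arithmetic behind this is a one-liner. A back-of-the-envelope sketch (the event counts and probabilities are illustrative, not empirical):

```python
# Each individual coincidence is wildly unlikely, but with enough
# candidate events and enough occasions, *some* coincidence is
# almost guaranteed to happen.

def p_some_coincidence(p_each, n_events, n_occasions):
    """P(at least one event occurs), for n_events independent events,
    each with probability p_each, checked over n_occasions."""
    p_none = (1.0 - p_each) ** (n_events * n_occasions)
    return 1.0 - p_none

# One specific million-to-one event, on one specific occasion: negligible.
print(p_some_coincidence(1e-6, 1, 1))        # ~1e-06

# A million events you'd have noticed if they'd happened, checked
# daily for a year: a coincidence is a near-certainty.
print(p_some_coincidence(1e-6, 10**6, 365))  # ~1.0
```

The fallacy is computing the first number while living in the second situation.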
The concept of inferential distance is good. You wouldn't want to introduce it in the context of explaining something complicated - you'd just sound self-serving - but it'd be a good thing to crack out when people complain about how they just can't understand how anyone could believe $CLAIM.
Edit: It's also a useful concept when you are thinking about teaching.
#3 is a favorite of mine, but I like #1 too.
How about "Your intuitions are not magic"? Granting intuitions the force of authority seems to be a common failure mode of philosophy.
That's a good lesson to internalize, but how do you get someone to internalize it? How do you explain it (in five minutes or less) in such a way that someone can actually use it?
I'm not saying that there's no easy way to explain it; I just don't know what that way would be. When I argue with someone who acts like their intuitions are magic, I usually go back to basic epistemology: define concisely what it means to be right about whatever we're discussing, and show that their intuitions here aren't magic. If there's a simple way to explain in general that intuition isn't magic, I'd really love to hear it. Any ideas?
Given that we haven't constructed a decent AI, and don't know how those intuitions actually work, we only really believe they're not magic on the grounds that we don't believe in magic generally, and don't see any reason why intuitions should be an exception to the rule that all things can be explained.
Perhaps an easier lesson is that intuitions can sometimes be wrong, and it's useful to know when that happens so we can correct for it. For example, most people are intuitively much more afraid of dying in dramatic and unusual ways (like air crashes or psychotic killers) than in more mundane ways like driving the car or eating unhealthy foods. Once it's established that intuitions are sometimes wrong, the fact that we don't exactly know how they work isn't so dangerous to one's thinking.
Well, I thought Kaj_Sotala's explanation was good, but the five-minute constraint makes things very difficult. I tend to be so long-winded that I'm not sure I could get across any insight in five minutes, honestly, but you're right that "Your intuitions are not magic" is likely to be harder than many.
Scenario: A life insurance salesman, who happens to be a trusted friend of a relatively-new-but-so-far-trustworthy friend of yours, is trying to sell you a life insurance policy. He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died. He seems to want you to think that buying a life insurance policy from him will somehow make you less likely to die.
How do you respond?
edit: to make this question more interesting: you also really don't want to offend any of the people involved.
Tell him you found his pitch very interesting and persuasive, and that you'd like to buy life insurance for a 20-year period. Then, ponder for a little while: "Actually, it can't be having the contract that keeps them alive, can it? That's just a piece of paper. It must be that the sort of person who buys it is good at staying alive! And it looks like I'm one of them; this is excellent!"
Then, you point out that as you're not going to die, you don't need life insurance, and say goodbye.
If you wanted to try to enlighten him, you might start by explicitly asking if he believed there was a causal link. But as the situation isn't really set up for honest truth-hunting, I wouldn't bother.
If the salesman is Omega in disguise, is this two-boxing? :-)
Well, kind of. Unlike in Newcomb's, we have no evidence that it's the decision that causes the long life, as opposed to some other factor correlated with both (which seems much more likely).
Wow. He admitted that to you? That seems to be strong evidence that most people refuse to buy life insurance from him. In a whole 20 years he hasn't sold enough insurance that even one client has died from unavoidable misfortune!
"No."
Life insurance salesmen are used to hearing that. If they act offended, it's a sales act. If you're reluctant to say it, you're easily pressured and they're taking advantage of you. You say "No". If they press you, you say, "Please don't press me further." That's all.
Buying life insurance can't extend a human's life.
Thank you, Cliptain Obvious! The problem is to say how his claim is implausible or doesn't follow from his evidence, given that we already have that intuition.
Maybe the salesman mostly sells temporary life insurance, and just means that no clients had died while covered?
With a degree of discombobulation, I imagine. I can't see any causal mechanism by which buying insurance would cause you to live longer, so unless the salesman knows something I wouldn't expect him to, he would seem to have acquired an unreliable belief. Given this, I would postpone buying any insurance from him in case this unreliable belief could have unfortunate further consequences* and I would reduce my expectation that the salesman might prove to be an exceptional rationalist.
* For example: given his superstition, he may have allotted inadequate cash reserves to cover future life insurance payments.
Since his sales rate probably increased with time, the average time since selling a policy is ~8 years. So the typical client of his simply hasn't died within 8 years. Making a rough estimate of the age of the clients he sells to, probably 30-40, it just means that the typical client has lived to 48 or less, which is normal, not special.
Furthermore, people who buy life insurance self-select for being more prudent in general.
So, even ignoring the causal separations you could find, what he's told you is not very special. Though it separates him from other salesmen, the highest likelihood ratio you should put on this piece of evidence would be something like 1.05 (i.e. ~19 out of 20 salesmen could say the same thing), or not very informative, so you are only justified in making a very slight move toward his hypothesis, even under the most generous assumptions.
You could get a better estimate of his atypicality by asking more about his clients, at which point you would have identified factors that can screen off the factor of him selling a policy.
(Though in my experience, life insurance salesmen aren't very bright, and a few sentences into that explanation, you'll get the, "Oh, it's one of these people" look ...)
How'd I do?
Edit: Okay, I think I have to turn in my Bayes card for this one: I just came up with a reason why the hypothesis puts a high probability on the evidence, when in reality the evidence should have a low probability of existing. So it's more likely he doesn't have his facts right.
Maybe this is a good case to check the "But but somebody would have noticed" heuristic. If one of his clients died, would he even find out? Would the insurance company tell him? Does he regularly check up on his clients?
On the other hand, there's also selection for people who aren't expecting to live as long as the average, and this pool includes prudent people.
Anyone have information on owning life insurance and longevity?
And on yet another hand there is selection for people who are expected to live longer than the average (selection from the salemen directly or mediated by price.)
I disagree with your analysis, but the details of why I disagree would be spoilers.
more details:
no, he's not deliberately selecting low-risk clients. He's trying to make as many sales as possible.
and he's had lots of clients. I don't know the actual numbers, but he has won awards for how many policies he has sold.
and he seems to honestly believe that there's something special about him that makes his clients not die. he's "one of those people".
and here's the first actuarial life table I found through a quick google search: http://www.ssa.gov/OACT/STATS/table4c6.html
I'm going to go ahead and post the spoiler, rot13'd
Zl thrff: Ur'f ylvat. Naq ur'f cebonoyl ylvat gb uvzfrys nf jryy, va beqre sbe gur yvr gb or zber pbaivapvat. Gung vf, qryvorengryl sbetrggvat nobhg gur pyvragf jub unir qvrq.
Vs ur unf unq a pyvragf, naq vs gurve nirentr ntr vf 30... Rnpu lrne, gur cebonovyvgl bs rnpu bs gurz fheivivat gur arkg lrne vf, jryy, yrg'f ebhaq hc gb 99%. Gung zrnaf gung gur cebonovyvgl bs nyy bs gurz fheivivat vf 0.99^a. Rira vs ur unf bayl unq 100 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^100=0.36 Vs ur unq 200 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^200=0.13. Naq gung'f whfg sbe bar lrne. Gur sbezhyn tbrf rkcbaragvny ntnva vs lbh pbafvqre nyy 20 lrnef. Gur cebonovyvgl bs nyy 100 pyvragf fheivivat 20 lrnef vf 0.99^100^20=1.86R-9
Naq zl npghny erfcbafr vf... qba'g ohl gur yvsr vafhenapr. Ohg qba'g gryy nalbar gung lbh guvax ur'f ylvat. (hayrff lbh pbhag guvf cbfg.) Nyfb, gur sevraq ab ybatre pbhagf nf "gehfgrq", be ng yrnfg abg gehfgrq gb or engvbany. Bu, naq srry ernyyl thvygl sbe abg svaqvat n orggre fbyhgvba, naq cbfg gb YJ gb frr vs nalbar guvaxf bs n orggre vqrn. Ohg qba'g cbfg rabhtu vasbezngvba sbe nalbar gb npghnyyl guvax bs n orggre fbyhgvba. Naq vs fbzrbar qbrf guvax bs n orggre vqrn naljnl, vtaber vg vs vg'f gbb fpnel.
But I can only make inferences on what you've told me. If there's a factor that throws off the general inferences you can make from a salesman's clientele, you can't fault me for not using it. It's like you're trying to say:
"This dude was born in the US. He's 50 years old. Can he speak English?" -> Yeah, probably. -> "Haha! No, he can't! I didn't tell you he was abducted to Cambodia as an infant and grew up there!"
Anyway, the next step is to estimate what fraction of salesman with the same clientele composition have not had their clients die and see how atypical he is. Plus, his sales record would have to start from early in his career, or else his clients fall mostly within recent sales, a time span in which people normally don't die anyway.
Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added):
(He then quotes an economist that says inflation would also prop up home values and prevent foreclosures.)
Did I get that right? Because inflation has traditionally been a sign of (caused by) rising output, you should directly cause inflation, in order to cause higher output. (Note: in order to complete the case for inflation, you arguably have to do the same thing again, but replacing inflation with output, and output with reduced unemployment.)
As usual, I'm not trying to start a political debate about whether inflation is good or bad, or what should be done to increase/decrease inflation. I'm interested in this particular way of arguing for pro-inflation policies, which seems to even recognize which way the causality flows, but still argue as if it runs the opposite direction.
Am I misunderstanding it?
LW Goodhart article
Last night I introduced a couple of friends to Newcomb's Problem/Counterfactual Mugging, and we discussed it at some length. At some point, we somehow stumbled across the question "how do you picture Omega?"
Friend A pictures Omega as a large (~8 feet) humanoid with a deep voice and a wide stone block for a head.
When Friend B hears Omega, he imagines Darmani from Majora's Mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)
And for my part, I've always pictured him a humanoid with paper-white skin in a red jumpsuit with a cape (the cape, I think, comes from hearing him described as "flying off" after he's confounded you).
So it seemed worth asking LW just for the amusement: how do you picture Omega?
At the risk of spoiling a very good webcomic; Omega looks like this.
DAMN YOU WILLIS.
I've always pictured Omega like this: suddenly I'm pulled from our world and appear in a sterile white room that contains two boxes. At the same moment I somehow know the problem formulation. I open one box, take the million, and return to the world.
This, down to the white room and being pulled. Omega doesn't have form or personality. He's beyond physics.
And when you get counterfactually mugged, you're in a sterile white room with a vending machine bill acceptor planted in the wall?
No, just an empty room. If I take a bill out of my pocket and hold it in front of me, it disappears and I go back. If I say "no", I go back.
I've always thought of Omega as looking something like a hydralisk--biological and alien, almost a scaled-down Lovecraftian horror.
(Necro-thread)
I can't explain why, but I've always imagined Omega to be a big hovering red sphere with a cartoonish face, and black beholder-like eyestalks coming off him from all sides.
He may have been influenced by the Flying Spaghetti Monster.
AI development in the real world?
I can't evaluate whether what he's saying is plausible enough for science fiction-- it's certainly that-- or likely to be true.
Wired - We Are All Talk Radio Hosts
Related - verbal overshadowing, where describing something verbally blocks retrieving perceptual memories of it. Critically, verbal overshadowing doesn't always occur - sometimes verbal descriptions improve reasoning.
Doesn't refute Lehrer's main point exactly, but does complicate it somewhat.
Interesting SF by Robert Charles Wilson!
I normally stay away from posting news to lesswrong.com - although I think an Open Thread for relevant news items would be a good idea - but this one sounds especially good and might be of interest for people visiting this site...
Many-Worlds in Fiction: "Divided by Infinity"
Thank you.
The idea reminded me of Moravec's thoughts on death:
I already wrote this fic ("The Grand Finale of the Ultimate Meta Mega Crossover").