List of literally false statements in the Bible

-13 Academian 20 May 2011 08:10AM

Jehovah's Witnesses aim to interpret the Bible literally, which is in some sense admirable because that is the only way it can do much to constrain one's anticipations about reality.  By contrast, if one aims to interpret a religious text only "metaphorically", then there are so many possible meanings that it does essentially nothing to constrain one's anticipations.

For example, when one accepts the best scientific knowledge about the origin of Earth, one believes that it was not in fact created in 6 days, and that the literal meaning of the English Bible is false in this case.  Christians who accept the true age of Earth are not usually bothered by this, and resort to a "metaphorical" interpretation wherein "days" are metaphors for longer periods.

But if you only believe that each statement in the Bible has some metaphorical interpretation which is true, it doesn't tell you much about the world at all.  The Bible asserts that God exists... but since we're only taking things metaphorically now, maybe God doesn't actually literally exist.  Maybe He's pretend.  Maybe there in fact is no God, but there is a rainforest, and God is a metaphor for the rainforest.  Or for the sun.  Who knows.  Since there is no way to tell which metaphor is the right one, believing that the Bible is "metaphorically true" basically tells you nothing.

Jehovah's Witnesses seem to understand this, so they're not going there.  They're sticking to the literal Word of the Lord.  Which makes me curious:

What verses of the Bible can we cite that are false in their literal interpretation, according to accepted scientific or well-founded historical knowledge?

Thanks to anyone who contributes!

Probability puzzles

-9 johnclark 22 April 2011 09:11PM

There are 2 probability puzzles that I like:

1) Suppose I tell you that I have 2 children and one of them is a boy, what is the probability that I have 2 boys?

The correct answer is not 1/2 but 1/3. How can that be? Well, there are 4 possible combinations: BB, GG, BG and GB. But at least one is a boy, so you can get rid of GG. All that's left is BB, BG and GB; and in only one of those 3 possibilities do I have two boys.
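The reasoning above is easy to verify by brute-force enumeration; here is a quick sketch in Python:

```python
from itertools import product

# All equally likely two-child families: (first child, second child).
families = list(product("BG", repeat=2))  # BB, BG, GB, GG

# Condition on "at least one is a boy": this removes only GG.
at_least_one_boy = [f for f in families if "B" in f]

# Of the remaining families, count those with two boys.
both_boys = [f for f in at_least_one_boy if f == ("B", "B")]

print(len(both_boys), "/", len(at_least_one_boy))  # → 1 / 3
```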

2) Now I tell you that I have 2 children and one of them is a boy born on a Tuesday. What is the probability that I have 2 boys?

You may think that Tuesday is not useful information in this matter so the answer would be the same as the previous example, but you would be wrong. The correct answer is 13/27. How can that be?

Well there are 14 possibilities for EACH kid:
B-Mo, B-Tu, B-We, B-Th, B-Fr, B-Sa, B-Su
G-Mo, G-Tu, G-We, G-Th, G-Fr, G-Sa, G-Su

But I told you that one of my kids (the first or the second) was a boy born on a Tuesday, so that narrows down the field of possibilities to:

First child: B-Tu, second child: B-Mo, B-Tu, B-We, B-Th, B-Fr, B-Sa, B-Su, G-Mo, G-Tu, G-We, G-Th, G-Fr, G-Sa, G-Su.

Second child: B-Tu, first child: B-Mo, B-We, B-Th, B-Fr, B-Sa, B-Su, G-Mo, G-Tu, G-We, G-Th, G-Fr, G-Sa, G-Su.

No need to put B-Tu in the second row because it's already accounted for in the first row.
So now just count them out: 14 + 13 = 27 possibilities. How many result in 2 boys? Count them out again: 7 + 6 = 13. So 13 out of 27 possibilities give you 2 boys.
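The same brute-force check works here: enumerating all 14 × 14 = 196 equally likely (sex, weekday) combinations reproduces the 13/27 answer.

```python
from itertools import product

DAYS = range(7)  # 0 = Monday, 1 = Tuesday, ..., 6 = Sunday
children = [(sex, day) for sex in "BG" for day in DAYS]  # 14 possibilities per child

families = list(product(children, repeat=2))  # 196 equally likely families

# Condition on "at least one child is a boy born on a Tuesday".
tuesday_boy = [f for f in families if ("B", 1) in f]

# Of those, count the families with two boys.
both_boys = [f for f in tuesday_boy if f[0][0] == "B" and f[1][0] == "B"]

print(len(both_boys), "/", len(tuesday_boy))  # → 13 / 27
```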

  John K Clark

We are not living in a simulation

-9 dfranke 12 April 2011 01:55AM

The aim of this post is to challenge Nick Bostrom's simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) -- just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.

Since Bostrom never precisely defines what a "simulator" is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer with deriving information about the states and behavior of a hypothetical physical system. A simulator is "perfect" if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator's [post]human operator. We can now formulate the substrate independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.

Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate independence hypothesis as I have defined it is much weaker than any version which Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human's brain, such as what synapses are firing at what time, as well as any other structural question right down to the Planck level.

Much of the ground I am about to cover has been tread in the past by John Searle. I will explain later in this post where it is that I differ with him.

Let's consider a "hello universe" example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is hypothetically straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don't ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.
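As an illustration, here is a minimal sketch of such a simulator for the special case of a circular mutual orbit (the closed form is simplest there; an elliptical orbit would require Kepler's equation but is no harder in principle). It can answer any query of the form "where is each sphere at time t?" exactly, up to floating-point precision:

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, SI units

def circular_two_body_state(m1, m2, separation, t):
    """Closed-form positions of two point masses in a circular mutual orbit.

    Returns the (x, y) position of each body at time t, with the center
    of mass at the origin and both bodies on the x-axis at t = 0.
    """
    # Angular velocity from Kepler's third law for a circular orbit.
    omega = math.sqrt(G * (m1 + m2) / separation ** 3)
    # Each body circles the barycenter at a radius fixed by the mass ratio.
    r1 = separation * m2 / (m1 + m2)
    r2 = separation * m1 / (m1 + m2)
    theta = omega * t
    return ((r1 * math.cos(theta), r1 * math.sin(theta)),
            (-r2 * math.cos(theta), -r2 * math.sin(theta)))
```

Querying it with Jupiter-sized masses is as easy as `circular_two_body_state(1.9e27, 1.9e27, 1e9, 0.0)` — and, of course, the machine evaluating this function produces no gravity of its own.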

If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet, the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely computational rules. That is, it would have to actually have two enormous spheres inside of it. Such a machine could still be a "simulator" in the sense that I've defined the term — but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.

This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

For an example of my claim, let us suppose like Bostrom does that a simulation which correctly models brain activity down to the level of individual synaptic discharges is sufficient in order model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence that it predicts that synapses would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed what plugs ought to fire without actually firing them.

Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle-for-particle, a brain — since due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.

I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill, because otherwise the tiniest fluctuation in our surrounding environment, such as a .01 degree change in room temperature, would have a profound impact on our mental state.

Now, from here on is where I depart from Searle if I have not already. Consider the following questions:

  1. If a tree falls in the forest and nobody hears it, does it make an acoustic vibration?
  2. If a tree falls in the forest and nobody hears it, does it make an auditory sensation?
  3. If a tree falls in the forest and nobody hears it, does it make a sound?
  4. Can the Chinese Room pass a Turing test administered in Chinese?
  5. Does the Chinese Room experience the same qualia that a Chinese-speaking human would experience when replying to a letter written in Chinese?
  6. Does the Chinese Room understand Chinese?
  7. Is the Chinese Room intelligent?
  8. Does the Chinese Room think?

Here is the answer key:

  1. Yes.
  2. No.
  3. What do you mean?
  4. Yes.
  5. No.
  6. What do you mean?
  7. What do you mean?
  8. What do you mean?

The problem with Searle is his lack of any clear answer to "What do you mean?". Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as "intelligent". Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don't mean anything at all until you assign definitions to them.

I am not certain whether or not Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then this is a serious disagreement that goes beyond semantics, but I cannot tell that he has ever committed himself to either stance.

Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. "Okay," you say, "you've persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can't tell me that it doesn't experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let's say you're in fact living in a simulation implemented in silicon. You're experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?"

The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

So, I think I have now established that to any extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of "The Matrix", with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom's simulation argument would suggest that it is at all likely.

ETA: A question that I've put to Sideways can be similarly put to many other commenters on this thread.  "Similar in number", i.e., two apples, two oranges, etc., is, similarly to "embodying the same computation", an abstract concept which can be realized by a wide variety of physical media.  Yet, if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved.  If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

Another Argument Against Eliezer's Meta-Ethics

9 Wei_Dai 05 February 2011 12:54AM

I think I've found a better argument that Eliezer's meta-ethics is wrong. The advantage of this argument is that it doesn't depend on the specifics of Eliezer's notions of extrapolation or coherence.

Eliezer says that when he uses words like "moral", "right", and "should", he's referring to properties of a specific computation. That computation is essentially an idealized version of himself (e.g., with additional resources and safeguards). We can ask: does Idealized Eliezer (IE) make use of words like "moral", "right", and "should"? If so, what does IE mean by them? Does he mean the same things as Base Eliezer (BE)? None of the possible answers are satisfactory, which implies that Eliezer is probably wrong about what he means by those words.

1. IE does not make use of those words. But this is intuitively implausible.

2. IE makes use of those words and means the same things as BE. But this introduces a vicious circle. If IE tries to determine whether "Eliezer should save person X" is true, he will notice that it's true if he thinks it's true, leading to Löb-style problems.

3. IE's meanings for those words are different from BE's. But knowing that, BE ought to conclude that his meta-ethics is wrong and morality doesn't mean what he thinks it means.

You're in Newcomb's Box

40 HonoreDB 05 February 2011 08:46PM

Part 1:  Transparent Newcomb with your existence at stake

Related: Newcomb's Problem and Regret of Rationality

 

Omega, a wise and trustworthy being, presents you with a one-time-only game and a surprising revelation.  

 

"I have here two boxes, each containing $100," he says.  "You may choose to take both Box A and Box B, or just Box B.  You get all the money in the box or boxes you take, and there will be no other consequences of any kind.  But before you choose, there is something I must tell you."

 

Omega pauses portentously.

 

"You were created by a god: a being called Prometheus.  Prometheus was neither omniscient nor particularly benevolent.  He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman.  Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created.  If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused.  Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."

 

Do you take both boxes, or only Box B?


Counterfactual Calculation and Observational Knowledge

11 Vladimir_Nesov 31 January 2011 04:28PM

Consider the following thought experiment ("Counterfactual Calculation"):

You are taking a test, which includes a question: "Is Q an even number?", where Q is a complicated formula that resolves to some natural number. There is no a priori reason for you to expect that Q is more likely even or odd, and the formula is too complicated to compute the number (or its parity) on your own. Fortunately, you have an old calculator, which you can use to type in the formula and observe the parity of the result on display. This calculator is not very reliable and is only correct 99% of the time; furthermore, its errors are stochastic (or even involve quantum randomness), so for any given problem statement it's probably correct, but has a chance of making an error. You type in the formula and observe the result (it's "even"). You're now 99% sure that the answer is "even", so naturally you write that down on the test sheet.
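The 99% figure is just a Bayesian update from a 50/50 prior; as a sanity check (a sketch, assuming the error rate is symmetric between "even" and "odd"):

```python
prior_even = 0.5  # no a priori reason to favor even or odd
p_correct = 0.99  # the calculator's reliability

# P(Q is even | display says "even"), by Bayes' theorem:
posterior = (p_correct * prior_even) / (
    p_correct * prior_even + (1 - p_correct) * (1 - prior_even)
)
print(posterior)  # ≈ 0.99
```

With a uniform prior, the posterior simply equals the calculator's reliability; a skewed prior would change that.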

Then, unsurprisingly, Omega (a trustworthy all-powerful device) appears and presents you with the following decision. Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the (same) formula Q, on the same occasion (i.e. all possible worlds that fit this description). The counterfactual diverges only in the calculator showing a different result (and what follows). You are to determine what is to be written (by Omega, at your command) as the final answer to the same question on the test sheet in that counterfactual (the actions of your counterfactual self who takes the test in the counterfactual are ignored).


Put all your eggs in one basket?

8 PhilGoetz 23 January 2011 07:14PM

Having all known life on Earth concentrated on a single planet is an existential risk.  So we should try to spread out, right?  As soon as possible?

Yet, if we had advanced civilizations on two planets, that would be two places for unfriendly AI to originate.  If, as many people here believe, a single failed trial ruins the universe, you want to have as few places trying it as possible.  So you don't want any space colonization until after AI is developed.

If we apply that logic to countries, you would want as few industrialized nations as possible until AAI (After AI).  So instead of trying to help Africa, India, China, and the Middle East develop, you should be trying to suppress them.  In fact, if you really believed the calculations I commonly see used in these circles about the probability of unfriendly AI and its consequences, you should be trying to exterminate human life outside of your developed country of choice.  Failing to do so would be immoral.

And if you apply it within the USA, you need to pick one of MIT and Stanford and Carnegie Mellon, and burn the other two to the ground.

Of course, doing this will slow the development of AI.  But that's a good thing, if UFAI is most likely and has zero utility.

In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.

Do you agree with any of this?  Is there a point where you think it goes too far?  If so, say where it goes too far and explain why.

I see two main flaws in the reasoning.

  • Categorization of outcomes as "FAI vs UFAI", with no other possible outcomes recognized, and no gradations within the category of either, and zero utility assigned to UFAI.
  • Failing to consider scenarios in which multiple AIs can provide a balance of power.  The purpose of this balance of power may not be to keep humans in charge; it may be to put the AIs in an AI society in which human values will be worthwhile.
  • ADDED, after being reminded of this by Vladimir Nesov:  Re. the final point, stopping completely guarantees Earth life will eventually be eliminated; see his comment below for elaboration.

ADDED:  A number of the comments so far imply that the first AI built will necessarily FOOM immediately.  FOOM is an appealing argument.  I've argued in favor of it myself.   But it is not a theorem.  I don't care who you are; you do not know enough about AI and its future development to bet the future of the universe on your intuition that non-FOOMing AI is impossible.  You may even think FOOM is the default case; that does not make it the only case to consider.  In this case, even a 1% chance of a non-foom AI, multiplied by astronomical differences in utility, could justify terrible present disutility.

How Greedy Bastards Have Saved More Lives Than Mother Theresa Ever Did

14 waitingforgodel 03 December 2010 06:20AM

And how you can use the same techniques to save a stranger's life for under $600


It's a strange world we live in.

When I first heard of Optimal Philanthropy, it was in a news article about Bill Gates's plan for retirement. He'd decided to donate tens of billions of dollars to charity, but had decided that no existing charity was worth donating to.

Gates felt they weren't run properly.

You see, at the time most people thought that "efficient charities" were those that had little or no overhead. Everyone wanted as much money to go to the front lines as possible, with little or none for administration.

Gates didn't care about any of that.

No, what Gates wanted was measurable results... and if more administration would get better results, he was all for it.

In business, it all comes down to return on investment. How much money did you use (to rent buildings, buy supplies, hire employees), and how much money did you earn in return.

Gates felt that something similar was needed for charity.

If the charity saved lives, Gates reasoned, then it should be judged by how much money it used to save each life. If a charity could save twice as many lives on the same budget by using more administrators, then by all means it should do that.

As you may have heard, Bill Gates was appalled that he couldn't find a charity he could measure.

Here he was, trying to selflessly give away over ten billion dollars to any charity that could prove it would have the highest impact.... and finding a bunch of nonsense answers about how that's not the way charity works... or how little overhead there was.

And as you may have also heard, Mr. Gates turned that frustration into a revolution in the world of charity -- and inspired others to follow him. His foundation -- the Bill and Melinda Gates Foundation -- is now the biggest in the world, and makes a difference every day in the areas of world education, malaria, and sustainable energy.

 

But Enough About All That! This Isn't About Bill Gates, This Is About You

Although the billionaires of the world have gotten their heads screwed on right about charity (and are saving hundreds of millions more lives as a result), we non-billionaires didn't seem to get the memo.

And that means, if you are the sort of person who donates, you're not doing nearly the amount of good you could.

Here are 3 simple steps you can use right away that will at least double the impact your donations have.

Pause a second to think about what that would mean.

Why do you donate?

How would it feel to know that those donations now do twice as much good in this world? To know that at least twice as many people were helped?

Ready to hear the steps? Great!

 

Step 1: Make your reason for donating CONCRETE!

This step requires being very honest with yourself. It means not donating to the Haiti relief fund just because it was tragic (or because Bill Clinton said you should), but instead thinking about what that donation to Haiti would accomplish.

Something along the lines of: save lives and put good people back into homes. Whatever you hope your donation will accomplish.

What we're doing is moving from causes and goals (global warming, world peace, freedom from dictators), to concrete outcomes (reducing or negating carbon emissions, preventing wars, saving soldiers' lives, educating people about the benefits of democracy).

Once you've got a concrete outcome you'd like to see in the world, it's time to find out the best way to accomplish that goal.

 

Step 2: Use 3rd party charity evaluations that focus on outcomes, and donate where it will do the most good.

Go to givewell.org and see if your current charity is listed, and what kinds of results it can get per donated dollar.

Also, don't forget to look at similar outcomes your donation money can accomplish. It's not uncommon to find out that, for example, the cost of giving a blind child a seeing eye dog is three times more than the cost of preventing childhood blindness in the first place.

Yes it might seem tragic to think of a little blind girl without a dog to guide her, but it's even worse to think that we'd give that girl a seeing eye dog at the expense of three other children going blind.

If nothing else, visit givewell.org; it will change the way you think about donating for the rest of your life.

 

Step 3: Donate what you can, but don't donate time unless you earn less than $10 an hour.

The strange truth of the matter is that, unless you're donating your time as a professional (Doctors Without Borders, pro-bono legal aid), it's often more cost effective to simply work an extra hour and donate the money.

If you make $25/hr, your cause can probably get 150 minutes of work for every hour of income you donate.
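The arithmetic behind that figure, assuming (per Step 3) that the charity can buy labor at around $10/hr:

```python
your_hourly_wage = 25.0    # the example wage from the text, dollars/hour
charity_labor_cost = 10.0  # assumed cost of one hour of the charity's labor

# Hours of charity labor funded by donating one hour of your income:
hours_bought = your_hourly_wage / charity_labor_cost
print(hours_bought * 60)  # → 150.0 minutes
```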

 


Okay! If you do those three steps you will get more good from your donation money than 90% of all the donors out there.

If you felt that this letter helped you, please consider forwarding it to your friends and family, or at least talking about these important issues with them.

Together, we can make a difference.

 

How To Lose 100 Karma In 6 Hours -- What Just Happened

-31 waitingforgodel 10 December 2010 08:27AM
As with all good posts, we begin with a hypothetical:
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
Here's the question: Is this blackmail? Is this terrorism?
Certainly it's a zero-sum interaction (at least in the short term). You either have to endure the inconvenience of putting on a seat belt, or risk the chance of a $100 fine.
You may also want to consider that cooperating with the seat belt fine may cause lawmakers to believe that you'll also comply with future laws.

If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
Does this count as blackmail? Does this count as terrorism?

What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
Is it blackmail yet? Terrorism? Will complying teach the music companies that throwing their weight around works?

Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
Imagine this: you're the moderator of an online forum and care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risks.
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?


(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)


It is my opinion that none of these are blackmail.
Blackmail is fundamentally a single shot game.
Laws and rules are about the structure of the world's payoffs, and about changing them to incentivize behavior.
Now it's fair to say that there are just laws, and there are unjust laws... and perhaps we should refuse to follow unjust laws... but to call a law blackmail or terrorism seems incorrect.

Here's what happened:
  • 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
  • Earlier today, Yudkowsky censored a post on LessWrong.
  • 20 minutes later, existential risks increased 0.0001% (to the best of my estimation).

This will continue for the foreseeable future. I'm not happy about it either. Basically I think the sanest way to think about the situation is to assume that Yudkowsky's "delete" link also causes a 0.0001% increase in existential risk, and hope that he uses it appropriately.
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.

FAQ:
Q: Will you reconsider?
A: Sadly no. This situation is symmetric -- just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about "dangerous" ideas), he is not immune to my laws.

Q: How can you be sure that a post was censored rather than deleted by the owner?
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.

Q: How can you be sure that you haven't missed a deleted comment?
A: I use, and am improving, an automated solution.

Q: What is the nature of the existential risk increase?
A: Emails. (Yes, emails.) Maybe some phone calls.
There is a simple law that I believe makes intuitive sense to the conservative right. A law that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being a FAI vs a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness about this law.

Q: Is there any way for me to gain access to the censored content?
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!

Q: Will you still post here under waitingforgodel?
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.

Thank you very much for your time and understanding,
-wfg

Edit: This post is describing what happened, not why. For a discussion about why I feel that the precommitment will result in an existential risk savings, please see the "precommitment" thread, where it is talked about extensively.

“Fake Options” in Newcomb’s Problem

0 Normal_Anomaly 10 December 2010 02:12AM

This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.

 

Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1000, the other opaque and containing either $1 million or nothing. Your options are to take both boxes, or to take only the second one; but Omega has put money in the second box only if it has predicted that you will take only one box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1000 more if I take box A as well. It’s either $1001000 vs. $1000000, or $1000 vs. nothing.” To reach these different decisions, the agents are working from two different ways of visualising the payoff matrix. The two-boxer sees four possible outcomes; the one-boxer sees two, the other two having very low probability.

The two-boxer’s payoff matrix looks like this:

                                Box B
                    | Money      | No money |
    Decision  1-box | $1000000   | $0       |
              2-box | $1001000   | $1000    |

The outcomes $0 and $1001000 both require Omega making a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:

                                Box B
                    | Money        | No money     |
    Decision  1-box | $1000000     | not possible |
              2-box | not possible | $1000        |

If Omega is really a perfect (or nearly perfect) predictor, the only possible (not hugely unlikely) outcomes are $1000 for two-boxing and $1 million for one-boxing, and considering the other outcomes is an epistemic failure.
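The one-boxer's point can be checked with a quick expected-value calculation. Here is a minimal sketch; the 99% accuracy figure is an assumption (the post only says Omega has been right 100 out of 100 times, so any accuracy near 1 gives the same qualitative result):

```python
def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff when Omega predicts correctly with probability p."""
    if one_box:
        # Omega right -> box B holds $1,000,000; Omega wrong -> box B is empty.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Omega right -> box B empty, you keep the $1,000 in box A;
        # Omega wrong -> you get both boxes.
        return p * 1_000 + (1 - p) * 1_001_000

# With p = 0.99, one-boxing yields roughly $990,000 in expectation,
# while two-boxing yields roughly $11,000.
print(expected_value(True, 0.99))
print(expected_value(False, 0.99))
```

Two-boxing only pulls ahead when Omega's accuracy drops near chance, which is exactly why the one-boxer treats the off-diagonal cells as "not possible."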

 
