
Open Thread February 25 - March 3

8 Coscott 25 February 2014 04:57AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Terminal and Instrumental Beliefs

4 Coscott 12 February 2014 10:45PM

Cross-Posted on By Way of Contradiction

As you may know from my past posts, I believe that probabilities should not be viewed as uncertainty, but instead as weights on how much you care about different possible universes. This is a very subjective view of reality. In particular, it seems to imply that when other people have different beliefs than me, there is no sense in which they can be wrong. They just care about the possible futures with different weights than I do. I will now try to argue that this is not a necessary conclusion.

First, let's be clear what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which gives heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coin comes up tails. Notice that this does not mean I want the coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.
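To make the arithmetic concrete, here is a minimal sketch of this "weights on futures" reading. The 0.9/0.1 weights come from the coin example above; the bets themselves are made up for illustration:

```python
# Weights on possible futures for the biased coin: I care 9 times as much
# about heads-worlds as about tails-worlds (0.9 vs 0.1).
weights = {"heads": 0.9, "tails": 0.1}

def value_of_bet(payoffs):
    """Value of a bet: sum over futures of (how much I care) * (what I get)."""
    return sum(weights[world] * payoffs.get(world, 0) for world in weights)

dollar_if_heads = value_of_bet({"heads": 1})  # 0.9
dollar_if_tails = value_of_bet({"tails": 1})  # 0.1
print(dollar_if_heads > dollar_if_tails)      # True: I prefer the heads bet,
                                              # even though I don't prefer heads itself.
```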

Now, imagine that you are unaware of the fact that it is an unfair coin. By default, you believe that the coin comes up heads with probability 50%. How can we express the fact that I have a correct belief, and you have an incorrect belief in the language of values?

We will take advantage of the language of terminal and instrumental values. A terminal value is something that you try to get because you want it. An instrumental value is something that you try to get because you believe it will help you get something else that you want.

If you believe a statement S, that means that you care more about the worlds in which S is true. If you terminally assign a higher value to worlds in which S is true, we will call this belief a terminal belief. On the other hand, if you believe S because you think that S is logically implied by some other terminal belief, T, we will call your belief in S an instrumental belief.

Instrumental values can be wrong if you are factually mistaken about whether the instrumental value will help achieve your terminal values. Similarly, an instrumental belief can be wrong if you are factually mistaken about whether it is implied by your terminal beliefs.

Your belief that the coin will come up heads with probability 50% is an instrumental belief. You have a terminal belief in some form of Occam's razor. This causes you to believe that coins are likely to behave similarly to how coins have behaved in the past. In this case, that inference was not valid, because you did not take into consideration the fact that I chose the coin for the purpose of this thought experiment. Your instrumental belief is therefore wrong in this case. If your belief in Occam's razor is terminal, then it would not be possible for Occam's razor to be wrong.

This is probably a distinction that you are already familiar with. I am talking about the difference between an axiomatic belief and a deduced belief. So why am I viewing it like this? I am trying to strengthen my understanding of the analogy between beliefs and values. To me, they appear to be two different sides of the same coin, and building up this analogy might allow us to translate some intuitions or results from one view into the other view.

Open Thread for February 11 - 17

3 Coscott 11 February 2014 06:08PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Preferences without Existence

14 Coscott 08 February 2014 01:34AM

Cross-posted on By Way of Contradiction

My current beliefs say that there is a Tegmark 4 (or larger) multiverse, but there is no meaningful “reality fluid” or “probability” measure on it. We are all in this infinite multiverse, but there is no sense in which some parts of it exist more or are more likely than any other part. I have tried to illustrate these beliefs as an imaginary conversation between two people. My goal is to either share this belief, or more likely to get help from you in understanding why it is completely wrong.

A: Do you know what the game of life is?

B: Yes, of course, it is a cellular automaton. You start with a configuration of cells, and they update following a simple deterministic rule. It is a simple kind of simulated universe.

A: Did you know that when you run the game of life on an initial condition of a 2791 by 2791 square of live cells, and run it for long enough, creatures start to evolve? (Not true.)

B: No. That’s amazing!

A: Yeah, these creatures have developed language and civilization. Time step 1,578,891,000,000,000 seems like a very important era for them. They have developed much technology, and someone has developed the theory of a doomsday device that will kill everyone in their universe and replace the entire thing with emptiness, but at the same time, many people are working hard on developing a way to stop him.

B: How do you know all this?

A: We have been simulating them on our computers. We have simulated up to that crucial time.

B: Wow, let me know what happens. I hope they find a way to stop him.

A: Actually, the whole project is top secret now. The simulation will still be run, but nobody will ever know what happens.

B: That's too bad. I was curious, but I still hope the creatures live long, happy, interesting lives.

A: What? Why do you hope that? It will never have any effect on you.

B: My utility function includes preferences between different universes even if I never get to know the result.

A: Oh, wait, I was wrong. It says here the whole project is canceled, and they have stopped simulating.

B: That is too bad, but I still hope they survive.

A: They won’t survive, we are not simulating them any more.

B: No, I am not talking about the simulation; I am talking about the simple set of mathematical laws that determine their world. I hope that those mathematical laws, if run long enough, do interesting things.

A: Even though you will never know, and it will never even be run in the real universe?

B: Yeah. It would still be beautiful if it never gets run and no one ever sees it.

A: Oh, wait. I missed something. It is not actually the game of life. It is a different cellular automaton they used. It says here that it is like the game of life, but the actual rules are really complicated, and take millions of bits to describe.

B: That is too bad. I still hope they survive, but not nearly as much.

A: Why not?

B: I think information theoretically simpler things are more important and more beautiful. It is a personal preference. It is much more desirable to me to have a complex interesting world come from simple initial conditions.

A: What if I told you I lied, and none of these simulations were run at all and never would be run. Would you have a preference over whether the simple configuration or the complex configuration had the life?

B: Yes, I would prefer the simple configuration to have the life.

A: Is this some sort of Solomonoff probability measure thing?

B: No actually. It is independent of that. If the only existing things were this universe, I would still want laws of math to have creatures with long happy interesting lives arise from simple initial conditions.

A: Hmm, I guess I want that too. However, that is negligible compared to my preferences about things that really do exist.

B: That statement doesn’t mean much to me, because I don’t think this existence you are talking about is a real thing.

A: What? That doesn’t make any sense.

B: Actually, it all adds up to normality.

A: I see why you can still have preferences without existence, but what about beliefs?

B: What do you mean?

A: Without a concept of existence, you cannot have Solomonoff induction to tell you how likely different worlds are to exist.

B: I do not need it. I said I care more about simple universes than complicated ones, so I already make my decisions to maximize utility weighted by simplicity. It comes out exactly the same: I do not need to believe simple things exist more, because I already believe simple things matter more. (A short sketch of this decision rule appears after the dialogue.)

A: But then you don’t actually anticipate that you will observe simple things rather than complicated things.

B: I care about my actions more in the cases where I observe simple things, so I prepare for simple things to happen. What is the difference between that and anticipation?

A: I feel like there is something different, but I can’t quite put my finger on it. Do you care more about this world than that game of life world?

B: Well, I am not sure which one is simpler, so I don't know, but it doesn't matter. It is a lot easier for me to change our world than it is for me to change the game of life world. I will therefore make choices that roughly maximize my preferences about the future of this world in the simplest models.

A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?

B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.

A: Okay, it seems plausible, but kind of depressing to think that we do not exist.

B: Oh, I disagree! I am still a mind with free will, and I have the power to use that will to change my own little piece of mathematics — the output of my decision procedure. To me that feels incredibly beautiful, eternal, and important.
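Here is a minimal sketch of the decision rule B describes above. The worlds, bit counts, and utilities are made-up placeholders; the point is only that weighting by simplicity yields the same computation whether the weights are read as probabilities of existing or as how much you care:

```python
# Hypothetical worlds with made-up description lengths (in bits) and
# made-up utilities for two candidate actions.
worlds = {
    "simple_world":  {"bits": 10, "utility": {"act_A": 5, "act_B": 1}},
    "complex_world": {"bits": 30, "utility": {"act_A": 0, "act_B": 8}},
}

def simplicity_weight(bits):
    # Shorter descriptions get exponentially more weight (2^-length),
    # whether you call that weight "probability" or "how much I care".
    return 2.0 ** -bits

def weighted_value(action):
    return sum(simplicity_weight(w["bits"]) * w["utility"][action]
               for w in worlds.values())

best_action = max(["act_A", "act_B"], key=weighted_value)
print(best_action)  # "act_A": the simpler world dominates the decision
```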

Meetup : West LA Meetup-indexical and logical uncertainty

0 Coscott 29 January 2014 10:26PM

Discussion article for the meetup : West LA Meetup-indexical and logical uncertainty

WHEN: 29 January 2014 02:32:41PM (-0800)

WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064, USA

How to get in: Go to the Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge". Parking is free for 3 hours. Discussion: We will discuss the difference between logical and indexical uncertainty, mostly inspired by my recent blog post. I expect that this will be a short discussion, and most of the time will be casual conversation. We will also discuss a new meeting place, (or maybe even new time) so if you have opinions on this, and will not be there, leave a comment. This meetup will be in less than 5 hours at the time of this post. Sorry for the short notice. No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible.


Logical and Indexical Uncertainty

14 Coscott 29 January 2014 09:49PM

Cross-posted on By Way of Contradiction

Imagine I shot a photon at a half-silvered mirror which reflects the photon with "probability" 1/2 and lets the photon pass through with "probability" 1/2.

Now, imagine I calculated the trillionth decimal digit of pi, and checked whether it was even or odd. As a Bayesian, you use the term "probability" in this situation too, and to you, the "probability" that the digit is odd is 1/2.

What is the difference between these two situations? Assuming the many worlds interpretation of quantum mechanics, the first probability comes from indexical uncertainty, while the second comes from logical uncertainty. In indexical uncertainty, both possibilities are true in different parts of whatever your multiverse model is, but you are unsure which part of that multiverse you are in. In logical uncertainty, only one of the possibilities is true, but you do not have information about which one. It may seem at first like this should not change our decision theory, but I believe there are good reasons why we should care about what type of uncertainty we are talking about.

I present here seven reasons why we might care about the two different types of uncertainty. I do not agree with all of these ideas, but I present them anyway, because it seems reasonable that some people might argue for them. Is there anything I have missed?

1) Anthropics

Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. Beauty wakes up in the experiment and is asked, "With what subjective probability do you believe that the coin landed heads?"

People argue about whether the "correct answer" to this question should be 1/3 or 1/2. Some say that the question is malformed, and needs to be rewritten as a decision theory question. Another view is that the question actually depends on the coin flip:

If the coin flip is an indexical coin flip, then there are effectively 3 copies of Sleeping Beauty, and the coin came up heads for only 1 of those copies, so you should say 1/3. On the other hand, if it is a logical coin flip, then you cannot compare the two copies of you waking up in one possible world with the one copy of you waking up in the other possible world. Only one of the worlds is logically consistent. The trillionth digit of pi is not changed by you waking up, and you will wake up regardless of the state of the trillionth digit of pi.
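As a concrete illustration of the thirder counting in the indexical case, here is a small Monte Carlo sketch. It only reproduces the familiar awakening count; it says nothing about the logical-coin version, which is the point of the contrast above:

```python
import random

# Indexical version only: tally awakenings, and see what fraction of them
# occur in experiments where the coin came up heads.
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads: one awakening (Monday)
        heads_awakenings += 1
        total_awakenings += 1
    else:                       # tails: two awakenings (Monday and Tuesday)
        total_awakenings += 2

print(heads_awakenings / total_awakenings)  # roughly 1/3
```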

2) Risk Aversion

Imagine that I were to build a doomsday device. The device flips a coin, and if the coin comes up heads, it destroys the Earth, and everything on it. If the coin comes up tails, it does nothing. Would you prefer if the coin flip were a logical coin flip, or an indexical coin flip?

You probably prefer the indexical coin flip. It feels safer to have the world continue on in half of the universes than to risk destroying the world in all universes. I do not think this feeling arises from biased thinking, but instead from a true difference in preferences. To me, destroying the world in all of the universes is actually much more than twice as bad as destroying the world in half of the universes.

3) Preferences vs Beliefs

In updateless decision theory, you want to choose the output of your decision procedure. If there are multiple copies of yourself in the universe, you do not ask which copy you are, but instead just choose the output which maximizes your utility of the universe in which all of your copies output that value. The "expected" utility comes from your logical uncertainty about what the universe is like. There is not much room in this theory for indexical uncertainty. Instead, the indexical uncertainty is encoded into your utility function. The fact that you prefer being given a reward with indexical probability 99% to being given a reward with indexical probability 1% should instead be viewed as you preferring the universe in which 99% of the copies of you receive the reward to the universe in which 1% of the copies of you receive the reward.

In this view, it seems that indexical uncertainty should be viewed as preferences, while logical uncertainty should be viewed as beliefs. It is important to note that this all adds up to normality. If we are trying to maximize our expected utility, the only thing we do with preferences and beliefs is multiply them together, so for the most part it doesn't change much to think of something as a preference as opposed to belief.
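Here is a toy sketch of that bookkeeping, with made-up numbers: the indexical facts live inside the utility function as "fraction of my copies rewarded", and only the logical uncertainty is multiplied in as a belief:

```python
# Two logical hypotheses about what the (single) universe is like,
# with made-up beliefs.  Indexical facts appear only inside the utility.
logical_beliefs = {"physics_1": 0.7, "physics_2": 0.3}

# Made-up payoff table: the action controls what fraction of my copies
# receive the reward in each logically possible universe.
fraction_rewarded = {
    ("physics_1", "take_deal"): 0.99,
    ("physics_1", "refuse"):    0.01,
    ("physics_2", "take_deal"): 0.50,
    ("physics_2", "refuse"):    0.50,
}

def expected_utility(action):
    # Only the logical uncertainty is treated as a belief and multiplied in.
    return sum(p * fraction_rewarded[(h, action)]
               for h, p in logical_beliefs.items())

print(max(["take_deal", "refuse"], key=expected_utility))  # "take_deal"
```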

4) Altruism

In Subjective Altruism, I asked whether, when being altruistic towards someone else, you should try to maximize their expected utility relative to your probability function or relative to their probability function. If your answer was to choose the option which maximizes your expectation of their utility, then it is actually very important whether indexical uncertainty is a belief or a preference.

5) Sufficient Reflection

In theory, given enough time, you can settle logical uncertainties just by thinking about them. However, given enough time, you can settle indexical uncertainties by making observations. It seems to me that there is not a meaningful difference between observations that take place entirely within your mind and observations about the outside world. I therefore do not think this difference means very much.

6) Consistency

Logical uncertainty seems like it is harder to model, since it means you are assigning probabilities to possibly inconsistent theories, and all inconsistent theories are logically equivalent. You might want some measure of equivalence of your various theories, and it would have to be different from logical equivalence. Indexical uncertainty does not appear to have the same issues, at least not in an obvious way. However, I think this issue only comes from looking at the problem in the wrong way. I believe that probabilities should only be assigned to logical statements, not to entire theories. Then, since everything is finite, you can treat sentences as equivalent only after you have proven them equivalent.

7) Counterfactual Mugging

Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.

It seems reasonable to me that people might feel very differently about this question based on whether the coin is logical or indexical. To me, it makes sense to give up the $100 either way, but it seems possible to change the question in such a way that the type of coin flip might matter.
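For concreteness, here is the usual pre-flip computation behind giving up the $100, using the payoffs Omega states above. This is only a sketch of the expected-value argument, not a claim about which kind of coin it should apply to:

```python
# Expected value, evaluated before the coin flip, of the two policies,
# using Omega's stated payoffs: $10000 on heads if you are the kind of
# agent that pays, and -$100 on tails if you actually pay.
def policy_value(pays_when_asked):
    heads_payoff = 10_000 if pays_when_asked else 0
    tails_payoff = -100 if pays_when_asked else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(policy_value(True))   # 4950.0
print(policy_value(False))  # 0.0
```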

Meetup : West LA: Surreal Numbers

2 Coscott 18 January 2014 08:56PM

Discussion article for the meetup : West LA: Surreal Numbers

WHEN: 22 January 2014 07:00:00PM (-0800)

WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064, USA

How to get in: Go to the Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge".

Parking is free for 3 hours.

Discussion: 'In the beginning, everything was void, and J. H. W. H. Conway began to create numbers. Conway said, "Let there be two rules which bring forth all numbers large and small. This shall be the first rule: Every number corresponds to two sets of previously created numbers, such that no member of the left set is greater than or equal to any member of the right set. And the second rule shall be this: One number is less than or equal to another number if and only if no member of the first number's left set is greater than or equal to the second number, and no member of the second number's right set is less than or equal to the first number." And Conway examined these two rules he had made, and behold! They were very good.' -D. E. Knuth

No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible.


Thought Crimes

5 Coscott 15 January 2014 05:23AM

Cross-posted on By Way of Contradiction

In my morals, at least up until recently, one of the most obvious universal rights was freedom of thought. Agents should be allowed to think whatever they want, and should not be discouraged from doing so. This feels like a terminal value to me, but it is also instrumentally useful. Freedom of thought encourages agents to be rational and search for the truth. If you are punished for believing something true, you might not want to search for truth. This could slow science and hurt everyone. On the other hand, religions often discourage freedom of thought, and this is a major reason for my moral problems with religions. It is not just that religions are wrong; everyone is wrong about lots of stuff. It is that many religious beliefs restrict freedom of thought by punishing doubters with ostracism or eternal suffering. I recognize that there are some "religions" which do not exhibit this flaw (as much).

Recently, my tune has changed. There are two things which have caused me to question the universality of the virtue of freedom of thought:

1) Some truths can hurt society

Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If we as a modern society were to pick one topic where restricting thought might be very useful, UFAI seems like a good choice. Maybe freedom of thought on this issue is a necessary casualty to avoid a much worse conclusion.

2) Simulations

This is the main point I want to talk about. If we get to the point where minds can simulate other minds, then we run into major issues. Should one mind be allowed to simulate another mind and torture it? It seems like the answer should be no, but this rule seems very hard to enforce without sacrificing not only free thought, but what would seem like the most basic right to privacy. Even today, people can have preferences over the thoughts of other people, but our intuition tells us that the one who is doing the thinking should get the final say. If a mind is simulating another mind, shouldn't the simulated mind also have rights? What makes an advanced mind simulating torture so much worse than a human today thinking about torture? (Or even worse, thinking about 3^^^^3 people with dust specks in their eyes. (That was a joke; I know we can't actually think about 3^^^^3 people.))

The first thing seems like a possible practical concern, but it does not bother me nearly as much as the second one. The first is just an example of the basic right of freedom of thought contradicting another basic right, the right to safety. However, the second thing confuses me. It makes me wonder whether I should treat freedom of thought as a virtue as much as I currently do. I am also genuinely not sure whether I believe that advanced minds should not be free to do whatever they want to simulations in their own minds. I think they should not, but I am not sure about this, and I do not know if this restriction should be extended to humans.

What do you think? What is your view on the morality of drawing the line between the rights of a simulator and the rights of a simulatee? Do simulations within human minds have any rights at all? What conditions (if any) would make you think rights should be given to simulations within human minds?

Functional Side Effects

0 Coscott 14 January 2014 08:22PM

Cross Posted on By Way of Contradiction

You have probably heard the argument in favor of functional programming languages that their functions act like functions in mathematics, and therefore have no side effects. When you call a function, you get an output, and, with the possible exception of running time, nothing matters except the output that you get. This is in contrast with other programming languages, where a function might change the value of some global variable and have a lasting effect.

Unfortunately, the truth is not that simple. All functions can have side effects. Let me illustrate this with Newcomb’s problem. In front of you are two boxes. The first box contains 1,000 dollars, while the second box contains either 1,000,000 dollars or nothing. You may choose to take either both boxes or just the second box. An Artificial Intelligence, Omega, can predict your actions with high accuracy, and has put 1,000,000 dollars in the second box if and only if he predicts that you will take only the second box.

You, being a good reflexive decision agent, take only the second box, and it contains 1,000,000.

Omega can be viewed as a single function in a functional programming language, which takes in all sorts of information about you and the universe, and outputs a single number, 1,000,000 or 0. This function has a side effect. The side effect is that you take only the second box. If Omega did not simulate you and just output 1,000,000, and you knew this, then you would take two boxes.
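A toy sketch of this structure may help. The agent here is hard-coded to one-box, so the code only shows the dependence between Omega's simulation and the payoff, not the reflexive reasoning that produces the one-boxing:

```python
# "agent" is a deterministic policy that one-boxes; "omega" is a pure
# function of that policy.  Running omega simulates the agent, and the
# agent's payoff depends on exactly that simulation -- the "side effect"
# in the sense of this post.
def agent():
    return "one_box"   # hard-coded here; a real agent would reason reflexively

def omega(agent_fn):
    prediction = agent_fn()              # Omega simulates the agent
    return 1_000_000 if prediction == "one_box" else 0

def payoff(agent_fn):
    box2 = omega(agent_fn)               # box 2 is filled based on the prediction
    choice = agent_fn()                  # then the agent actually chooses
    return box2 if choice == "one_box" else box2 + 1_000

print(payoff(agent))  # 1000000
```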

Perhaps you are thinking “No, I took one box because I BELIEVED I was being simulated. This was not a side effect of the function, but instead a side effect of my beliefs about the function. That doesn’t count.”

Or, perhaps you are thinking “No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.”

These are reasonable rebuttals, but they do not carry over to other situations.

Imagine two programs, Omega 1 and Omega 2. They both simulate you for an hour, then output 0. The only difference is that Omega 1 tortures the simulation of you for an hour, while Omega 2 tries its best to satisfy the values of the simulation of you. Which of these functions would you rather be run?

The fact that you have a preference between these (assuming you do have a preference) shows that the function has a side effect that is not just a consequence of its application in counterfactual universes.

Further, notice that even if you never know which function is run, you still have a preference. It is possible to have preferences over things that you do not know about. Therefore, this side effect is not just a function of your beliefs about Omega.

Sometimes the input-output model of computation is an oversimplification.

Let’s look at an application of thinking about side effects to Wei Dai’s Updateless Decision Theory. I will not try to explain UDT here, so if you don’t already know about it, this post should not be read on its own.

UDT 1.0 is an attempt at a reflexive decision theory. It views a decision agent as a machine with code S, given input X, which has to choose an output Y. It advises the agent to consider each possible output Y, and to consider all the consequences of the fact that the code S, when run on X, outputs Y. The agent then outputs the Y which maximizes its perceived utility over all of the perceived consequences.

Wei Dai noticed a problem with UDT 1.0, illustrated by the following thought experiment:

“Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.”

The problem is that all the reasons that S(1)=A are the exact same reasons that S(2)=A, so the two copies will probably output the same result. Wei Dai proposes a fix, UDT 1.1: instead of choosing an output S(1), you choose a function S from {1,2} to {A,B}, out of the 4 available functions, which maximizes utility. I think this was not the correct correction, which I will probably talk about in the future. I prefer UDT 1.0 to UDT 1.1.
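For concreteness, here is a minimal sketch of the UDT 1.1 search just described, using the payoffs from Wei Dai's thought experiment. The code itself is purely illustrative:

```python
from itertools import product

# The four possible policies S : {1, 2} -> {A, B}.
policies = [dict(zip((1, 2), outputs)) for outputs in product("AB", repeat=2)]

def utility(policy):
    # Each copy gets $10 only if the two copies choose different options.
    return 10 if policy[1] != policy[2] else 0

best = max(policies, key=utility)
print(best, utility(best))  # e.g. {1: 'A', 2: 'B'} 10
```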

Instead, I would like to offer an alternative way of looking at this thought experiment. The error is in the fact that S only looked at the outputs, and ignored possible side effects. I am aware that when S looked at the outputs, he was also considering his output in simulations of himself, but those are not side effects of the function. Those are direct results of the output of the function.

We should look at this problem and think, “I want to output A or B, but in such a way that has the side effect that the other copy of me outputs B or A respectively.” S could search through functions, considering both their output on input 1 and their side effects. S might decide to run the UDT 1.1 algorithm, which would have the desired result.

The difference between this and UDT 1.1 is that in UDT 1.1, S(1) acts as though it had complete control over the output of S(2). In this thought experiment that seems like a fair assumption, but I do not think it is a fair assumption in general, so I am trying to construct a decision theory which does not have to make it. This matters because, if the problem were different, S(1) and S(2) might have had different utility functions.

On Voting for Third Parties

6 Coscott 13 January 2014 03:16AM

Cross Posted on my blog, By Way of Contradiction

Anti-Trigger Warning: There is not really any politics in this post. I doubt it will kill your mind.

If your favorite candidate in an election is a third party candidate, should you vote for him?

This question has confused me. I have changed my mind many times, and I have recently changed my mind again. I would like to talk about some of the arguments in both directions and explain the reason for my most recent change.

Con 1) Voting for a third party is throwing your vote away.

We have all heard this argument before, and it is true. It is an unfortunate consequence of the plurality voting system. Plurality is horrible and there are many better alternatives, but it is what we are stuck with for now. If you vote for a third party, the same candidate will be elected as if you had not voted at all.

Pro 1) The probability that your vote changes the election is negligible. All your vote does is add one to the number of people who voted for a given candidate. Your vote for the third party candidate therefore matters more, because it changes a small number by relatively more.

This argument is actually an empirical claim, and I am not sure how well it holds up. It is easy to study the likelihood that your vote changes the election. One study finds that it varies roughly from 10^-7 to 10^-11 in American presidential elections. However, it is not clear to me just how much your vote affects the strategies of political candidates and voters in future elections.
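To make that figure concrete, here is the back-of-the-envelope computation it suggests. Both numbers are illustrative placeholders, not claims about any particular election:

```python
# Rough expected value of a vote between the two primary parties.
p_decisive = 1e-8      # chance one vote flips the outcome (within the study's 10^-7 to 10^-11 range)
value_of_flip = 1e9    # placeholder dollar value placed on the preferred candidate winning

print(p_decisive * value_of_flip)  # 10.0 -- on the order of the cost of driving to the polls
```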

Pro 2) The probability that your vote changes the election or future elections is negligible. The primary personal benefit of voting is the personal satisfaction of voting. This personal satisfaction is maximized by voting for the candidate you agree with the most.

I think that many people, if given the choice between changing which of the two primary parties wins the next presidency and being paid 10^7 times the cost of the gas they spent driving to the polls, would take the money. I am not one of them, but any of those people must agree that voting is a bad investment if you do not count the personal satisfaction. However, I think I might get more satisfaction out of doing my best to change the election than out of placing a vote that does not matter.

Con 2) Actually, if you use a reflexive decision theory, you are much more likely to change the election, so you should vote like it matters.

Looking at the problem like a timeless decision agent, you see that your voting decision is probably correlated with those of many other people. Your voting for a primary party is logically linked with other people voting for a primary party, and those people whose votes are logically linked with yours are more likely to agree with you politically. This could bring the chance of changing the election out of the negligible zone, in which case you should be deciding based on political consequences.

Pro 3) Your morality should encourage you to vote honestly.

It is not clear to me that I should view a vote for my favorite candidate as an honest vote. If we used the anti-plurality system, where the person with the fewest votes wins, then a vote for my favorite candidate would clearly not be considered an honest one. The "honest" vote should be the vote that you think will maximize your preferences, which might be a vote for a primary party.

Pro 4) Strategic voting is like defecting in the prisoner's dilemma. If we all cooperate and vote honestly, we will get the favorite candidate of the largest number of people. If not, then we could end up with someone much worse.

The problem with this is that if we all vote honestly, we get the plurality winner, and the plurality winner is probably not all that great a choice. The obvious voting strategy is not the only problem with plurality. Plurality also discourages compromise, and the results of plurality are changed drastically by honest vote splitting. The plurality candidate is not a good enough goal that I think we should all cooperate to achieve it.

I have decided that in the next election, I will vote for a primary party candidate. I changed my mind almost a year ago after reading Stop Voting for Nincompoops, but after recent further reflection, I have changed my mind back. I believe that Con 1 is valid, that Con 2 and the other criticisms above adequately respond to Pro 1 and Pro 2, and that Pro 3 and Pro 4 are invalid for the reasons described above. I would love to hear any opinions on any of these arguments, and would love even more to hear arguments I have not thought of yet.
