Comment author: Spurlock 24 July 2012 08:33:04PM *  0 points [-]

AFAICT, this is an unfortunately strong argument... Thanks.

I see two solutions to the paradox:

1) Note that auctions are usually played by more than 2 bidders. Even if the first bidder would let you have the pot for $2, the odds that every participant will let you have it decrease sharply as the number of participants increases. So in a real auction (say, with at least 5 participants), 9% is probably overconfident.
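As a toy illustration of (1) (the independence model here is my own assumption, not anything from the thread): suppose each of the other n-1 bidders independently declines to contest your bid with probability q. Then the chance that all of them let your single bid stand is q^(n-1), which falls off quickly with n.

```python
# Toy model (my assumption): each of the other n-1 bidders independently
# declines to outbid you with probability q, so the chance your single
# bid wins the pot uncontested is q ** (n - 1).

def uncontested_win_prob(q, n):
    """Probability that all n-1 opponents let your one bid stand."""
    return q ** (n - 1)

for n in (2, 3, 5, 10):
    print(n, round(uncontested_win_prob(0.55, n), 4))
```

For example, with q = 0.55 the chance of an uncontested win is 55% against one opponent but only about 9% with five participants, and it keeps shrinking from there.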

2) If we have a small number of bidders, one would have to find statistics about the distribution of winning bids in these auctions (10% won by the first bid, 12% won by the second bid, and so on). Of course, this strategy only works if your opponents don't know (and won't catch on) that you never bid more than once. But it should work at least for a one-shot auction where you don't publish your strategy in advance.

Out of curiosity, since you argue that joining these auctions as player #2 could very well have positive EU, would you endorse the statement "it is rational to join dollar auctions as the second bidder"? If not, why not?

Comment author: Bundle_Gerbe 24 July 2012 09:59:21PM *  1 point [-]

Against typical human opponents it is not rational to join dollar auctions either as the second player or as the first, because of the known typical behavior of humans in this game.

The equilibrium strategy, however, is a mixed strategy, in which you pick the maximum bid you are willing to make at random from a distribution that assigns different weights to different maximum bids. If you use the right distribution, your opponents can do no better than mirror you, and you will all have an expected payoff of zero.

Comment author: Bundle_Gerbe 24 July 2012 07:18:48PM 4 points [-]

For the dollar auction, note that according to the Wikipedia page for the all-pay auction, the auction does have a mixed-strategy Nash equilibrium in which the players' expected payoff is zero and the auctioneer's expected revenue is $20. So this breaks the pattern of the other examples, which show how Nash-equilibrium game theory can be exploited to devious ends when facing rational maximizers.
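In the two-player continuous all-pay auction for a prize worth v, the symmetric mixed equilibrium has each player bid uniformly on [0, v]. A quick Monte Carlo sketch (the simulation setup is mine, and it models the continuous all-pay auction rather than the stepwise dollar auction) checks the two claims above: each bidder's expected payoff is zero, and the auctioneer's expected revenue is v = $20.

```python
import random

random.seed(0)
V = 20.0        # prize value
N = 200_000     # Monte Carlo samples

payoff_1 = revenue = 0.0
for _ in range(N):
    # Equilibrium strategy: each player bids uniformly on [0, V].
    b1 = random.uniform(0, V)
    b2 = random.uniform(0, V)
    revenue += b1 + b2                       # all-pay: both bids are paid
    payoff_1 += (V if b1 > b2 else 0.0) - b1  # player 1 wins the prize on the higher bid

print(round(payoff_1 / N, 2), round(revenue / N, 2))  # ≈ 0.0 and ≈ 20.0
```

The analytic check is one line: bidding b wins with probability b/V, so the expected payoff of every pure bid in [0, V] is (b/V)·V - b = 0, which is why no deviation helps.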

The dollar auction is an interesting counterpoint to games like the Traveler's Dilemma, in which human play reaches outcomes much better than the Nash equilibrium; the dollar auction is an example where human play reaches outcomes much worse.

It seems humans in the Traveler's Dilemma and the dollar auction fail to reach the Nash equilibrium for an entirely different reason than in, say, the Prisoner's Dilemma or the Ultimatum Game (in which altruism and fairness are the main factors). In both cases, understanding the game requires iterative thinking, and there's a sort of "almost-equilibrium", different from the Nash equilibrium, that players settle into when this thinking isn't fully applied.

Comment author: Bundle_Gerbe 24 July 2012 05:54:46PM 8 points [-]

Calling the last game a "Prisoner's Dilemma" is a little misleading in this context, as the critical difference from the standard Prisoner's Dilemma (the fact that the payoff for (C,D) is the same as for (D,D)) is exactly what makes cousin_it's (and Nick's) solution work. A small incentive to defect when you know your opponent is defecting defeats a strategy based on committing to defect.

Comment author: Bundle_Gerbe 23 July 2012 10:33:22PM *  0 points [-]

I think evolutionary psychology is pretty far from the crux of the theism/atheism debate.

On one hand, I don't think evolutionary psychology at the moment provides very strong evidence against god. It's true that if god doesn't exist, there is probably some evolutionary explanation for widespread cross-cultural religious belief, and if god does exist, there might not be. But evolutionary psychology so far has only educated guesses about why religious belief might be so common, without knock-down proof that any of them is the true reason. The existence of such guesses seems to me about equally likely under either hypothesis. These guesses are, however, a good argument against the opposite claim that widespread belief in religion proves god exists.

As for the Keller argument you mentioned, it begs the question by suggesting that the evolutionary-psychology argument is a last-ditch effort to subvert a conclusion that would otherwise be nearly unavoidable for our god-believing brains. But if we do have some inherent tendency to give credence to the idea of god, it's not all that strong: for instance, the non-existence of god is much less surprising to our intuitions than the fact that color categories are perceptually constructed. The argument is better left at the object level of directly giving reasons for and against god, instead of arguing for and against the reliability of certain weak human intuitions.

Comment author: Benedict 22 July 2012 07:52:49PM 16 points [-]

Hey, I'm -name withheld-, going by Benedict, 18 years old in North Carolina. I was introduced to Less Wrong through HPMoR (which is fantastic) and have recently been reading through the Sequences (still wading through the hard science of the Quantum Physics sequence).

I'm here because I have a real problem- dealing with the consequences of coming out as atheist to a Christian family. For about a year leading up to recent events, I had been trying to reconcile Christian belief with the principles of rationalism, with little success. At one point I settled into an unstable equilibrium of "believing in believing in belief" and "betting" on the truth of religious doctrine to cover the perceived small-but-noteworthy probability of its veracity and the proposed consequences thereof. I'd kept this all secret from my family, putting on a long and convincing act.

This recently fell apart in my mind, and I confronted my dad with a shambling confession and expression of confusion and outrage against Christianity. I'm... kinda really friggin' bad at communicating clearly through spoken dialogue, and although I managed to comport myself well enough in the conversation, my dad is unconvinced that the source of my frustrations is a conflicting belief system so much as a struggle with juvenile doubts. This is almost certainly why I haven't yet faced social repercussions, as my dad is convinced he can "fix" my thinking. He's a paid pastor and theologian, and has connections to all the really big names in contemporary theology- having an apostate son would damage both his pride and social status, and as such he's powerfully motivated to attempt to "correct" me.

After I told him about this, he handed me a book (The Reason for God by Timothy Keller) and signed himself up as a counselor for something called The Clash, described as a Christian "worldview conference". Next week, from July 30 to August 3, he's going to take me to this big huge realignment thing, and I'm worried I won't be able to defend myself. I've been reading through the book I mentioned, and found its arguments spectacularly unconvincing- but I'm having trouble articulating why. I haven't had enough experience with rationalism and debate to provide a strong defense, and I fear I'll be pressured into recanting if I fail.

That's why I'm here- in the upcoming week, I need intensive training in the defense of rationality against very specific, weak but troubling religious excuses. I really need to talk to people better trained than me about these specific arguments, so that I can survive the upcoming conference and assert my intellectual independence. Are there people I can be put in touch with, or online meetups where I can talk to people and arm myself? Should I start a discussion post, or what? I'm unfamiliar with the site structure here, so I could use some help.

Oh but dang if there aren't like over a thousand comments here, jeez i don't want to sound like i'm crying for attention but i'm TOTALLY CRYING FOR ATTENTION, srsly i need help you dudes

Comment author: Bundle_Gerbe 23 July 2012 02:11:57AM *  3 points [-]

It does not sound to me like you need more training in specific Christian arguments to stay sane. You have already figured things out despite being brought up in a situation that massively tilted the scales in favor of Christianity. I doubt there is any chance they could convince you now if they had to fight on a level field. After all, it's not as if they've been holding back their best arguments this whole time.

But you are going to be in a situation where they apply intense social pressure and reinforcement towards converting you. On top of that, I'm guessing maintaining your unbelief is very practically inconvenient right now, especially for your relationship with your dad. These conditions are hazardous to rationality, more than any argument they can give. You have to do what MixedNuts says. Just remember you will consider anything they say later, when you have room to think.

I do not think they will convert you. I doubt they will be able to brainwash you in a week when you are determined to resist. Even if they could, you managed to think your way out of Christian indoctrination once already; you can do it again.

If you want to learn more about rationality specific to the question of Christianity, given that you've already read a good amount of material here about rationality in general, you might gain the most from reading atheist sites, which tend to spend a lot of effort specifically on refuting Christianity. Learn more about the Bible from skeptical sources; if you haven't before, you'll be pretty amazed how much of what you've been told is blatantly false and how much about the Bible you don't know. (For instance, Genesis 1 and 2 contain two quite contradictory creation stories, and the gospels' versions of the resurrection are impossible to reconcile. Also, the gospels of Matthew and Luke are largely copied from Mark, and the entire resurrection story is missing from the earliest versions of Mark.) I unfortunately don't know a source that gives a good introduction to Bible scholarship. Maybe someone else can suggest one?

Comment author: Bundle_Gerbe 20 July 2012 04:13:00PM *  5 points [-]

For abstract algebra I recommend Dummit and Foote's Abstract Algebra over Lang's Algebra, Hungerford's Algebra, and Herstein's Topics in Algebra. Dummit and Foote is clearly written and covers a great deal of material while being accessible to someone studying the subject for the first time. It does a good job focusing on the most important topics for modern math, giving a pretty broad overview without going too deep on any one topic. It has many good exercises at varying difficulties.

Lang is not a bad book but is not introductory. It covers a huge amount but is hard to read and has difficult exercises. Someone new to algebra will learn faster and with less frustration from a less advanced book. Hungerford is awful: it is less clear, less modern, harder, and covers less material than Dummit and Foote. Herstein is OK but too old-fashioned and narrow, with too much focus on finite group theory. The part about Galois theory is good, though, as are the exercises.

Comment author: Bundle_Gerbe 18 July 2012 04:59:29PM *  12 points [-]

In the strictest sense, "adding" sheep is a category error. Sheep are physical objects: you can put two sheep in a pen, or imagine putting two sheep in a pen, but you aren't "adding" them; that's for numbers. Arithmetic is merely a map that can fruitfully be used to model (among many other things) certain aspects of collecting sheep, separating sheep into groups, and so on, under certain circumstances. When mathematical maps work especially well, they risk being confused with the territory, which is what I think is going on here. The "female sheep + male sheep" example should be thought of as an aspect of "putting sheep in pens for long periods of time" that addition does not model, not as an exception to "1+1=2".

Comment author: Bundle_Gerbe 15 June 2012 02:28:42AM 0 points [-]

We need a sense in which Bob is "just as likely to have existed" as I am; otherwise, it isn't a fair trade.

First, consider the case before Omega's machine is introduced. The information necessary to create Bob contains the information necessary to create me, since Bob is specified as a person who would create me specifically, and not anyone else who might also have made him. Add to that all the additional information necessary to specify Bob as a person, and surely Bob is much less likely to have existed than I am, if that phrase can be given any meaning. This saves us from being obligated to tile the universe with hypothetical people who would have created us.

With Omega's machine, we also imagine Bob as having run across such a machine, so he no longer has to contain the same information. Still, Bob has the specific characteristic of responding somewhat unusually to the "Bob the jerk" problem, which might make him less likely to have existed. So this case is less clear, but it still doesn't seem like a fair trade.

To give a specific sense to "just as likely to have existed," imagine Prometheus drew up two complete plans for people to create, one for Bob and one for you, then flipped a coin to decide which one to create, and it turned out to be you. Now that you exist, Prometheus lets you choose whether to also create Bob. Again, let's say Bob would have created you if and only if he thought you would create him. In this case we can eliminate Bob, since it's really the same if Prometheus just says "I flipped heads and created you. But if I had flipped tails, I would have created you if and only if I thought you would give me 100 bucks. So give me 100 bucks." (The only difference is that in the case with Bob, Prometheus creates you indirectly, by creating Bob, who then chooses to create you.) The only difference between this and Pascal's mugging is that your reward in the counterfactual case is existence. I can't think of any reason (other than a sense of obligation) to choose differently in this problem than you do in Pascal's mugging.

Finally, imagine the same situation with Prometheus, but let's say Bob isn't a real jerk, just really annoying and smells bad. He also finds you annoying and malodorous. You are worse off if he exists. But Prometheus tells you Bob would have created you if Prometheus had flipped tails. Do you create Bob? It's sort of a counterfactual prisoner's dilemma.

Comment author: Ezekiel 23 May 2012 11:03:55AM 21 points [-]

I think we could generalise problem 2 to be problematic for any decision theory XDT:

There are 10 boxes, numbered 1 to 10. You may only take one. Omega has (several times) run a simulated XDT agent on this problem. It then puts a prize in the box it determined was least likely to be taken by such an agent, or, in the case of a tie, in the tied box with the lowest index.

If agent X follows XDT, it has at best a 10% chance of winning. Any sufficiently resourceful YDT agent, however, could run a simulated XDT agent themselves, and figure out what Omega's choice was without getting into an infinite loop.
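The 10% bound follows because XDT's chance of winning is the minimum entry of its own mixed strategy over the boxes, and the minimum of ten probabilities summing to 1 is at most 1/10. A minimal sketch of Omega's procedure (the example mixed strategies are made up for illustration):

```python
def omega_box(p):
    """Box Omega fills: the one the simulated XDT agent is least likely
    to take; list.index(min(p)) breaks ties toward the lowest index."""
    return p.index(min(p))

def xdt_win_prob(p):
    """XDT's chance of taking the prize box, given its own mixed
    strategy p over the 10 boxes (entries sum to 1)."""
    return p[omega_box(p)]

uniform = [0.1] * 10
skewed = [0.2, 0.15, 0.15, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.0]
print(xdt_win_prob(uniform), xdt_win_prob(skewed))  # 0.1 0.0
```

The uniform strategy achieves the 10% ceiling exactly; any strategy that puts less than 0.1 on some box does strictly worse, since Omega targets precisely that box.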

Therefore, YDT performs better than XDT on this problem.

If I'm right, we may have shown the impossibility of a "best" decision theory, no matter how meta you get (in close analogy to Gödelian incompleteness). If I'm wrong, what have I missed?

Comment author: Bundle_Gerbe 30 May 2012 09:27:02PM *  1 point [-]

To draw out the analogy to Gödelian incompleteness: any computable decision theory is subject to the suggested attack of being given a "Gödel problem" like problem 1, just as any computable set of axioms for arithmetic has a Gödel sentence. You can always make a new decision theory TDT' that is TDT plus "do the right thing for the Gödel problem". But TDT' has its own Gödel problem, of course. You can't make a computable theory that says "do the right thing for all Gödel problems"; if you tried, it would not give you something computable. I'm sure this is all just restating what you had in mind, but I think it's worth spelling out.

If you have some sort of oracle for the halting problem (i.e. a hypercomputer) and Omega doesn't, he couldn't simulate you, so you would presumably be able to always win fair problems. Otherwise, the best you could hope for is to get the right answer whenever your computation halts, but fail to halt on some problems, such as your Gödel problem. (A decision theory like this can still be given a Gödel problem if Omega can solve the halting problem: "I simulated you, and if you fail to halt on this problem...") I wonder whether TDT fails to halt on its Gödel problem, or whether some natural modification of it might have this property, but I don't understand it well enough to guess.

I am less optimistic about revising "fair" to exclude Gödel problems. The analogy would be proving Peano arithmetic complete "except for things that are like Gödel sentences," and I don't know of any formalization of the idea of "being a Gödel sentence".
