Comment author: VKS 12 April 2012 05:48:37PM *  0 points [-]

In an undergraduate seminar on game theory I attended, the presenter mentioned, in answer to a question, that when computing a payoff matrix the row and column headings aren't individual actions but entire strategies; in other words, it's as if you decide what you will do in every circumstance at the beginning of the game. This is because, when evaluating strategies, nobody cares when you decide, so you might as well act as if you had it all planned out in advance. So in that spirit, I'm going to use the following principle:

An agent should choose the strategy that it predicts gives the greatest outcome, weighted by the probability of each outcome, and so on...

to approach every one of these problems.
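The principle above can be sketched directly: enumerate the strategies, weight each outcome's utility by its probability, and pick the maximizer. This is just a generic expected-utility sketch; the names and the toy numbers below are illustrative, not from the original discussion.

```python
def best_strategy(strategies, outcomes, prob, utility):
    """Pick the strategy with the highest probability-weighted payoff."""
    def expected_utility(s):
        # Expected utility of strategy s: sum over outcomes of P(o | s) * U(o)
        return sum(prob(s, o) * utility(o) for o in outcomes)
    return max(strategies, key=expected_utility)

# Toy instance: strategy "a" yields the good outcome 90% of the time.
strategies = ["a", "b"]
outcomes = ["good", "bad"]
prob = lambda s, o: {"a": {"good": 0.9, "bad": 0.1},
                     "b": {"good": 0.2, "bad": 0.8}}[s][o]
utility = lambda o: {"good": 100, "bad": 0}[o]
```

On this toy instance, `best_strategy` returns `"a"`, since its expected utility (90) beats that of `"b"` (20).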

On Omega's coin flip: Omega has given you the function you have to apply to your strategy; just apply it, and the result is bigger if you answer "yes". Although, realistically, there's no way Omega has provided nearly enough information for you to trust him, but whatever, that's the premise.

On Parfit's hitchhiker: Again, Ekman has access to your strategy. Just pick one that does benefit him, since those are the only ones where you don't end up dead. If you don't have $100, find something else you could give him.

On the Democratic Pie: Well, your problem has no strong Nash equilibrium; no solution is going to be stable. I don't really know how this works when you have more than two players (undergraduate, remember), but I suggest looking into not using a pure strategy. If each voter votes randomly, but chooses the probabilities of his votes appropriately, things work out a little better. You can then compute which semi-random strategy gets you the highest expected slice, and so on. Find a book; I don't know how this works. (If I wanted to solve this problem on my own, I would try it with 8 coconuts first, rather than a continuum of cake.) (This also spells Doom for the AIs respecting whatever Constitution is given them. Not just Doom, Somewhat Unpredictable Doom.)

On the Prisoner's Dilemma's Infinite Regress: I don't know.

Comment author: VKS 12 April 2012 06:14:12PM 0 points [-]

Further elaboration on the cake problem's discrete case:

Suppose there are two slices of cake, and three people who choose by majority vote how these will be distributed. Nobody votes to get both slices for themselves alone, since they can't get a majority that way. So everybody just votes to get one slice for themselves, and randomly decides who gets the other slice. There can be ties, but whenever a vote finally isn't a tie, your expected share is 2/3 of a slice.
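A quick Monte Carlo sketch of this two-slice, three-voter case (assuming, as above, that each voter proposes one slice for themselves plus one for a random other voter, and an allocation passes with two matching votes):

```python
import random

def simulate(rounds=100_000, seed=0):
    """Estimate the two-slice, three-voter cake vote by Monte Carlo."""
    rng = random.Random(seed)
    wins = 0                # number of decisive (non-tied) votes
    slices = [0, 0, 0]      # slices won by each player across decisive votes
    for _ in range(rounds):
        # Each voter proposes {self, one random other} as the slice recipients.
        votes = [frozenset((i, rng.choice([j for j in range(3) if j != i])))
                 for i in range(3)]
        for alloc in set(votes):
            if votes.count(alloc) >= 2:   # majority: the allocation passes
                wins += 1
                for p in alloc:
                    slices[p] += 1
                break  # at most one allocation can collect 2 of the 3 votes
    p_decisive = wins / rounds
    shares = [s / wins for s in slices]   # expected slices per decisive vote
    return p_decisive, shares
```

Running this, roughly 3/4 of votes come out decisive, and each player's expected share conditional on a decisive vote lands near 2/3 of a slice, matching the symmetry argument (2 slices shared among 3 symmetric players).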

To get the continuous case:

It's tricky, but find a way to extend the previous reasoning to n slices and m players, then take the limit as n goes to infinity. The voting sessions do get longer and longer before consensus is reached, but even when consensus is forever away, you should be able to calculate your expectation for each outcome...

Comment author: VKS 08 April 2012 12:08:05PM *  4 points [-]

Hello!

I should have read this post before I started posting.

I'm here because figuring out how thinking works is something I am interested in doing. I'm a freshman student in mathematics somewhere on planet Earth, but I know an unpredictable amount of mathematics beyond what I am supposed to. Particularly category theory. <3 Cat. Terrible at it for now though.

I hope I can say things which are mostly interesting and mostly not wrong, but my posting record already contains a certain number of errors in reasoning...

Comment author: gwern 25 March 2012 03:02:56AM 1 point [-]

That's why they were prominent!

But they also had to correctly pick out the one which was the revolution - every such scientist faces tons of ideas and hypotheses to consider. Is your hypothesis here a kind of regression to the mean: all scientists are equally vulnerable to holding crankery?

Comment author: VKS 08 April 2012 11:32:52AM *  1 point [-]

Essentially, yes. They just happened to have thrown a string of sixes with the dice, culminating in prominence. If you suppose that crank-susceptible scientists significantly outnumber the crank-immune, you get predictions which resemble our observation that many prominent scientists are susceptible to crankery.

Where by crank-susceptible I mean, approximately, susceptible to infection by crankery...

Comment author: Eliezer_Yudkowsky 05 April 2012 10:38:05PM 5 points [-]

The heck? Quantum fields are completely lawful and sane. Only the higher levels of organization, i.e. human beings, are bugfuck crazy.

Behold, the Copenhagen Interpretation causes BRAIN DAMAGE.

Comment author: VKS 06 April 2012 01:04:23AM 4 points [-]

As natural as QFT seems today, my understanding is that in 1960, before many of the classic texts in the domain were published, the ideas still seemed quite strange. We would do well to remember that when we set out to search for other truths which we do not yet grasp.

:p

Comment author: Will_Newsome 04 April 2012 11:29:05PM -1 points [-]

I assume my understanding is correct, since you didn't correct it.

Sorry, I was just too lazy to correct it. Still too lazy.

Comment author: VKS 04 April 2012 11:31:45PM 1 point [-]

I give up. Good night.

Comment author: J_Taylor 04 April 2012 10:50:00PM 0 points [-]

The word 'similar' is often frustratingly vague. However, crocodiles and birds share a more recent common ancestor than crocodiles and turtles.

But this clearly isn't the use you want to make of this bucket.

The word is nonetheless used. I do agree with you that it is frustrating that the word's usage is historically determined.

Comment author: VKS 04 April 2012 11:28:08PM -1 points [-]

Well, then the term "reptile" is somewhat deceptive in evolutionary biology, and based more on some consensus about appearance. Fine. Whatever. The point is that the word "metaphysics" isn't evocative in that way, or any way, except in the context of its historical usage. As such, it cannot inform us about any subject other than the phenomenon of its own acceptance as a field, and is not even a useful subject heading, being a hodgepodge. We can choose whether to continue to use it, and I don't see why we should.

Comment author: Will_Newsome 04 April 2012 10:33:16PM 0 points [-]

I could replace all my uses of the word "metaphysical" with "sloppily-general", I guess, but I'm not sure it has quite the right connotations, and "metaphysical" is already the standard terminology. "Metaphysical" is vague in a somewhat precise way that "sloppily-general" isn't. I appreciate the general need for down-to-earth language, but I also don't want to consent to the norm of encouraging people to take pains to write in such a way as to be understood by the greatest common factor of readers.

Comment author: VKS 04 April 2012 11:16:22PM *  0 points [-]

"X is a metaphysic" becomes "X is somehow a model (of something), but I'm not sure how". "Y is metaphysical" becomes "Y is about or related to a model (somehow)". I assume my understanding is correct, since you didn't correct it. "sloppily-general" is then indeed kind of far from the intended meaning, but that's just because it's a terrible coinage.

Elsewhere, somebody posted a link to the Stanford Encyclopedia of Philosophy's definition of metaphysics. They say right in the intro that they haven't found a good way to define it. The body of the Wikipedia article on metaphysics implies a different definition than its opening paragraph. In common parlance, it's used for some vague spiritualish thing. And your definition is different from all of these. Do you think that the term could reasonably be expected to be understood the way you intended it?

"Metaphysical" isn't vague in a somewhat precise way. It isn't even evocative, as its convoluted etymology prevents even that. It's just vague and used by philosophers.

The greatest common factor of readers isn't even here. The point is more to be understood by readers at all. Don't make your writing more obscure than it needs to be. Hard concepts are hard enough as is, without making the fricking idea of "somehow a model" worth 3 hours' worth of discussion.

Comment author: J_Taylor 04 April 2012 10:03:05PM 0 points [-]

I don't particularly see why the metaphysics bucket is convenient, though.

Unless you are a cladist, 'reptile' is a bucket which contains crocodiles, lizards, and turtles, but does not contain birds and mammals. The word is still sometimes useful for communication.

Is there any point in discussing metaphysics as anything other than a cultural phenomenon among philosophers?

It depends on your goals. I do not generally recommend it, however.

Comment author: VKS 04 April 2012 10:31:52PM 0 points [-]

My claim was not about the general lack of utility of buckets. Briefly, the reptile bucket is useful because reptiles are similar to one another, and thus having a way to refer to them all is handy. There is apparently no such justification for "metaphysics", except in the sense that its contents are related by history. But this clearly isn't the use you want to make of this bucket.

Comment author: J_Taylor 04 April 2012 09:35:54PM 3 points [-]

When someone on Lesswrong uses the term 'simulation', they are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B). (This particular subject often falls under the part of metaphysics known as ontology.)

The same applies to usage of most terms.

Comment author: VKS 04 April 2012 10:23:12PM 1 point [-]

Correct me if I'm wrong, but "They are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B)." and "They are probably making some implicit claims about what it means for some object(A) to be a simulation of some other object(B)" mean exactly the same thing.
