Comment author: Today 12 February 2012 01:17:51AM -1 points [-]

I would say that is when there is empirical evidence supporting the claim but, try as you might, you can't find any that falsifies it.

Comment author: Nebu 12 March 2015 03:06:30PM 1 point [-]

I think a flawed (human) agent using this technique is too susceptible to convincing themselves that they tried hard enough, so we've just pushed the problem one step back: how do you know when you've tried hard enough?

Comment author: RichardKennaway 13 October 2010 10:21:31AM *  1 point [-]

Greg Egan's short story "Axiomatic" is close to the first scenario. Complete synopsis in rot13:

N zna, n pbzzvggrq cnpvsvfg, unf n tveysevraq jub vf fubg nf n olfgnaqre va n onax eboorel. Gur eboore vf pnhtug naq pbaivpgrq ohg trgf n fubeg fragrapr. Gur zna jnagf gb xvyy uvz, lrg vf nyfb bccbfrq gb xvyyvat uvz. Fb va beqre gb or noyr gb xvyy uvz ur ohlf na vyyvpvg qeht gb ercebtenz uvf trareny ivrjcbvag gb bar bs "Crbcyr ner whfg zrng. Gurl qba'g znggre." Gura ur tbrf gb pbasebag gur eboore, abj bhg bs wnvy, ohg orsber fubbgvat uvz, ur nfxf jul ur xvyyrq uvf tveysevraq, naq trgf gur bss-unaq nafjre, "url, fur jnf whfg va gur jnl, zna". Gur eboore unq gur fnzr nggvghqr gung ur unf whfg chepunfrq. Ur wblbhfyl rzcgvrf uvf tha ng uvz, abg va eriratr sbe uvf tveysevraq, ohg orpnhfr [crbcyr ner zrng, gurl qba'g znggre].

Gur qeht bayl unf n grzcbenel rssrpg, ohg gur fgbel raqf jvgu gur cebgntbavfg vagraqvat gb trg n irefvba gung jvyy znxr vg creznarag.

So, what do you do with Gandhi after his viewpoint has changed and he's done the deed? What does Gandhi do with Gandhi? I think this is a case where hardening the problem by elevating the stakes obscures the issue rather than focussing it. Just about any means can be made to look justified by making the ends important enough.

Comment author: Nebu 02 February 2015 09:31:56AM 0 points [-]

Slight nitpick on your summary of the story:

Gur cebgntbavfg qbrf abg rzcgl uvf tha vagb gur eboore zreryl orpnhfr crbcyr ner zrng naq qba'g znggre. Vafgrnq, gur cebgntbavfg unq vagraqrq gb yrg gur eboore yvir orpnhfr gur cebgntbavfg ernyvmrq gur jubyr fvghngvba jnf nofheq naq abguvat znggrerq nalzber, abg rira uvf tveysevraq'f qrngu (vg'f nzovthbhf nf gb jurgure gur qeht jnf gur cevznel pnhfr bs uvz pbzvat gb guvf pbapyhfvba). Gur cebgntbavfg ghearq gb jnyx njnl, naq gung'f jura gur eboore ehfurq uvz. Va ernpgvba/frys qrsrafr, gur cebgntbavfg fubg gur eboore. Nsgre frrvat gung ur jnf qrnq, gur cebgntbavfg sryg ab erzbefr naq yrsg.

Comment author: Arandur 01 August 2011 05:53:45PM *  -1 points [-]

I just reread it; thank you for allowing me to see one of Eliezer's posts in a new light. Always a pleasure.

However, I have other data at hand that seems to lend credence to the "God exists" theory; I don't have to rely on the results of one test. If I did, then by that same logic, we would always have to assume that a coin once flipped would be 100% biased toward the side upon which it landed.

Your program, in order to describe the universe, has to be the best model of every single point in the universe. I'm sure there were people who argued that Newton's equations were simpler than General Relativity. But the data cannot be denied.

Comment author: Nebu 14 January 2015 05:33:46AM 0 points [-]

I think there are two distinct concepts here: One of them is Bayesian reasoning, and the other is Solomonoff induction (which is basically Occam's Razor taken to its logical extreme).

Bayesian reasoning is applicable when you have some prior beliefs, usually formalized as probabilities for various theories being true (e.g. 50% chance God did it, 50% chance amino acids did it), and then you encounter some evidence (e.g. you observe angels descend from the sky), and you now want to update your beliefs to be consistent with the evidence you encountered (e.g. 90% chance God did it, 10% chance amino acids did it). To emphasize, Bayesian reasoning is simply not applicable unless you have some prior belief to update.
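Here is a minimal sketch of that updating step in Python; the 50/50 prior and the likelihoods for the "angels descending" evidence are made-up illustrative numbers, not anything claimed above.

```python
# Sketch of a single Bayesian update over two hypotheses.
# The likelihood numbers below are purely illustrative assumptions.

prior = {"God did it": 0.5, "amino acids did it": 0.5}

# Assumed likelihoods of the evidence "angels descend from the sky"
# under each hypothesis.
likelihood = {"God did it": 0.9, "amino acids did it": 0.1}

# Bayes' rule: posterior(H) is proportional to prior(H) * P(evidence | H).
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'God did it': 0.9, 'amino acids did it': 0.1}
```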

However, I have other data at hand that seems to lend credence to the "God exists" theory;

Sounds like you're referring to Bayesian reasoning here. You're saying that without that "other data" you have some probabilities for your various theories, but when you add in that data, you're inclined to update your probabilities such that "God did it" becomes more probable.

In contrast, Occam's Razor and Solomonoff induction do not work with "prior beliefs" (in fact, Solomonoff induction is often used, in theory, to bootstrap the Bayesian process, providing the "initial belief" from which you can start doing Bayesian updates). When using Solomonoff, you enumerate all conceivable theories, and then for each theory you check whether it is compatible with the data you currently have. You don't think in terms of "this theory is more probable given data set 1, but that theory is more probable given data set 2". You simply mark each theory as "compatible" or "not compatible". Once you've done that, you eliminate all theories which are "not compatible" (or equivalently, assign them a probability of 0). Now all that remains is to assign probabilities to the theories that remain (i.e. the ones which are compatible with the data you have). One naive way to do that is to just assign uniform probability to all remaining theories. Solomonoff induction actually states that you should assign probabilities based on the complexity of the theory.
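A rough sketch of that filter-then-weight procedure, using made-up toy theories, arbitrary "complexities", and toy data:

```python
# Sketch of the filter-then-weight procedure described above.
# Theories, their "complexities", and the data are all made-up toy values.

theories = [
    # (name, complexity in bits, prediction function)
    ("always heads",   3, lambda flips: all(f == "H" for f in flips)),
    ("always tails",   3, lambda flips: all(f == "T" for f in flips)),
    ("any mix of H/T", 8, lambda flips: all(f in ("H", "T") for f in flips)),
]

data = ["H", "H", "H"]  # everything observed so far

# Step 1: keep only the theories compatible with the data.
compatible = [(name, k) for name, k, predicts in theories if predicts(data)]

# Step 2a (naive): uniform probability over what remains.
uniform = {name: 1 / len(compatible) for name, _ in compatible}

# Step 2b (Solomonoff-flavoured): weight each remaining theory by 2**(-complexity),
# so simpler theories get more of the probability mass.
weights = {name: 2 ** -k for name, k in compatible}
total = sum(weights.values())
by_complexity = {name: w / total for name, w in weights.items()}

print(uniform)        # {'always heads': 0.5, 'any mix of H/T': 0.5}
print(by_complexity)  # {'always heads': ~0.97, 'any mix of H/T': ~0.03}
```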

If I did, then by that same logic, we would always have to assume that a coin once flipped would be 100% biased toward the side upon which it landed.

That's actually not true. Coincidentally, I wrote a web app which illustrates a similar point: http://nebupookins.github.io/binary-bayesian-update/

Mentally relabel the "Witnessed Failure" button as "Saw a coin come up tails" and the "Witnessed Success" button as "Saw a coin come up heads", then click the "Witnessed Success"/"Saw a coin come up heads" button.

Note that the result is not "You should assume that the coin is 100% biased towards heads."

Instead, the results are "There's a 0% chance that the coin is 100% biased towards tails, a tiny chance that the coin is 99% biased towards tails, a slightly larger chance that the coin is 98% biased towards tails", and so on until you reach "about a 2% chance the coin is 100% biased towards heads", which is currently your most probable theory. But note that while "100% biased towards heads" is your most probable theory, you're extremely non-confident in that theory (only a 2% chance that the theory is true). You need to witness a lot more coin flips to increase your confidence level (go ahead and click the buttons a few more times).

Disclaimer: This web app actually uses the naive solution of initially assigning uniform probability to all possible theories, rather than the Solomonoff solution of assigning probability according to complexity.
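For concreteness, here's a rough sketch of the kind of calculation described above, assuming a grid of 101 bias hypotheses and the naive uniform prior mentioned in the disclaimer:

```python
# Sketch of the kind of update the web app above performs:
# 101 hypotheses ("the coin lands heads with probability p") with a uniform
# prior, updated on a single observed heads.

hypotheses = [i / 100 for i in range(101)]       # p = 0.00, 0.01, ..., 1.00
prior = [1 / len(hypotheses)] * len(hypotheses)  # naive uniform prior

def update(probs, outcome):
    """One Bayesian update: likelihood is p for heads, (1 - p) for tails."""
    likelihoods = [p if outcome == "heads" else (1 - p) for p in hypotheses]
    unnormalized = [pr * lk for pr, lk in zip(probs, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

posterior = update(prior, "heads")

print(posterior[100])  # P(coin is 100% biased towards heads) ~= 0.0198, about 2%
print(posterior[0])    # P(coin is 100% biased towards tails) == 0.0
```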

Comment author: CCC 10 November 2014 01:37:29PM 3 points [-]

The first idea to come to my mind, on reading this article, is that companies do not produce jobs and attempt to sell them to potential employees. Rather, potential employees produce labour, which they attempt to sell to companies (or, in the case of entrepreneurs, which they use to produce goods or services for sale).

Labour is... a very odd good. Most goods are produced, for a cost, and can then be stored and transported for a time before being sold at a profit.

Labour, on the other hand, can be produced at very little cost, but must be used in some manner at the time and place of production and can only be produced at a limited rate per person. Any hours that are not used are lost. Skilled labour is more complicated yet, as it comes in many varieties and most people are unable to produce most varieties (and those that can produce any given type will do so with varying degrees of quality).

This means, among other things, that the available supply of labour depends on the population of employable people; a new person, grown to an employable age, can produce new labour over and above what currently exists.

It seems likely that much of the difference between the job market and the ideal market can be traced back to just how different labour is from other goods.

Comment author: Nebu 11 November 2014 06:59:23AM 1 point [-]

The first idea to come to my mind, on reading this article, is that companies do not produce jobs and attempt to sell them to potential employees. Rather, potential employees produce labour, which they attempt to sell to companies (or, in the case of entrepreneurs, which they use to produce goods or services for sale).

What kind of predictive power does this belief have? I.e. why does this inversion mean that the labour market is "different", but performing the inversion "stores don't sell goods to consumers for money, consumers sell money to stores for goods" does not make the goods market similarly "different"?

Comment author: Decius 14 April 2013 09:26:51PM 0 points [-]

"Having at least one survivor" means that humanity exists at the end of the game. "Surviving" means that your country exists at the end of the game.

I sidestepped 'ethical' entirely in favor of 'practical'. I also had to address this question in a manner not nearly as hypothetical or low-stakes as this.

Comment author: Nebu 15 April 2013 02:57:35PM 0 points [-]

Okay, thanks.

So it sounds to me like this is not an iterated prisoner's dilemma, because if my country gets nuked, I don't get to elect another military leader for the next round.

Comment author: Decius 07 March 2013 05:56:36AM 0 points [-]

Vastly simplified:

Survival is worth three points, destroying the opposing ideology is worth two points, and having at least one survivor is worth twenty points.

If nobody uses WMDs, everyone gets 23 points. If one side uses WMDs, they survive and destroy their ideological opponent for 25 points to the opposing 20. If both sides use WMDs, both score 2 for destroying the opponent.
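A quick sketch to check that arithmetic, with the point values taken from the description above:

```python
# Sketch reproducing the payoff arithmetic above.
# Points: 3 for your side surviving, 2 for destroying the opposing ideology,
# 20 for humanity having at least one survivor.

def payoff(we_launch, they_launch):
    we_survive = not they_launch
    they_survive = not we_launch
    score = 0
    if we_survive:
        score += 3                   # survival
    if not they_survive:
        score += 2                   # destroying the opposing ideology
    if we_survive or they_survive:
        score += 20                  # at least one survivor
    return score

print(payoff(False, False))  # 23 each if nobody launches
print(payoff(True, False))   # 25 for the side that launches first...
print(payoff(False, True))   # ...20 for the side that gets destroyed
print(payoff(True, True))    # 2 each if both launch
```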

Given that conflicts will happen, a leader who refuses to initiate use of WMDs while convincing the opponent that he will retaliate with them is most likely to result in the dual-cooperate outcome. Therefore the optimum choice for the organism which selects the military leaders is to select leaders who are crazy enough to nuke them back, but not crazy enough to launch first.

If you share the relative ranking above (not-extinction>>surviving>wiping out others), then your personal maximum comes from causing such a leader to be elected (not counting unrelated effects on e.g. domestic policy). The cheapest way of influencing that is by voting for such a leader.

Comment author: Nebu 14 April 2013 03:10:53PM *  0 points [-]

What's the difference between "Survival" and "having at least one survivor"?

The way I see it:

  • If I'm dead, 0 points.
  • If I'm alive, but my city got nuked, so it's like a nuclear wasteland, 1 point.
  • If I'm alive, and living via normal north american standards, 2 points.

We're assuming a conflict is about to happen, I guess, or else the hypothetical scenario is boring and there are no important choices for me to make.

The question is not "Do I elect a crazy leader or a non-crazy leader?", but rather, "Do I elect a leader who believes 'all's fair in love and war', or a leader who believes in 'always keep your word and die with honor'?"

I.e. if you think "ethical vs unethical" means "will retaliate-but-not-initiate vs will not retaliate-but-not-initiate", then it's no wonder we're having communication problems.

Comment author: Decius 20 February 2013 02:47:12AM 0 points [-]

I think the payoff matrix of warfare is very analogous to the PD payoff matrix, and that the previous (and even current) military leaders are available to all serious players of the game. Also, I anticipate that others might make irrational decisions, like responding to a WMD attack with a WMD reprisal even if it doesn't benefit them; they might also make rational decisions, like publicly and credibly precommitting to a WMD reprisal in the event of a WMD attack.

Comment author: Nebu 06 March 2013 02:57:33PM 0 points [-]

I'm still not following you.

So first of all, you'll need to convince me that the payoff matrix for an individual civilian within a nation deciding who their military leader should be is similar to that of one of the prisoners in PD. In particular, we'll need to look at what "cooperate" and "defect" even mean for the individual citizen. E.g. does "cooperate" mean "elect an ethical military leader"?

Second, assuming you do convince me that the payoff matrices are similar, you'll have to clarify whether you think warfare is iterated for an individual civilian, especially when the "other" nation defects. I suspect that if my leader is ethical and theirs is not, then I will be dead, and hence there is no iteration for me.

Thirdly, you may wish to clarify whether all the sentences after your first are intended to be new assertions, or if they are supposed to be supporting arguments for the first sentence.

Comment author: Decius 16 February 2013 07:42:05AM 0 points [-]

But the selection of military leaders is iterated.

Comment author: Nebu 19 February 2013 06:33:14PM 1 point [-]

I'm afraid I don't see the relevance.

Comment author: Decius 15 February 2013 11:46:19PM 1 point [-]

Do you defect in iterated prisoners' dilemma?

Comment author: Nebu 16 February 2013 03:42:32AM *  1 point [-]

No, but I'm not sure military conflicts are necessarily iterated, especially from the perspective of me, an individual civilian within a nation.

Comment author: CCC 12 February 2013 07:15:57AM 2 points [-]

I took it as meaning the second. There's even a recommendation as to what else to read: a book on Lisp.

Comment author: Nebu 15 February 2013 05:20:42PM 1 point [-]

Of course, if your goal is to learn Python but you find Zed's book too easy, "Read a book on Lisp" is probably not suitable advice.
