The Irrationality Game

38 Post author: Will_Newsome 03 October 2010 02:43AM

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
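For scale, the voting rule can be sketched mechanically. The function name and numeric cutoffs below are purely illustrative stand-ins, since the post deliberately leaves "basically agree" to intuition:

```python
def suggested_vote(their_prob: float, your_prob: float,
                   upvote_gap: float = 0.05, agree_gap: float = 0.001) -> str:
    """Rough sketch of the game's voting rule.

    Any sufficiently large gap in either direction merits an upvote,
    since overconfidence and underconfidence both count as disagreement.
    The thresholds are arbitrary placeholders, not part of the rules.
    """
    gap = abs(their_prob - your_prob)
    if gap >= upvote_gap:
        return "upvote"    # basically disagree -> vote up
    if gap <= agree_gap:
        return "downvote"  # basically agree -> vote down
    return "pass"          # could go either way -> may abstain

print(suggested_vote(0.999, 0.90))   # big difference of opinion: upvote
print(suggested_vote(0.999, 0.995))  # could go either way: pass
```

The 99.9% vs. 90% example from the post lands in the "upvote" band; 99.9% vs. 99.5% lands in the "pass" band, matching the intuition described above.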

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Comments (910)

Comment author: Angela 21 January 2014 03:41:13PM 3 points [-]

The hard problem of consciousness will be solved within the next decade (60%).

Comment author: VAuroch 12 January 2014 06:54:50AM 0 points [-]

Pope Francis will do more good than harm in the world. (80%)

Comment author: timujin 12 January 2014 04:59:12AM *  2 points [-]

Eliezer Yudkowsky is evil. He trains rationalists and draws them into FAI and x-risk work for some hidden egoistic goal other than saving the world and making people happy. Most people would not want him to reach that goal if they knew what it was. There is a grand masterplan. The money we're giving to CFAR and MIRI isn't going into AI research so much as into that masterplan. You should study rationality via means other than LW, OB, and everything nearby, or not study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?

Comment author: Angela 11 January 2014 06:16:23PM 0 points [-]

The amount of consciousness that a neural network S has is given by phi = MI(A^{H_max}; B) + MI(A; B^{H_max}), where {A, B} is the bipartition of S which minimises the right-hand side, A^{H_max} is what A would be if all its inputs were replaced with maximum-entropy noise generators, MI(A; B) = H(A) + H(B) - H(AB) is the mutual information between A and B, and H(A) is the entropy of A. 99.9%
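The mutual-information building block of this formula is easy to make concrete. A minimal sketch (function names are mine; the noise-injection step that produces A^{H_max} depends on the network's dynamics and is not shown):

```python
from math import log2

def entropy(dist):
    """Shannon entropy H of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    """Marginalise a joint distribution over pairs (a, b) onto one coordinate."""
    out = {}
    for outcome, p in joint.items():
        key = outcome[index]
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(joint):
    """MI(A; B) = H(A) + H(B) - H(AB), as defined in the comment above."""
    return entropy(marginal(joint, 0)) + entropy(marginal(joint, 1)) - entropy(joint)

# Two perfectly correlated bits share exactly one bit of information.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
```

Two independent uniform bits, by contrast, give MI = 0, which is the sanity check that the H(A) + H(B) - H(AB) form should satisfy.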

Comment author: [deleted] 29 December 2012 11:04:38PM 9 points [-]

Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95% That something was conscious. 90%

Comment author: Wrongnesslessness 13 April 2012 05:02:12PM 6 points [-]

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

Comment author: Incorrect 13 April 2012 07:01:32PM 7 points [-]

All existence is intrinsically meaningless

I'm trying to figure out what this statement means. What would the universe look like if it were false?

Comment author: Locaha 21 January 2014 05:09:27PM 2 points [-]

I'm trying to figure out what this statement means.

You can't. We live in an intrinsically meaningless universe, where all statements are intrinsically meaningless. :-)

Comment author: TheOtherDave 13 April 2012 08:12:54PM 6 points [-]

In context, I took it to predict something like "Above a certain limit, as a system becomes more intelligent and thus more able to discern the true nature of existence, it will become less able to motivate itself to achieve goals."

Comment author: thomblake 13 April 2012 07:11:51PM 1 point [-]

I'm not sure it's a bug if "all existence is meaningless" turns out to be meaningless.

Comment author: ArisKatsaris 13 April 2012 06:34:46PM *  1 point [-]

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

This prediction isn't falsifiable -- the word "crazy" is not precise enough, and the word "sufficient" is a loophole you can drive the planet Jupiter through.

Comment author: TimS 13 April 2012 05:27:55PM *  1 point [-]

Aren't you supposed to separate distinct predictions? Edit: I don't see it in the rules, so the remainder of this post has been changed to reflect that.

I upvote the second prediction - the existence of self-aware humans seems evidence of overconfidence, at the very least.

Comment author: Wrongnesslessness 13 April 2012 06:24:03PM 1 point [-]

But humans are crazy! Aren't they?

Comment author: TimS 13 April 2012 06:30:19PM 0 points [-]

If we define crazy as "sufficiently mentally unusual as to be noticeably dysfunctional in society" then I estimate at least 50% of humanity is not crazy.

If we define crazy as "sufficiently mentally unusual that they cannot achieve ordinary goals more than 70% of the time," then I estimate that at least 75% of humanity is not crazy.

Comment author: thomblake 13 April 2012 03:03:31PM *  0 points [-]

This comment will be massively upvoted. 100%.

EDIT: See here. Retracted.

Comment author: TheOtherDave 13 April 2012 03:18:29PM 2 points [-]

Were I a robot from 1960s SF movies, my head would now explode.

Comment author: thomblake 13 April 2012 03:22:09PM 3 points [-]

The stable solution is for everyone to notice that few people will read the comment and so it will only be moderately upvoted, and so upvote it.

Comment author: MarkusRamikin 13 April 2012 03:26:32PM 2 points [-]
Comment author: thomblake 13 April 2012 03:47:06PM 2 points [-]
Comment author: MarkusRamikin 13 April 2012 05:15:38PM 0 points [-]

Aw, didn't mean you to actually do that. :) Guess I'll upvote you here instead.

Comment author: [deleted] 13 April 2012 03:43:38PM 0 points [-]

Why... not?

Comment author: thomblake 13 April 2012 04:23:52PM 2 points [-]

There isn't a reason - that just turned out to be another stable solution to the paradox.

Comment author: [deleted] 13 April 2012 06:02:23PM -1 points [-]

What paradox? There wasn't even a paradox.

Comment author: TheOtherDave 13 April 2012 06:15:32PM 2 points [-]

As I understood it, the paradox was that by the rules of the thread, "This comment will be massively upvoted. 100%" is something I should upvote if I believe it's unlikely to be true. But if I upvote it on that basis, I should expect others to upvote it as well. But if I expect others to upvote it, then I should expect it to be upvoted, and therefore I should consider it likely to be true. But if I consider it likely to be true, then by the rules of the thread, I should downvote it. But if I downvote on that basis, I should expect others to downvote it as well, and therefore I should consider it unlikely to be true. But...

Comment author: thomblake 13 April 2012 06:12:10PM 1 point [-]

Naively:

Everyone should agree that 100% certainty of something is infinitely overconfident. Then, everyone should upvote. Knowing this, I'm completely certain that I'll get lots of upvotes, and so absurdly large amounts of certainty seem justified. And as a kicker, everyone said I was overconfident of something that turned out to be correct.

Obviously, there are other possibilities (like me retracting the comment before it can be massively upvoted), so (like usual) 100% certainty really isn't justified. And unforeseen consequences like that are exactly why you don't play with outcome pumps, as the time turner story reminds us.

Comment author: MarkusRamikin 13 April 2012 03:46:14PM *  0 points [-]

The universe might end due to paradox.

Comment author: [deleted] 13 April 2012 03:47:41PM 0 points [-]

I seriously doubt the universe's integrity depends on the state some bits stored on hardware that exists inside of it.

Comment author: [deleted] 13 April 2012 11:48:23AM *  10 points [-]

I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a toplevel article at one point. Virtual certainty (99.9%).

Comment author: Zetetic 17 April 2012 12:39:44AM *  2 points [-]

essentially erasing the distinction of map and territory

This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.

So the map territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.

Comment author: [deleted] 17 April 2012 05:49:21AM 0 points [-]

It is true that a "shiny" new ontological perspective changes little. Practical intelligences are still Bayesians, for information-theoretic reasons. What my rather odd idea addresses is specifically what one might call the laws of physics and the mystery of the first cause.

And even if one knew the math behind the universe, the only thing one might get from it is a complete theory of QM.

Comment author: Salivanth 13 April 2012 08:57:16AM -2 points [-]

Nobody has ever come up with the correct solution to how Eliezer Yudkowsky won the AI-Box experiment in less than 15 minutes of effort. (This includes Eliezer himself). (75%)

Comment author: [deleted] 18 April 2012 04:05:37PM 0 points [-]

Well, no. The solution is definitely non-obvious and I am also quite certain it took Eliezer himself to come up with a good strategy.

Comment author: gRR 08 April 2012 01:13:44PM 1 point [-]

Richard Dawkins' genocentric ("Selfish Gene") view is a bad metaphor for most of what happens with sufficiently advanced life forms. Organism-centered view is a much better metaphor. New body forms and behaviors first appear in phenotype, in response to changing environment. Later, they get "written" into the genotype if the new environment persists for enough time. Baldwin effect is ubiquitous. (60%)

Comment author: Multiheaded 08 April 2012 08:55:21AM 15 points [-]

Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)

Comment author: VAuroch 12 January 2014 07:40:53AM -1 points [-]

The resemblance is shallow at best.

Comment author: ArisKatsaris 11 April 2012 01:31:02PM 0 points [-]

You didn't assign a probability estimate.

Comment author: Multiheaded 11 April 2012 01:34:33PM *  0 points [-]

Oh. Umm... 33%!

Comment author: [deleted] 11 April 2012 06:44:59AM *  -1 points [-]

Don't joke posts ruin the point of the Irrationality Game?

In any case you are taking the wrong approach. Clearly it is ultimately the fault of the Jews because they run everything, no further thought required.

Comment author: Multiheaded 11 April 2012 12:31:15PM 1 point [-]

I'm truly not joking!!! You know perfectly well that I don't share much of what's commonly known as "sanity". So to me it's worthy of totally non-ironic consideration.

Comment author: [deleted] 11 April 2012 02:26:52PM *  1 point [-]

I'm sorry for the misunderstanding. I think my brain misfired because the theory involved a video game.

Can you elaborate on it? Also this probably isn't the only such incident you think is plausible, can you name others?

Comment author: nwthomas 04 July 2011 09:47:22PM 30 points [-]

I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.

Comment author: [deleted] 13 April 2012 11:25:18AM 1 point [-]

Wow, telepathy is a pretty big thing to discuss. Sure there isn't a simpler hypothesis? Upvoted.

Comment author: nwthomas 26 April 2012 06:25:21AM 0 points [-]

The data I'm working from is that contact with certain people sometimes causes me to have mystical experiences. This has happened somewhere between 20 and 100 times, with less than a dozen people. Sometimes but not always, it happens in both directions; i.e., they also have a mystical experience as a result of the contact.

The simpler hypothesis, from a materialist point of view, is that seeing these people just tripped some switch in my brain, without any direct mind-to-mind interaction being involved. Then we can say that I also tripped such a switch in their brains in the cases where it was reciprocal. We are left with the question of why this weird psychological phenomenon happens.

The religious explanation is in many ways easier and more natural. We can say that my soul brushed up against these people's. It makes sense from within the religious frame of mind that this sort of thing would happen. But obviously we run into the issues with religious views in general.

Comment author: ArisKatsaris 26 April 2012 09:42:47AM *  14 points [-]

If we replaced "mystical experiences" with something of less religious connotations like "raging hard-ons", you wouldn't think that 'souls brushing up against each other' is the most natural explanation -- you'd instead conclude that some aspect of psychology/biochemistry/pheromones is causing you to have a more intense reaction towards certain people and vice-versa.

From a physicalist perspective the brain is as much an organ as the penis, and "mystical experiences" as much a physical event in the brain as erections are a physical event in the penis.

Comment author: [deleted] 26 April 2012 10:09:05AM *  -1 points [-]

So true, so funny.

EDIT: Why was this downvoted? I intended to convey that I thought ArisKatsaris was right in saying that brains are just as physical as genitals, and also that I thought his simile was funny.

Comment author: [deleted] 26 April 2012 09:07:40AM 0 points [-]

You're giving a mysterious answer and proposing ontologically basic mental substances.

I still say that it is a rather extraordinary claim, and thus requires extraordinary evidence. So far you have presented close to none, and what you have could easily and more sensibly be explained with psychological kinks. See cold readings.

Comment author: RichardKennaway 26 April 2012 07:25:28AM 1 point [-]

Neither of these is an explanation.

Comment author: potato 15 June 2011 11:59:47AM -1 points [-]

The natural world is only different from other mathematically describable worlds in content, not in type. Any universe that is described by some mathematical system has the same ontological status as the one that we experience directly. (about 90%)

Comment author: [deleted] 18 April 2012 04:06:29PM 0 points [-]

I agree with this hypothesis.

Comment author: MattMahoney 26 April 2011 04:29:04PM 22 points [-]

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

Comment author: wedrifid 26 April 2011 05:57:08PM 4 points [-]

How do the votes work in this game again? "Upvote for insane", right?

Comment author: 79zombies 25 March 2011 01:03:25AM -2 points [-]

You will downvote this comment (Not confident at all - 0%).

Comment author: [deleted] 28 October 2010 05:00:36AM *  51 points [-]

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Comment author: Tuna-Fish 03 November 2010 01:20:43PM 16 points [-]

Debating the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a fairly wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I have done this as a child.
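For scale, the drop of the surface below a sightline grows with the square of the distance, h ≈ d²/(2R). A quick sketch (function name is mine) shows why longer water crossings make the effect much easier to see than a bare 200 m baseline:

```python
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def curvature_drop(distance_m: float, radius_m: float = EARTH_RADIUS_M) -> float:
    """Approximate drop of the surface below a tangent sightline after
    distance_m metres, using the small-angle form h ~= d^2 / (2R)."""
    return distance_m ** 2 / (2 * radius_m)

for d in (200, 1000, 5000):
    print(f"{d:>5} m: drop ~= {curvature_drop(d) * 100:.1f} cm")
# ~0.3 cm at 200 m, ~7.8 cm at 1 km, ~196 cm at 5 km
```

At 200 m the drop is millimetres, so in practice the sign-and-telescope test needs sightlines of a kilometre or more before the hidden portion becomes unambiguous.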

Comment author: Jack 31 October 2010 09:13:17AM 10 points [-]

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

Comment author: tenshiko 23 October 2010 03:38:45AM -1 points [-]

I believe that virtually perfect gender egalitarianism will not be achieved within my lifetime in the United States with certainty of 90%.

This depends on the assumption that I will only live at most about eighty more years, i.e. that the transhumanist revolution will not occur within that time and that I am either not frozen or fail to thaw. My belief in that assumption is 75%.

Comment author: wedrifid 12 December 2010 02:00:49PM 6 points [-]

Upvoted for drastic underconfidence.

Comment author: Alicorn 23 October 2010 03:43:02AM 5 points [-]

Define "virtually perfect gender egalitarianism".

Comment author: tenshiko 23 October 2010 04:17:57AM 1 point [-]

I have to admit that I knew in my heart I should define it but didn't, mostly because I know that the tenets are purely subjective and there's no way I can cover everything that would be involved. Here are a couple points:

  1. No personality traits are considered acceptable in males and unacceptable in females, or vice versa. E.g. aggressiveness, confinement to the domestic sphere, sexual conquest.
  2. Gender is absent from your evaluation of a person's potential utility, except in specific cases where reproduction is relevant (e.g., concern about maternity leave). Even if it is conclusively proven that average men cannot work in business companies without getting into some kind of scandal eventually or that average women cannot think about math as seriously, that shouldn't affect your preconceptions of Jane Doe or John Smith.
  3. For the love of ice, please let the notion of the man as the default human just die, like it should have SO LONG AGO. PLEASE.

I hope this doesn't fall into a semantics controversy.

Comment author: Alicorn 23 October 2010 04:26:30AM *  6 points [-]
  1. "Considered" by whom? Can I have, say, an aesthetic preference about these things (suppose I think that women look better in aprons than men do, can I prefer on this obviously trivial basis that women do more of the cooking?), or is any preference about the division of traits amongst sexes a problem for this criterion?

  2. "Potential utility" meaning the utility that the person under consideration might experience/get, or might produce? Also, does this lack of preconception thing seem to you to be compatible with Bayesianism? If I have no reason to suspect that John and Jane are anything other than average, on what epistemic basis do I not guess that he is likelier (by the hypothetical proofs you suppose) to be better at math and more likely to cause scandal?

  3. So what gender should the default human be, or should we somehow have two defaults, or should the default human be one with a set of sex/gender characteristics that rarely appear together in the species, or should there be no default at all (in which case what will serve the purposes currently served by having a default)?

I'm totally in favor of gender egalitarianism as I understand it, but it seems a little wooly the way you've written it up here. I'm sincerely trying to figure out what you mean and I'll back off if you want me to stop.

Comment author: tenshiko 23 October 2010 02:11:55PM 1 point [-]
  1. Perhaps an aesthetic preference isn't a problem (obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences). Note that I used the word "personality traits" - some division of other traits is inevitable. Things that upset me with the current state of affairs are where one boy fights with another and it is dismissed as boys being boys, while any other combination of genders would probably result in disciplinary action. Or how the general social trends (in Western cultures, at least) think that women wearing suits is commendable and becoming ordinary, but a man in a dress is practically lynched.

  2. Potential utility produced, for your company or project. I think I phrased this one a little wonkily earlier - you're right, under the proofs I laid out, if all you know about John and Jane are their genders, then of course the Bayesian thing to do is assume John will be better at math. What I mean is more that, if you do know more about John and Jane, having had an interview or read a resume, the assumption that they necessarily reflect the averages of their gender is like not considering whether a woman's positive mammogram could be false. For an extreme example, the majority of homicides in many countries are committed by men. Should the employer therefore assume that Jane is less likely than John to commit such a crime, even if she has a criminal record?

  3. I don't see why having an ungendered default is so difficult, besides for the linguistic dance associated with it in our language (and several others, but far from all of them), which is probably not going to be a problem for many more generations due to the increasing use of "they" as a singular pronoun. For instance, having a raceless or creedless default has proven not to be that hard, even if members of different races or creeds would react differently in such a situation. If one of the things I'm talking about actually happens in a cishuman lifetime, my bet would go on this one. Now, in situations where you need a more specific everyman, who goes to church every Sunday and has two children and a dog, there might be more use in a gendered, race-bearing, creed-bearing individual.

Maybe I should just go back and say "where virtually perfect acknowledges that there are some immutable differences between the sexes but that all others with detrimental effect have been eradicated".

This is why it surprises me so much that the levels of communication post had so little focus on the level of values or potential misunderstandings that can occur on the level of facts due to the ambiguity of language. The value that I am trying to express, and which I assume that you are as well or something close to it, is that men and women should be treated equally, but completely equal treatment would be impractical and not equal in the terms of benefit conferred. (For example, growth of breasts in men should be taken as a health concern, not a sign of attractiveness.) So we are forced to add specifics to our definitions that make them less clear.

Unless you still think something is wrong or missing in my definition to the point that we're talking about significantly different things, I would appreciate it if we moved on from this aspect of the issue.
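The mammogram aside invokes the standard base-rate example, which is worth sketching with the conventional illustrative numbers (the figures below are the classic textbook ones, not data from this thread):

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' rule: P(condition | positive test)."""
    true_pos = prior * sensitivity            # P(condition) * P(positive | condition)
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# 1% prevalence, 80% sensitivity, 9.6% false-positive rate:
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

Even with a positive test, the posterior is only about 8%, which is the point of the analogy: a group-level base rate is a prior to be updated on individual evidence, not a verdict.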

Comment author: [deleted] 12 December 2010 12:11:28PM *  3 points [-]

(obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences).

Some personality traits are considered attractive in one sex and not another.

Comment author: tenshiko 12 December 2010 10:33:54PM 0 points [-]

As I implicitly stated, I don't think that personality traits for the most part should be considered attractive in one sex and not another. There are some physical traits that are arbitrary, like long hair, with attractiveness dimorphism, but I'm talking about physical traits that distinctly vary in whether they would be healthy between males and females. Like having pronounced mammary glands. That's obviously not a fertility marker in both sexes.

Comment author: RomanDavis 17 December 2010 10:46:07PM 2 points [-]

Are you sure this doesn't apply for personality traits as well?

Going into evopsych is so tempting right now, but the "just so story" practically writes itself.

Here's an alternative:

Major personality traits are associated with hormones produced by parts of our body formed through embryogenesis, based on our genes and the traits of our mother's womb. Since our reproductive organs are likewise, it would be very surprising to find there was no correlation between personality traits and fertility/virility, and it would be a major blow against your argument if it turned out to be one that is both strong and positive.

Comment author: Relsqui 23 October 2010 05:27:28AM 1 point [-]

in which case what will serve the purposes currently served by having a default

What are those purposes, anyway?

Comment author: Alicorn 23 October 2010 12:56:42PM 1 point [-]

Literary "everyman" types, not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults, and probably something I'm not remembering.

Comment author: Relsqui 23 October 2010 05:02:55PM 1 point [-]

not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults

How do you do that in English as it is now?

Comment author: Alicorn 23 October 2010 05:41:20PM 1 point [-]

People say things like "Take your average human. He's thus and such." If you want to start a paragraph with "Take your average human" and not use gendered language, you have to say things like "They're thus and such" (sometimes awkward, especially if you're also talking about plural people or objects in the same paragraph) or "Ey's thus and such", which many people don't understand and others don't like.

Comment author: NancyLebovitz 12 December 2010 02:55:19PM 1 point [-]

I don't have an average human, and I don't think the universe does either. I think there's a lot to be said for not having a mental image of an average human.

Furthermore, since there are nearly equal numbers of male and female humans, gender is trait where the idea of an average human is especially inaccurate.

I think the best substitute is "Take typical humans. They're thus and such." Your average alert listener will be ready to check on just how typical (modal?) those humans are.

Comment author: shokwave 12 December 2010 03:32:54PM 1 point [-]

Exactly. People make a fuss about a lack of singular nongendered pronouns. The plural nongendered pronouns are right there.

Comment author: Mercy 23 October 2010 07:09:11PM 0 points [-]

How is "they" any more ambiguous than "you"? Both can easily be qualified with "all".

Comment author: Relsqui 23 October 2010 08:07:47PM 1 point [-]

It's not always grammatically feasible or elegant to do so. Also, the singular "you" is much more common than the singular "they," so your readers are more likely to expect it and are prepared for the potential ambiguity.

Comment author: Vladimir_M 23 October 2010 06:29:51PM *  6 points [-]

Alicorn:

"Ey's thus and such"

I find these invented pronouns awful, not only aesthetically, but also because they destroy the fluency of reading. When I read a text that uses them, it suddenly feels like I'm reading some language in which I'm not fully fluent so that every so often, I have to stop and think how to parse the sentence. It's the linguistic equivalent of bumps and potholes on the road.

Comment author: JGWeissman 23 October 2010 06:39:38PM 2 points [-]

After reading one story that used these pronouns, I was sufficiently used to them that they do not impact my reading fluency.

Comment author: Relsqui 23 October 2010 05:53:23PM 1 point [-]

Hmm. It's true, people do, but I think it's getting less common already. Were you asking, then, which of those alternatives the original commenter preferred?

Comment author: Alicorn 23 October 2010 05:54:52PM 1 point [-]

Not really, I'm just pointing out that gendered language isn't a one-sided policy debate. (I favor a combination of "they" and "ey", personally, or creating specific example imaginary people who have genders).

Comment author: ata 21 October 2010 10:11:22PM *  -2 points [-]

Most vertebrates have at least some moral worth; even most of the ones that lack self-concepts sufficiently strong to have any real preference to exist (beyond any instinctive non-conceptualized self-preservation) nevertheless are capable of experiencing something enough like suffering that they impinge upon moral calculations at least a little bit. (85%)

Comment author: tenshiko 23 October 2010 03:02:29AM 3 points [-]

Objection: Why is the line drawn between vertebrates and invertebrates? True, the nature of spinal cords means vertebrates are generally capable of higher mental processing and therefore have a greater ability to formulate suffering, but you're counting "ones that lack self-concepts sufficiently strong to have any real preference to exist". Are you saying the presence of a notochord gives a fish higher moral worth than a crab?

Comment author: RobinZ 23 October 2010 08:01:20PM 3 points [-]

That's a good point - there are almost certainly invertebrate species on the same side of the line. Squid, for example.

Comment author: Vladimir_Nesov 21 October 2010 10:29:17PM 1 point [-]

"At least a little bit" is too unclear. Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.

Comment author: ata 21 October 2010 11:04:35PM *  0 points [-]

Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.

How so? You mean to the extent that any tiny change has some remote chance of affecting something that someone cares about, or anything more direct than that?

Comment author: Vladimir_Nesov 21 October 2010 11:32:02PM *  1 point [-]

Change, to the extent the notion makes sense (in the map, not territory) already comes with all of its consequences (and causes).

Given any mapping Worlds->Utilities, you get a partition of Worlds on equivalence classes of equal utility. Presumably, exactly equal utility is not easy to arrange, so these classes will be small in some sense. But whatever the case, these classes have boundaries, so that an arbitrarily small change in one direction or the other (from a point on a boundary) determines higher or lower resulting utility. Just make it so that one atom is at a different location.
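[Editor's note: a toy sketch of the boundary argument above. The model here, a one-dimensional "world" and a step utility function, is my own illustration, not from the comment.]

```python
# Toy model: worlds are points on a line, utility is a step function.
# The mapping Worlds -> Utilities partitions worlds into equal-utility
# classes ({x < 0} and {x >= 0}); at the boundary between classes, an
# arbitrarily small displacement ("move one atom") lands in a class of
# strictly higher or lower utility.

def utility(world):
    return 1.0 if world >= 0.0 else 0.0

eps = 1e-12  # arbitrarily small change in one direction or the other
assert utility(0.0 + eps) > utility(0.0 - eps)
```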

Comment author: ata 21 October 2010 11:58:16PM *  0 points [-]

Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program "print 'I am experiencing pain'" doesn't.

(That's assuming that moral questions have correct answers, and are about something other than the mind of the person asking the question. I'm not too confident about that one way or the other, but my original post should be taken as conditional on that being true, because "My subjective emotivist intuition says that x is valuable, 85%" would not be an interesting claim.)

Comment author: Vladimir_Nesov 22 October 2010 12:05:24AM *  0 points [-]

Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program "print 'I am experiencing pain'" doesn't.

If your claim is about moral worth of animals, then you must accept any argument about validity of that claim, and not demand a particular kind of proof (in this case, involving "experience of pain", which is only one way to see the territory that simultaneously consists of atoms).

If your claim is about "experience of pain", then talking about resulting moral worth is either a detail of the narrative not adding to the argument (i.e. a property of "experience of pain" that naturally comes to mind and is nice to mention in context), or a lever that is dangerously positioned to be used for rationalizing some conclusion about that claim (e.g. moral worth is important, which by association suggests that "experience of pain" is real).

Now, that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.

Comment author: ata 22 October 2010 12:35:09AM *  0 points [-]

I don't really understand this comment, and I don't think you were understanding me. Experience of pain in particular is not what I was talking about, nor was I assuming that it is inextricably linked to moral worth. "print 'I am experiencing pain'" was only an example of something that is clearly not a mind with morally-valuable preferences or experience; I used that as a stand-in for more complicated programs/entities that might engage people's moral intuitions but which, under reflection, will almost certainly not turn out to have any of their own moral worth (robot dogs, fictional characters, teddy bears, one-day-old human embryos, etc.), as distinguished from more complicated programs that may or may not engage people's moral intuitions but do have moral worth (biological human minds, human uploads, some subset of possible artificial minds, etc.).

If your claim is about moral worth of animals, then you must accept any argument about validity of that claim, and not demand a particular kind of proof

My claim is about the moral worth of animals, and I will accept any argument about the validity of that claim.

Now, that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.

I would accept that. I definitely think that a world in which a random person gets a dust speck in their eye is better than a world in which a random mammal gets tortured to death (all other things being equal, e.g. it's not part of any useful medical experiment). But I suspect I may have to set the bar a bit higher than that (a random person getting slapped in the face, maybe) in order for it to be disagreeable enough for the Irrationality Game while still being something I actually agree with.

Comment author: andrewbreese 14 October 2010 08:22:54PM 12 points [-]

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.

Comment author: Multiheaded 15 April 2012 08:52:03AM 1 point [-]

I don't understand. By what plausible mechanism could such a disastrous loss of knowledge happen specifically NOW?

Comment author: NancyLebovitz 11 April 2012 03:19:03PM 1 point [-]

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

The good news is that some version of this knowledge keeps getting rediscovered.

The bad news is that the knowledge seems to be mostly tacit and (so far) unteachable.

Comment author: [deleted] 11 April 2012 02:48:41PM -1 points [-]

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Down voted because I think this is very plausible.

Comment author: homunq 14 October 2010 02:09:03AM 13 points [-]

The most advanced computer that it is possible to build with the matter and energy budget of Earth, would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI.(80%, which is obscenely high for an assertion of impossibility about which we have only speculation). Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life.(Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)

NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.

I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.

Comment author: MattMahoney 26 April 2011 04:01:43PM 2 points [-]

I disagree because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for underconfidence.

Comment author: MichaelVassar 16 October 2010 03:42:49PM 0 points [-]

Do you mean unable with any scientific instrumentation that they could build, unable with careful attention, or unlikely to casually?

Are you only interested in branches from 'this' world in terms of measure rather than this class of simulation?

What's your take on Moore's Law, in detail?

Comment author: dfranke 13 October 2010 12:55:03PM 15 points [-]

Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.

Comment author: [deleted] 13 April 2012 11:30:14AM 0 points [-]

Might be belief hysteresis, but I am inclined towards a similar confidence level in that proposition.

Comment author: MichaelVassar 16 October 2010 03:44:10PM 0 points [-]

I disagree but I think that might be considered a reasonable probability by most people here.

Comment author: RobinZ 13 October 2010 01:02:35PM 1 point [-]

Confidence level?

Comment author: dfranke 13 October 2010 01:48:54PM 2 points [-]

Let's say 65%.

Comment author: dfranke 13 October 2010 12:59:48PM *  -1 points [-]

Furthermore: if the above is false, it will be proven false within thirty years. If it is true, it will become the majority position among both natural scientists and academic philosophers within thirty years. Barring AI singularity in both cases. Confidence level 70%.

Comment author: dilaudid 13 October 2010 07:40:08AM *  19 points [-]

There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)

Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.

Comment author: Relsqui 13 October 2010 07:54:40AM *  10 points [-]

I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.

(I agree with your premise, but not your conclusion.)

Comment author: dilaudid 13 October 2010 08:11:34AM *  1 point [-]

To directly address your point - what I mean is if you have 1 computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feeling of low status and worthlessness.

I probably stole from The Economist too.

Comment author: Relsqui 13 October 2010 08:35:43AM 0 points [-]

That depends on what you're trying to accomplish. If you're not using your 200MHz machine because the things you want to work on require at least a gig of processing power, buying the new one might be very productive indeed. This doesn't mean you can't find a good purpose for your existing one, but if your needs are beyond its abilities, it's reasonable to pursue additional resources.

Comment author: dilaudid 13 October 2010 11:14:02AM 0 points [-]

Yeah I can see that applies much better to intelligence than to processing speed - one might think that a super-genius intelligence could achieve things that a human intelligence could not. Gladwell's Outliers (embarrassing source) seems to refute this - his analysis seemed to show that IQ in excess of 130 did not contribute to success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness - in this case, intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.

Comment author: Relsqui 13 October 2010 07:26:29PM 2 points [-]

I'm sure that's true. The difference is that all that extra intelligence is tied up in a fallible meatsack; an AI, by definition, would not be. That was the flaw in my analogy--comparing apples to apples was not appropriate. It would have been more apt to compare a trowel to a backhoe. We can't easily parallelize among the excess intelligence in all those human brains. An AI (of the type I presume singulatarians predict) could know more information and process it more quickly than any human or group of humans, regardless of how intelligent those humans were. So, yes, I don't doubt that there's tons of wasted human intelligence, but I find that unrelated to the question of AI.

I'm working from the assumption that folks who want FAI expect it to calculate, discover, and reason things which humans alone wouldn't be able to accomplish for hundreds or thousands of years, and which benefit humanity. If that's not the case I'll have to rethink this. :)

Comment author: dilaudid 14 October 2010 12:00:09PM 1 point [-]

I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).

I think my issue is that I think it will be insanely difficult to produce an AI and I do not believe it will produce a utopian "singularity" - where people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked. I concede the point.

Comment author: RichardKennaway 13 October 2010 07:43:36AM 3 points [-]

Did you have this in mind? Cognitive Surplus.

Comment author: dilaudid 13 October 2010 07:52:53AM 0 points [-]

Yes - thank you for the cite.

Comment author: nick012000 11 October 2010 03:32:07PM *  54 points [-]

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eleizer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

Comment author: Normal_Anomaly 14 December 2010 02:47:10AM 0 points [-]

My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn't the UFAI have sent the assassins to back before he started spreading bad-for-the-UFAI memes (or just after so it would be able to know who to kill)?

Comment author: Nick_Tarleton 11 October 2010 10:43:18PM 19 points [-]

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

Comment author: RobinZ 11 October 2010 04:57:20PM 18 points [-]

What reason do you have for assigning such high probability to time travel being possible?

Comment author: nick012000 12 October 2010 05:32:24AM 0 points [-]

Well, most of the arguments against it that I know of start with something along the lines of "If time travel exists, causality would be fucked up, and therefore time travel can't exist," though it might not be framed quite that explicitly.

Also, if FTL travel exists, either general relativity is wrong, or time travel exists, and it might be possible to create FTL travel by harnessing the Casimir effect or something akin to it on a larger scale, and if it is possible to do so, a recursively improving AI will figure out how to do so.

Comment author: RobinZ 12 October 2010 12:18:33PM 3 points [-]

That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.

Comment author: Perplexed 11 October 2010 11:18:28PM *  2 points [-]

And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation?

;)

Edit: I meant what reason do you (nic12000) have? Not you (RobinZ). Sorry for the confusion.

Comment author: RobinZ 11 October 2010 11:28:27PM *  2 points [-]

I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability.

Edit: Of course, evidence for that 95%+ would be appreciated.

Comment author: rabidchicken 11 October 2010 09:45:01PM -1 points [-]

nick voted up, robin voted down... This feels pretty weird.

Comment author: nick012000 11 October 2010 03:08:48PM 55 points [-]

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

Comment author: Swimmy 16 October 2010 08:04:12PM 5 points [-]

You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.

Comment author: wedrifid 16 October 2010 08:24:56PM 4 points [-]

Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!

Comment author: RobinZ 21 October 2010 10:31:51PM 5 points [-]

It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.

Comment author: RobinZ 11 October 2010 04:55:14PM 5 points [-]

I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?

Comment author: Vladimir_Nesov 11 October 2010 05:38:07PM *  1 point [-]

We should learn to present this argument correctly, since complexity of hypothesis doesn't imply its improbability. Furthermore, the prior argument drives probability through the floor, making 99% no more surprising than 1%, and is thus an incorrect argument if you wouldn't use it for 1% as well (would you?).

Comment author: RobinZ 11 October 2010 06:01:41PM *  8 points [-]

I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:

  1. God exists.
  2. God created the universe.
  3. God prefers not to violate natural laws.
  4. The stories about people seeing angels are based on real events.
  5. The angels seen during these events were actually just robots.
  6. The angels seen during these events were wielding laser turrets.

Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3 the claim is overconfident, and that was what I meant when I gave my response.

But yes - I'm not good at arguing.

Comment author: gwern 10 October 2010 01:12:38AM 1 point [-]

Previous survey on this topic: http://lesswrong.com/lw/2l/closet_survey_1/

Comment author: Strange7 08 October 2010 08:28:20AM -2 points [-]

What's with all this 'infinite utility/disutility' nonsense? Utility is a measure of preference, and 'preference' itself is a theoretical construct used to predict future decisions and actions. No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it, which (barring hyperinflation so cataclysmic that some government starts issuing banknotes with aleph numbers on them, and further market conditions so inconceivably bizarre that such notes are widely accepted at face value) isn't even remotely possible. Protestations of willingness in the absence of demonstrated ability don't count; talk is cheap, if you really cared that much you'd be finding a way instead of whining.

I've had a funny feeling about this subject for a while, but the logic finally clicked just recently. Still, there could be some flaw I missed. ~98%

Comment author: wedrifid 08 October 2010 09:18:10AM *  4 points [-]

No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it,

Just willing. If they want it infinitely much and someone else gives it to them then they have infinite utility. Their wishes may also be arbitrarily trivial to achieve. They could assign infinite utility to having a single paperclip and be willing to do anything they can to make sure they have a paperclip. Since they (probably) do have the ability to get and keep a paperclip they probably do have infinite utility.

Call her "Clippet", she's a Paperclip Satisficer. Mind you she will probably still take over the universe so that she can make sure nobody else takes her paperclip away from her but while she's doing that she'll already have infinite utility.

The problem with infinities in the utility function is that it's stupid, not that it's impossible.

Comment author: Strange7 08 October 2010 04:24:12PM 1 point [-]

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

Comment author: Larks 15 December 2010 05:04:53PM 1 point [-]

Consider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but also gets 1 util if mankind survives the next century. She'll do exactly what Clippet would do, unless she was offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from any agent who assigns real values to the paperclip and mankind.

Comment author: JoshuaZ 15 December 2010 05:34:11PM 2 points [-]

No. This is one of the problems with trying to have infinite utility. Kind Clippet won't actually act different than Clippet. Infinity +1 is, if at all defined in this sort of context, the same as infinity. You need to be using cardinal arithmetic. And if you try to use ordinal arithmetic then the addition won't be commutative which leads to other problems.

Comment author: Larks 15 December 2010 05:50:49PM 0 points [-]

Just put Kind Clippet in a box with no paperclips.

Comment author: Strange7 16 December 2010 02:49:53AM 0 points [-]

That would cause Kind Clippet to escape from the box and acquire a paperclip by any means necessary, and preserve humanity in the process if it was convenient to do so.

Comment author: JGWeissman 15 December 2010 05:47:37PM 3 points [-]

And if you try to use ordinal arithmetic then the addition won't be commutative which leads to other problems.

You can represent this sort of value by using lexicographically sorted n-tuples as the range of the utility function. Addition will be commutative. However, cata is correct that all but the first element in the n-tuple won't matter.
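[Editor's note: a minimal sketch of the n-tuple representation, my own illustration with made-up names. Python tuples already compare lexicographically, so a pair (paperclip term, mankind term) reproduces Kind Clippet's preferences, with the second element breaking ties only when the first elements are equal.]

```python
# Lexicographic utility as a sorted pair: (paperclip term, mankind term).
# Tuple comparison in Python is lexicographic, so the second element
# matters only when the first elements tie.

def kind_clippet_utility(has_paperclip, mankind_survives):
    return (int(has_paperclip), int(mankind_survives))

# Any paperclip-world beats any non-paperclip-world...
assert kind_clippet_utility(True, False) > kind_clippet_utility(False, True)
# ...but given the paperclip, mankind's survival breaks the tie.
assert kind_clippet_utility(True, True) > kind_clippet_utility(True, False)
```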

Comment author: JoshuaZ 15 December 2010 06:04:26PM 0 points [-]

Yes, you're right. You can do this with sorted n-tuples.

Comment author: cata 15 December 2010 05:30:21PM 3 points [-]

Does it even make sense to talk about "the chance to do X at no cost to Y?" Any action that an agent can perform, no matter how apparently unrelated, seems like it must have some miniscule influence on the probability of achieving every other goal that an agent might have (even if only by wasting time.) Normally, we can say it's a negligible influence, but if Y's utility is literally supposed to be infinite, it would dominate.

Comment author: wedrifid 09 October 2010 06:08:01AM *  1 point [-]

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

Um... yes? That's how it works. It just doesn't particularly relate to your declaration that infinite utility is impossible (rather than my position - that it is lame).

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

It is no better or worse than a theory that the utility function is '1' for having a paperclip and '0' for everything else. In fact, they are equivalent and you can rescale one to the other trivially (everything that wasn't infinite obviously rescales to 'infinitely small'). You appear to be confused about how the 'not testable' concept applies here...

Comment author: khafra 08 October 2010 12:56:43PM 0 points [-]

I'd be interested in the train of thought that led to "paperclip" being switched out in favor of "grapefruit."

Comment author: wedrifid 08 October 2010 01:35:41PM 1 point [-]

I failed to switch a grapefruit out for a paperclip when I was revising. (Clips seemed more appropriate.)

Comment author: khafra 08 October 2010 04:09:54PM 1 point [-]

Thanks; I'm rather disappointed in myself for not guessing that. I'd imagined you having a lapse of thought while eating a grapefruit as you typed it up, or thinking about doing so; but that now seems precluded to a rather ridiculous degree by Occam's Razor.

Comment author: MrShaggy 08 October 2010 05:02:41AM 21 points [-]

Eating lots of bacon fat and sour cream can reverse heart disease. Very confident (>95%).

Comment author: RomanDavis 17 December 2010 10:58:45PM 2 points [-]

Downvoted. I've seen the evidence, too.

Comment author: MrShaggy 24 December 2010 03:43:52AM 2 points [-]

Downvoted means you agree (in this thread), correct? If so: I've wanted to see a post on rationality and nutrition for a while (on the benefits of a high-animal-fat diet for health, and the rationality lessons behind why so many demonize it and so few know about it).

Comment author: Desrtopa 17 December 2010 11:11:14PM 0 points [-]

What evidence?

If you're referring to the Atkins diet, I think that's a rather different matter from simply eating lots of bacon fat and sour cream, which doesn't preclude also eating plenty of carbohydrates.

Or worse, it might entail eating nothing else. The post isn't very precise.

Comment author: RomanDavis 17 December 2010 11:19:24PM *  1 point [-]

Eating some is better than none, because certain nutrients in animal fat help protect against cardiovascular disease. The point that vegetarianism is overrated for its health benefits is contrarian enough here and in the wider world to make a good post.

But yes, losing other vital nutrients would be bad.

And Atkins is silly and unhealthy. Why bring it up?

Comment author: Desrtopa 17 December 2010 11:40:41PM 1 point [-]

Because I thought that might be what you were referring to.

My mother lost about 90 pounds on it, and her health is definitely better than it was when she was overweight, but it did have some rather unpleasant side effects (although she generally refuses to acknowledge them, since they're lost in the halo effect.)

Comment author: JGWeissman 08 October 2010 05:13:28AM 2 points [-]

You have to actually think your degree of belief is rational.

I doubt you are following this rule.

Comment author: MrShaggy 09 October 2010 06:09:36AM *  3 points [-]

I was worried people would think that, but if I posted links to present evidence, I ran the risk of convincing them so they wouldn't vote it up! All I've eaten in the past three weeks is: pork belly, butter, egg yolks (and a few whites), cheese, sour cream (about a tub every three days), ground beef, bacon fat (saved from cooking bacon), and such. Now, that's no proof of the medical claim, but I hope it's an indication that I'm not just bullshitting. But for a few links:

http://www.ncbi.nlm.nih.gov/pubmed/19179058 -- on prevention of heart disease in humans (the K2 in question is found virtually only in animal fats and meats; see http://www.westonaprice.org/abcs-of-nutrition/175-x-factor-is-vitamin-k2.html#fig4)

http://wholehealthsource.blogspot.com/2008/11/can-vitamin-k2-reverse-arterial.html -- shows reversal from K2 in rat studies

http://trackyourplaque.com/ -- a clinic that uses K2, among other things, to reverse heart disease

Note that I am not trying to construct a rational argument but to convince people that I do hold this belief. I do think a rational argument can be constructed, but this is not it.

Comment author: jkaufman 14 September 2011 06:44:46PM 3 points [-]

This was about a year ago: do you still hold this belief? Has eating like you described worked out?

Comment author: MrShaggy 11 October 2011 02:08:55PM *  1 point [-]

I not only still hold the belief, I eat that way even more consistently (more butter and less sour cream, just because tastes change, but the same basic principles). I'm young and didn't have any obvious signs of heart disease, so I can't say it "worked out" for me in that literal, narrow sense, but I feel better, more mentally clear, etc. (I know that's pretty weak as evidence; just saying since you asked.)

Someone else recently posted their success with butter lowering their measurement of arterial plaque: "the second score was better (lower) than the first score. The woman in charge of the testing center said this was very rare — about 1 time in 100. The usual annual increase is about 20 percent." (http://blog.sethroberts.net/2011/08/04/how-rare-my-heart-scan-improvement/) (Note: I disagree with the poster's reasoning methods in general, just noting his score change.)

There was a recent health symposium that discussed this idea and related ones: http://vimeo.com/ancestralhealthsymposium/videos/page:1/sort:newest.

For those specifically related to heart health, these are most of them: http://vimeo.com/ancestralhealthsymposium/videos/search:heart/sort:newest

Comment author: simplicio 07 October 2010 11:28:44PM *  9 points [-]

The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)

Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.

Comment author: [deleted] 13 April 2012 11:33:10AM 0 points [-]

I disagree: we desperately need a continuous scale of personhood. Dolphins and chimps and Ara parrots are people too!

Comment author: RobinZ 08 October 2010 02:44:41PM 2 points [-]

Allow me to provide the obligatory complaint about the (mainstream) conflation of sentience and sapience, said complaint of course being a display of the former but not the latter.

Comment author: wedrifid 07 October 2010 11:49:26PM 1 point [-]

Our belief to the contrary is a self-serving and self-aggrandizing rationalization.

Our? :)

Comment author: simplicio 07 October 2010 11:53:15PM 1 point [-]

Fixed.

Comment author: wedrifid 08 October 2010 12:15:59AM 2 points [-]

But possibly introducing a new problem, inasmuch as the very term 'sentient', and some of the concept it represents, isn't even present in the mainstream.

I recall back in my early high school years writing an essay that included a reference to sentience, and being surprised when my teacher didn't know what it meant. She was actually an extremely good English teacher and quite well informed generally... just not in the same subculture. While I didn't have the term for it back then, it stuck in my mind as a significant lesson on the topic of inferential distance.

Comment author: gwern 07 October 2010 02:08:56AM 3 points [-]

Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many enigmas and religious belief in general. (25%)

Comment author: Eneasz 06 October 2010 09:14:06PM 7 points [-]

Predicated on MWI being correct, and Quantum Immortality being true:

It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%

Comment author: wedrifid 07 October 2010 05:25:12PM 0 points [-]

Quantum Immortality being true:

Which way do I vote things that aren't so much wrong as they are fundamentally confused?

Thinking about QI as something about which to ask 'true or false?' implies not having fully grasped the implications of (MWI) quantum mechanics for preference functions. At the very least the question would need to be changed to 'desired or undesired'.

Comment author: Nisan 10 October 2010 08:53:50PM 1 point [-]

So, the question to ask is whether quantum immortality ought to be reflected in our preferences, right?

It's clear that evolution would not have given humans a set of preferences that anticipates quantum immortality. The only sense in which I can imagine it to be "true" is if it turns out that there's an argument that can convince a sufficiently rational person that they ought to anticipate quantum immortality when making decisions.

(Note: I have endorsed the related idea of quantum suicide in the past, but now I am highly skeptical.)

Comment author: jimrandomh 10 October 2010 09:01:35PM *  0 points [-]

My strategy is to behave as though quantum immortality is false until I'm reasonably sure I've lost at least 1-1e-4 of my measure due to factors beyond my control, then switch to acting as though quantum immortality works.

Comment author: Vladimir_Nesov 10 October 2010 09:28:09PM 2 points [-]

If you lose measure with time, you'll lose any given amount given enough time. It's better to follow a two-outcome lottery: for one outcome, of probability 1-1e-4, you continue business as usual; otherwise you act as if quantum suicide preserves value.
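
Nesov's objection can be illustrated with a small sketch (my own, with hypothetical names): under any steady loss of measure, a threshold rule like the one above is eventually guaranteed to flip into quantum-immortality mode, whereas the up-front lottery only does so with probability 1e-4.

```python
import random

def threshold_policy(measure):
    """Act normally until remaining measure drops to 1e-4, then switch."""
    return "qi-mode" if measure <= 1e-4 else "business-as-usual"

def lottery_policy(rng):
    """Decide once, up front: qi-mode with probability 1e-4."""
    return "qi-mode" if rng.random() < 1e-4 else "business-as-usual"

# Any steady loss of measure eventually crosses the threshold...
measure, steps = 1.0, 0
while threshold_policy(measure) == "business-as-usual":
    measure *= 0.99  # lose 1% of measure per step, for illustration
    steps += 1
assert threshold_policy(measure) == "qi-mode"  # reached with certainty

# ...while the lottery almost always leaves you acting normally.
assert lottery_policy(random.Random(0)) == "business-as-usual"
```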

Comment author: Eneasz 08 October 2010 02:38:01PM 0 points [-]

I can't think of any purely self-interested reason why any individual should care about their measure (I grant there are altruistic reasons).

Comment author: wedrifid 09 October 2010 06:48:02AM 1 point [-]

Do you think there is a difference between what you would care about before you jumped in the box to play with Schrodinger's cat and what you would care about after?

Comment author: Eneasz 10 October 2010 02:24:50PM 0 points [-]

Yes, but it's unclear why I should.

Comment author: Risto_Saarelma 07 October 2010 01:10:50PM 2 points [-]

Not sure how I should vote this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with.

However, a different assertion that says that in case MWI is correct, you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun to your head would make for an interesting rationality game point.

Comment author: Eneasz 07 October 2010 04:03:36PM 0 points [-]

The main question to me is whether quantum immortality should be taken seriously to begin with.

Perhaps a separate vote on that then?

Comment author: magfrump 06 October 2010 11:24:27PM 2 points [-]

Phrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high reward activity, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out.

This seems like a great reason not to trust quantum immortality.

Comment author: Apprentice 05 October 2010 07:44:25PM 13 points [-]

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Comment author: Ledfox 10 October 2010 07:57:48PM 0 points [-]

The "Meno" demands a down-vote from me, but only in this game.

Comment author: Vladimir_M 06 October 2010 09:32:30PM *  4 points [-]

Apprentice:

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Downvoted for agreement.

However, I must add that it would be extremely fallacious to conclude from this fact that the country is being run competently and not declining or even headed for disaster. This fallacy would be based on the false assumption that the country is actually run by the politicians in practice. (I am not arguing for these pessimistic conclusions, at least not in this context, but merely that given the present structure of the political system, optimistic conclusions from the above fact are generally unwarranted.)

Comment author: Apprentice 06 October 2010 09:44:45PM 0 points [-]

I absolutely agree with you.

Comment author: Mass_Driver 06 October 2010 05:30:14AM *  12 points [-]

Far too confident.

The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.

I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.

Comment author: Apprentice 06 October 2010 01:50:01PM *  7 points [-]

All right, I'll try to mount a defence.

I would be modestly surprised if any member of Congress has an IQ below 100. You just need to have a bit of smarts to get elected. Even if the seat you want is safe, i.e. repeatedly won by the same party, you likely have to win a competitive primary. To win elections you need to make speeches, answer questions, participate in debates and so on. It's hard. And you'll have opponents who are ready to pounce on every mistake you make and try to make a big deal out of it. Even smart people make lots of mistakes and say stupid things when put on the spot. I doubt a person of below average intelligence even has a chance.

Even George W. Bush, who's said and done a lot of stupid things and is often considered dim for a politician, likely has an IQ above 120.

As for decency and honesty, a useful rule of thumb is that most people are good. Crooked people are certainly a significant minority but most of them don't hide their crookedness very well. And you can't be visibly crooked and still win elections. Your opponents are motivated to dig up the dirt on you.

As for honestly trying to serve their country, I admit that this is a bit tricky. Congresspeople certainly have a structural incentive to put the interests of their district above those of their country. But they are not completely short-sighted and neither are their constituents. Conditions in congressional district X are very dependent on conditions in the US as a whole. So I do think congresspeople try to honestly serve both their district and their country.

Non-corruption is again a bit tricky but here I side with Matt Yglesias and Paul Waldman:

The truth, however, is that Congress is probably less corrupt than at any point in our history. Real old-fashioned corruption, of the briefcase-full-of-cash kind, is extremely rare (though it still happens, as with William Jefferson, he of the $90,000 stuffed in the freezer).

Real old-school corruption like you have in third world countries and like you used to have more of in Congress is now very rare. There's still a real debate to be had about the role of lobbyists, campaign finance law, structural incentives and so on but that's not what I'm talking about here.

Are there still some bad apples? Definitely. But I stand by my view that the vast majority are not.

Comment author: magfrump 06 October 2010 11:36:59PM 2 points [-]

If by not-corrupt you meant "would consciously and earnestly object to being offered money for the explicit purpose of pursuing a policy goal that they perceived as not in the favor of their electorate or the country" and by "above-average intelligence" you meant "IQ at least 101" then I would downvote for agreement.

But if you meant "tries to assure that their actions are in the favor of their constituents and country, and monitors their information diet to this end" and "IQ above 110 and conscientiousness above average" then I maintain my upvote.

When I think of not-corrupt I think of someone who takes care not to betray people, rather than someone who does not explicitly betray them. When I think "above average intelligence" I think of someone who regularly behaves more intelligently than most, not someone who happens to be just to the right of the bell curve.

Comment author: bogdanb 19 February 2011 10:08:55PM *  1 point [-]

About the first paragraph: does your definition include in “corrupt” people who do not object in that situation because they believe that the benefit to the country of receiving the money (because they’d be able to use it for good things) exceeds the damage done to the country by whatever they’re asked to do?

I ask because I suspect many people in high positions have an honest but incorrectly high opinion about their worth to whatever cause they’re nominally supporting. (E.g., “without this money I’ll lose the election and the country would be much worse off because the other guy is evil”.)

Comment author: magfrump 20 February 2011 09:23:56PM 0 points [-]

I think that having damagingly uninformed opinions about the values of your actions (e.g. "I'll lose the election and the other guy is evil") counts as either corrupt (in terms of not monitoring information diet to take care not to betray people) or stupid (in terms of being unable to do so.)

If someone were to accept significant bribes, and then, say, donate all of the money to a highly efficient charity such as SIAI, NFP, or VillageReach, after doing a half-hour or longer calculation involving spreadsheets, then I might not count them as corrupt. However I think the odds that this has actually EVER occurred are practically insignificant.

Comment author: Apprentice 07 October 2010 09:19:46AM 1 point [-]

Point taken. And I concede that there are probably some congressmen with 100<IQ<110. But my larger point, which Vladimir made a bit more explicit, is that contrary to popular belief the problems of the USA are not caused by politicians being unusually stupid or unusually venal. I think a very good case can be made that politicians are less stupid and less venal than typical people - the problems are caused by something else.

Comment author: magfrump 07 October 2010 04:55:30PM 1 point [-]

I would certainly agree that politicians are unlikely to be below the mean level of competence, since they must necessarily run a campaign, be liked by a group of people, etc. I would be surprised if most politicians were very far from the median, although in the bell curve of politician intelligence there is probably a significant tail to the high-IQ side and a very small tail to the low-IQ side.

I would also agree that blaming politicians' stupidity for problems is, at the very least, a poor way of dealing with problems, which would be much better addressed with reform of our political systems; by, say, abolishing the senate or some kind of regulation of party primaries.

At the very least I'm not willing to give up on thinking that there are a lot of dumb and venal politicians, but I am willing to cede that that's not really a huge problem most of the time.

Comment author: wnoise 08 October 2010 05:34:30AM 1 point [-]

(Assuming US here). Abolishing the senate seems to be an overreaction at this point, though some reforms of how it does business certainly should be in order.

I think one of the biggest useful changes would be to reform voting so that the public gets more bits of input, by switching to approval or Condorcet style voting.