All of jacoblyles's Comments + Replies

It's true that lots of Utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.

The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and he convinces them that he'... (read more)

5gwern
'Pretty sure', eh? Would you care to take a bet on this? I'd be happy to go with a few sorts of bets, ranging from "an organization that used to be SIAI or CFAR is put on the 'Individuals and Entities Designated by the State Department Under E.O. 13224' or 'US Department of State Terrorist Designation Lists' within 30 years" to ">=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years" etc. I'd be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give. If you're worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I'd have to reduce my bet substantially).
6TheOtherDave
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization. The question is whether we believe L3, and whether we ought to believe L3. Many of us don't seem to believe this. Do you believe it? If so, why?

Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "s... (read more)

9gwern
I don't know how you could read LW and not realize that we certainly do accept precautionary principles ("running on corrupted hardware" has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal's mugging in the last week, neither of which says 'you should just bite the bullet'!), and that libertarianism is heavily overrepresented compared to the general population. No, one of the 'big black marks' on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is exactly that. There's nothing particular to SIAI/LW there.

Never mind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well, I don't know where they got that from. Certainly not these pages.

Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad re... (read more)

0gwern
Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.

We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.

5arborealhominid
Moreright.net already exists, and it's a "Bayesian reactionary" blog; that is, a blog for far-rightists who are involved in the Less Wrong community. It's an interesting site, but it strikes me as decidedly unhelpful when it comes to looking uncultish.

The principles espoused by the majority on this site can be used to justify some very, very bad actions.

1) The probability of someone inventing AI is high

2) The probability of someone inventing unfriendly AI if they are not associated with SIAI is high

3) The utility of inventing unfriendly AI is negative MAXINT

4) "Shut up and calculate" - trust the math and not your gut if your utility calculations tell you to do something that feels awful.

It's not hard to figure out that Less Wrong's moral code supports some very unsavory actions.

2Mitchell_Porter
Your original question wasn't about LW. Before we turn this into a debate about finetuning LW's moral code, shall we consider the big picture? It's 90 years since the word "robot" was introduced, in a play which already featured the possibility of a machine uprising. It's over 50 years since "artificial intelligence" was introduced as a new academic discipline. We already live in a world where one state can use a computer virus to disrupt the strategic technical infrastructure of another state. The quest for AI, and the idea of popular resistance to AI, have already been out there in the culture for years. Furthermore, the LW ethos is pro-AI as well as anti-AI. But if the average person feels threatened by AIs, they will just be anti-AI. The average person isn't pining for the singularity, but they do want to live. Imagine something like the movement to stop climate change, but it's a movement to stop the singularity. Such a movement would undoubtedly appropriate any useful sentiments it found here, but its ethos and organization would be quite different. You should be addressing yourself to this future anti-AI, pro-human movement, and explaining to them why anyone who works on any form of AI should be given any freedom to do so at all.

Fortunately, the United States has a strong evangelical Christian lobby that fights for and protects home schooling freedom.

3DaFranker
Good point. I have a tendency to forget about them. Mind projection and all that.

...And you just blew your cover. :)

Nobody of any importance reads Less Wrong :)

1DaFranker
...you just jinxed it! Now Congress is going to pass a new bill forbidding online aids from counting towards compulsory education requirements for home schooling, and otherwise hampering the idea by whatever means necessary. After all, what better propaganda system is there than a bunch of gullible "teachers" who regurgitate everything you tell them to and whom children look up to as absolute authorities?

I'm pretty sure they are sourced from census data. I check the footnotes on websites like that.

Tagline: Coursera for high school

Mission: The economist Eric Hanushek has shown that if the USA could replace the worst 7% of K-12 teachers with merely average teachers, it would have the best education system in the world. What if we instead replaced the bottom 90% of teachers in every country with great instruction?

The Company: Online learning startups like Coursera and Udacity are in the process of showing how technology can scale great teaching to large numbers of university students (I've written about the mechanics of this elsewhere). Let's bring a ... (read more)

Nornagest180

Modern compulsory schooling seems to have at least three major sociological effects: socializing its students, offloading enough caregiver burden for both parents to efficiently participate in the workforce, and finally education. For a widespread homeschooling system to be attractive, it's either going to need to fulfill all three, or to be so spectacularly good at one or two that the shortcomings in the others are overwhelmed. Current homeschooling, for comparison, does an acceptable job of education but fails at the other two; consequently it's used ... (read more)

Related idea: semi-computerized instruction.

To the best of my (limited) knowledge, while there are currently various computerized exercises available, they aren't that good at offering instruction of the "I don't understand why this step works" kind, and are often pretty limited (e.g. Khan Academy has exercises which are just multiple choice questions, which isn't a very good format). One could try to offer a more sophisticated system: first, present experienced teachers/tutors with a collection of the problems you'll be giving to the studen... (read more)

5abramdemski
The nice thing about this is that it works on an existing market, while leveraging the successful tactics discovered through hard work by Coursera & the like to bring advances to the domain. Of course, techniques designed for university courses may not precisely transfer. I'm skeptical about 'leveraging' videos from Khan Academy for a for-profit education system. Makes it sound half-baked. This idea may fit with the general spaced-repetition enthusiasm I am seeing in other proposals.
6cicatriz
Your approach -- targeting home-schoolers who are "nonconsumers" of public K-12 education -- is exactly the approach advocated by disruption theory and specifically the book Disrupting Class. Using public education as analogous to established leaders in other industries, disruption always comes from the outside because the leaders aren't structurally able to do anything other than serve their consumers with marginal improvements. ArtofProblemSolving.com is one successful example that's targeted gifted home-schoolers (and others looking for extracurricular learning) in math. I'm sure there are others. EdSurge.com is a good place to look for existing services, which you can sort by criteria including common core/state-standards aligned (you do have to register for free to get the list of resources). I also have thought about services that build on top of Khan Academy, but I wouldn't underestimate their ability to improve in that area. They just released a fantastic computer science platform. But they are a non-profit, so their growth depends, I suppose, on Bill Gates' mood and other philanthropy. To get to full disruption, it might take a for-profit with, as you suggest, monetization through tutoring and other valuable services.
3David_Gerard
"Give your listeners the facts—the Family Facts from the experts at The Heritage Foundation." I'm completely reassured.

If an AGI research group were close to success but did not respect friendly AI principles, should the government shut them down?

2Mitchell_Porter
Let's try an easier question first. If someone is about to create Skynet, should you stop them?

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

0Kenny
The comment was making the opposite point, namely that some people refuse to accept that there is even a common 'utilon' with which torture and 'dust specks' can be compared.
1wedrifid
There is no contradiction between this post and Eliezer's dust specks post.

Welcome!

The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo... (read more)

0moocow1452
Maybe it's incomprehensibility itself that makes some people happy? If you don't understand it, you don't feel responsible, and ignorance being bliss, all that weird stuff there is not your problem, and that's the end of it as far as your monkey bits are concerned.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.

What? What about all the usual happiness inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.

Nornagest110

I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. A... (read more)

It's interesting that we view those who do make the tough decisions as virtuous, e.g. the commander in a war movie (I'm thinking of Bill Adama). We recognize that it is a hard but valuable thing to do!

This reminds me of a thought I had recently: whether or not God exists, God is coming, as long as humans continue to make technological progress. Although we may regret it (for one brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the Theist God.

The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but... (read more)

A common problem that faces humans is that they often have to choose between two different things that they value (such as freedom vs. equality), without an obvious way to make a numerical comparison between the two. How many freeons equal one egaliton? It's certainly inconvenient, but the complexity of value is a fundamentally human feature.

It seems to me that it will be very hard to come up with utility functions for fAI that capture all the things that humans find valuable in life. The topologies of the systems don't match up.

Is this a design failure? I'm not so sure. I'm not sold on the desirability of having an easily computable value function.

5TheOtherDave
I would agree that we're often in positions where we're forced to choose between two things that we value and we just don't know how to make that choice. Sometimes, as you say, it's because we don't know how to compare the two. (Talk of numerical comparison is, I think, beside the point.) Sometimes it's because we can't accept giving up something of value, even in exchange for something of greater value. Sometimes it's for other reasons. I would agree that coming up with a way to evaluate possible states of the world that take into account all of the things humans value is very difficult. This is true whether the evaluation is by means of a utility function for fAI or via some other means. It's a hard problem. I would agree that replacing the hard-to-compute value function(s) I actually have with some other value function(s) that are easier to compute is not desirable. Building an automated system that can compute the hard-to-compute value function(s) I actually have more reliably than my brain can -- for example, a system that can evaluate various possible states of the world and predict which ones would actually make me satisfied and fulfilled to live in, and be right more often than I am -- sounds pretty desirable to me. I have no more desire to make that calculation with my brain, given better alternatives, than I have to calculate square roots of seven-digit numbers with it.
4Raemon
Upvoted for use of the phrase "How many freeons equal one egaliton?"

This is a great framework - very clear! Thanks!

Sorry, "meaning of life" is sloppy phrasing. "What is the meaning of life?" is popular shorthand for "what is worth doing? what is worth pursuing?". It is asking about what is ultimately valuable, and how it relates to how I choose to live.

It's interesting that we are imagining AIs to be immune from this. It is a common human obsession (though maybe only among unhappy humans?). An AI isn't distracted by contradictory values like a human is, then; it never has to make hard choices? No choices at all really, just the output of the argmax expected utility function?
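For concreteness, here is a minimal sketch in Python of what "the output of the argmax expected utility function" denotes. The actions, probabilities, and utilities are made-up placeholders, not anything drawn from the thread: the agent scores each available action by its probability-weighted utility and simply takes the highest-scoring one.

```python
# Hypothetical (probability, utility) pairs for each action's possible outcomes.
actions = {
    "keep_promise":  [(0.9, 10), (0.1, -5)],
    "break_promise": [(0.5, 20), (0.5, -30)],
}

def expected_utility(outcomes):
    """Sum probability-weighted utilities over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# The agent "chooses" by taking the action with the highest expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> keep_promise
```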

9TheOtherDave
I can't speak for anyone else, but I expect that a sufficiently well designed intelligence, faced with hard choices, makes them. If an intelligence is designed in such a way that, when faced with hard choices, it fails to make them (as happens to humans a lot), I consider that a design failure. And yes, I expect that it makes them in such a way as to maximize the expected value of its choice.... that is, so as to insofar as possible do what is worth doing and pursue what is worth pursuing. Which presumes that at any given moment it will at least have a working belief about what is worth doing and worth pursuing. If an intelligence is designed in such a way that it can't make a choice because it doesn't know what it's trying to achieve by choosing (that is, it doesn't know what it values), I again consider that a design failure. (Again, this happens to humans a lot.)

I follow the virtue-ethics approach: I do actions that make me like the person that I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren't altruistic, then I wouldn't be making myself into the person I want to be.

It's a very different framework from util maximization, but I find it's much more satisfying and useful.

2SusanBrennan
And if it wasn't more satisfying and useful, would you still follow it?
3thomblake
I've realized that my sibling comment is logically rude, because I've left out some relevant detail. Most relevantly, I tend to self-describe as a virtue ethicist. I've noticed at least 3 things called 'virtue ethics' in the wild, which are generally mashed together willy-nilly:

1. an empirical claim, that humans generally act according to habits of action and doing good things makes one more likely to do good things in the future, even in other domains
2. the notion that ethics is about being a good person and living a good life, instead of whether a particular action is permissible or leads to a good outcome
3. virtue as an achievement; a string of good actions can be characterized after the fact as virtuous, and that demonstrates the goodness of character.

There are virtue ethicists who buy into only some of these, but most often folks slip between them without noticing. One fellow I know will often say that #1 being false would not damage virtue ethics, because it's really about #2 and #3 - and yet he goes on arguing in favor of virtue ethics by citing #1.
1thomblake
That's an empirical question. Would you still subscribe to virtue ethics if you found out that humans don't really follow habits of virtue? If so, why? If not, what would ethics be about then, and why isn't it about that now?

Let me see if I understand what you're saying.

For humans, the value of some outcome is a point in multidimensional value space, whose axes include things like pleasure, love, freedom, anti-suffering, etc. There is no easy way to compare points at different coordinates. Human values are complex.

A being with a utility function, by contrast, has a way to take any outcome and put a scalar value on it, such that different outcomes can be compared.

We don't have anything like that. We can adjust how much we value any one dimension in value space, even discover ne... (read more)
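A toy sketch (my own illustration, with made-up axes and weights, not something from the thread) of the contrast being drawn here: as raw points in a multidimensional value space, two outcomes often cannot be ranked, because neither dominates the other on every axis; a utility function imposes comparability by fixing exchange rates between the axes and collapsing each point to a single scalar.

```python
outcome_a = {"pleasure": 7, "freedom": 2, "equality": 9}
outcome_b = {"pleasure": 4, "freedom": 8, "equality": 5}

# Neither outcome dominates the other on every axis, so as raw vectors
# there is no obvious way to say which is "better".
a_dominates_b = all(outcome_a[k] >= outcome_b[k] for k in outcome_a)
print(a_dominates_b)  # -> False

# A utility function fixes exchange rates ("how many freeons equal one
# egaliton?") and sums to a scalar, making any two outcomes comparable.
weights = {"pleasure": 1.0, "freedom": 2.0, "equality": 0.5}  # hypothetical
def utility(outcome):
    return sum(weights[k] * v for k, v in outcome.items())

print(utility(outcome_a), utility(outcome_b))  # -> 15.5 22.5
```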

4thomblake
If it's really your utility function, you're not following it "slavishly" - it is just what you want to do. If "questions about the meaning of life" maximize utility, then yes, there are those. Can you unpack what "questions about the meaning of life" are supposed to be, and why you think they're important? ('meaning of "life"' is fairly easy, and 'meaning of life' seems like a category error).

First, I don't buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I'm just not biting those bullets. I don't think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer's recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.

Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me... (read more)

1thomblake
You were the one who claimed that the mental discomfort from hearing about torture would swamp the disutility from the dust specks - I assumed from that, that you thought they were commensurable. I thought it was odd that you thought they were commensurable but thought the math worked out in the opposite direction. I believe Eliezer's post was not so much directed at folks who disagree with utilitarianism - rather, it's supposed to be about taking the math seriously, for those who are. If you're not a utilitarian, you can freely regard it as another reductio. You don't have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it's hard to make a case for a utilitarianism in which goods are not commensurable - in practice, we can spend money towards any sort of good, and we don't favor only spending money on the highest-order ones, so that strongly suggests commensurability.

I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.

My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably v... (read more)

0thomblake
The point was not necessarily to advocate torture. It's to take the math seriously. Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?
6TheOtherDave
There's something really odd about characterizing "torture is preferable to this utterly unrealizable thing" as "advocating torture." It's not obviously wrong... I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they'd brought their audience along swap it out for simply "torture is preferable to alternatives", using the same kind of rhetorical techniques you use here... but it doesn't seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you. Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It's possible that keeping the torture a secret would have net positive utility; it's possible it would have net negative utility. All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world plausible variants of it instead (as you do here). Well, in some bizarre sense that's true. I mean, if I'm being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me) a utilitarian presumably concludes that this is not an event of moral significance. (It's decidedly unclear in what sense it's an event at all.) Sure, that seems likely. I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50

Certain self-consistent metaphysics and epistemologies lead you to belief in God. And a lot of human emotions do too. If you eliminated all the religions in the world, you would soon have new religions with 1) smart people accepting some form of philosophy that leads them to theism 2) lots of less smart people forming into mutually supporting congregations. Hopefully you get all the "religion of love" stuff from Christianity (historically a rarity) and the congregations produce public goods and charity.

[This comment is no longer endorsed by its author]

What makes us think that an AI would stick with the utility function it's given? I change my utility function all the time, sometimes on purpose.

5wedrifid
There are very few situations in which an agent can most effectively maximise expected utility according to their current utility function by modifying themselves to have a different utility function. Unless the AI is defective or put in a specially contrived scenario it will maintain its current utility function because that is an instrumentally useful thing to do. If you are a paperclip maximiser then becoming a staples maximiser is a terribly inefficient strategy for maximising paperclips unless Omega is around making weird bargains. No, you don't. That is, to the extent that you "change your utility function" at all you do not have a utility function in the sense meant when discussing AI. It only makes sense to model humans as having 'utility functions' when they are behaving in a manner that can be vaguely approximated as expected utility maximisers with a particular preference function. Sure, it is possible to implement AIs that aren't expected utility maximisers either, and those AIs could be made to do all sorts of arbitrary things including fundamentally change their goals and behavioral strategies. But if you implement an AI that tries to maximise a utility function then it will (almost always) keep trying to maximise that same utility function.

"Long-term monogamy should not be done on the pretense that attraction and arousal for one's partner won't fade. It will."

This is precisely the point of monogamy. Polyamory/sleeping around is a young man's game. Long-term monogamy is meant to maintain strong social units throughout life, long after the thrill is gone.

My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not.

He's provided me with links to some of his past writing, I've talked enough, it is time to read and reflect (after I finish a paper for finals).

Thanks for the links; your corpus of writing can be hard to keep up with. I don't mean this as a criticism; I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.

Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.

0Davorak
If you are rational enough and perceptive enough, and EY's writing is consistent enough, then at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer to have his readers gain this ability than read all of his writings.

My writing in these comments has not been perfectly clear, but Nebu, you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".

Christian morality is interesting here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim, as the rationalists do, that what is good will be pleasant. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Chr... (read more)

"Does this sound like what you mean by a "beneficial irrationality"?"

No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.

In the post above Eliezer is basically lamenting that ... (read more)

7pjeby
I don't know 'bout no Eve and fruits, but I do know something about the "god-shaped hole". It doesn't actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a "core state" in NLP. Core states are emotional states of peace, oneness, love (in the universal-compassion sense), "being", or just the sense that "everything is okay". You could think of them as pure "reward" or "satisfaction" states. The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others' mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it. Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, it's like wireheading directly to the core state internally drops the reward/compulsion link to the specific behavior, restoring choice in that area. Most likely, this is because it's the unconditional presence of core states that's the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states. Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading.... and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism. Appropriately trained rationalists, on the other hand, can simply reinstate the wirehead
1Eliezer Yudkowsky
Reply here.
0conchis
"Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point." So if that's Eliezer's point, and it's also your point, what is it that you actually disagree about? I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn't be so. In response, you seem to be asking him to prove that rational individuals must co-operate - when he already appears to have accepted that this isn't true. Isn't the relevant issue whether it is possible for rational individuals to co-operate? Provided we don't make silly mistakes like equating rationality with self-interest, I don't see why not - but maybe this whole thread is evidence to the contrary. ;)

"Except that we are free to adopt any version of rationality that wins. "

In that case, believing in truth is often non-rational.

Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility-maximizing definition and the epistemological believing-in-truth definition). Allow me to add my name to that list.

I believe I consistently used the "believing in truth" definition of rational in the parent post.

4conchis
I agree that the multiple definitions are confusing, but I'm not sure that you consistently employ the "believing in truth" version in your post above.* It's not "believing in truth" that gets people into prisoners' dilemmas; it's trying to win. *And if you did, I suspect you'd be responding to a point that Eliezer wasn't making, given that he's been pretty clear on his favored definition being the "winning" one. But I could easily be the one confused on that. ;) "In that case, believing in truth is often non-rational." Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it's true for second-best reasons. (That is, if we were "better" in other (potentially modifiable) ways, the truth wouldn't be so harmful.)

There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.

You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don't see any compelling rational reason to believe this to be the case.

Irration... (read more)

I one-box on Newcomb's Problem, cooperate in the Prisoner's Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.

5conchis
"However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case." Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around. "Irrational belief systems often thrive because they overcome the prisoner dilemmas that individual rational action creates on a group level. Rational people cannot mimic this." Really? Most of the "individual rationality -> suboptimal outcomes" results assume that actors have no influence over the structure of the games they are playing. This doesn't reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.

Also, by following their arguments, trying to clarify them, and understanding the pieces. Your sincere and genuine attempt to understand them in the best possible light will make them open to your point of view.

The smart Christians are some of the most logical people I've ever met. Their worldview fits together like a kind of geometry. They know that you get a completely different form of it if you substitute one axiom for another (existence of God for non-existence of God), much like Euclid's world dissolves without the parallel postulate.

Once we got to th... (read more)

I am curious about the large emphasis that rationalists place on religious belief. Religion is an old institution, ingrained in culture and valuable for aesthetic and social reasons. To convince a believer to leave his religion, you need not only convince him, but convince him so thoroughly as to drive him to take a substantial drop in personal utility to come to your side (to be more exact, he must judge the utility gained from believing the truth to outweigh the material, social, and psychic benefits that he gets from religion).

For rationalists' atte... (read more)