Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.
However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "s...
Never mind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don't know where they got that from. Certainly not these pages.
Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad re...
We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.
The principles espoused by the majority on this site can be used to justify some very, very bad actions.
1) The probability of someone inventing AI is high
2) The probability of someone inventing unfriendly AI if they are not associated with SIAI is high
3) The utility of inventing unfriendly AI is negative MAXINT
4) "Shut up and calculate" - trust the math and not your gut if your utility calculations tell you to do something that feels awful.
It's not hard to figure out that Less Wrong's moral code supports some very unsavory actions.
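To spell out the arithmetic behind those premises, here is a toy calculation (every number is invented for illustration; this is my own sketch, not anything taken from LW or SIAI material). Once one outcome is assigned an astronomically negative utility, an expected-utility comparison lets even a small chance of preventing it outweigh the cost of an act most people would call immoral:

```python
# Toy expected-utility comparison. Every number here is a made-up placeholder.
P_UFAI = 1e-6             # assumed probability that some lab builds unfriendly AI
U_UFAI = -1e15            # stand-in for "negative MAXINT"
COST_OF_AWFUL_ACT = -1e6  # utility cost of an act most people would call immoral
P_PREVENTION = 0.01       # assumed chance that the act actually prevents uFAI

ev_do_nothing = P_UFAI * U_UFAI
ev_awful_act = COST_OF_AWFUL_ACT + (1 - P_PREVENTION) * P_UFAI * U_UFAI

print(f"do nothing: {ev_do_nothing:.3e}")  # -1.000e+09
print(f"awful act:  {ev_awful_act:.3e}")   # -9.910e+08  (comes out "better")
```

The huge negative term swamps everything else, which is the mechanism the premises above rely on.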
Fortunately, the United States has a strong evangelical Christian lobby that fights for and protects home schooling freedom.
...And you just blew your cover. :)
Nobody of any importance reads Less Wrong :)
I'm pretty sure they are sourced from census data. I check the footnotes on websites like that.
Tagline: Coursera for high school
Mission: The economist Eric Hanushek has shown that if the USA could replace the worst 7% of K-12 teachers with merely average teachers, it would have the best education system in the world. What if we instead replaced the bottom 90% of teachers in every country with great instruction?
The Company: Online learning startups like Coursera and Udacity are in the process of showing how technology can scale great teaching to large numbers of university students (I've written about the mechanics of this elsewhere). Let's bring a ...
Modern compulsory schooling seems to have at least three major sociological effects: socializing its students, offloading enough caregiver burden for both parents to efficiently participate in the workforce, and finally education. For a widespread homeschooling system to be attractive, it's either going to need to fulfill all three, or to be so spectacularly good at one or two that the shortcomings in the others are overwhelmed. Current homeschooling, for comparison, does an acceptable job of education but fails at the other two; consequently it's used ...
Related idea: semi-computerized instruction.
To the best of my (limited) knowledge, while there are currently various computerized exercises available, they aren't that good at offering instruction of the "I don't understand why this step works" kind, and are often pretty limited (e.g. Khan Academy has exercises which are just multiple choice questions, which isn't a very good format). One could try to offer a more sophisticated system - first, present experienced teachers/tutors with a collection of the problems you'll be giving to the studen...
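To make the idea a bit more concrete, here is a minimal sketch of the data such a system might store (the names and structure are my own guesses at what the comment is gesturing toward, not a description of any existing product): each problem carries tutor-authored worked steps plus feedback keyed to anticipated wrong answers, so the software can respond to "I don't understand why this step works" with more than a right/wrong flag.

```python
# Hypothetical data model for a semi-computerized exercise system.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Step:
    prompt: str       # e.g. "Subtract 2x from both sides"
    explanation: str  # tutor-recorded answer to "why does this step work?"

@dataclass
class Problem:
    statement: str
    answer: str
    steps: list[Step] = field(default_factory=list)
    feedback: dict[str, str] = field(default_factory=dict)  # wrong answer -> targeted hint

    def respond(self, student_answer: str) -> str:
        if student_answer == self.answer:
            return "Correct."
        # Use a tutor-authored hint if this mistake was anticipated.
        return self.feedback.get(student_answer, "Not quite. Review the worked steps.")

problem = Problem(
    statement="Solve 3x + 5 = 2x + 9",
    answer="x = 4",
    steps=[
        Step("Subtract 2x from both sides", "Collecting the x terms keeps the equation balanced."),
        Step("Subtract 5 from both sides", "Removing the constant term isolates x."),
    ],
    feedback={"x = 14": "It looks like you added 5 and 9 instead of subtracting."},
)
print(problem.respond("x = 14"))
```

The interesting design question is how much tutor effort it takes to anticipate enough wrong answers for the hints to feel responsive rather than canned.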
Out-of-wedlock birth rates have exploded with sexual freedom:
- http://www.familyfacts.org/charts/205/four-in-10-children-are-born-to-unwed-mothers
Marriage is way down:
If an AGI research group were close to success but did not respect friendly AI principles, should the government shut them down?
I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.
Welcome!
The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.
The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo...
The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.
What? What about all the usual happiness inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.
I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. A...
It's interesting that we view those who do make the tough decisions as virtuous - i.e. the commander in a war movie (I'm thinking of Bill Adama). We recognize that it is a hard but valuable thing to do!
This reminds me of a thought I had recently - whether or not God exists, God is coming - as long as humans continue to make technological progress. Although we may regret it (for one, brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the Theist God.
The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but...
A common problem that faces humans is that they often have to choose between two different things that they value (such as freedom vs. equality), without an obvious way to make a numerical comparison between the two. How many freeons equal one egaliton? It's certainly inconvenient, but the complexity of value is a fundamentally human feature.
It seems to me that it will be very hard to come up with utility functions for fAI that capture all the things that humans find valuable in life. The topologies of the two systems don't match up.
Is this a design failure? I'm not so sure. I'm not sold on the desirability of having an easily computable value function.
This is a great framework - very clear! Thanks!
Sorry, "meaning of life" is sloppy phrasing. "What is the meaning of life?" is popular shorthand for "what is worth doing? what is worth pursuing?". It is asking about what is ultimately valuable, and how it relates to how I choose to live.
It's interesting that we are imagining AIs to be immune from this. It is a common human obsession (though maybe only among unhappy humans?). An AI isn't distracted by contradictory values the way a human is, then; it never has to make hard choices? No choices at all, really, just the output of the argmax expected utility function?
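For concreteness, here is what that decision rule looks like when written out (a standard expected-utility maximizer; the actions, outcomes, and numbers are invented for illustration and are not meant to describe any particular proposal here):

```python
# Minimal sketch of "just the output of the argmax expected utility function".
def expected_utility(action, outcomes, utility):
    # outcomes[action] is a list of (probability, outcome) pairs
    return sum(p * utility(o) for p, o in outcomes[action])

def choose(actions, outcomes, utility):
    # One scalar per action; no weighing of incommensurable values, no hard choices.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

outcomes = {
    "write_paper": [(0.8, "published"), (0.2, "rejected")],
    "take_vacation": [(1.0, "rested")],
}
utility = {"published": 10, "rejected": -2, "rested": 4}.get
print(choose(list(outcomes), outcomes, utility))  # write_paper (EU 7.6 vs 4.0)
```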
I follow the virtue-ethics approach: I take actions that make me more like the person I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren't altruistic, then I wouldn't be making myself into the person I want to be.
It's a very different framework from util maximization, but I find it's much more satisfying and useful.
Let me see if I understand what you're saying.
For humans, the value of some outcome is a point in multidimensional value space, whose axes include things like pleasure, love, freedom, anti-suffering, etc. There is no easy way to compare points at different coordinates. Human values are complex.
A being with a utility function, by contrast, has a way to take any outcome and put a scalar value on it, so that different outcomes can be compared.
We don't have anything like that. We can adjust how much we value any one dimension in value space, even discover ne...
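One way to make that contrast concrete (my own framing of the distinction above, with arbitrary dimensions and weights): a multidimensional value vector by itself only gives a partial ordering, so some outcomes are simply incomparable, whereas a scalar utility function has already fixed an exchange rate between the dimensions and can rank everything.

```python
# Illustration with made-up dimensions and numbers.
outcome_a = {"pleasure": 7, "freedom": 2}
outcome_b = {"pleasure": 3, "freedom": 9}

def pareto_better(x, y):
    # x beats y only if it is at least as good on every dimension and strictly better on one.
    return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

print(pareto_better(outcome_a, outcome_b), pareto_better(outcome_b, outcome_a))  # False False: incomparable

def scalar_utility(o):
    # A utility function amounts to choosing weights, an exchange rate between values.
    return 1.0 * o["pleasure"] + 0.5 * o["freedom"]

print(scalar_utility(outcome_a) > scalar_utility(outcome_b))  # True: always comparable
```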
First, I don't buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I'm just not biting those bullets. I don't think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer's recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.
Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me...
I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.
My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably v...
Certain self-consistent metaphysics and epistemologies lead you to belief in God. And a lot of human emotions do too. If you eliminated all the religions in the world, you would soon have new religions with 1) smart people accepting some form of philosophy that leads them to theism 2) lots of less smart people forming into mutually supporting congregations. Hopefully you get all the "religion of love" stuff from Christianity (historically a rarity) and the congregations produce public goods and charity.
What makes us think that AI would stick with the utility function they're given? I change my utility function all the time, sometimes on purpose.
"Long-term monogamy should not be done on the pretense that attraction and arousal for one's partner won't fade. It will."
This is precisely the point of monogamy. Polyamory/sleeping around is a young man's game. Long-term monogamy is meant to maintain strong social units throughout life, long after the thrill is gone.
My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not.
He's provided me with links to some of his past writing, I've talked enough, it is time to read and reflect (after I finish a paper for finals).
Thanks for the links, your corpus of writing can be hard to keep up with. I don't mean this as a criticism, I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.
Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.
My writing in these comments has not been perfectly clear, but Nebu you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".
The Christian morality is interesting here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim, as the rationalists do, that what is good will be pleasant. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Chr...
"Does this sound like what you mean by a "beneficial irrationality"?"
No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.
In the post above Eliezer is basically lamenting that ...
"Except that we are free to adopt any version of rationality that wins. "
In that case, believing in truth is often non-rational.
Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility-maximizing definition and the epistemological believing-in-truth definition). Allow me to add my name to that list.
I believe I consistently used the "believing in truth" definition of rational in the parent post.
There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.
You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don't see any compelling rational reason to believe this to be the case.
Irration...
I one-box on Newcomb's Problem, cooperate in the Prisoner's Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.
Also, by following their arguments, trying to clarify them, and understanding the pieces. Your sincere and genuine attempt to understand them in the best possible light will make them open to your point of view.
The smart Christians are some of the most logical people I've ever met. Their worldview fits together like a kind of geometry. They know that you get a completely different form of it if you substitute one axiom for another (existence of God for non-existence of God), much like Euclid's world dissolves without the parallel postulate.
Once we got to th...
I am curious about the large emphasis that rationalists place on religious belief. Religion is an old institution, ingrained in culture and valuable for aesthetic and social reasons. To convince a believer to leave his religion, you must not only convince him, but convince him so thoroughly that he is willing to take a substantial drop in personal utility to come to your side (to be more exact, he must judge the utility gained from believing the truth to outweigh the material, social, and psychic benefits he gets from religion).
For rationalists' atte...
It's true that lots of Utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.
The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and he convinces them that he'...