All of PlatypusNinja's Comments + Replies

I think the key difference is that delta utilitarianism handles it better when the group's utility function changes. For example, if I create a new person and add it to the group, that changes the group's utility function. Under delta utilitarianism, I explicitly don't count the preferences of the new person when making that decision. Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

0DanielLC
You only count their preferences under preference utilitarianism. I never really understood that form. If you like having more happy people, then your utility function is higher for worlds with lots of happy people, and creating happy people makes the counter go up. If you like having happier people, but don't care how many there are, then having more people doesn't do anything.

I suppose you could say that it's equivalent to "total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function".

(Under mere "total utilitarianism that only takes into account the utility of already extant people", the government could wirehead its constituency.)


Yes, this is explicitly inconsistent over time. I actually would argue that the utility function for any group of peopl... (read more)

My intended solution was that, if you check the utility of your constituents from creating more people, you're explicitly not taking the utility of the new people into account. I'll add a few sentences at the end of the article to try to clarify this.

Another thing I can say is that, if you assume that everyone's utility is zero at the decision point, it's not clear why you would see a utility gain from adding more people.
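To make this concrete, here is a minimal sketch (my own toy example, with made-up people and made-up utility numbers) of how the delta-utilitarian rule and the total-utilitarian rule score the "create a new person" decision differently:

```python
# Illustrative sketch only: toy "utility functions" over world-states.
# A world-state is just the list of people in it.

class Person:
    def __init__(self, name, utility_fn):
        self.name = name
        self.utility_fn = utility_fn

    def utility(self, world):
        return self.utility_fn(world)

# Current constituents are indifferent to population size (utility 0 either way).
alice = Person("Alice", lambda world: 0)
bob = Person("Bob", lambda world: 0)

# A newly created person would be glad to exist.
new_person = Person("New", lambda world: 10 if any(p.name == "New" for p in world) else 0)

status_quo = [alice, bob]
create_world = [alice, bob, new_person]

def delta_score(world):
    # Delta rule: only people who exist at decision time get a vote,
    # using their current utility functions.
    return sum(p.utility(world) for p in status_quo)

def total_score(world):
    # Total rule: everyone who exists in the evaluated world gets a vote.
    return sum(p.utility(world) for p in world)

print(delta_score(create_world) - delta_score(status_quo))  # 0: no gain from creation
print(total_score(create_world) - total_score(status_quo))  # 10: creation looks good
```

With these toy numbers, creating the new person is utility-neutral under the delta rule but looks like a gain of 10 under the total rule.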

0DefectiveAlgorithm
Isn't this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn't this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?

...Followup: Holy crap! I know exactly one person who wants Hermione to be defeated by Draco when Lucius is watching. Could H&C be Dumbledore?

1ahartell
Why do you think H&C wants Hermione to be defeated by Draco? (I think you may have misspoken but since you said it twice I'm not sure)

My theory is that Lucius trumped up these charges against Hermione entirely independently of the midnight duel. He was furious that Hermione defeated Draco in combat, and this is his retaliation.

I doubt that Hermione attended the duel; or, if she did attend it, I doubt that anything bad happened.

My theory does not explain why Draco isn't at breakfast. So maybe my theory is wrong.


I am confused about why H&C wanted Hermione to be defeated by Draco during the big game when Lucius was watching. If you believe H&C is Quirrell (and I do): did Quirrell go to all that trouble just to impress Lucius with how his son was doing? That seems like an awful risk for not much reward.

3PlatypusNinja
...Followup: Holy crap! I know exactly one person who wants Hermione to be defeated by Draco when Lucius is watching. Could H&C be Dumbledore?

The new Update Notifications feature (http://hpmor.com/notify/) is pretty awesome, but I have a feature request. Could we get some sort of privacy policy for that feature?

Like, maybe just a sentence at the bottom saying "we promise to only use your email address to send you HPMOR notifications, and we promise never to share your email address with a third party"?

It's not that I don't trust you guys (and in fact I have already signed up) but I like to check on these things.

I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.

But I went back much later and read it again, and there wasn't nearly as much outrage as I remembered.

Good story!

Ouch! I -- I actually really enjoyed Ender's Game. But I have to admit there's a lot of truth in that review.

Now I feel vaguely guilty...

2NihilCredo
You feel guilty about using porn? How did you send us a message from the mysterious 20th century?
9CronoDAS
There's a pretty obvious defense; even if it is just pornography that appeals to a different emotion, it's still damn good pornography!

I found this series much harder to enjoy than Eliezer's other works -- for example the Super Happy People story, the Brennan stories, or the Sword of Good story.

I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.

At first, before I knew who the author was, I put this down to simple bad writing. Comments in Chapter 6 suggest that maybe Harry has some severe psychological issues, and that he's deliberately being written as obnoxious and hyperactive in order to meet plot... (read more)

1PlatypusNinja
But I went back much later and read it again, and there wasn't nearly as much outrage as I remembered. Good story!

I think the issue was that Harry was constantly, perpetually, invariably reacting to everything with shock and outrage. It got... tiresome.

I suspect that a main inspiration for writing the story was Eliezer's constant shock and outrage over the fact that Rowling's characters show absolutely no interest in the inner workings of their weird Universe. I vividly remember how outrageous this was for me when I read the originals. Actually, I have only read the first two books, so when I read Eliezer's time-turner scene, I first believed that he invented the a... (read more)

6gwern
Culture shock can be tiresome for the people not suffering it. I've been reading blogs and forum postings by expats in South Korea lately, and that constant perpetual shock & outrage? Par for the course for some people.
3EStokes
A lot of kids are obnoxious and hyperactive. Shock and outrage are IC too. (Not that I think Harry is obnoxious or hyperactive or too shocked and outraged.)
110JoshuaZ

I think some of Harry's annoyingness is due to the fact that he's modeled after young Eliezer. He's a mix of wish-fulfillment for young Eliezer and an opportunity for older Eliezer to criticize his younger self. This is really apparent with the chapters involving the Sorting Hat.

This isn't visible, right? I will feel very bad if it turns out I am spamming the community with half-finished drafts of the same article.

Hi! I'd like to suggest two other methods of counting readers: (1) count the number of usernames which have accessed the site in the past seven days; (2) put a web counter (Google Analytics?) on the main page for a week (embed it in your post?). It might be interesting to compare the numbers.

2gwillen
Hello! Fancy meeting you here.

The good news is that this pruning heuristic will probably be part of any AI we build. (In fact, early forms of such an AI will have to use a much stronger version of this heuristic if we want to keep them focused on the task at hand.)

So there is no danger of AIs having existential Boltzmann crises. (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)

The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable W can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The posterior probability that V exceeds 100W is smaller than the probability that V is only a few times larger than W.

This argument could have been made by any intelligent being, at any point in history, and up to 1500AD or so we have strong evidence that it was wrong every time. If this is the main use of the anth... (read more)
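For concreteness, here is one toy version of that posterior computation (a Gott-style sketch of my own, assuming our fractional position among all humans ever born is uniformly distributed; none of this is from the original thread):

```python
# Toy Doomsday-style calculation, illustration only.
# Treat f = W / V (humans born so far / humans who will ever be born)
# as uniformly distributed on (0, 1), and read off the implied probabilities.

import random

random.seed(0)
samples = [random.random() for _ in range(100_000)]   # f ~ Uniform(0, 1)

p_v_over_100w = sum(f < 0.01 for f in samples) / len(samples)   # V > 100 * W
p_v_under_3w = sum(f > 1 / 3 for f in samples) / len(samples)   # V < 3 * W

print(p_v_over_100w)   # ~0.01: totals more than 100x the current count are improbable
print(p_v_under_3w)    # ~0.67: totals only a few times W come out much more likely
```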

0PhilGoetz
First, "the anthropic argument" usually refers to the argument that the universe has physical constants and other initial conditions favorable to life, because if it didn't, we wouldn't be here arguing about it. Second, what you say is true, but someone making the argument already knows this. The anthropic argument says that "people before 1500AD" is clearly not a random sample, but "you, the person now conscious" is a random sample drawn from all of history, although a sample of very small size. You can dismiss anthropic reasoning along those lines for having too small a sample size, without dismissing the anthropic argument.
0SilasBarta
Thank you for saying this. I agree. Since at least the time I made this comment, I have tentatively concluded that anthropic reasoning is useless (i.e. necessarily uninformative), and am looking for a counterexample.

Personally it bothers me that the explanation asks a question which is numerically unanswerable, and then asserts that rationalists would answer it in a given way. Simple explanations are good, but not when they contain statements which are factually incorrect.

But, looking at the karma scores it appears that you are correct that this is better for many people. ^_^;

A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

Given no other information, we don't know which is more likely. We need numbers for "rarely", "most", and "exceedingly few". For example, if 10% of humans currently have a cold, and 1% of humans with a cold have a heada... (read more)
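To spell out the kind of arithmetic being asked for, with numbers I am making up purely for illustration (the comment's own figures are truncated above):

```python
# Made-up numbers; the point is that the base rates decide the answer.
p_tumor = 1e-5                   # fraction of people who currently have a brain tumor
p_cold = 0.10                    # fraction of people who currently have a cold
p_headache_given_tumor = 1.0     # "a brain tumor always causes a headache"
p_headache_given_cold = 0.01     # "a headache is rarely a symptom of a cold"

# Relative posterior odds of tumor vs. cold, given a headache and nothing else:
# P(tumor | headache) / P(cold | headache)
#   = [P(headache | tumor) * P(tumor)] / [P(headache | cold) * P(cold)]
odds = (p_headache_given_tumor * p_tumor) / (p_headache_given_cold * p_cold)
print(odds)   # 0.01 with these numbers: the cold is ~100x more likely,
              # but different (still plausible) numbers can flip the answer
```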

2SilasBarta
I thought Truly Part of You was an excellent introduction to rationalism/Bayesianism/Less Wrong philosophy that avoids much use of numbers, graphs, and technical language. So I think it's more appropriate for the average person, or for people to whom equations don't appeal. Does anyone who meets that description agree? And could someone ask Alicorn if she prefers it?
140Alicorn

You're missing the point. This post is suitable for an audience whose eyes would glaze over if you threw in numbers, which is wonderful (I read the "Intuitive Explanation of Bayes' Theorem" and was ranting for days about how there was not one intuitive thing about it! it was all numbers! and graphs!). Adding numbers would make it more strictly accurate but would not improve anyone's understanding. Anyone who would understand better if numbers were provided has their needs adequately served by the "Intuitive" explanation.

I would like to know more about your statement "50,000 users would surely count as a critical mass". How many users does Craigslist have in total?

In particular, I think it's unlikely that Craigslist would be motivated by the opinions of 50,000 Facebook users, especially if you had not actually conducted a poll but merely collected the answers of those who agree with you.

You should contact Craigslist and ask them what criteria would actually convince them that Craigslist users want for-charity ads.

4pete22
Actually, if you click through the link to Buckmaster's quote, there's an insta-poll right underneath it: "Should Craigslist take text ads to fund charity?" As of now there are 729 total votes and it's running 70% against. Facebook may have a little higher overlap with CL's userbase than ZDnet, but I would think the overlap in both cases is significant. Doesn't this weigh against the views of any future FB group, especially since (as Platypus points out) a poll should count for more than a petition?
3pete22
This was my first thought too. Taking the question further -- even if, by some reliable polling method, you could draw a Venn diagram of CL and facebook users, wouldn't there be a lot of selection bias? If, say, 40% of CL users are also on facebook, by definition they're probably a lot more tolerant of ads than the other 60%.

each person could effectively cause $20,000 to be generated out of nowhere

As a rationalist, when you see a strange number like this, you have to ask yourself: Did I really just discover a way to make lots of money very efficiently? Or could it be that there was a mistake in my arithmetic somewhere?

That one billion dollars is not being generated out of nowhere. It is being generated as payment for ad clicks.
Let's check your assumptions: How much money will the average user generate from banner ad clicks in five years? How many users does Craigslist have?... (read more)
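Here is the shape of that sanity check, where every figure below is a guess of mine rather than a real number; the point is the arithmetic, not the specific values:

```python
# All numbers are guesses for illustration; the comment is asking the
# original poster to supply the real ones.
claimed_total = 1_000_000_000          # the "one billion dollars" under discussion
users = 50_000_000                     # guess: Craigslist's active user base
ad_revenue_per_user_per_year = 1.0     # guess: banner-ad revenue per user per year ($)
years = 5

estimated_total = users * ad_revenue_per_user_per_year * years
print(estimated_total)                   # $250,000,000 with these guesses
print(estimated_total / claimed_total)   # 0.25: a 4x shortfall under these assumptions
```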

So, people who have a strong component of "just be happy" in their utility function might choose to wirehead, and people in which other components are dominant might choose not to.

That sounds reasonable.

Well, I said most existing humans are opposed to wireheading, not all. ^_^;

Addiction might occur because: (a) some people suffer from the bug described above; (b) some people's utility function is naturally "I want to be happy", as in, "I want to feel the endorphin rush associated with happiness, and I do not care what causes it", so wireheading does look good to their current utility function; or (c) some people underestimate an addictive drug's ability to alter their thinking.

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its pro... (read more)

7sark
Why would evolution come up with a fully general solution against such 'bugs in our utility functions'? Take addiction to a substance X. Evolution wouldn't give us a psychological capacity to inspect our utility functions and to guard against such counterfeit utility. It would simply give us a distaste for substance X. My guess is that we have some kind of self-referential utility function. We do not only want what our utility functions tell us we want. We also want utility (happiness) per se. And this want is itself included in that utility function! When thinking about wireheading I think we are judging a tradeoff, between satisfying mere happiness and the states of affairs which we prefer (not including happiness).
0bgrah449
Addiction still exists.

Humans evaluate decisions using their current utility function, not their future utility function as a potential consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.
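A toy sketch of that decision rule (my own illustration, with invented scores): each option is scored by the agent's current utility function, even when the option would change that function afterwards.

```python
# Toy model: score each option with the agent's *current* utility function,
# even when the option would replace that function afterwards.

def current_utility(outcome):
    # Current values: accomplishing things is what counts.
    return outcome["things_accomplished"]

options = {
    "wirehead": {"things_accomplished": 0, "reported_happiness_afterwards": 100},
    "keep_working": {"things_accomplished": 10, "reported_happiness_afterwards": 7},
}

best = max(options, key=lambda name: current_utility(options[name]))
print(best)   # "keep_working": wireheading scores 0 on the current utility function,
              # however happy the wireheaded future self would claim to be
```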

-2MugaSofer
It's worth noting that the example is an Experience Machine, not wireheading. In theory, your current utility function might not be changed by such a Better Life. It might just show how much Better it really is. Of course, it's clearly unethical to use such a device because of the opportunity cost, but then the same is true of sports cars.

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its pro... (read more)

I think I am happy with how these rules interact with the Anthropic Trilemma problem. But as a simpler test case, consider the following:

An AI walks into a movie theater. "In exchange for 10 utilons worth of cash", says the owner, "I will show you a movie worth 100 utilons. But we have a special offer: for only 1000 utilons worth of cash, I will clone you ten thousand times, and every copy of you will see that same movie. At the end of the show, since every copy will have had the same experience, I'll merge all the copies of you back into one."

Note that, although AIs can be cloned, cash cannot be. ^_^;

I claim that a "sane" AI is one that declines the special offer.

(I'm not sure what the rule is here for replying to oneself. Apologies if this is considered rude; I'm trying to avoid putting TLDR text in one comment.)

Here is a set of utility-rules that I think would cause an AI to behave properly. (Would I call this "Identical Copy Decision Theory"?)

  • Suppose that an entity E clones itself, becoming E1 and E2. (We're being agnostic here about which of E1 and E2 is the "original". If the clone operation is perfect, the distinction is meaningless.) Before performing the clone, E calculates its ex

... (read more)
7PlatypusNinja
I think I am happy with how these rules interact with the Anthropic Trilemma problem. But as a simpler test case, consider the following:

An AI walks into a movie theater. "In exchange for 10 utilons worth of cash", says the owner, "I will show you a movie worth 100 utilons. But we have a special offer: for only 1000 utilons worth of cash, I will clone you ten thousand times, and every copy of you will see that same movie. At the end of the show, since every copy will have had the same experience, I'll merge all the copies of you back into one."

Note that, although AIs can be cloned, cash cannot be. ^_^;

I claim that a "sane" AI is one that declines the special offer.

It's difficult to answer the question of what our utility function is, but easier to answer the question of what it should be.

Suppose we have an AI which can duplicate itself at a small cost. Suppose the AI is about to witness an event which will probably make it happy. (Perhaps the AI was working to get a law passed, and the vote is due soon. Perhaps the AI is maximizing paperclips, and a new factory has opened. Perhaps the AI's favorite author has just written a new book.)

Does it make sense that the AI would duplicate itself in order to witness this event in greater multiplicity? If not, we need to find a set of utility rules that cause the AI to behave properly.

0PlatypusNinja
(I'm not sure what the rule is here for replying to oneself. Apologies if this is considered rude; I'm trying to avoid putting TLDR text in one comment.)

Here is a set of utility-rules that I think would cause an AI to behave properly. (Would I call this "Identical Copy Decision Theory"?)

  • Suppose that an entity E clones itself, becoming E1 and E2. (We're being agnostic here about which of E1 and E2 is the "original". If the clone operation is perfect, the distinction is meaningless.) Before performing the clone, E calculates its expected utility U(E) = (U(E1)+U(E2))/2.
  • After the cloning operation, E1 and E2 have separate utility functions: E1 does not care about U(E2). "That guy thinks like me, but he isn't me."
  • Suppose that E1 and E2 have some experiences, and then they are merged back into one entity E' (as described in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ and elsewhere). Assuming this merge operation is possible (because the experiences of E1 and E2 were not too bizarrely disjoint), the utility of E' is the average: U(E') = (U(E1) + U(E2))/2.
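Applying these rules to the movie-theater test case above, and generalizing the two-copy merge rule to N copies, the arithmetic works out as follows (my own illustration, using the numbers from that comment):

```python
# Movie-theater offer, evaluated under the rules above.
# Merging N copies back into one averages their utilities, so seeing the same
# 100-utilon movie in 10,000 copies is still worth 100 to the merged entity.

movie_value = 100

# Standard offer: pay 10, see the movie once.
standard = -10 + movie_value                                      # +90

# Special offer: pay 1000, clone 10,000 times, every copy sees the movie, then merge.
copies = 10_000
merged_value = sum(movie_value for _ in range(copies)) / copies   # still 100.0
special = -1000 + merged_value                                    # -900.0

print(standard, special)   # 90 -900.0: the "sane" AI takes the standard offer
```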

In modern times, some people have started to see nature more as an enemy to be conquered than as a god to be worshiped.

I've seen people argue the opposite. In ancient times, nature meant wolves and snow and parasites and drought, and you had to kill it before it killed you. Only recently have we developed the idea that nature is something to be conserved. (Because until recently, we weren't powerful enough that it mattered.)

2David_J_Balan
Other commenters have said something similar, and you may well be right about this; I'm certainly not enough of a historian to know. However, the main point of the post remains: a lot of people today have what I think are pretty bad reasons for giving a privileged place to nature, and I have offered an alternative one that I think has more going for it.

Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.

Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone's encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.

I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).

My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I d... (read more)

Also: it seems like a really poor plan, in the long term, for the fate of the entire plane to rest on the sanity of one dude. If Hirou kept the sword, he could maybe try to work with the wizards -- ask them to spend one day per week healing people, make sure the crops do okay, etc. Things maybe wouldn't be perfect, but at least he wouldn't be running the risk of everybody-dies.

I think my concern about "power corrupts" is this: humans have a strong drive to improve things. We need projects, we need challenges. When this guy gets unlimited power, he's going to take two or three passes over everything and make sure everybody's happy, and then I'm worried he's going to get very, very bored. With an infinite lifespan and unlimited power, it's sort of inevitable.

What do you do, when you're omnipotent and undying, and you realize you're going mad with boredom?

Does "unlimited power" include the power to make yourself not bored?