Comment author: PlatypusNinja 19 April 2010 05:55:38AM *  6 points [-]

Hi! I'd like to suggest two other methods of counting readers: (1) count the number of usernames that have accessed the site in the past seven days; (2) put a web counter (Google Analytics?) on the main page for a week (embed it in your post?). It might be interesting to compare the numbers.

Comment author: PlatypusNinja 31 March 2010 11:18:59PM 0 points [-]

The good news is that this pruning heuristic will probably be part of any AI we build. (In fact, early forms of this AI will have to use a much stronger version of this heuristic if we want to keep them focused on the task at hand.)

So there is no danger of AIs having existential Boltzmann crises. (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)

Comment author: PhilGoetz 27 March 2010 09:29:48PM *  4 points [-]

For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe.

That's not how the anthropic principle works.

The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The probability of a V where 100W < V is smaller than the probability of a V only a few times larger than W.
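The computation described above can be sketched in a few lines. This is my own illustration, not anything from the comment: it assumes a uniform prior over V and the self-sampling likelihood P(W | V) = 1/V for W <= V (you are equally likely to be any of the V humans who will ever live).

```python
# A minimal sketch of the Doomsday-style posterior described above.
# Assumptions (mine): uniform prior over V, and P(W | V) = 1/V for W <= V.

def posterior_over_totals(w, v_max):
    """Posterior P(V | observed birth rank W = w), for V in 1..v_max."""
    likelihood = {v: (1.0 / v if w <= v else 0.0) for v in range(1, v_max + 1)}
    norm = sum(likelihood.values())
    return {v: p / norm for v, p in likelihood.items()}

post = posterior_over_totals(w=60, v_max=100_000)
p_small = sum(p for v, p in post.items() if v <= 60 * 100)  # V <= 100W
p_large = sum(p for v, p in post.items() if v > 60 * 100)   # V > 100W
# Most of the posterior mass sits at V not far above W, matching the
# claim that P(100W < V) is smaller.
```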

It's unclear if you get to count transhumans and AIs in V, which is the same problem Yvain is raising here about whether to include bats and ants in the distribution.

You can't conclude that there aren't other planets with life just because you ended up here: the probability of different values of V doesn't depend on the observable W. There's no obvious reason why P(there are 9999 other planets with life | I'm on this planet with life) / P(there are 9999 other planets with life) would differ from P(there are 0 other planets with life | I'm on this planet with life) / P(there are 0 other planets with life).

(I divided by the priors to show that the anthropic principle takes effect only in the conditional probability; having a different prior probability is not an anthropic effect.)

Disclaimer: I'm a little drunk.

I'm troubled now that this formulation doesn't seem to work, because it relies on saying "P(fraction of all humans who have lived so far is < X)". It doesn't work if you replace the "<" with an "=". But the observable has an "=".

BTW, outside transhumanist circles, the anthropic principle is usually used to justify having a universe fine-tuned for life, not to figure out where you stand in time, or whether life will go extinct.

Comment author: PlatypusNinja 31 March 2010 06:49:09PM 3 points [-]

The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The probability of a V where 100W < V is smaller than the probability of a V only a few times larger than W.

This argument could have been made by any intelligent being, at any point in history, and up to 1500 AD or so we have strong evidence that it was wrong every time. If this is the main use of the anthropic argument, then I think we have to conclude that the anthropic argument is wrong and useless.

I would be interested in hearing examples of applications of the anthropic argument which are not vulnerable to the "depending on your reference class you get results that are either completely bogus or, in the best case, unverifiable" counterargument.

(I don't mean to pick on you specifically; lots of commenters seem to have made the above claim, and yours was simply the most well-explained.)

Comment author: Alicorn 26 February 2010 11:21:27PM 11 points [-]

You're missing the point. This post is suitable for an audience whose eyes would glaze over if you threw in numbers, which is wonderful (I read the "Intuitive Explanation of Bayes' Theorem" and was ranting for days about how there was not one intuitive thing about it! it was all numbers! and graphs!). Adding numbers would make it more strictly accurate but would not improve anyone's understanding. Anyone who would understand better if numbers were provided has their needs adequately served by the "Intuitive" explanation.

Comment author: PlatypusNinja 27 February 2010 07:56:44PM *  2 points [-]

Personally it bothers me that the explanation asks a question which is numerically unanswerable, and then asserts that rationalists would answer it in a given way. Simple explanations are good, but not when they contain statements which are factually incorrect.

But, looking at the karma scores it appears that you are correct that this is better for many people. ^_^;

In response to What is Bayesianism?
Comment author: PlatypusNinja 26 February 2010 11:03:52PM 0 points [-]

A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom for cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

Given no other information, we don't know which is more likely. We need numbers for "rarely", "most", and "exceedingly few". For example, if 10% of humans currently have a cold, and 1% of humans with a cold have a headache, but 1% of humans have a brain tumor, then the brain tumor is actually more likely.

(The calculation we're performing is: compare ("rarely" times "most") to "exceedingly few" and see which one is larger.)
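That comparison can be written out directly. The numbers below are the illustrative ones from the paragraph above (10% of humans with a cold, 1% of colds producing a headache, 1% of humans with a tumor), not real epidemiology:

```python
# Illustrative numbers only, taken from the comment above.
p_cold = 0.10                 # "most people catch a cold" -> say 10% have one now
p_headache_given_cold = 0.01  # "rarely a symptom" -> 1%
p_tumor = 0.01                # "exceedingly few" -> 1%
p_headache_given_tumor = 1.0  # "always causes a headache"

# Compare the joint probabilities P(cause AND headache); the shared
# normalizer P(headache) cancels, so these suffice to rank the causes.
p_cold_and_headache = p_cold * p_headache_given_cold      # 0.001
p_tumor_and_headache = p_tumor * p_headache_given_tumor   # 0.01
# With these numbers the tumor is ten times more likely, as the comment notes.
```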

Comment author: PlatypusNinja 10 February 2010 08:15:06PM 5 points [-]

I would like to know more about your statement "50,000 users would surely count as a critical mass". How many users does Craigslist have in total?

It seems particularly unlikely that Craigslist would be motivated by the opinions of 50,000 Facebook users, especially if you had not actually conducted a poll but had merely collected the answers of those who agree with you.

You should contact Craigslist and ask them what criteria would actually convince them that Craigslist users want for-charity ads.

In response to comment by Jack on Shut Up and Divide?
Comment author: Kevin 10 February 2010 12:14:28AM *  32 points [-]

:) Sorry.

In 2006, Craigslist's CEO Jim Buckmaster said that if enough users told them to "raise revenue and plow it into charity" that they would consider doing it. (source: http://blogs.zdnet.com/BTL/?p=4082 ) They really do listen to their users and the reason there is no advertising on Craigslist is that no one is asking for it.

A single banner ad on Craigslist would raise at least one billion dollars for charity over five years. They could put a large "X" next to the ad, allowing you to permanently close it. There seems to be little objection to this idea. The optional banner is harmless, and a billion dollars could be enough to dramatically improve the lives of millions, save very real people from lifetimes of torture or slavery, or make a serious impact in the causes we take seriously around here. As a moral calculus, the decision is a no-brainer. So we just need a critical mass of Craigslist users telling Jim that we need a banner ad on Craigslist. Per a somewhat recent email to Craig, they are still receptive to this idea if the users suggest it.

The numbers involved are a little insane. Fifty thousand people should count as critical mass, which means each person could effectively cause $20,000 to be generated out of nowhere and donated to charity. My mistake last time was doing it as a Facebook group rather than a Facebook fan page, where the more useful viral functions have moved. This time I would also drop the money on advertising to get an easy initial critical mass.

In response to comment by Kevin on Shut Up and Divide?
Comment author: PlatypusNinja 10 February 2010 05:41:28AM *  2 points [-]

each person could effectively cause $20,000 to be generated out of nowhere

As a rationalist, when you see a strange number like this, you have to ask yourself: Did I really just discover a way to make lots of money very efficiently? Or could it be that there was a mistake in my arithmetic somewhere?

That one billion dollars is not being generated out of nowhere. It is being generated as payment for ad clicks.

Let's check your assumptions: How much money will the average user generate from banner ad clicks in five years? How many users does Craigslist have? What fraction of those users would have to request banner ads, for Craigslist to add them?

My completely uneducated guess is $100, ten million, and 50%. This matches your "generate one billion dollars" number but suggests that critical mass would be five million rather than fifty thousand. Note, also, that Facebook users are not necessarily Craigslist users.
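The arithmetic behind those guesses is easy to check. Everything below uses my guessed figures ($100 per user over five years, ten million users, a 50% threshold), which could easily be wrong:

```python
# Checking the arithmetic with the guesses above (not verified figures).
revenue_per_user = 100        # dollars generated per user over five years
total_users = 10_000_000      # guessed Craigslist user count
threshold = 0.5               # fraction of users needed to move Craigslist

total_revenue = revenue_per_user * total_users   # $1,000,000,000
critical_mass = int(total_users * threshold)     # 5,000,000 users

# Kevin's figure divides the same $1B by a 50,000-person critical mass:
per_person_claim = total_revenue / 50_000        # $20,000 "per person"
per_person_here = total_revenue / critical_mass  # $200 per person
```

The disagreement is entirely about the size of the critical mass, not about the total revenue.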

I would be interested to hear what numbers you are using. Mine could easily be wrong.

Comment author: sark 09 February 2010 11:56:46AM *  7 points [-]

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.

Why would evolution come up with a fully general solution against such 'bugs in our utility functions'?

Take addiction to a substance X. Evolution wouldn't give us a psychological capacity to inspect our utility functions and to guard against such counterfeit utility. It would simply give us a distaste for substance X.

My guess is that we have some kind of self-referential utility function. We want not only the things our utility functions tell us we want; we also want utility (happiness) per se. And this want is itself included in that utility function!

When thinking about wireheading I think we are judging a tradeoff, between satisfying mere happiness and the states of affairs which we prefer (not including happiness).

In response to comment by sark on A Much Better Life?
Comment author: PlatypusNinja 09 February 2010 06:18:06PM 1 point [-]

So, people who have a strong component of "just be happy" in their utility function might choose to wirehead, and people in which other components are dominant might choose not to.

That sounds reasonable.

Comment author: bgrah449 04 February 2010 06:47:03PM 0 points [-]

Addiction still exists.

Comment author: PlatypusNinja 07 February 2010 10:46:57AM *  1 point [-]

Well, I said most existing humans are opposed to wireheading, not all. ^_^;

Addiction might occur because: (a) some people suffer from the bug described above; (b) some people's utility function is naturally "I want to be happy", as in, "I want to feel the endorphin rush associated with happiness, and I do not care what causes it", so wireheading does look good to their current utility function; or (c) some people underestimate an addictive drug's ability to alter their thinking.

In response to A Much Better Life?
Comment author: PlatypusNinja 04 February 2010 06:18:49PM 4 points [-]

Humans evaluate decisions using their current utility function, not their future utility function as a potential consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.

Comment author: PlatypusNinja 04 February 2010 06:29:41PM 11 points [-]

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.

This AI has the ability to rewrite itself to "while(true) { happy(); }". It evaluates this action in terms of its current utility function: "If I wirehead myself, how many paperclips will I produce?" vs "If I don't wirehead myself, how many paperclips will I produce?" It sees that not wireheading is the better choice.

If, for some reason, I've written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I've simply written a very large amount of source code that compiles to "while(true) { happy(); }".
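The distinction between those two designs can be sketched in a toy model. All the names and numbers here are mine, purely for illustration: an "outcome" is a (paperclips, happiness) pair, and the agent picks whichever action scores highest under some utility function.

```python
# Toy sketch of evaluating actions with the current vs. future utility
# function. All names and numbers are illustrative.

def paperclip_utility(outcome):
    paperclips, _happiness = outcome
    return paperclips

def wireheaded_utility(outcome):
    _paperclips, happiness = outcome
    return happiness

outcomes = {
    "keep_working": (1_000_000, 0),  # paperclips made if it stays on task
    "wirehead":     (0, 10**9),      # happy() forever, zero paperclips
}

# Evaluating with the CURRENT utility function: wireheading scores zero,
# so the agent keeps making paperclips.
best_now = max(outcomes, key=lambda a: paperclip_utility(outcomes[a]))

# Evaluating each action with the utility function the agent would have
# AFTERWARD: wireheading scores itself highly, so the agent wireheads.
future_fn = {"keep_working": paperclip_utility, "wirehead": wireheaded_utility}
best_future = max(outcomes, key=lambda a: future_fn[a](outcomes[a]))
```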

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.
