
Comment author: JonatasMueller 30 December 2010 10:01:33PM 0 points [-]

Good guide; indeed, having more money to spend, through whatever career, may allow one to be more useful to charity.

The expedition analogy is good. I'll get into discussing the specific goal or utility function. What is the goal we're heading to?

I'd say the goal, as I see it, is to increase intelligence (or cure the lack of it) so that the agents of this world are able to willingly solve their problems, and thereby reach a state of technological advancement that allows them to get rid of all problems for good and start doing better things, such as spending time in paradise and exploring the universe.

We shouldn't medicate our problems in the short term; we should think in the long term about curing them for good. How do we do that? Scientific research into intelligence: artificial intelligence and human intelligence augmentation.

How does "saving" (should I say, prolonging?) African lives help with that? Not at all, in my view. Africa receives many billions of dollars in donations, there's clearly something wrong with the way it works, and you're not going to fix it by adding a million dollars to that sea of resources that doesn't end up changing anything in the long-term. It's like a car that leaks fuel, you can keep adding more and more fuel, or you should try and fix it, and that is what I suggest. You should rather spend a million dollars in a vital area that is badly lacking funding, such as intelligence augmentation and artificial intelligence.

I don't think that we want to "save lives". Prevent suffering instead. If you prolong an African life you're probably prolonging suffering, which is a waste. A life of suffering and misery is not worth saving. People have no souls. This is a physical world; if you lose consciousness somewhere, there's still plenty of it all around.

Comment author: sgeek 04 July 2011 02:01:29PM 1 point [-]

I think you're falsely assuming that "Africa" is a single monolithic recipient for that "sea of resources" - that ignores both the spectacular variation between and within African nations, and the difference between resources given to a corrupt government and resources applied by non-government organisations for the benefit of people there.

I think it is fair to say that the staggering sums of money given by Western nations to African governments have been, at best, a complete waste - in fact I consider that money to have caused significant net harm. It props up corrupt regimes, increases and strengthens class differences, and generally results in increased oppression and widespread misery of various kinds. Your argument applies very well to this - "Africa" does indeed receive billions of dollars, and there is indeed something broken (most of the governments receiving the money).

This argument does not apply to the international NGOs working in Africa. Some of those organisations are short-term oriented and thus arguably pointless in the long term, but some are not. A classic example would be Kiva, which offers micro-loans for people to start small businesses to support themselves and their families (incidentally not just in Africa) - there are a fair few organisations doing things like this, and it is "teach a man to fish" rather than "give a man a fish". These initiatives, when they work right (which they often do), help lift Africans out of poverty and put them in a position to do something about their own future (and Africa's future). A lot of worthwhile initiatives centre around education, for instance, for fairly obvious reasons.

I think you're conflating "intelligence" with other concepts such as education and good judgement (which are what's actually needed here). Rephrased like that, it becomes obvious that a much more practical action is to fund and organise education of African people - give them the means with which to figure out the solutions to their own problems, but now rather than post-Singularity. Add direct financial support (e.g. via Kiva or Grameen, etc.) so that these now-educated people have the means to implement their ideas, and we have tomorrow's solution today. This is currently happening, but all we tend to know about Africa's current situation is an assortment of dramatic bad news merged together into a highly misleading narrative. To give you some idea of how significantly our perceptions differ from reality on this matter, here's a TED talk from the incomparable statistician Hans Rosling 4.5 years ago: http://www.youtube.com/watch?v=RUwS1uAdUcI - feel free to poke around for more recent presentations and data, of course, but even this old one is an eye-opener.

I'm not saying that investment in education and entrepreneurship in Africa is necessarily the most effective use of resources from a strictly utilitarian standpoint - what I am saying is that you have not presented a strong case for African investment not being a worthwhile use of resources. Personally I regard your argument largely as an excuse to not feel guilty about distant suffering, but that is just an unsupported opinion.

Comment author: sgeek 09 June 2011 12:06:57PM *  0 points [-]

I'd say it's an error to give weight to any particular highly-improbable scenario without any evidence to distinguish it from the other highly-improbable scenarios. Here's why.

There is a nonzero possibility that some entity will acquire godlike powers later today, or already has them (as per your "I am a god" definition), and will decide to use them to increase utility exponentially in response to a number derived somehow from an arbitrary combination of actions by any arbitrary combination of people in the past and the ever-moving present (and let's remember that the requirement could equally well be "condition y is met" or "condition y is not met"). I can't figure out a way to make the number of permutations actually infinite, but considering the negative as well as the positive options makes them cancel out anyway - we have no reason to believe that my posting this comment is more or less likely to trigger a utility increase than my (hypothetically) not posting this comment. The theoretically-possible outcome (huge increase in utility) does not depend on our actions in any predictable way, so there is no reason to modify our actions on this basis.

This leads to the following rational 'conclusion' (specifically considering this issue only) about taking any particular action, on a scale from -1 (definitely don't do) through 0 (indifferent) to 1 (definitely do):

±1/n, n->infinity

(Edit: Actually, this should just be 0. I should lay off the maths when I'm tired.)

(where n is the number of different possible sequences of actions which could possibly trigger the utility increase, and n therefore is unthinkably huge and continues to grow exponentially with each passing second)

Alternatively:

There is also a nonzero possibility etc etc decrease utility etc. etc. Every scenario which could lead to massive increase in utility could instead lead to massive decrease in utility, and we have no way to determine which is less likely.
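
To make that cancellation concrete, here's a toy expected-value calculation - a minimal sketch in Python with made-up numbers (the probability p, the payoff X, and the little helper expected_contribution are arbitrary placeholders of my own, not anything derived from the scenario):

    # Toy illustration of the symmetry argument (numbers are made up purely for intuition).
    # For every improbable scenario in which an unlikely god rewards a given action with
    # utility +X at probability p, there is a mirror scenario in which it punishes the same
    # action with -X at the same probability, since we have no evidence favouring either.

    def expected_contribution(probability, payoff):
        """Expected utility contributed by a single improbable scenario."""
        return probability * payoff

    p = 1e-30   # arbitrarily tiny probability that any such god acts on our choice
    X = 1e30    # arbitrarily huge utility swing it might produce

    reward = expected_contribution(p, +X)   # the "do it and be rewarded" god
    punish = expected_contribution(p, -X)   # the mirror-image "do it and be punished" god

    print(reward + punish)  # 0.0 -- the pair gives us no reason to change our action

The specific numbers don't matter; whatever tiny probability and enormous payoff you grant the rewarding god, the mirror-image punishing god gets exactly the same, so the net pull on the decision is zero.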

Tim, you want "a good reason not to be jerked around by unlikely gods in general". Personally I much prefer my first answer (and I suspect you will too), but my alternative answer offers a much more concise rebuttal for any claim of infinite utility increase from an unlikely god:

"Your unlikely god will grant arbitrarily large increase in utility if I take the specified action? Well, my unlikely god will wreak arbitrarily large decrease in utility if I take the specified action. Give me evidence that makes your god and its claim of positive utility more likely than my god and its claim of negative utility, and we can talk - until then the probabilities exactly balance out, so for now I'll just carry on regardless.

Comment author: stcredzero 31 May 2011 11:51:31PM *  2 points [-]

"Avoid saying anything unless they are certain they are correct."

I don't think this is a pernicious behavior at all. I suspect that this is actually a sign of rationality.

http://www.paulgraham.com/heroes.html (See Robert Morris)

Comment author: sgeek 02 June 2011 10:09:40PM 5 points [-]

I think the key here is qualification - Robert Morris avoided being wrong by not stating things without qualification unless he was sure of them, whereas the failure mode for rationalists is not expressing an idea at all unless they are fairly sure of it.

We want ideas to be shared before they're well-supported, because discussion is generally the best way for them to find support (or disproof) - we just need to signal the uncertainty when we introduce an idea.

It's much like what I've been taught in analytical chemistry - every number has a stated uncertainty associated with it.

Comment author: MixedNuts 01 June 2011 09:25:23AM 8 points [-]

Yeah, but humans are the only rational creatures in existence.

Comment author: sgeek 02 June 2011 09:56:48PM 6 points [-]

We're working on that.