
Comment author: blogospheroid 02 November 2016 09:29:27AM 4 points [-]

Ouch! I donated $135 (and asked my employer to match it as well) on Nov 2, India time. I had been on a brief vacation and just returned; re-reading now, I found it is too late for the fundraiser. Anyway, please take this as positive reinforcement for what it is worth. You're doing a good job. Take the money as part of the fundraiser or as an off-fundraiser donation, whichever is appropriate.

Comment author: blogospheroid 12 January 2016 08:07:05AM 0 points [-]

This basically boils down to the root of the impulse to remove a Chesterton's fence, doesn't it?

Those who believe that these impulses come from genuinely good sources (e.g. learned university professors) like to take down those fences. Those who believe that these impulses come from bad sources (e.g. status jockeying, holiness signalling) would like to keep them.

The reactionary impulse comes from the basic idea that the practice of repeatedly taking down Chesterton's fences will inevitably auto-cannibalise: the system, or the meta-system, used to defend all the previous demolitions will itself fall prey to one such wave. The humans left after that catastrophe will be little better than animals, in some cases maybe even worse, lacking the ability and skills to survive.

Comment author: blogospheroid 10 December 2015 07:29:27AM 2 points [-]

Donated $100 to SENS. Hopefully, my company matches it. Take that, aging, the killer of all!

Comment author: blogospheroid 24 August 2015 09:34:11AM *  0 points [-]

I'm not a physicist, but aren't this and the linked Quanta article on Prof. England's work bad news, Great Filter-wise?

If this implies self-assembly is much more common in the universe, then that makes it worse for the later proposed filters (i.e. makes them higher probability).

Comment author: blogospheroid 20 August 2015 06:23:10AM 19 points [-]

I donated $300, which I expect my employer to match. So $600 to AI value alignment here!

Comment author: jimrandomh 01 July 2015 05:27:57PM 3 points [-]

I'm disappointed that my group's proposal to work on AI containment wasn't funded, and no other AI containment work was funded, either. Still, some of the things that were funded do look promising. I wrote a bit about what we proposed and the experience of the process here.

Comment author: blogospheroid 02 July 2015 04:57:20AM 0 points [-]

I feel for you. I agree with Salvatier's point in the linked page. Why don't you try to talk to FHI directly? They should be able to get some funding your way.

Comment author: [deleted] 08 May 2015 09:22:39AM *  3 points [-]

High prices perform two different kinds of rationing in parallel. They ration the good to its higher-marginal-utility uses: people who need it more will be willing to sacrifice more for it. This is a good thing. They also ration the good away from the poor and towards the rich. This is not really a good thing.

How, in general, could one have the first without the second? That is, ration a thing to its high-marginal-utility uses while ability to pay, income, and social class play little role?

My attempt: let the price go high, because it incentivizes production. But also subsidize a certain quota per person, roughly enough to cover the highest-marginal-utility uses (drinking water, one quick shower, etc.). Make the quota sellable and transferable, because people will trade it on the black market anyway.
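The quota-plus-market-price idea can be sketched as a toy calculation. All names and numbers below are invented for illustration, not a real proposal:

```python
# Toy sketch of the proposed scheme: a free market price plus a
# transferable, subsidised per-person quota. Numbers are illustrative.

BASIC_QUOTA = 50       # litres/day, roughly the highest-marginal-utility uses
MARKET_PRICE = 0.25    # $/litre, left free to clear the market

def daily_bill(litres_used, quota_sold=0):
    """Net daily cost for a person who uses `litres_used` litres and
    sells `quota_sold` litres of their subsidised quota at market price."""
    remaining_quota = max(BASIC_QUOTA - quota_sold, 0)
    unsubsidised = max(litres_used - remaining_quota, 0)
    return unsubsidised * MARKET_PRICE - quota_sold * MARKET_PRICE

# A frugal user who sells half their quota comes out ahead:
print(daily_bill(litres_used=25, quota_sold=25))   # -6.25 (earns $6.25)
# A heavy user pays full market price for everything beyond the quota:
print(daily_bill(litres_used=200))                 # 37.5
```

Heavy users still face the full marginal price, so the incentive to conserve survives, while the transferable quota shields basic consumption from income effects.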

In response to comment by [deleted] on California Drought thread
Comment author: blogospheroid 08 May 2015 09:26:10AM 2 points [-]

Letting market prices reign everywhere, but providing a universal basic income is the usual economic solution.

Comment author: blogospheroid 04 March 2015 06:59:50AM 1 point [-]

Guys, everyone on reddit/HPMOR seems to be talking about a spreadsheet with all the solutions listed. Could anyone please post the link as a reply to this comment? Pretty please with sugar on top :)

Comment author: advancedatheist 19 January 2015 12:21:43AM *  8 points [-]

Well, someone had to say it:

Dylan Evans, Founder and CEO of Projection Point; author, Risk Intelligence

The Great AI Swindle

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person's Kool-Aid.

This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.

This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.

It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.
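For readers unfamiliar with the mugging, the expected-utility arithmetic the essay is attacking reduces to a couple of lines. The probabilities and payoffs below are invented purely to show the structure of the argument:

```python
# Naive expected-utility reasoning behind "Pascal's mugging":
# however small the probability p, the mugger can always name a
# payoff large enough that p * payoff exceeds the wallet's value.
# All quantities here are made-up illustrations, not anyone's estimates.

def should_pay(p, payoff, wallet=100.0):
    """Classical decision theory: hand over the wallet iff the
    expected value of the promise exceeds the wallet's certain value."""
    return p * payoff > wallet

print(should_pay(p=1e-9, payoff=1e6))    # False: promise too small
print(should_pay(p=1e-9, payoff=1e12))   # True: a big enough promise wins
```

Because `payoff` is unbounded while `wallet` is fixed, the inequality can always be satisfied, which is exactly the feature the essay calls a sleight of hand and which decision theorists flag as the mugging's pathology.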

Comment author: blogospheroid 20 January 2015 11:50:57AM 0 points [-]

A booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.

To illustrate: you solve poverty, you still have to face climate change; you solve climate change, you still have to face biopathogens; you solve biopathogens, you still have to face nanotech; you solve nanotech, you still have to face SI. Solve SI correctly, and the rest are all done. For people who use the cui bono argument, I think this is usually the best answer to give.

Comment author: blogospheroid 01 January 2015 08:46:02AM 8 points [-]

Is anyone aware of the explanation for why technetium is radioactive while molybdenum and ruthenium, the two elements astride it in the periodic table, are perfectly stable? Searching Google for why certain elements are radioactive gives results that are merely descriptive: X is radioactive, Y is radioactive, Z is what happens when radioactive decay occurs, etc. None seem to go into the theories that have been proposed to explain why something is radioactive.
