Comment author: advancedatheist 19 January 2015 12:21:43AM * 7 points

Well, someone had to say it:

http://edge.org/response-detail/26073

Dylan Evans, Founder and CEO of Projection Point; author, Risk Intelligence

The Great AI Swindle

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool Aid.

This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.

This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.

It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.

Comment author: jkaufman 20 January 2015 03:41:09PM 7 points

It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

GiveWell recommends extremely few charities. Unless you similarly write off the Red Cross, United Way, the Salvation Army, and everyone else GiveWell doesn't recommend, this looks like motivated skepticism.

Comment author: Daniel_Burfoot 15 January 2015 03:15:35PM * 0 points

(e.g. be precise and concise, carefully encapsulate state, make small reusable modular parts which are usually pure functions, REPL-driven development, etc. etc.)

I am a Java programmer, and I believe in those principles, with some caveats:

  • Java is verbose. But within the constraints of the language, you should still be as concise as possible.
  • Encapsulation and reusable modular design are central goals of the language and of OO design in general. I think Java achieves them to a significant degree.
  • Instead of using a REPL, you do edit/compile/run loops. So you get two layers of feedback, one from the compiler and the other from the program itself.
  • Even though Java doesn't emphasize functional concepts, you can still use them. For example, you can easily make objects immutable by supplying only a constructor and no mutator methods (I use this trick regularly; see the sketch after this list).
  • Java 8 is really a big step forward: we can now use default interface methods (i.e. mixins) and lambda syntax with collection operations.
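
As a concrete illustration of both the immutability trick and the Java 8 collection operations mentioned above, here is a minimal sketch (the Point class and its values are hypothetical, purely for demonstration):

    import java.util.Arrays;
    import java.util.List;

    // An immutable value class: final fields, a constructor, getters,
    // and no mutator methods.
    public final class Point {
        private final int x;
        private final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }

        // "Mutation" returns a fresh instance instead of changing state.
        public Point translate(int dx, int dy) {
            return new Point(x + dx, y + dy);
        }

        @Override
        public String toString() { return "(" + x + ", " + y + ")"; }

        public static void main(String[] args) {
            List<Point> points = Arrays.asList(new Point(1, 2), new Point(3, 4));
            // Java 8 lambda syntax with a collection operation:
            points.stream()
                  .map(p -> p.translate(10, 0))
                  .forEach(System.out::println);
        }
    }

Because the fields are final and nothing reassigns them, a Point can be shared freely (e.g. across threads) without defensive copying.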

I don't understand how to like it yet

My feeling towards Java is just that it's a very reliable old workhorse. It does what I want it to do, consistently, without many major screwups. In this sense it compares very favorably with other technology tools like MySQL (what, an ALTER TABLE is a full table copy? What if the table is very large?) and even Unix (why can't I do some variant of ls piped through cut to get just the file sizes of all the files in a directory?).

Comment author: jkaufman 18 January 2015 12:18:25AM 1 point

why can't I do some variant of ls piped through cut to get just the file sizes of all the files in a directory?

Nerd sniped. After some fiddling, the problem with ls | cut is that cut in delimiter mode treats each of several spaces in a row as a separate delimiter. You could put cut in bytes (-b) or characters (-c) mode instead, but then you hit the problem that ls uses "as much as necessary" spacing: if the largest file in your directory needs one more digit to represent, ls pushes every column one position to the right.

If you want to handle ls output, awk is easier because it collapses successive delimiters [1], but normally I'd just use du [2]. Note that du and ls -l define file size differently: du reports disk space actually allocated (in blocks), while ls -l reports the file's length in bytes.

(This doesn't counter your point at all -- unix tools are kind of a mess -- but I was curious.)

[1] ls -l | awk '{print $5}'
[2] du -hs *

Comment author: bigjeff5 28 January 2011 05:55:24AM * 5 points

So given there have been 9 heads in a row, maybe your average bear would think it's more likely to come up heads than its genuine expected value, so I would argue that the market would probably overvalue the true likelihood (which according to you is 10/11ths), and so they would bet on heads and you would want to be short (bet on tails) if their expectation/price is greater than 10/11ths.

That was the point of the coin flip example. It was to point out that the market is not random even if it appears to be about as random as a coin flip. Information from the previous flip factors into the next flip, reducing the likelihood that any given trend will continue.

What I think you are missing is the fact that everybody knows that the way to take advantage of an inflated price is to sell short - it is not a unique insight on your part. Since it's common knowledge, obviously everybody knows that everybody else knows that the way to exploit this situation is to sell short. Therefore there is a very high probability that a significant portion of the market will bet on tails to capture the edge. This destroys your likelihood of successfully beating the edge, because too many people are going to attempt the same thing you are. Those who recognize this (and there are many) also know that the next level of exploitation is to bet on heads.

The end result is a wash - there is a 50/50 chance that the 11th flip will be heads, not a 10/11 chance like traditional probability suggests, because everybody is trying to out-exploit everybody else. The market is anti-inductive.
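
(For reference, the 10/11 figure is presumably Laplace's rule of succession: with a uniform prior on the coin's bias, after observing 9 heads in 9 flips the probability of heads on the next flip is (9+1)/(9+2) = 10/11.)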

Think of poker. Someone who knows the probabilities of poker hands can win in far more situations than someone who does not. They will know when to bet and when to fold, maximizing their success. However, when playing with players who know the probabilities of poker hands this strategy becomes much less successful. Because everyone is only playing cards they can win with, it is only the really lucky players who get more good cards than bad who come out ahead.

However, by exploiting the likelihood of given cards, they can pretend they have cards they do not have and convince the rest of the players to give up. This makes their probability of winning skyrocket, and the net result is that the best hand wins far less often than probability suggests. This is the result of everybody knowing how to exploit the probabilities - everybody knows to go for the edge.

In high level poker, however, the best hand wins 99% of the time. Each particular hand wins at almost exactly the rate its likelihood of appearing suggests. Why? Because everybody at the table knows the probabilities for a given hand, and everybody is going to attempt to exploit those probabilities. Furthermore, everybody knows that everybody knows this, so it is only on extremely rare occasions that someone is actually able to exploit the probabilities and win with a weaker hand. In this scenario it is extremely difficult to inflate the value of the cards you are holding by bluffing, because the person you are bluffing knows the likelihood that you have the hand you are pretending to have and can compare that to their own hand to get their chances of winning. It still works on occasion though, and can be pretty spectacular.

This is the efficiency of the market. It is anti-inductive because information flows freely. As soon as an exploit is discovered it is nullified by the fact that it cannot be hidden, and everybody will therefore take advantage of it.

Comment author: jkaufman 15 January 2015 04:24:03PM 1 point

In high level poker, however, the best hand wins 99% of the time.

Really? This seems surprisingly high.

Comment author: jkaufman 15 January 2015 02:58:38PM 2 points

One could think of markets like a pendulum, where price swings from one extreme to another over time, with a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair.

This is very misleading. Pendulums are extremely predictable, with continuously varying momentum, while markets are anti-inductive.

Comment author: alwhite 08 January 2015 05:29:30PM 1 point

The GWP is the sum of each country's GDP, converted to USD for comparison's sake. GDP also is not average income, so it's not entirely accurate to assume that GWP per capita is the same as everyone having $12,000 USD. The number is all about comparison and estimation.
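
(That is, GWP per capita is just total world output in USD divided by world population: a measure of production per person, not of the income a typical person actually receives.)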

I realize that this is a very crude number, but I still think it is useful for recognizing that we do not yet produce enough to meet everyone's basic needs equally.

Do you disagree with that statement? Are you suggesting that we do currently produce enough and all we need to do is redistribute?

Comment author: jkaufman 08 January 2015 08:16:06PM 11 points

We do currently produce enough for everyone's basic needs, yes. But "all we need to do is redistribute" isn't the answer: when the state steps in and massively redistributes, you screw up incentives and decrease production. We haven't yet figured out how to meet everyone's basic needs without disrupting the system that gives us the economic productivity that would make this possible.

Comment author: FrameBenignly 08 January 2015 05:21:57AM -2 points

Music is like the art of math. The playing of musical instruments is art, but the writing of it and the instrument design and the understanding of how those instruments operate is all math. Music can be created without art, but music cannot be created without math; not even in the slightest aspect of it. It is the only major form of the classical arts to which that claim can be ascribed. A drum requires a calculation to generate reverberation to make itself heard. A scale must be calculated from its underlying frequencies. Strings must be measured in length, thickness, and tension to determine their resonance. The hole spacing and size of wind instruments must be calculated. Even something as simple as humming while alternating between high and low is a binary expression of either volume or frequency. It is only over the course of several millennia that we have developed the ability to teach an artistically gifted person to generate music without learning a bit of math. But that person still owes their artistic creations to the mathematicians of history. The connection is not at all tenuous. It is a very clear case of cause and effect.
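
(To make the frequency claim concrete: in modern twelve-tone equal temperament, each semitone step multiplies frequency by the twelfth root of 2, so a note n semitones above concert A at 440 Hz sounds at 440 × 2^(n/12) Hz.)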

Comment author: jkaufman 08 January 2015 08:10:09PM 3 points

It is only over the course of several millennia that we have developed the ability to teach an artistically gifted person to generate music without learning a bit of math.

What? This doesn't sound like you're describing folk music at all.

Comment author: Dahlen 07 January 2015 06:19:36AM * 12 points

There seem to be two broad categories of discussion topics on LessWrong: topics that are directly and obviously rationality-related (which seems to me to be an ever-shrinking category), and topics that have come to be incidentally associated with LessWrong to the extent that its founders / first or highest-status members chose to use this website to promote them -- artificial intelligence and MIRI's mission along with it, effective altruism, transhumanism, cryonics, utilitarianism -- especially in the form of implausible but difficult dilemmas in utilitarian ethics or game theory, start-up culture and libertarianism, polyamory, ideas originating from Overcoming Bias which, apparently, "is not about" overcoming bias, NRx (a minor if disturbing concern)... I could even say California itself, as a great place to live in.

As a person interested in rationality and little else that this website has to offer, I would like for there to be a way to filter out cognitive improvement discussions from these topics. Because unrelated and affiliated memes are given more importance here than related and unaffiliated memes, I have since begun to migrate to other websites* for my daily dose of debiasing. Obviously it would be all varieties of rude of me to tell everybody else "stop talking about that stuff! Talk about this stuff instead... while I sit here in the audience and enjoy listening to you speaking", and obviously the best thing I could do to further my purpose of seeing more rationality material on LessWrong would be to post some high-quality rationality material -- which I do plan on doing, but I still feel that my ideas have some maturing and polishing to undergo before they're publishable. So what I intend to do with this post is to poll people for thoughts and opinions on this matter, and perhaps re-raise the old discussions about revamping the Main/Discussion division of LessWrong.

Also, for what it's worth, it seems to me that most of the bad PR LessWrong gets comes from those topics that I've mentioned in the first paragraph being more visible to outsiders than the stated mission of "refining the art of human rationality". People often can't get beyond the peculiarities of Bayland to the actual insights that we value this community most for -- and to be honest, if I hadn't read the Sequences first and instead got hit in the face with persuasions to donate to charity or to believe in x-risk or to get my head frozen upon my first visit to LW, I'd have politely "No-Thank-You"ed the messengers like I do door-to-door salesmen. To outsiders not predisposed to be friendly to transhumanism & co. through their demographics, to conflate the two sides of LessWrong is to devalue the side that champions rationality. Unless, of course, that was the point all along and LessWrong has less intrinsic value for the founders than its purpose as an attractor of smart, concerned young people.


* notably SSC, RibbonFarm, TheLastPsychiatrist, and even highly biased but well-written blogs coming from the opposite side of the political spectrum -- hopefully for our respective biases to cancel out and for me to be left with a more accurate worldview than I started out with. (I don't read political material that I agree with, and to be honest it would be difficult to even come across texts prioritizing the same issues that I care about. I sometimes feel like I'm the first one of my political inclination...) I'm not necessarily endorsing any of these for anyone else (except Scott, read Scott, he's amazing), it's just where I get my food for thought. They raise issues and put a new spin on things that don't usually occur to me.

Comment author: jkaufman 07 January 2015 05:31:51PM 3 points

As a person interested in rationality and little else that this website has to offer

I'm confused why you categorize SSC as appropriate for debiasing but not LW; doesn't SSC have as much of a mix of non-rationality material as LW? Is it a mix you like better? Do you just enjoy SSC for other reasons?

Comment author: knb 31 December 2014 10:25:27AM 0 points

That's such a strange comment. It seems like he was an especially sensitive young man who had a weird psychological reaction to reading radical feminist writings.

Here’s the thing: I spent my formative years—basically, from the age of 12 until my mid-20s—feeling not “entitled,” not “privileged,” but terrified. I was terrified that one of my female classmates would somehow find out that I sexually desired her, and that the instant she did, I would be scorned, laughed at, called a creep and a weirdo, maybe even expelled from school or sent to prison.

That's sad, but it surely must be an extremely uncommon problem. Not many young men read radfem tracts to begin with. Having that kind of extreme reaction must be very rare.

Comment author: jkaufman 31 December 2014 05:48:11PM * 10 points

surely must be an extremely uncommon problem

Aaronson's description felt very familiar to me, describing my middle and early high school years pretty well. In my case this didn't involve reading radical feminist writing, just paying attention to what adults said about how people were to treat each other.

(And despite having had several relationships and now being married I've still never initiated a relationship, mostly out of really not wanting to come off as creepy.)

Comment author: Metus 29 December 2014 06:49:11PM * 8 points

A question for specialists on EA.

If I live in a place where I can choose between a standard mix of electricity sources consisting of hydrocarbons, nuclear and renewables, and a "green" mix of renewables exclusively that costs more, should I buy the green mix or buy the cheaper/cheapest mix and donate the difference to GiveWell?

Comment author: jkaufman 30 December 2014 03:15:26AM 11 points

I'd break this down into two questions:

  • Would it be better for you to pay for a neighbor of yours to switch to the green mix or give that money to GiveWell?
  • Is there something special about the energy you use coming from renewable sources as opposed to the energy someone else uses?

For the first question, subsidising renewable energy is probably a good thing, but there's no reason to expect this particular opportunity to be up there with the world's best organizations. For the second it doesn't seem to me that it matters. So I'd say buy the normal stuff and give the difference to the best organization you can find.

Comment author: is4junk 24 December 2014 03:54:01PM 0 points

When I think about it I end up with a bad Drake equation for both the 'win' and the 'outcome payoff'. In the Drake equation you get to start off with the number of planets in the universe.

When you win is also interesting. Being revived 1 year after death should be worth more than being revived 1m years after death.

Comment author: jkaufman 27 December 2014 11:11:53PM 1 point

Previous discussion of Drake-style equations for cryonics: http://lesswrong.com/lw/fz9
