
Comment author: gwern 14 October 2012 08:38:22PM 7 points [-]

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.

I don't know how you could read LW and not realize that we certainly do accept precautionary principles ("running on corrupted hardware" has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal's mugging in the last week, neither of which says 'you should just bite the bullet'!), and that libertarianism is heavily overrepresented here compared to the general population.

One of the big black marks on SIAI/LW is the seldom-discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.

No, one of the 'big black marks' on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There's nothing particular to SIAI/LW there.

Comment author: jacoblyles 15 October 2012 05:02:11PM *  2 points [-]

It's true that lots of utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of those involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.

The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that this work will have infinite negative utility if any of them succeeds before Eliezer invents FAI theory and convinces them that he's not a crackpot. The fate of not just human civilization, but all of galactic civilization, is at stake.

So, if any of them looks likely to succeed, say by scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.

I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with.

And yes, there are occasional precautionary principles on LW. But they are contradicted and overwhelmed by "shut up and calculate", which says to trust your arithmetic utilitarian calculus and not your ugh fields.

Comment author: gwern 14 October 2012 07:54:41PM 0 points [-]

LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.

Is there any ideology or sect of which that could not be said? If we shy away from Western examples, recall the bloody Taoist and Buddhist rebellions and wars in East Asian history, and their endorsements of wars of conquest.

Comment author: jacoblyles 14 October 2012 08:07:30PM *  0 points [-]

Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.

One of the big black marks on SIAI/LW is the seldom-discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.

Comment author: [deleted] 16 April 2012 11:50:26AM *  3 points [-]

Nevermind that there were no actual plans for destroying fabs, and that the whole "terrorist plot" seems to be a collective hallucination.

Nevermind that the author in question has exhaustively argued that terrorism is ineffective.

In response to comment by [deleted] on Our Phyg Is Not Exclusive Enough
Comment author: jacoblyles 14 October 2012 07:03:27PM *  4 points [-]

Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see the arguments for why SIAI is the optimal charity). That people conclude that acts most people would consider immoral are justified by this reasoning, well, I don't know where they got that from. Certainly not these pages.

Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!").

LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.

Comment author: shminux 18 March 2012 04:15:43AM 4 points [-]

I got a distinct cultish vibe when I joined, but only from the far-out parts of the site, like UFAI, not from the "modern rationality" discussions. When I raised the issue on #lesswrong, the reaction from most regulars was not very reassuring: somewhat negative and more emotional than rational. The same happened when I commented here. That's why I am looking forward to the separate rationality site, without EY's added idiosyncrasies that are untestable and, to me, useless, such as the singularity, UFAI, and MWI.

Comment author: jacoblyles 18 August 2012 07:56:19PM 1 point [-]

We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.

Comment author: Randaly 18 August 2012 05:22:49AM *  -2 points [-]

Banning topics that show unpalatable results of Yudkowsky's philosophy was a bad start.

If you're talking about the basilisk, it wasn't banned because it reflected poorly on Yudkowsky's philosophy; it was banned because it was potentially quite dangerous. Its author was a former visiting fellow at the SI, generally agreed with Yudkowsky, and agreed (IIRC) with the decision to censor the topic.

So is the near-perfect certainty that the institute members project on their entirely a priori reasoning regarding the nature of future AI.

The claim that institute members have near-perfect certainty is false. You can easily see this in the linked section of the FAQ; at most, Aaron seemed to think that the SI folks simply weren't addressing some criticisms that they semi-agreed with. For that matter, see the most recent post in Main on LW.

If the FAQ isn't the place for self-skepticism, perhaps you could point us to other places where it exists?

He did; see my linked comment.

I am sad I missed Aaron's original article before he was pressured into changing it. I probably would have found much to agree with.

It was almost identical; it replaced the paragraph about the Heritage Institute with a link to the SI's FAQ, added a few brief and non-substantive criticisms (that it didn't address many potential criticisms, that it didn't address their full impact, and that it was too short), and then made a terrible analogy between the SI and Uri Geller without any substantive evidence.

Comment author: jacoblyles 18 August 2012 05:38:24AM *  1 point [-]

If you're talking about the basilisk, it wasn't banned because it reflected poorly on Yudkowsky's philosophy; it was banned because it was potentially quite dangerous.

Uh, I read it. Is something supposed to happen to me? Or is the danger that a large portion of SIAI followers might fall away, and since the activities of SIAI have infinite positive expected utility, not censoring LW therefore has high negative utility?

The problem with assigning extremely large utilities to anything is that it lets you morally bootstrap all sorts of evils related to achieving that thing.

Comment author: Mitchell_Porter 18 August 2012 05:24:37AM 1 point [-]

Let's try an easier question first. If someone is about to create Skynet, should you stop them?

Comment author: jacoblyles 18 August 2012 05:32:54AM *  1 point [-]

The principles espoused by the majority on this site can be used to justify some very, very bad actions.

1) The probability of someone inventing AI is high

2) The probability of someone inventing unfriendly AI if they are not associated with SIAI is high

3) The utility of inventing unfriendly AI is negative MAXINT

4) "Shut up and calculate" - trust the math and not your gut if your utility calculations tell you to do something that feels awful.

It's not hard to figure out that Less Wrong's moral code supports some very unsavory actions.
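
To make the worry concrete, here is a minimal sketch (in Python) of the expected-utility arithmetic being criticized. The probabilities and utilities below are made-up illustrative assumptions, not figures from anyone's actual argument; the point is only that once one outcome is assigned an effectively unbounded negative utility, it dominates the comparison no matter how small its probability or how costly the "preventive" act.

    # Hypothetical numbers for illustration only.
    p_ufai_without_intervention = 0.01   # assumed probability of uFAI if no one intervenes
    u_ufai = -1e100                      # stand-in for "negative MAXINT"
    u_preventive_action = -1e6           # utility cost of an act most people would call monstrous

    eu_do_nothing = p_ufai_without_intervention * u_ufai   # -1e98
    eu_intervene = u_preventive_action                     # assume intervention works with certainty

    # The comparison is driven entirely by the astronomically large constant,
    # so "shut up and calculate" endorses the intervention for almost any inputs.
    print(eu_do_nothing < eu_intervene)  # True
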

Comment author: lukeprog 06 August 2012 04:22:30AM *  14 points [-]

I hope SI will agree that the FAQ answer you linked is inadequate

As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.

Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like "put off the singularity until we can make it go well") is nearly the opposite of its original mission ("make the singularity happen as quickly as possible"). The story of that transition is here.

Comment author: jacoblyles 18 August 2012 04:46:14AM *  0 points [-]

SI has enough of a culture of self-skepticism that its current mission... is nearly the opposite of its original mission

It appears that SIAI folks are willing to adapt to agree with Yudkowsky's current beliefs. I'm not sure that shows the institution as a whole has self-skepticism. Banning topics that show unpalatable results of Yudkowsky's philosophy was a bad start. So is the near-perfect certainty that the institute members project on their entirely a priori reasoning regarding the nature of future AI.

If the FAQ isn't the place for self-skepticism, perhaps you could point us to other places where it exists?

I am sad I missed Aaron's original article before he was pressured into changing it. I probably would have found much to agree with.

Comment author: DaFranker 17 August 2012 08:11:13PM 1 point [-]

...you just jinxed it! Now Congress is going to pass a new bill forbidding online aids from counting towards compulsory education requirements for home schooling, and otherwise hamper the idea by whatever means necessary.

After all, what better propaganda system is there than a bunch of gullible "teachers" who regurgitate everything you tell them to and whom children look up to as absolute authorities?

Comment author: jacoblyles 18 August 2012 01:00:22AM 6 points [-]

Fortunately, the United States has a strong evangelical Christian lobby that fights for and protects home schooling freedom.

Comment author: abramdemski 17 August 2012 06:16:42AM 4 points [-]

This Company starts in the United States and ties into existing home school regulations with a self-driven web learning program that requires minimum parental involvement and results in a high school diploma.

The nice thing about this is that it works within an existing market, while leveraging the successful tactics discovered through hard work by Coursera & the like to bring advances to the domain.

Of course, techniques designed for university courses may not precisely transfer.

I'm skeptical about 'leveraging' videos from Khan Academy for a for-profit education system. Makes it sound half-baked.

This idea may fit with the general spaced-repetition enthusiasm I am seeing in other proposals.

It cloaks itself as merely a tool to aid homeschool parents, similar to existing mail-order tutoring materials, hiding its radical mission to end high school as we know it.

...And you just blew your cover. :)

Comment author: jacoblyles 17 August 2012 06:42:58PM 2 points [-]

...And you just blew your cover. :)

Nobody of any importance reads Less Wrong :)

Comment author: David_Gerard 17 August 2012 07:59:44AM 1 point [-]
Comment author: jacoblyles 17 August 2012 06:40:37PM *  0 points [-]

I'm pretty sure they are sourced from census data. I check the footnotes on websites like that.
