Nornagest comments on Ben Goertzel on Charity - Less Wrong

Post author: XiXiDu 09 March 2011 04:37PM




Comment author: XiXiDu 10 March 2011 08:44:33PM *  -2 points [-]

I feel that people here are way too emotional. If you tell them, they'll link you up to a sequence post on why being emotional can be a good thing. I feel that people here are not skeptical enough. If you tell them, they'll link you up to a sequence post on why being skeptical can be a bad thing. I feel that people here take some possibilities too seriously. If you tell them, they'll link you up... and so on. I might as well be talking to Yudkowsky alone. And if someone else, some expert or otherwise smart person, disagrees, then he is either accused of not having read the sequences or dismissed as being below their standards.

Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible...

The whole 'too dangerous' argument is perfect for everything: from not having to prove any coding or engineering skills, to dismissing openness and any kind of transparency, up to things I am not even allowed to talk about here.

If he's wrong, then he'll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.

Here we get to the problem. I have no good arguments against any of what I have hinted at above, except a strong gut feeling that something is wrong. So I'm trying to poke holes in it, trying to crumble the facade. Why? Well, they are causing me distress by telling me all those things about how possible galactic civilizations depend on my money and yours. They are creating ethical dilemmas that make me feel committed to do something, even though I'd really rather do something else. But before I do anything, I first have to see whether it holds water.

But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling...

Yup, I haven't read most of the sequences, but I did a lot of spot tests and read what people linked me up to. I have yet to come across something novel. And I feel all that doesn't really matter anyway. The basic argument is that high risks can outweigh low probabilities, correct? That's basically the whole justification for why I am supposed to bother, everything else just being a side note. And that is also where I feel (yes, gut feeling, no excuses here) something is wrong. I can't judge it yet; maybe in 10 years, when I've learned enough math, especially probability. But currently it just sounds wrong. If I thought that there was a low probability that running the LHC was going to open an invasion door for a fleet of aliens interested in torturing mammals, then according to the aforementioned line of reasoning I could justify murdering a bunch of LHC scientists to prevent them from running the LHC. Everything else would be scope insensitivity! Besides the obvious problems with that, I have a strong feeling that that line of reasoning is somehow bogus. I also don't know jack shit about high-energy physics. And I feel Yudkowsky doesn't know jack shit about intelligence (not that anyone else knows more about it). In other words, I feel we need to do more experiments first to understand what 'intelligence' is before we ask people for their money to save the universe from paperclip maximizers.

See, I'm just someone who got dragged into something he thinks is bogus and doesn't want to be a part of, but who nonetheless feels that he can't ignore it either. So I'm just hoping it goes away if I try hard enough. How wrong and biased, huh? But I'm neither able to ignore it nor able to get myself to do something about it.

Comment author: Nornagest 10 March 2011 09:34:18PM *  2 points [-]

Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.

And I'd hazard a guess that the SIAI representatives here know that. A lot of people benefit from knowing how to think and act more effectively, full stop; but a site about improving reasoning skills that's also an appendage to the SIAI party line limits its own effectiveness, and therefore its usefulness as a way of sharpening reasoning about AI (and, more cynically, as a source of smart and rational recruits), by being exclusionary. We're doing a fair-to-middling job in that respect; we could definitely be doing a better one, assuming the above is a fair description of the intended topic according to the people who actually call the shots around here. That's fine, and it does deserve further discussion.

But the topic of rationality isn't at all well served by flogging criticisms of the SIAI viewpoint that have nothing to do with rationality, especially when they're brought up out of the context of an existing SIAI discussion. Doing so might diminish perceived or actual groupthink re: galactic civilizations and your money, but it still lowers the signal-to-noise ratio, for the simple reason that the appealing qualities of this site are utterly indifferent to the pros and cons of dedicating your money to the Friendly AI cause except insofar as it serves as a case study in rational charity. Granted, there are signaling effects that might counter or overwhelm its usefulness as a case study, but the impression I get from talking to outsiders is that those are far from the most obvious or destructive signaling problems that the community exhibits.

Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.

Comment author: Wei_Dai 10 March 2011 11:07:39PM *  4 points [-]

Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.

Disagree on the "fewer" part. I'm not sure about SIAI, but I think at least my personal interests would not be better served by having fewer transhumanist posts. It might be a good idea to move such posts into a subforum, though. (I think supporting such subforums was discussed in the past, but I don't remember whether it hasn't been done due to lack of resources, or whether there's some downside to the idea.)

Comment author: Nornagest 10 March 2011 11:14:43PM *  1 point [-]

Fair enough. It ultimately comes down to whether or not tickling transhumanists' brains wins us more than we'd gain from appearing that much more approachable to non-transhumanist rationalists, and there are enough unquantified values in that equation to leave room for disagreement. In a world where a magazine as poppy and mainstream as TIME likes to publish articles on the Singularity, I could easily be wrong.

I stand by my statements when it comes to SIAI-specific values, though.

Comment author: komponisto 10 March 2011 09:56:44PM 0 points [-]

Upvoted for complete agreement, particularly:

Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.

(...)

Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.

Comment author: wedrifid 11 March 2011 12:14:17AM 1 point [-]

I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause

One of these things is not like the others. One of these things is not about the topic which historically could not be named. One of them is just a building block that can be sometimes useful when discussing reasoning that involves decision making.

Comment author: Nornagest 11 March 2011 12:28:35AM *  1 point [-]

My objection to that one is slightly different, yes. But I think it does derive from the same considerations of vast utility/disutility that drive the historically forbidden topic, and is subject to some of the same pitfalls (as well as some others less relevant here).

There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.

Comment author: wedrifid 11 March 2011 12:51:50AM 0 points [-]

There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.

Hmm...

  • Roko's Basilisk
  • Boxed AI trying to extort you
  • The 'People Are Jerks' failure mode of CEV

I can't think of any other possible examples off the top of my head. Were these the ones you were thinking of?

Comment author: Nornagest 11 March 2011 01:07:07AM *  0 points [-]

Also Pascal's mugging (though I suppose how closely related that is to the HFT depends on where you place the emphasis) and a few rarer variations, but you've hit the main ones.

Comment author: Pavitra 10 March 2011 10:42:10PM 0 points [-]

This should be a top-level post, if only to maximize the proportion of LessWrongers that will read it.