Nornagest comments on Ben Goertzel on Charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I feel that people here are way too emotional. If you tell them, they'll link you up to a sequence post on why being emotional can be a good thing. I feel that people here are not skeptical enough. If you tell them, they'll link you up to a sequence post on why being skeptical can be a bad thing. I feel that people here take some possibilities too seriously. If you tell them, they'll link you up... and so on. I might as well be talking to Yudkowsky alone. And if someone else, some expert or otherwise smart person, disagrees, he is either accused of not having read the sequences or dismissed as below their standards.
The whole 'too dangerous' argument is perfect for everything from not having to prove any coding or engineering skills, to dismissing openness and any kind of transparency, up to things I am not even allowed to talk about here.
Here we get to the problem. I have no good arguments against any of what I have hinted at above, except that I have a strong gut feeling that something is wrong. So I'm trying to poke holes in it; I'm trying to crumble the facade. Why? Well, they are causing me distress by telling me all those things about how possible galactic civilizations depend on my money and yours. They are creating ethical dilemmas that make me feel committed to doing one thing even though I'd really rather do something else. But before I do that, I'll first have to see if it holds water.
Yup, I haven't read most of the sequences, but I did a lot of spot tests and read what people linked me to. I have yet to come across something novel. And I feel all that doesn't really matter anyway. The basic argument is that high risks can outweigh low probabilities, correct? That's basically the whole fortification for why I am supposed to bother, everything else being a side note. And that is also where I feel (yes, gut feeling, no excuses here) something is wrong. I can't judge it yet; maybe in 10 years, when I've learned enough math, especially probability. But currently it just sounds wrong. If I thought there was a low probability that running the LHC was going to open an invasion door for a fleet of aliens interested in torturing mammals, then according to the aforementioned line of reasoning I could justify murdering a bunch of LHC scientists to prevent them from running the LHC. Everything else would be scope insensitivity! Besides the obvious problems with that, I have a strong feeling that that line of reasoning is somehow bogus. I also don't know jack shit about high-energy physics. And I feel Yudkowsky doesn't know jack shit about intelligence (not that anyone else knows more about it). In other words, I feel we need to do more experiments first to understand what 'intelligence' is before asking people for their money to save the universe from paperclip maximizers.
See, I'm just someone who got dragged into something he thinks is bogus and doesn't want to be a part of, but who nonetheless feels that he can't ignore it either. So I'm just hoping it goes away if I try hard enough. How wrong and biased, huh? But I'm neither able to ignore it nor to get myself to do something about it.
Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
And I'd hazard a guess that the SIAI representatives here know that. A lot of people would benefit from knowing how to think and act more effectively, full stop; but a site about improving reasoning skills that's also an appendage to the SIAI party line limits its own effectiveness, and therefore its usefulness as a way of sharpening reasoning about AI (and, more cynically, as a source of smart and rational recruits), by being exclusionary. We're doing a fair-to-middling job in that respect; we could definitely be doing a better one, if the above is a fair description of the intended topic according to the people who actually call the shots around here. That's fine, and it does deserve further discussion.
But the topic of rationality isn't at all well served by flogging criticisms of the SIAI viewpoint that have nothing to do with rationality, especially when they're brought up out of the context of an existing SIAI discussion. Doing so might diminish perceived or actual groupthink re: galactic civilizations and your money, but it still lowers the signal-to-noise ratio, for the simple reason that the appealing qualities of this site are utterly indifferent to the pros and cons of dedicating your money to the Friendly AI cause except insofar as it serves as a case study in rational charity. Granted, there are signaling effects that might counter or overwhelm its usefulness as a case study, but the impression I get from talking to outsiders is that those are far from the most obvious or destructive signaling problems that the community exhibits.
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
Disagree on the "fewer" part. I'm not sure about SIAI, but I think at least my personal interests would not be better served by having fewer transhumanist posts. It might be a good idea to move such posts into a subforum, though. (I think supporting such subforums was discussed in the past, but I don't remember whether it hasn't been done due to lack of resources, or whether there's some downside to the idea.)
Fair enough. It ultimately comes down to whether or not tickling transhumanists' brains wins us more than we'd gain from appearing that much more approachable to non-transhumanist rationalists, and there are enough unquantified values in that equation to leave room for disagreement. In a world where a magazine as poppy and mainstream as TIME likes to publish articles on the Singularity, I could easily be wrong.
I stand by my statements when it comes to SIAI-specific values, though.
Upvoted for complete agreement, particularly:
Please do not downvote comments like the parent.
One of these things is not like the others. One of these things is not about the topic which historically could not be named. One of them is just a building block that can be sometimes useful when discussing reasoning that involves decision making.
My objection to that one is slightly different, yes. But I think it does derive from the same considerations of vast utility/disutility that drive the historically forbidden topic, and is subject to some of the same pitfalls (as well as some others less relevant here).
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
Hmm...
I can't think of any other possible examples off the top of my head. Were these the ones you were thinking of?
Also Pascal's mugging (though I suppose how closely related that is to the HFT depends on where you place the emphasis) and a few rarer variations, but you've hit the main ones.
This should be a top-level post, if only to maximize the proportion of LessWrongers who will read it.