homunq comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: orthonormal 15 August 2010 03:21:51PM 13 points [-]

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Comment author: homunq 31 August 2010 04:11:20PM *  3 points [-]

I do "think that the pursuit of Friendly AI [and the avoidance of unfriendly AI] is by far the most important component of existential risk reduction". I also think that SIAI is not addressing the most important problem in that regard. I suspect there are a lot of people who would agree, for various reasons.

In my case, the logic is that I think:

1) That corporations, though not truly intelligent, are already superhuman and unFriendly.

2) That coordinated action (that is, strategic politics, in well-chosen solidarity with others with whom I have important differences) has the potential to reduce their power and/or increase their Friendliness.

3) That this would, in turn, reduce the risk of their developing a first-mover unFriendly AI ...

3a) ... while also increasing the status of your ideas in a coalition which may be able to develop a Friendly one.

I recognize that points 2 and 3a are partially tribal and/or hope-seeking beliefs of mine, but think 1 and 3 are well-founded rationally.

Anyway, this is only one possible reason for parting ways with SIAI and FHI, without in any sense discounting the risks they were created to confront.

Comment author: orthonormal 31 August 2010 09:07:57PM 1 point [-]

From your analysis, it seems that FHI would be very well aligned with your goals: it's a high-profile, academic rather than corporate, entity which can publicize existential risks (and takes corporate creation of such risks seriously, IIRC).

Would this not be desirable, or is there any organization within the broader anticorporate movement you speak of that would even think to do the same with comparable competency?

Comment author: homunq 01 September 2010 02:31:41PM *  0 points [-]

I believe that explicitly political movements, not academic ones, are the only ones which are other-optimizing enough to fight the mal-optimization of corporations. And I think that at our current level of corporate power versus AI-relevant technological understanding, my energy is best spent fighting the former rather than advancing the latter (and I majored in cognitive science and work as a programmer, so I hold the same conclusion for most people).

I realize that these beliefs are partly tribal (something which allows me to get along with my wife and friends) and partly hope-seeking (something which allows me to get up in the morning). I think that these are valid reasons to give a belief the benefit of the doubt. I would not, however, use these excuses to justify a belief with no rational basis, or to avoid considering an argument for the lack of rational basis. Anyway, even if one tried to rid oneself of tribal and hope-seeking biases, beyond the caveats in the previous sentence, I don't think it would help one be appreciably more rational.

Comment author: pjeby 31 August 2010 05:27:30PM 1 point [-]

In my case, the reason is that I think that corporations, though not truly intelligent,

They get to use borrowed intelligence from their human symbiotes, though. ;-) (Or would they be symbionts? Hm...)

Comment author: timtyler 31 August 2010 09:07:56PM 0 points [-]

Re: coordinated action to tame corporations

One thing we need is corporation reputation systems. We have product reviews, and so forth - but the whole area is poorly organised.

Comment author: timtyler 31 August 2010 09:05:11PM *  0 points [-]

Why are corporations "not truly intelligent"? They contain humans, surely. Would you say that humans are "not truly intelligent" either?

Comment author: homunq 01 September 2010 02:21:16PM 1 point [-]

They contain humans. However, while corporations themselves are psychopathic, most are not controlled and staffed by psychopaths. This gives corporations (thank Darwin) cognitive biases which systematically reduce their intelligence when pursuing obviously unFriendly goals.

In the end, it depends on your definition of intelligence. The intelligence of a corporation in choosing strategies to fit its goals is sometimes at the level of natural selection (weak), sometimes at the level of human intelligence (true), and sometimes at the level of effective crowd intelligence (mildly superhuman). I'd guess that on the whole, they average somewhat below human intelligence (but with much greater power) when pursuing explicitly unFriendly subgoals, and somewhat above human intelligence when pursuing subgoals that happen to be neutral or Friendly. But that does not necessarily mean they are on balance Friendly, because their root goals are not.

Comment author: timtyler 01 September 2010 03:36:45PM -2 points [-]

The basic idea with corporations is that they are kept in check by an even more powerful organisation: the government. If any corporation gets too big, the Monopolies and Mergers Commission intervenes and splits it up. As far as I know, no corporation has ever overthrown its "parent" government.

Comment author: wnoise 01 September 2010 03:43:07PM 0 points [-]

Other governments however...