XiXiDu comments on Best career models for doing research? - Less Wrong

Post author: Kaj_Sotala, 07 December 2010 04:25PM

Comment author: XiXiDu 09 December 2010 08:12:27PM  8 points

I removed that sentence. I meant that I didn't believe that the SIAI plans to harm anyone deliberately, although I believe that harm could be a side effect, and that they would rather harm a few beings than allow some paperclip maximizer to take over.

You can call me a hypocrite because I'm in favor of animal experiments that support my own survival. But I'm not sure I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks, but I don't like such decisions. I'd rather have the universe end now, or have everyone turned into paperclips, than have to torture beings (especially if I am the being).

I feel uncomfortable that I don't know what will happen, because a policy of censorship is being favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe in the cause will do anything to get what they want. That Yudkowsky claims they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that, and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

Comment author: Vladimir_Nesov 09 December 2010 08:13:53PM  2 points

I removed that sentence.

I apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).

Comment author: timtyler 10 December 2010 07:10:43PM  -1 points

That Yudkowsky claims they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that, and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

In "Turing's Cathedral", George Dyson writes:

For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.

I think many people would like to be in that group - if they can find a way to arrange it.

Comment author: shokwave 10 December 2010 08:02:30PM  1 point

Quote from George Dyson

Unless the AI was given that outcome (cheerful, contented people, etc.) as a terminal goal, or that circle of people was the best possible route to some other terminal goal - both of which are staggeringly unlikely - Dyson suspects wrongly.

If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are not currently being built with evolutionary methods in multi-agent environments, so no 'social cooperation' mechanism of that kind will arise.

Comment author: timtyler 10 December 2010 08:29:22PM  -2 points

Machine intelligence programmers seem likely to construct their machines so as to help satisfy their own preferences - which in turn is likely to make them satisfied. I am not sure what you are talking about - surely this kind of thing is already happening all the time, with Sergey Brin, James Harris Simons, and so on.

Comment author: katydee 10 December 2010 08:31:54PM  0 points

That doesn't really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of "cheerful, contented, intellectually and physically well-nourished people."

Comment author: sketerpot 10 December 2010 07:47:43PM  0 points

This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we do speculate, we should state our assumptions. People often think of an AI as being essentially human-like in its values, which is problematic.

Comment author: timtyler 10 December 2010 08:01:33PM  -1 points

It's a fair description of today's more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same - but with even greater wealth and power inequalities. However, I would certainly also counsel caution when extrapolating this out more than 20 years or so.

Comment author: timtyler 10 December 2010 07:15:50PM  0 points

That Yudkowsky claims they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that, and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

Better yet, you could use a kind of doublethink - and then even actually mean it. Here is W. D. Hamilton on that topic:

A world where everyone else has been persuaded to be altruistic is a good one to live in from the point of view of pursuing our own selfish ends. This hypocrisy is even more convincing if we don't admit it even in our thoughts - if only on our death beds, so to speak, we change our wills back to favour the carriers of our own genes.

  • "Discriminating Nepotism", as reprinted in Narrow Roads of Gene Land, Volume 2: Evolution of Sex, p. 356.