timtyler comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM

Comment author: XiXiDu 30 December 2010 11:30:11AM 1 point

I simply can't see where the above beliefs might come from. I'm left assuming that you just don't mean the same thing by AI as I usually mean.

And I can't see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition? What should make them update in favor of yours? That you have thought about it for more than a decade now? I can spot serious flaws in every reply I have gotten so far in under a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought about. And if that kind of intelligence is only as likely as the other risks, then it doesn't matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability.

There are already many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, people other than me will scrutinize everything you ever wrote. And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, writes the following:

Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence -- are people associated with SIAI.

But I have never heard any remotely convincing arguments in favor of this odd, outlier view!!!

BTW the term "self-modifying" is often abused in the SIAI community. Nearly all learning involves some form of self-modification. Distinguishing learning from self-modification in a rigorous formal way is pretty tricky.

Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written, but they do not showcase any deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn't sufficient to prove that he has enough expertise on the topic of AI to make me believe him over any antipredictions that weaken the expected risks associated with AI. There is also insufficient evidence to conclude that Yudkowsky, or anyone within the SIAI, is smart enough to tackle the problem of friendliness mathematically.

If only you would at least let some experts take a look at your work and assess its effectiveness and general potential. But there is no peer review at all. Some prominent people have attended the Singularity Summit. Have you asked them why they do not contribute to the SIAI? Have you, for example, asked Douglas Hofstadter why he isn't doing everything he can to mitigate risks from AI? Sure, you got some people to donate a lot of money to the SIAI. But to my knowledge they are far from being experts, and they contribute to other organisations as well. Congratulations on that, but even cults get rich people to support them. I'll update on the donors once they say why they support you and their arguments are convincing, or once they turn out to be actual experts or people able to showcase real achievements.

My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.

Intelligence is powerful, intelligence doesn't imply friendliness, therefore intelligence is dangerous. Is that the line of reasoning on the basis of which I am supposed to neglect other risks? If you think so, then you are making it more complicated than necessary. You do not need intelligence to invent something to kill us if there is already enough dumb stuff around that is more likely to kill us. And I do not think it is reasonable to come up with a few weak arguments for how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises, or in favor of other risks. The problems are far too diverse; you can't combine them and proclaim that you are going to solve all of them by simply defining friendliness mathematically. I just don't see that right now, because it is too vague. You could just as well replace friendliness with magic as the solution to the many disjoint problems of intelligence.

Intelligence is also not the solution to all other problems we face. As I have argued several times, I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against gradual development? As I see it, we will have to painstakingly engineer intelligent machines. There won't be some meta-solution that outputs meta-science to subsequently solve all other problems.

Comment author: timtyler 30 December 2010 07:41:47PM 1 point

Intelligence is also not the solution to all other problems we face.

Not all of them - most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones - and so on. It probably won't fix the speed of light limit, though.

Comment author: JoshuaZ 30 December 2010 08:02:23PM 0 points

Not all of them - most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones - and so on. It probably won't fix the speed of light limit, though.

What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect, necessarily (or even with high probability), that ideologues will be less inclined to go to war if they are smarter.

Comment author: laakeus 31 December 2010 07:05:03AM 5 points

I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect, necessarily (or even with high probability), that ideologues will be less inclined to go to war if they are smarter.

Violence has been declining on (pretty much) every timescale: Steven Pinker: Myth of Violence. I think one could argue that this is because of the greater collective intelligence of the human race.

Comment author: jimrandomh 30 December 2010 08:05:57PM 4 points

I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect, necessarily (or even with high probability), that ideologues will be less inclined to go to war if they are smarter.

War won't be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.

Comment author: JoshuaZ 30 December 2010 08:07:03PM 0 points

Yes, that makes sense, but in context I don't think that's what was meant, since Tim is one of the people here who is more skeptical of that sort of result.

Comment author: timtyler 30 December 2010 08:12:48PM 0 points

Comment author: JoshuaZ 30 December 2010 09:59:24PM 1 point

Thanks for clarifying (here and in the other remark).

Comment author: shokwave 31 December 2010 08:02:29AM 3 points

How can you think any of these problems can be solved by intelligence when none of them have been solved?

War has already been solved to some extent by intelligence (negotiation and diplomacy have significantly decreased the incidence of war), hunger has been solved in large chunks of the world by intelligence, energy limits have been solved several times by intelligence, resource shortages ditto, intelligence has made a good first attempt at space travel (the moon is quite far away), and intelligence has made huge strides towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc).

Many wars are due to ideological priorities.

This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.
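
To make that framing concrete, here is a minimal toy sketch with made-up numbers (the demands, the "war threshold", and the scoring function are all illustrative assumptions, not anything specified above): allocate 10 units of a contested resource among three ideologies so that total satisfaction is maximized while no party falls below the threshold at which it would rather fight.

    # Toy constraint-satisfaction sketch: split TOTAL units of a contested
    # resource among three ideologies (all numbers are made up).
    from itertools import product

    DEMANDS = {"A": 6, "B": 5, "C": 4}  # what each ideology wants (assumed)
    TOTAL = 10                          # units actually available
    WAR_THRESHOLD = 0.5                 # assumed floor below which a party fights

    def satisfaction(share, demand):
        """Fraction of its demand an ideology receives, capped at 1."""
        return min(share / demand, 1.0)

    best = None
    for alloc in product(range(TOTAL + 1), repeat=len(DEMANDS)):
        if sum(alloc) != TOTAL:
            continue  # constraint: cannot hand out more than exists
        scores = [satisfaction(s, d) for s, d in zip(alloc, DEMANDS.values())]
        if min(scores) < WAR_THRESHOLD:
            continue  # constraint: leave no party angry enough to fight
        if best is None or sum(scores) > best[0]:
            best = (sum(scores), dict(zip(DEMANDS, alloc)))

    print(best)  # -> (2.1, {'A': 3, 'B': 3, 'C': 4}) with these numbers

A real negotiation is of course vastly messier, but the shape of the problem, an objective plus constraints, is the same.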

Comment author: timtyler 30 December 2010 08:09:37PM 2 points

I was about to reply - but jimrandomh said most of what I was going to say already - though he did so using that dreadful "singleton" terminology, spit.

I was also going to say that the internet should have got the 2010 Nobel Peace Prize.

Comment author: nshepperd 31 December 2010 08:40:58AM 1 point

I have my doubts about war, although I don't think most wars really come down to conflicts of terminal values. I'd hope not, anyway.

However, as for the rest: if they're solvable at all, intelligence ought to be able to solve them. Solvable means there exists a way to solve them, and intelligence is to a large degree simply "finding ways to get what you want".

Do you think energy limits really couldn't be solved by simply producing, through thought, working designs for safe and efficient fusion power plants?

ETA: ah, perhaps replace "intelligence" with "sufficient intelligence". We haven't solved all these problems already in part because we're not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we would obviously achieve it faster.

Comment author: TheOtherDave 30 December 2010 09:01:04PM 1 point

As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)

But I'm interested in the context you imply (of humans becoming more intelligent).

My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else.

I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)

So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that's true, everyone else gets to decide whether we'd rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).

So, yes, if we choose to incentivize wars, then we'll keep getting wars. But it follows from this scenario that war is the least important problem we face, so we should be OK with that.

Conversely, if it turns out that war really is an important problem to solve, then I'd expect fewer wars.