Rain comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM


Comment author: XiXiDu, 16 March 2012 08:17:56PM, 0 points

I assumed unfriendly AI was one more crazy speculative idea about the future, around the level of "We'll discover psionics and merge into a single cosmic consciousness" and not really worthy of any more consideration.

I do not believe that it is that speculative and I am really happy that there are people like Eliezer Yudkowsky who think about it.

In most of my submissions I try to show that there are many ways in which superhuman AI could fail to be a risk.

Why do I do that? The reasons come down to my main points of disagreement with Eliezer Yudkowsky and others who believe that "this is crunch time": 1) I believe they are overconfident when it comes to risks from AI, and that the evidence simply does not justify dramatizing the case the way they do, and 2) I believe they are overconfident when it comes to their methods of reasoning.

I would never have criticized them if they had said, 1) "AI might pose a risk. We should think about it and evaluate the risk carefully." and 2) "Here are some logical implications of AI being a risk. We don't know whether AI is a risk, so those implications are secondary and should be discounted accordingly."

But that is not what is happening. They portray friendly AI as a moral imperative and use the full weight of all logical implications of risks from AI to blow up its expected utility.

And that's where my saying that I "found no flaws but feel that there are flaws" comes into play.

I understand that if P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X). The problem is that, as muflax put it, I don't see how you can believe in the implied invisible and remain even remotely sane. It does not work out. Even though on an intellectual level I completely agree with it, my intuition is yelling that something is very wrong here. It is deafening. I can't ignore it. Call it irrational or just sad, but I can't help it.
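To spell that step out (my own gloss, using only the product rule of probability): P(X∧Y) = P(Y|X) · P(X), so if P(Y|X) ≈ 1, then P(X∧Y) ≈ 1 · P(X) = P(X).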

I think you believe that superintelligent AI may not be possible, that it's unlikely to "go foom", and that in general it's not a great use of our time to worry about it.

It is fascinating. If I could work on it directly, I would. But giving away my money? Here we get to point #1, mentioned above.

Is there enough evidence that my money would make a difference? That question runs deep. It is not just about the likelihood of a negative Singularity, but also about the expected utility of contributing any amount of money to friendly AI research. I am simply unable to calculate that. I don't even know if I should get an MRI to check for unruptured brain aneurysms.

Another problem is that I am not really altruistic. I'd love to see everybody happy, but that's it. Then again, I don't really care about myself that much either. I only care about whether I might suffer, not about being dead. That's what makes the cryonics question pretty easy for me: I just don't care enough.

It could have been an indirect effect of realizing that the person who wrote these was very smart and he believed in it.

This is one of the things I don't understand. I don't think Eliezer is that smart. But even if he were, I don't think that increases the probability of him being right about some extraordinary ideas very much, especially since I have chatted with other people who are equally smart and who told me that he is wrong.

There are many incredibly smart people who hold really absurd ideas.

The biggest problem is that he hasn't achieved much. All he did was put together some of the work of other people, especially in the field of rationality and heuristics and biases. And he wrote a popular fanfic. That's it.

Yeah, he got some rich people to give him money. But the same people also support other crazy ideas with the same amount of money. That's little evidence.

It could have been that they taught me enough rationality to realize I might be wrong about this and should consider changing my mind.

Sure, I am very likely wrong. But that argument cuts both ways.

You said you were reading the debate with Robin, and that seems like a good starting point.

I will try. Right now I am very put off by Eliezer's style of writing. I have a hard time understanding what he is saying, while Robin is very clear and I agree with just about everything he says.

But I will try to continue and research everything I don't understand.

...which shouldn't discourage you from reading the Sequences. They're really good. Really.

In what respect? Those posts that I have read were quite interesting. But then, I even enjoy reading a calculus book right now, and just as I expect never to actually benefit from learning calculus, I don't think it is instrumentally useful to read the Sequences. It is not as if I am raving mad. I have enough rationality to live a good life without the Sequences.

If you mean that they are good at convincing you of risks from AI, then I also ask you: how sure are you that they are not only convincing but actually factually right? Do you believe that you have the expertise necessary to discern a good argument about artificial intelligence from one that is not even wrong?

It's a really good use of your time (debating with me isn't;

Just one last question, if you allow. What are you doing against risks from AI? Do you pursue a career where you can earn a lot of money to contribute to SIAI?

Comment author: Rain, 18 March 2012 02:01:46AM, 5 points

Yeah, he got some rich people to give him money.

I'm not rich. My gross annual salary is lower than Eliezer Yudkowsky's or Nick Bostrom's. (mentioned since you keep using me as your example commenter for donations)