
Comment author: siIver 15 June 2017 07:30:48PM 1 point

This looks solid.

Can you go into a bit of detail on the level / spectrum of difficulty of the courses you're aiming for, and the background knowledge that'll be expected? I suspect you don't want to discourage people, but realistically speaking, the bar can hardly be set low enough to allow everyone who's interested to participate meaningfully.

Comment author: AlexMennen 09 June 2017 05:39:20AM 0 points

Sometimes when explicit reasoning and intuition conflict, intuition turns out to be right, and there is a flaw in the reasoning. There's nothing wrong with using intuition to guide yourself in questioning a conclusion you reached through explicit reasoning. That said, DragonGod did an exceptionally terrible job of this.

Comment author: siIver 10 June 2017 03:01:42PM * 1 point

Yeah, you're of course right. In the back of my mind I realized that the point I was making was flawed even as I was writing it. A much weaker version of the same point would have been correct: "you should at least question whether your intuition is wrong." In this case it's just very obvious to me that there is nothing to be fixed about utilitarianism.

Anyway, yeah, it wasn't a good reply.

Comment author: siIver 08 June 2017 04:01:56PM * 3 points

This is the ultimate example of... there should be a name for this.

You figure out that something is true, like utilitarianism. Then you find a result that seems counterintuitive. Rather than going "huh, I guess my intuition was wrong, interesting," you go "LET ME FIX THAT" and change the system so that it does what you want...

man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.

The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. The system then tells us what is correct in each situation, not vice versa.

The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.

Comment author: siIver 26 May 2017 02:23:04PM 0 points

This seems like something we should talk about more.

Although, afaik it shouldn't be a choice between motivation selection and capability control measures – the former is obviously the more important part, but you can always "box" the AI in addition (insofar as that's compatible with what you want it to do).

Comment author: whpearson 22 May 2017 09:49:49PM 0 points

(Un)luckily we don't have many examples of potentially world-destroying arms races, so we might have to adopt the inside view: look at how much mutual trust and co-operation there currently is on various fronts. That's beyond my current knowledge.

On the research aspect: I think research can be done without the public having a good understanding of the problems, e.g. CERN or CRISPR. I can also think of other bad outcomes of the public having an understanding of AI risk. It might be used as another stick to take away freedoms; see the wars on terrorism and drugs for examples of how the public's fears get used.

Convincing the general public of AI risk seems like shouting fire in a crowded movie theatre: it is bound to have a large and chaotic impact on society.

This is the best steelman of this argument that I can think of at the moment. I'm not sure I'm convinced, but I do think we should put more brainpower into this question.

Comment author: siIver 23 May 2017 04:32:14PM 0 points

That sounds dangerously like justifying inaction.

Literally speaking, I don't disagree. It's possible that spreading awareness has a net negative outcome. It's just not likely. I don't discourage looking into the question, and if the facts start pointing the other way I can be convinced. But while we're still this uncertain, we should act on what seems more likely right now.

Comment author: whpearson 21 May 2017 09:59:30PM 1 point

Why do you think this time is different to the nuclear arms race? The Federation of Atomic Scientists didn't prevent it. It only slackened because Russia ran out of steam.

Comment author: siIver 22 May 2017 06:47:38PM 0 points

I guess it's a legit argument, but it doesn't have the research aspect and it's a sample size of one.

Comment author: siIver 21 May 2017 06:32:41PM 1 point

This just seems like an incredibly weak argument to me. A) It seems to me that prior research will be influenced much more than the probability of an arms race, because the former is more directly linked to public perception; B) we're mostly trying to spread awareness of the risk, not the capability; and C) how do we even know that more awareness at the top political levels would lead to a higher probability of an arms race, rather than a higher probability of international cooperation?

I feel like raising awareness has a very clear and fairly safe upside, while the downside is highly uncertain.

Comment author: whpearson 17 May 2017 12:35:30PM 4 points

To play devil's advocate: is increasing everyone's appreciation of the risk of AI a good idea?

Believing that AI is risky implies believing that AI is powerful. This potential impact of AI is currently underappreciated: we don't have large governmental teams working on it and hoovering up all the talent.

Spreading the news of the dangerousness of AI might have the unintended consequence of starting an arms race.

This seems like a crucial consideration.

Comment author: siIver 17 May 2017 08:50:11PM 0 points

Pretty sure it is. You have two factors: increasing awareness of AI risk, and increasing awareness of AI in general. The first is good; the second may be bad, but since the set of people who care about AI in general is already so much larger, the second effect is also much less important.

Comment author: siIver 16 May 2017 08:08:48PM * 1 point

I whole-heartedly agree with you, but I don't have anything better than "tell everyone you know about it." On that topic, what do you think is the best link to send to people? I use this, but it's not ideal.

Comment author: siIver 13 May 2017 04:50:47PM * 0 points

Essentially:

Q: Evolution is a dumb algorithm, yet it produced halfway functional minds. How can it be that the problem isn't easy for humans, who are much smarter than evolution?

A: Evolution's output is not just one functional mind. Evolution produced billions of different minds, an extreme minority of them functional. If we had a billion years of time and a trillion chances to get it right, the problem would be easy. Since we only have around 30 years and exactly one chance, the problem is hard.
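To make the asymmetry concrete, here is a toy calculation – a minimal sketch in Python, where the per-trial success rate and the trial counts are made-up numbers chosen only to illustrate the shape of the argument:

```python
import math

def success_prob(p, n):
    # Probability of at least one success in n independent trials,
    # each succeeding with probability p: 1 - (1 - p)^n.
    # log1p keeps the computation stable when p is tiny.
    return 1 - math.exp(n * math.log1p(-p))

p = 1e-12  # hypothetical chance that one random "mind design" is functional

# A failure-tolerant search with a trillion tries succeeds more often than not:
print(success_prob(p, 1e12))  # ~0.63

# A single one-shot attempt with the same per-trial odds almost surely fails:
print(success_prob(p, 1))     # ~1e-12
```

The same per-trial odds give near-certain failure in one shot and better-than-even odds over a trillion tries; that gap is the difference between evolution's search and our single attempt.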
