
Comment author: Lukas_Gloor 10 April 2017 03:38:23PM *  1 point [-]

The survival instinct part, very probably, but the "constant misery" part doesn't look likely.

Agreed; I meant to use the analogy to argue for "Natural selection made sure that even beings in constant misery won't necessarily exhibit suicidal behavior." (I do hold the view that animals in nature suffer a lot more than they are happy, but that doesn't follow from anything I wrote in the above post.)

Are we talking about humans now? I thought the OP considered humans to be more or less fine, it's the animals that were the problem.

Right, but I thought your argument about sentient beings not committing suicide referred primarily to humans. At least with regard to humans, it is more challenging to explain why the appeal to low suicide rates may not show much. Animals not killing themselves could simply be due to their lacking the relevant mental concepts.

I have no idea what this means.

It's a metaphor. Views on population ethics reflect what we want the "playlist" of all the universe's experience-moments to be like, and there's no objective sense in which "net utility" is positive or not. The exception is if you question-beggingly define "net utility" in a way that implies a conclusion, but then anyone who disagrees will just say "I don't think we should define utility that way" and you're left arguing over the same differences. That's why I called it "aesthetic", even though that feels like it doesn't do justice to the seriousness of our moral intuitions.

Ah. Well then, let's kill everyone who fails our aesthetic judgment..?

(And force everyone to live against their will if they do conform to it?) No; I specifically said not to do that. Viewing morality as subjective is supposed to make people more aware that they cannot go around completely violating the preferences of those they disagree with without the result being worse for everyone.

Comment author: DustinWehr 26 April 2017 04:35:07PM 0 points [-]

Lukas, I wish you had a bigger role in this community.

Comment author: Darklight 05 April 2017 03:49:48AM 13 points [-]

I may be an outlier, but I've worked at a startup that did machine learning R&D and was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but our models right now are nowhere near being able to recursively self-improve independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and engineering and are not really independent agents in any meaningful way yet.

So, no, we don't think people who worry about superintelligence are uneducated cranks... a lot of ML people take it seriously enough that we've had casual lunchroom debates about it. Rather, the reality on the ground is that right now most ML models have enough trouble figuring out relatively simple tasks like Natural Language Understanding, Machine Reading Comprehension, or Dialogue State Tracking, and none of us can imagine how solving those practical problems with, say, Actor-Critic Reinforcement Learning models that lack any sort of will of their own will suddenly lead to the emergence of an active general superintelligence.

We do still think that things will likely develop eventually, because people have been burned before by underestimating what A.I. advances will occur in the next X years; and when faced with the actual possibility of developing an AGI or ASI, we're likely to be much more careful as things get closer to being realized. That's my humble opinion anyway.

Comment author: DustinWehr 18 April 2017 05:29:46PM *  0 points [-]

I've kept fairly up to date on progress in neural nets, less so in reinforcement learning, and I certainly agree about how limited things are now.

What if protecting against the threat of ASI requires huge worldwide political/social progress? That could take generations.

Not an example of that (I haven't tried to think of one), but the scenario that concerns me most so far is not that some researchers will inadvertently unleash a dangerous ASI while racing to be first, but rather that a dangerous ASI will be unleashed during an arms race between (a) states or criminal organizations intentionally developing one, and (b) researchers working on ASI-powered defences to protect us against (a).

Comment author: James_Miller 04 April 2017 09:23:11PM *  2 points [-]

If you think there is a chance that he would accept, could you please tell the guy you are referring to that I would love to have him on my podcast? Here is a link to the podcast, and here is me.

Edited thanks to Douglas_Knight

Comment author: DustinWehr 18 April 2017 04:45:24PM 0 points [-]

He might be willing to talk off the record. I'll ask. Have you had Darklight on? See http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqm8

Comment author: DustinWehr 05 April 2017 02:46:44PM 5 points [-]

If my own experience and the experiences of the people I know are indicative of the norm, then thinking about ethics, the horror that is the world at large, etc., tends to encourage depression. And depression, as you've realized yourself, is bad for doing good (but perhaps good for not doing bad?). I'm still working on it myself (with the help of a strong dose of antidepressants, regular exercise, consistently good sleep, etc). Glad to hear you are on the path to finding a better balance.

Comment author: DustinWehr 04 April 2017 04:37:45PM 0 points [-]

For Bostrom's simulation argument to conclude the disjunction of the two interesting propositions (our doom, or we're sims), you need to assume there are simulation runners who are motivated to do very large numbers of ancestor simulations. The simulation runners would be ultrapowerful, probably rich, amoral history/anthropology nerds, because all the other ultrapowerful amoral beings have more interesting things to occupy themselves with. If it's a set-it-and-forget-it simulation, that might be plausible. If the simulation requires monitoring and manual intervention, I think it's very implausible.

Comment author: DustinWehr 04 April 2017 01:44:51PM 3 points [-]
Comment author: James_Miller 03 April 2017 10:29:14PM 7 points [-]

This perception problem is a big part of the reason I think we are doomed if superintelligence will soon be feasible to create.

Comment author: DustinWehr 04 April 2017 01:29:38PM 2 points [-]

If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.

Comment author: Manfred 03 April 2017 10:46:33PM *  3 points [-]

We've returned various prominent AI researchers alive the last few times; we can't be that murderous.

I agree that there's a perception problem, but I think there are plenty of people who agree with us too. I'm not sure how much this indicates that something is wrong, versus being an inevitable part of the dissemination (or, if I'm wrong, the eventual extinction) of the idea.

Comment author: DustinWehr 04 April 2017 01:23:20PM 0 points [-]

I'm not sure either. I'm reassured that there seems to be some move away from public geekiness, like using the word "singularity", but I suspect that should go further, e.g. replacing the paperclip maximizer with something less silly (even though, to me, it's an adequate illustration). I suspect getting some famous "cool"/sexy non-scientist people on board would help; I keep coming back to Jon Hamm (who, judging from his cameos on great comedy shows and his role in the harrowing Black Mirror episode, has plenty of nerd inside).

Comment author: bogus 03 April 2017 11:12:54PM *  2 points [-]

A friend of mine, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists.

That's not as irrational as it might seem! The point is, if you think (as most ML researchers do!) that the probability of current ML research approaches leading to any kind of self-improving, super-intelligent entity is low enough, then the chances of evil Unabomber cultists being harbored within the "rationality community", however low, could easily be higher than that. (After all, given that Christianity endorses being peaceful and loving one's neighbors even when they wrong you, one wouldn't think that some of the people who endorse Christianity could bomb abortion clinics; yet such people do exist! The moral being, Pascal's mugging can be a two-way street.)

Comment author: DustinWehr 04 April 2017 01:38:24AM 0 points [-]

heh, I suppose he would agree

Comment author: DustinWehr 03 April 2017 10:06:59PM *  13 points [-]

A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That's an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci-fi.

I hope everyone is aware of that perception problem.
