Comment author: DanielLC 24 May 2015 09:23:26AM *  0 points [-]

Killer robots with no pain or fear of death would be much easier to fight off than ones that have pain and fear of death. The point isn't that they won't get distracted and lose focus on fighting when they're injured or in danger; it's that they won't try to avoid getting injured or killed. It's a lot easier to kill someone who doesn't mind if you succeed.

Comment author: the-citizen 08 June 2015 05:45:10AM 0 points [-]

True! I was actually trying to be funny in (4), though apparently my delivery needs more work.

Comment author: the-citizen 24 May 2015 07:20:45AM *  2 points [-]

mistakes that are too often made by those with a philosophical background rather than the empirical sciences: the reasoning by analogy instead of the building and analyzing of predictive models

While there are quite a few exceptions, most actual philosophy is not done through metaphors and analogies. Some people may attempt to explain philosophy that way, and others with a casual interest in philosophy might not know the difference, but few actual philosophers I've met are silly enough not to know an analogy is an analogy. Philosophy and empirical science aren't conflicting approaches or competing priorities. They interact with and refine each other in useful ways. For example, philosophy may help improve reasoning where we have only limited evidence, or it may help us understand the appropriate way for evidence to be used, classified or interpreted. It's only problematic when it's used for social purposes or motivated reasoning rather than for challenging our own assumptions.

I think there are certain specific instances where LW's dominant philosophical interpretations are debatable, and I'd like to hear more about which objections of that kind you have.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good

I think just being wrong or misleading (assuming you think the main thrust of the sequences is problematic) isn't enough to make something a memetic hazard. Otherwise we'd be banning all sorts of stuff floating around in books and on the internet. I suggest "memetic hazard" ought to be reserved for things that are uniquely dangerous in leading to immediate harm to mental health (suicide, extreme depression or extreme aggression).

Comment author: the-citizen 19 May 2015 07:42:28AM 0 points [-]

Suffering and AIs

Disclaimer - For the sake of argument this post will treat utilitarianism as true, although I do not necessarily think that it is.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to feel pain, fear, hope and pleasure of some kind. It might be reasonable to expect that in such cases the unpleasant tasks would result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Taking steps to shut down an AI would then become morally unacceptable, even if it performs an activity that is useless or harmful. Because of this, we might face a situation where we cannot shut down AIs even when there is good reason to.

Basically, if suffering AIs were some day extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even assuming some pleasure is created, we might search for ways to create that pleasure without creating the pain.

If so, would it make sense to adopt a principle of AI design that says AIs should be designed so that they (1) do not suffer or feel pain, and (2) do not fear death/shutdown (e.g. view their own finite life as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).

Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might become "suffering". (2) Pain/suffering might be useful in creating a robot with high utility, and thus some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also... killer robots with no pain or fear of death :-P

Comment author: hairyfigment 17 May 2015 03:46:48PM 0 points [-]
Comment author: the-citizen 19 May 2015 07:40:49AM 0 points [-]

That seems like an interesting article, though I think it is focused on the issue of free will and morality, which is not my focus.

Comment author: the-citizen 17 May 2015 06:52:33AM *  0 points [-]

Suffering and AIs

Disclaimer - Under utilitarianism suffering is an intrinsically bad thing. While I am not a utilitarian, many people are, and I will treat utilitarianism as true for this post because it is the easiest approach to this issue. Also, apologies if others have already discussed this idea, which seems quite possible.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to feel pain, fear, hope and pleasure of some kind. It might be reasonable to expect that in such cases the unpleasant tasks would result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Taking steps to shut down an AI would then become morally unacceptable, even if it performs an activity that is useless or harmful. Because of this, we might face a situation where we cannot shut down AIs even when there is good reason to.

Basically, if suffering AIs were some day extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even assuming some pleasure is created, we might search for ways to create that pleasure without creating the pain.

If so, would it make sense to adopt a principle of AI design that says AIs should be designed so that they (1) do not suffer or feel pain, and (2) do not fear death/shutdown (e.g. view their own finite life as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).

Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might become "suffering". (2) Pain/suffering might be useful in creating a robot with high utility, and thus some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also... killer robots with no pain or fear of death :-P

Comment author: buybuydandavis 29 December 2014 09:02:52PM 0 points [-]

There's not a problem if he is aware of the dependency on his moral premises and discloses that dependency to his readers. I don't see evidence of either.

The lack of an "unretract" feature is annoying.

Yeah. Interesting that in my inbox, it is not showing as retracted.

Comment author: the-citizen 02 January 2015 08:22:06AM 0 points [-]

Yeah I think you're right on that one. Still, I like and share his moral assumption that my-side-ism is harmful because it distorts and is often opposed to the truth in communication.

I retracted an earlier incorrect assertion and then edited to make this one instead. Not sure how that works exactly...

Comment author: buybuydandavis 27 December 2014 09:34:37PM *  3 points [-]

Less intelligence can render you immune to a lot of the anti-epistemology running around out there. A lot of very stupid ideas take some intelligence to consume.

I like the concept of cognitive miserliness, though I've thought of it as cognitive aversion.

While I'm cognitively compulsive, and expect most people to have a greater aversion to thought than I do, they seem compulsive in their aversion, paying huge costs to avoid putting out even the most trivial cognitive effort. "No, I don't wanna think, and you can't make me!"

I'd note the guy's "rational analysis" of the Jack->Anne->George problem left much to be desired. Just list out the options and test them. Notation matters.

JackM->AnneM->GeorgeNM
JackM->AnneNM->GeorgeNM

Similarly, the bat and ball prices are trivial if you just write out the equation.

The way to be a cognitive miser is to use the right tools and notation. He might have demonstrated effective "mindware" in these problems.
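The "list out the options and test them" approach above can be sketched in a few lines of code (a hypothetical illustration, not anything from the original book or comment). Married Jack looks at Anne, Anne looks at unmarried George, and the question is whether a married person is looking at an unmarried person; the only unknown is Anne's status, so enumerating her two cases settles it. The bat-and-ball price is likewise trivial once the equation is written out, here in integer cents to sidestep floating-point noise:

```python
# Jack (married) -> Anne (?) -> George (unmarried).
# Is some married person looking at some unmarried person?
jack_married, george_married = True, False
answers = []
for anne_married in (True, False):  # enumerate the only unknown
    answers.append(
        (jack_married and not anne_married)       # Jack -> Anne pair
        or (anne_married and not george_married)  # Anne -> George pair
    )
print(answers)  # [True, True] -- "yes" in both cases

# Bat and ball: bat + ball = 110 cents and bat = ball + 100 cents,
# so 2 * ball + 100 = 110, giving ball = 5 cents.
ball_cents = (110 - 100) // 2
bat_cents = ball_cents + 100
print(ball_cents, bat_cents)  # 5 105
```

Either way Anne's status falls out as irrelevant: some married person is always looking at some unmarried person, which is the answer the disjunctive reasoning was supposed to reveal.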

The author also needs to work on his own rationality. The car example is just bad from start to finish. You need a lot more information to even estimate net deaths from the car in question.

His gratuitous imposition of his own moral assumptions is worse.

We weigh evidence and make moral judgments with a myside bias that often leads to dysrationalia that is independent of measured intelligence.

Preferring your side is not necessarily dysrational. What rationality has to do with moral judgments is a non-trivial topic.

Comment author: the-citizen 29 December 2014 02:30:58PM *  0 points [-]

His gratuitous imposition of his own moral assumptions is worse.

I don't see the problem with moral assumptions, as long as they are clear and relevant. I think the myside effect is generally a force that stands against truth-seeking - I guess it's a question of definition whether you consider that to be irrational or not. People who bend the truth to suit themselves distort the information that rational people use for decision making, so I think it's relevant.

The lack of an "unretract" feature is annoying.

Comment author: TheAncientGeek 15 December 2014 12:19:33PM *  0 points [-]

I think you have an idea from our previous discussions why I don't think your physicalism, etc., is relevant to ethics.

Comment author: the-citizen 17 December 2014 12:38:01PM *  0 points [-]

Indeed I do! :-)

Comment author: RobbBB 14 December 2014 10:30:39AM 3 points [-]

If you want to build an unfriendly AI, you probably don't need to solve the stability problem. If you have a consistently self-improving agent with unstable goals, it should eventually (a) reach an intelligence level where it could solve the stability problem if it wanted to, then (b) randomly arrive at goals that entail their own preservation, then (c) implement the stability solution before the self-preserving goals can get overwritten. You can delegate the stability problem to the AI itself. The reason this doesn't generalize to friendly AI is that this process doesn't provide any obvious way for humans to determine which goals the agent has at step (b).

Comment author: the-citizen 15 December 2014 06:02:05AM 1 point [-]

Cheers thanks for the informative reply.

Comment author: ChristianKl 14 December 2014 11:42:48AM 0 points [-]

the one true morality might be some deep ecology that required a much lower human population, among many other possibilities

Or simply extremely smart AIs > human minds.

Comment author: the-citizen 15 December 2014 05:44:46AM 0 points [-]

Yes, some humans seem to have adopted this view where intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I find the justification for this is often pretty flimsy, though quite a few people seem to hold this view. Let's hope an AGI doesn't lol.
