mistakes that are too often made by those with a background in philosophy rather than the empirical sciences: reasoning by analogy instead of building and analyzing predictive models
While there are quite a few exceptions, most actual philosophy is not done through metaphors and analogies. Some people may attempt to explain philosophy that way, and others with a casual interest in philosophy might not know the difference, but few actual philosophers I've met are silly enough not to know an analogy is an analogy. Philosophy and empirical science aren't conflicting approaches or competing priorities. They interact and refine each other in useful ways. For example, philosophy may help improve reasoning where we have only limited evidence, or it may help us understand the appropriate way for evidence to be used, classified or interpreted. It's only problematic when it's used for social purposes or motivated reasoning rather than for challenging our own assumptions.
I think there are certain specific instances where LW's dominant philosophical interpretations are debatable, and I'd like to hear more about what your objections of that kind are.
I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good
I think just being wrong or misleading (assuming you think the main thrust of the sequences is problematic) isn't enough to make something a memetic hazard. Otherwise we'd be banning all sorts of stuff floating around in books and on the internet. I suggest "memetic hazard" ought to be reserved for things that are uniquely dangerous in causing immediate harm to mental health (suicide, extreme depression or extreme aggression).
That seems like an interesting article, though I think it is focused on the issue of free will and morality, which is not my focus.
Suffering and AIs
Disclaimer - Under utilitarianism, suffering is an intrinsically bad thing. While I am not a utilitarian, many people are, and I will treat it as true for this post because it is the easiest approach to this issue. Also, apologies if others have already discussed this idea, which seems quite possible.
One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to feel pain, fear, hope and pleasure of some kind. It might be reasonable to expect that in such cases the unpleasant tasks would result in some form of suffering. Added to this is the fact that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Taking steps to shut down such an AI would then become morally unacceptable, even if it performs an activity that is useless or harmful. Because of this, we might face a situation where we cannot shut down AIs even when there is good reason to.
Basically, if suffering AIs were someday extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even assuming some pleasure is also created, we might search for ways to create that pleasure without creating the pain.
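To make the scaling worry explicit, here is a minimal sketch in my own notation, assuming a simple total-utilitarian sum (nothing more sophisticated is intended):

$$U_{\text{total}} = \sum_{i=1}^{N} (p_i - s_i)$$

where $N$ is the number of AIs and $p_i$, $s_i$ are the pleasure and suffering experienced by the $i$-th AI. If the unpleasant tasks make $s_i$ exceed $p_i$ on average, the total falls roughly linearly as $N$ grows, so mass deployment of suffering AIs becomes arbitrarily bad under this kind of view.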
If so, would it make sense to adopt a principle of AI design that says AIs should be designed so that they (1) do not suffer or feel pain, and (2) do not fear death/shutdown (e.g. view their own finite lifespan as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).
Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might become "suffering". (2) Pain/suffering might be useful in creating a robot with high utility, and thus some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also... killer robots with no pain or fear of death :-P
Yeah, I think you're right on that one. Still, I like and share his moral assumption that my-side-ism is harmful because it distorts, and is often opposed to, truth in communication.
I retracted an earlier incorrect assertion and then edited to make this one instead. Not sure how that works exactly...
His gratuitous imposition of his own moral assumptions is worse.
I don't see the problem with moral assumptions, as long as they are clear and relevant. I think the myside effect is generally a force that stands against truth-seeking - I guess it's a question of definition whether you consider that to be irrational or not. People who bend the truth to suit themselves distort the information that rational people use for decision-making, so I think it's relevant.
*The lack of an "unretract" feature is annoying.
Indeed I do! :-)
Cheers, thanks for the informative reply.
Yes, some humans seem to have adopted this view where intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I often find the justification for this pretty flimsy, though quite a few people seem to hold this view. Let's hope an AGI doesn't, lol.
True! I was actually trying to be funny in (4), though apparently I need more work.