Considerations similar to Kenzi's have led me to think that if we want to beat potential filters, we should be accelerating work on autonomous self-replicating space-based robotics. Once we do that, we will have beaten the Fermi odds. I'm not saying that it's all smooth sailing from there, but it does guarantee that something from our civilization will survive in a potentially "showy" way, so that our civilization will not be a "great silence" victim.
The argument is as follows: Any near-future great filter for humankind is probably self...
This is not a criticism of your presentation, but rather of the presuppositions of the debate itself. As someone who thinks that at the root of ethics are moral sentiments, I have a hard time picturing an intelligent being doing moral reasoning without feeling such sentiments. I suspect that researchers do not want to go out of their way to give AIs affective mental states, much less anything like the full range of human moral emotions, like anger, indignation, empathy, outrage, shame and disgust. The idea seems to be that if the AI is programmed with certain pref...
It's hard to disagree with Frank Jackson that moral facts supervene on physical facts - that (assuming physicalism) two universes couldn't differ with respect to ethical facts unless they also differed in some physical facts. (So you can't have two physically identical universes where something is wrong in one and the same thing is not wrong in the other.) That's enough to get us objective morality, though it doesn't help us at all with its content.
The way we de facto argue about objective morals is like this: If some theory leads to an ethically repugnant ...
I thought it's supposed to work like this: The first generation of AI are designed by us. The superintelligence is designed by them, the AI. We have initial control over what their utility functions are. I'm looking for a good reason why we should expect to retain that control beyond the superintelligence transition. No such reasons have been given here.
A different way to put my point: Would a superintelligence be able to reason about ends? If so, then it might find itself disagreeing with our conclusions. But if not - if we design it to have what for humans would be a severe cognitive handicap - why should we think that subsequent generations of SuperAI will not repair that handicap?
Given that there is a very significant barrier to making children who defer to us for approval on everything, why do you think the barrier would be reduced if instead of children, we made a superintelligent AI?
I guess I disagree with the premise that we will have superintelligent successors who will think circles around us, and yet we get to specify in detail what ethical values they will have, and it will stick. Forever. So let's debate what values to specify.
A parent would be crazy to think this way about a daughter, optimizing in detail the order of priorities that he intends to implant into her, and expecting them to stick. But if your daughter is a superintelligence, it's even crazier.
I think the burden of answering your "why?" question falls to those who feel sure that we have the wisdom to create superintelligent, super-creative lifeforms who could think outside the box regarding absolutely everything except ethical values. On those alone, they would inevitably stay on the rails that we designed for them. The thought "human monkey-minds wouldn't on reflection approve of x" would forever stop them from doing x.
In effect, we want superintelligent creatures to ethically defer to us the way Euthyphro deferred to the gods. B...
The one safe bet is that we'll be trying to maximize our future values, but in the emulated brains scenario, it's very hard to guess at what those values would be. It's easy to underestimate our present kneejerk egalitarianism: We all think that being a human on its own entitles you to continued existence. Some will accept an exception in the case of heinous murderers, but even this is controversial. A human being ceasing to exist for some preventable reason is not just generally considered a bad thing. It's one of the worst things.
Like most people, I don'...
There is a difference between what we might each choose if we ruled the world, and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles. We would also need to coordinate to achieve outcomes suggested by those principles. That is much, much harder.
If you want to see what runaway intelligence signaling looks like, go to grad school in analytic philosophy. You will find amazingly creative counterexamples, papers full of symbolic logic, and speakers who get attacked with refutations from the audience in mid-talk and then, sometimes, deftly parry the killing blow with a clever metaphor, taking the questioner down a peg...
It's not too much of a stretch to see philosophers as IQ signaling athletes. Tennis has its ATP ladder, and everybody gets a rank. In philosophy it's slightly less blatant, partly b...