All of lump1's Comments + Replies

lump150

If you want to see what runaway intelligence signaling looks like, go to grad school in analytic philosophy. You will find amazingly creative counterexamples, papers full of symbolic logic, speakers who get attacked with refutations from the audience in mid-talk and then, sometimes, deftly parry the killing blow with a clever metaphor, taking the questioner down a peg...

It's not too much of a stretch to see philosophers as IQ signaling athletes. Tennis has its ATP ladder, and everybody gets a rank. In philosophy it's slightly less blatant, partly b...

lump130

Considerations similar to Kenzi's have led me to think that if we want to beat potential filters, we should be accelerating work on autonomous self-replicating space-based robotics. Once we do that, we will have beaten the Fermi odds. I'm not saying that it's all smooth sailing from there, but it does guarantee that something from our civilization will survive in a potentially "showy" way, so that our civilization will not be a "great silence" victim.

The argument is as follows: Any near-future great filter for humankind is probably self...

lump100

This is not a criticism of your presentation, but rather of the presuppositions of the debate itself. As someone who thinks that moral sentiments are at the root of ethics, I have a hard time picturing an intelligent being doing moral reasoning without feeling such sentiments. I suspect that researchers do not want to go out of their way to give AIs affective mental states, much less anything like the full range of human moral emotions, like anger, indignation, empathy, outrage, shame and disgust. The idea seems to be that if the AI is programmed with certain pref...

lump120

It's hard to disagree with Frank Jackson that moral facts supervene on physical facts - that (assuming physicalism) two universes couldn't differ with respect to ethical facts unless they also differed in some physical facts. (So you can't have two physically identical universes where something is wrong in one and the same thing is not wrong in the other.) That's enough to get us objective morality, though it doesn't help us at all with its content.
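A compact way to state the supervenience claim above (my notation, not Jackson's: Phys(w) for the complete physical description of a world w, Eth(w) for its ethical facts):

\[ \forall w_1 \, \forall w_2 \;\; \big( \mathrm{Phys}(w_1) = \mathrm{Phys}(w_2) \;\rightarrow\; \mathrm{Eth}(w_1) = \mathrm{Eth}(w_2) \big) \]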

The way we de facto argue about objective morals is like this: If some theory leads to an ethically repugnant ...

lump100

I thought it's supposed to work like this: the first generation of AIs is designed by us; the superintelligence is designed by them, the AIs. We have initial control over what their utility functions are. I'm looking for a good reason why we should expect to retain that control beyond the superintelligence transition. No such reasons have been given here.

A different way to put my point: Would a superintelligence be able to reason about ends? If so, then it might find itself disagreeing with our conclusions. But if not - if we design it to have what for humans would be a severe cognitive handicap - why should we think that subsequent generations of SuperAI will not repair that handicap?

3passive_fist
You're making the implicit assumption that a runaway scenario will happen. A 'cognitive handicap' would, in this case, simply prevent the next generation AI from being built at all. As I'm saying, it would be a lousy SI and not very useful. But it would be friendly.
lump110

Given that there is a very significant barrier to making children that deferred to us for approval on everything, why do you think the barrier would be reduced if instead of children, we made a superintelligent AI?

2passive_fist
The 'child' metaphor for SI is not very accurate. SIs can be designed and, most importantly, we have control over what their utility functions are.
lump110

I guess I disagree with the premise that we will have superintelligent successors who will think circles around us, and yet we get to specify in detail what ethical values they will have, and it will stick. Forever. So let's debate what values to specify.

A parent would be crazy to think this way about a daughter, optimizing in detail the order of priorities that he intends to implant into her, and expecting them to stick. But if your daughter is a superintelligence, it's even crazier.

3Vaniver
Suppose it's twenty years from now, and we know exactly what genes go into the heritable portion of intelligence and personality, which includes both stuff like the Big Five and the weird preferences twins sometimes share. Suppose further that genetic modification of children is possible and acceptable, and you and your partner have decided that you'll have a daughter, and naturally you want her IQ to be as high as possible (suppose that's 170 on today's scale). So she's going to be able to think circles around you, but be comparable to her augmented classmates. But personality isn't as obvious. Do you really want her to be maximally agreeable? Extraverted? Open? The other two might be easy to agree on; you might decide to zero out her neuroticism without much debate, and maximize her conscientiousness without much more. But, importantly, her ability to outthink you doesn't mean she will outthink the personality you chose for her. Why would she want to? It's her personality.

That's what a non-crazy version looks like: we know that personality traits are at least partly heritable for humans, and so we can imagine manipulating what personality traits future humans have by manipulating their genes. We also have some idea of how raising children impacts their personality and ways of relating to other people, and we can similarly imagine manipulating their early environment to get the personalities and relationships that we want.

We can further strengthen the analogy by considering the next generation. Your daughter has found a partner and is considering having a granddaughter; the IQ manipulation technology has improved to the point where the granddaughter is expected to score the equivalent of 220 on today's scale, but there's still a comparable personality question. If you were highly open and decided that your daughter should be highly open too, it seems likely that your daughter will use similar logic to decide that your granddaughter should also be highly open.
lump110

I think the burden of answering your "why?" question falls to those who feel sure that we have the wisdom to create superintelligent, super-creative lifeforms who could think outside the box regarding absolutely everything except ethical values. On those, they would inevitably stay on the rails that we designed for them. The thought "human monkey-minds wouldn't on reflection approve of x" would forever stop them from doing x.

In effect, we want superintelligent creatures to ethically defer to us the way Euthyphro deferred to the gods. B...

1passive_fist
I don't think there's any significant barrier to making a superintelligence that deferred to us for approval on everything. It would be a pretty lousy superintelligence, because it would essentially be crippled by its strict adherence to our wishes (making it excruciatingly slow) but it would work, and it would be friendly.
lump100

The one safe bet is that we'll be trying to maximize our future values, but in the emulated brains scenario, it's very hard to guess at what those values would be. It's easy to underestimate our present kneejerk egalitarianism: We all think that being a human on its own entitles you to continued existence. Some will accept an exception in the case of heinous murderers, but even this is controversial. A human being ceasing to exist for some preventable reason is not just generally considered a bad thing. It's one of the worst things.

Like most people, I don'...

4Capla
It's a tragedy of the commons combined with selection pressures. If there are just a few people who decide to spread out and make as many copies as possible, then there will be slightly more of those people in the next generation. Those new multipliers will copy themselves in turn. Eventually, the population is swamped by individuals who favor unrestrained reproduction. This happens even if the effect is very slight: if 99% of the world thinks it's good to make only one copy a year, and 1% usually make only one copy but every ten years make an extra one (1.1 copies a year on average), then, given enough time, the vast majority of the population descends from the 1.1-copies-a-year group (a rough numerical sketch follows below). The population balloons, and we don't have that tremendous wealth per capita anymore.

There is a difference between what we might each choose if we ruled the world, and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles. We would also need to coordinate to achieve outcomes suggested by those principles. That is much much harder.
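A minimal numerical sketch of the compounding Capla describes. The growth model and the specific rates below are my own illustrative assumptions (each individual adds `rate` copies per year, so a lineage grows by a factor of 1 + rate annually), not anything specified in the comment:

```python
# Rough sketch: two subpopulations of emulated minds, one restrained (1.0
# copies per individual per year) and one "multiplier" group (1.1 on average).
# Both the model and the rates are illustrative assumptions.

restrained, multipliers = 0.99, 0.01   # initial shares of the population
r_rate, m_rate = 1.0, 1.1              # average copies made per individual per year

for year in range(0, 201, 25):
    share = multipliers / (restrained + multipliers)
    print(f"year {year:3d}: multipliers are {share:6.1%} of the population")
    # advance 25 years; each lineage grows by (1 + rate) per year
    restrained *= (1 + r_rate) ** 25
    multipliers *= (1 + m_rate) ** 25
```

Under these assumed rates the 1% minority passes half the population after roughly a century; the exact timescale depends on the rates chosen, but any persistent edge in copy rate eventually dominates.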