Fredrik comments on Savulescu: "Genetically enhance humanity or face extinction" - Less Wrong

4 [deleted] 10 January 2010 12:26AM

Comment author: Fredrik 10 January 2010 05:26:31PM *  -1 points [-]

You don't have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.

But yes, I sympathize with you; I'm actually just like that myself. Some people wouldn't be able to appreciate the usefulness of the drug, no matter how hard you tried to explain that it's safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or simply incapable of grasping it, would oppose it strongly - possibly enough to wage a war against the rest of the world over it.

It would also take time to reach the whole population with a governmentally mandated treatment. There isn't even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.

Assuming an effective treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), the right thing to do would be to deliver it in the manner causing the least social upheaval and opposition. That would most definitely be covert dispersal - a globally coordinated release of a weaponized retrovirus, for example.

We still have some time before even that can be accomplished, though. And once that tech gets here, we face the hugely increasing risk of bioterrorism, or just accidental catastrophes at the hands of some clumsy research assistant, before we have a chance to even properly prototype and test our perfect smart drug.

Comment author: mattnewport 10 January 2010 08:41:55PM 3 points [-]

If I were convinced of the safety and efficacy of an intelligence-enhancing treatment, I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.

Comment deleted 10 January 2010 09:54:50PM [-]
Comment author: mattnewport 10 January 2010 10:21:50PM 1 point [-]

+30 IQ points across the board would save the world

I find that claim highly dubious.

Comment author: ChristianKl 14 January 2010 12:05:36PM 0 points [-]

30 additional points of intelligence for everyone could mean that AI gets developed sooner, and therefore that there is less time for FAI research.

The same goes for biological research that might lead to biological weapons.

Comment deleted 15 January 2010 02:48:11PM [-]
Comment author: ChristianKl 15 January 2010 03:59:03PM 1 point [-]

The notion that higher IQ means more money will be allocated to solving FAI is idealistic. Reality is complex: the reasons money gets allocated are often political in nature and depend on whether institutions function properly. Even if individuals have a high IQ, that doesn't mean they won't fall into the groupthink of their institution.

Real-world feedback, however, helps people see problems regardless of their intelligence. Real-world feedback provides truth, whereas a high IQ can just mean that you are better at stacking ideas on top of each other.

Comment deleted 15 January 2010 05:20:59PM *  [-]
Comment author: ChristianKl 16 January 2010 12:20:35AM 0 points [-]

Some sub-ideas of an FAI theory might be put to the test in artificial intelligence that isn't smart enough to improve itself.

Comment author: Morendil 15 January 2010 05:41:32PM *  0 points [-]

"Editing the mental states of ems" sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.

Moreover, it's not clear that controlled experiments on ems, assuming we get past the ethical issues, would yield radically more insight into the structure of intelligence than current brain science does.

It's a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes, that is a much better situation, but it's still far more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.

When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means, or also through technological "add-ons"? (By that I mean devices plugging you into Wikipedia, or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)

Comment deleted 15 January 2010 05:58:00PM [-]
Comment author: Vladimir_Nesov 15 January 2010 09:38:45PM 2 points [-]

To whoever downvoted Roko's comment -- check out the distinction between these ideas:

Comment author: ciphergoth 16 January 2010 11:20:15AM *  1 point [-]

I'd volunteer and I'm sure I'm not the only one here.

Comment author: Morendil 15 January 2010 07:59:57PM 0 points [-]

Please expand on what "the end" means in this case. What do you expect we would gain from perfecting whole-brain emulation (of humans, I assume)? How does that get us out of our current mess, exactly?

Comment deleted 15 January 2010 05:55:33PM [-]
Comment author: Vladimir_Nesov 15 January 2010 09:34:49PM *  0 points [-]

I worry these modified ems won't share our values to a sufficient extent.

Comment author: Fredrik 10 January 2010 10:23:47PM 0 points [-]

So individual autonomy is more important? I just don't get that. It's what's behind the wheel of those autonomous individuals that matters. It's a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to "way too fracking high".

It's everyone's happiness and progress that matters. If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?

Comment author: mattnewport 10 January 2010 10:31:25PM 6 points [-]

If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?

The same that's not to like about forcing anything on someone against their will because despite their protestations you believe it's in their own best interests. You can justify an awful lot of evil with that line of argument.

Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.

Comment author: SoullessAutomaton 11 January 2010 12:09:16AM 0 points [-]

On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.

Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.

Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.

Comment author: mattnewport 11 January 2010 12:47:13AM 4 points [-]

On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.

I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people's stated goals are not in line with their own 'best interests'. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.

Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.

There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to 'help' them against their will.

Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.

Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.

In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.

Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.

Comment author: SoullessAutomaton 11 January 2010 02:59:37AM 1 point [-]

I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.

Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g. extreme hyperbolic discounting, or being cognitively impaired. In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.

If they really do know what they're getting into and are okay with it, then fine, not my problem.

If it helps, I also have no problem with someone valuing self-determination so highly that they'd rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they'd like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.

There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns.

Actually making someone aware of a danger they're approaching is often easier said than done. People have a habit of disregarding things they don't want to listen to. What's that Douglas Adams quote? Something like, "Humans are remarkable among species both for having the ability to learn from others' mistakes, and for their consistent disinclination to do so."

Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.

I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one's choices.

I begin to suspect that may be the root of our actual disagreement here.

In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it).

It's a completely different issue, actually.

...but there's a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.

Tying back to the "helping people against their will" issue, for instance: Consider an otherwise successful individual, who one day has an emotional collapse after a romantic relationship fails, goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn't show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less skilled, job, but is now unable to make mortgage payments and loses his house.

On the surface, his actions have harmed only himself. However, consider what society as a whole has lost: 1) the economic value of his work for the period when he was unemployed; 2) the greater economic value of a skilled, better-paid worker; 3) the wealth represented by the destroyed gifts; 4) the transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.; 5) the value of any other economic activity he would have participated in, had these events not occurred. [0]

A very serious loss? Not really. Certainly, it would be extremely dubious to say the least for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.

Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone's lives? No--if only because that doesn't generally work out very well. Yet, lacking a solution doesn't make the problem any less real.

So, to return to the original discussion, with a hypothetical medical procedure to make people smarter and more sensible, or whatever; if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.

Not to say I wouldn't also be suspicious of such a proposition, but don't pretend that opposing the idea is free. It's not, so long as we're all sharing this society.

Maybe you're happy to pay the costs of allowing other people to make mistakes, but I'm not. It may very well be that the alternatives are worse, but that doesn't make the situation any more pleasant.

Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.

Complicated? That's clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.

[0] One might be tempted to argue that many of these aren't really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.

Comment author: mattnewport 11 January 2010 09:02:53AM *  2 points [-]

In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.

Explaining to them why you believe they're making a mistake is justified. Interfering if they choose to continue anyway, not.

I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one's choices.

I begin to suspect that may be the root of our actual disagreement here.

I don't recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement.

This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective.

Badly hung over, he doesn't show up to work the next day and is fired from his job.

I don't want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual's choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example.

This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn't owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times.

Complicated? That's clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.

I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don't think the case is strong enough to justify compulsion. It's not something I have a great deal of interest in however so I haven't looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. I'm somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children - I lean that way only because the alternative of allowing the government to make the decision is less palatable. A side benefit is that allowing parents to make the decision probably improves the gene pool to some extent.

Comment author: Fredrik 11 January 2010 12:48:08AM -2 points [-]

I might be wrong in my beliefs about their best interests, but that is a separate issue.

Given the assumption that undergoing the treatment is in everyone's best interests, wouldn't it be rational to forgo autonomous choice? Can we agree on that it would be?

Comment author: mattnewport 11 January 2010 12:55:54AM 4 points [-]

I might be wrong in my beliefs about their best interests, but that is a separate issue.

It's not a separate issue, it's the issue.

You want me to take as given the assumption that undergoing the treatment is in everyone's best interests, but we're debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don't believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone's best interests - or you have to bite the bullet and say that you are right, they are wrong, and as a result their opinions on the matter can simply be ignored.

Comment author: Fredrik 11 January 2010 02:17:57AM 1 point [-]

Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.

Comment author: mattnewport 11 January 2010 11:51:50PM 1 point [-]

I think that AI with greater than human intelligence will happen sooner or later and I'd prefer it to be friendly than not so yes, I'm for the Friendly AI project.

In general I don't support attempting to restrict progress or change simply because some people are not comfortable with it. I don't put that in the same category as imposing compulsory intelligence enhancement on someone who doesn't want it.

Comment author: Fredrik 12 January 2010 04:16:31AM 0 points [-]

Well, the AI would "presume to know" what's in everyone's best interests. How is that different? It's smarter than us, that's it. Self-governance isn't holy.

Comment author: mattnewport 12 January 2010 04:53:01AM 3 points [-]

An AI that forced anything on humans 'for their own good' against their will would not count as friendly by my definition. A 'friendly AI' project that would be happy building such an AI would actually be an unfriendly AI project in my judgement, and I would oppose it. I don't think the SIAI is working towards such an AI, but I am a little wary of the tendency toward utilitarian thinking amongst SIAI staff and supporters, as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.