Savulescu: "Genetically enhance humanity or face extinction"
In this video, Julian Savulescu from the Uehiro Centre for Practical Ethics argues that human beings are "Unfit for the Future" - that radical technological advance, liberal democracy and human nature will combine to make the 21st century the century of global catastrophes, perpetrated by terrorists and psychopaths with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge to violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.
Skip to 1:30 to avoid the tedious introduction
Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.
Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.
Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
A large portion of the video consists of pointing out the very strong scientific case that our behavior is a result of the way our brains are structured, and that this means that changes in our behavior are the result of changes in the way our brains are wired.
Comments (193)
X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I'd judge against putting all our eggs in the AI basket.
"We" aren't deciding where to put all our eggs. The question that matters is how to allocate marginal units of effort. I agree, though, that the answer isn't always "FAI research".
From a thread (http://esr.ibiblio.org/?p=1551#comments) on Armed and Dangerous:
Indeed, I have made the argument on a Less Wrong thread about existential risk that the best available mitigation is libertarianism. Not just political, but social libertarianism, by which I meant a wide divergence of lifestyles; the social equivalent of genetic, behavioral dispersion.
The LW community, like most technocratic groups (e.g., socialists), seems to have this belief that there is some perfect cure for any problem. But there isn't always; in fact, for most complex and social problems there isn't. Besides the Hayek mentioned earlier, see Thomas Sowell's “A Conflict of Visions”, its sequel “The Vision of the Anointed”, and his expansion on Hayek's essay, “Knowledge and Decisions”.
There is no way to ensure humanity’s survival, but the centralizing tendency seems a good way to prevent its survival should the SHTF.
There may not be a single strategy that is perfect on its own, but there will always be an optimal course of action, which may be a mixture of strategies (e.g., dump $X into nanotech safety, $Y into intelligence enhancement, and $Z into AGI development). You might never have enough information to know the optimal strategy for maximising your utility function, but one still exists, and it is worth trying to estimate it.
I mention this because previously I have heard "there is no perfect solution" as an excuse to give up and abandon systematic/mathematical analysis of a problem, and just settle with some arbitrary suggestion of a "good enough" course of action.
It isn't just that there is no "perfect" solution; for many problems there is no solution at all, just a continuing difficulty that must be continually worked through. Claims of some optimal (or even good-enough) solution to these sorts of social problems are usually a means to advance the claimants' agendas, especially when they propose using gov't coercion to force everybody to follow their prescriptions.
That claims of this type are sometimes made to advance agendas does not mean we shouldn't make these claims, or that all such claims are false. It means such claims need to be scrutinised more carefully.
I agree that more often than not there is not a simple solution, and people often accept a false simple solution too readily. But the absence of a simple solution does not mean there is no theoretical optimal strategy for continually working through the difficulty.
Libertarianism decreases some types of existential risk and bad outcomes in general, but increases other types (like UFAI). It also seems to lead to Robin Hanson's ultra-competitive, malthusian scenario, which many of us would consider to be a dystopia.
Have you already considered these objections, and still think that more libertarianism is desirable at this point? If so, how do you propose to substantially nudge the future in the direction of more libertarianism?
I think you misunderstand Robin's scenario; if we survive, the Malthusian scenario is inevitable after some point.
Robin outright dismisses the possibility of a singleton (AI, groupmind or political entity) farsighted enough to steer clear of Malthusian scenarios until the universe runs down. I tend to think this dismissal is mistaken, but I could be convinced that there is a rough trichotomy of human futures: extinction, singleton or burning the cosmic commons.
Of the three possibilities for the far future, the Malthusian scenario is the least bad. A singleton would be worse, and extinction worse yet. That doesn't mean I favor a Malthusian result, just that the alternatives are worse.
I don't agree that there are only three non-negligible possibilities, but putting that aside, why do you think the Malthusian scenario would be better than a singleton? (I believe even Robin thinks that a singleton, if benevolent, would be better than the Malthusian scenario.)
He says that a singleton is unlikely but not negligibly so.
Ah, I see that you are right. Thanks.
Who's doing that? Governments also use surveillance, intelligence, tactical invasions and other strategies to combat terrorism.
The reason we have terrorism is that we don't have a moral consensus labeling killing people as bad. The US does a lot to convince Arabs that killing people is just when there's a good motive.
Switching to a values-based foreign policy, where the West doesn't violate its own moral norms in the minds of Arabs, could help us reach a moral consensus against terrorism, but unfortunately that doesn't seem politically viable at the moment.
I'd find this pleasant to believe, and I've been a longstanding critic of US foreign policy, but:
Terrorism isn't a big problem, it should be a long way down the list of problems the US needs to think about. It's interesting to speculate on what would make a difference to it, but it would be crazy to make it more than a very small influence on foreign policy.
Terrorists are already a long way from the moral consensus, which is one reason they're so rare.
It seems incredibly implausible to me that they're taking their moral lead from the US in any case.
And of course, while killing people is bad all other things being equal, almost everyone already believes that; what they also believe is that it's defensible in the pursuit of some other good (such as saving lives elsewhere), which I also believe.
Terrorists usually aren't a long way from the moral consensus of their community. Polls asking people in the Middle East what they think of the US have changed radically in the last ten years.
In Iran the Western ideals of democracy work enough to destabilize the government a bit. Our values actually work. They are something that people can believe in and draw meaning from.
So, we could decompile humans, and do FAI to them. Or we could just do FAI. Isn't the latter strictly simpler?
I believe it's almost backwards: with IA, you get small mistakes accumulating into irreversible changes (with all sorts of temptations to declare the result "good enough"), while with FAI you have a chance of getting it absolutely right at some point. The process of designing FAI doesn't involve any abrupt change, the same way as you'd expect for IA. On the other hand, if there is no point with IA where you can "let go" and be sure the result holds the required preference, the "abrupt change" of deploying FAI is the point where you actually win.
Well, the attention of those capable of solving FAI should be undivided. Those who aren't equipped to work on FAI and who could potentially make progress on intelligence enhancing therapies, should do so.
A small dose of outside view shows that it's all nonsense. The idea of an evil terrorist or criminal mastermind is based on nothing - such people don't exist. Virtually all terrorists and criminals are idiots, and they aren't interested in maximizing destruction.
See everything Schneier has ever written about it if you need data confirming what I just said.
Savulescu explicitly discusses smart sociopaths.
We forecast technology becoming more powerful and available to more people with time. As a corollary, the un-maximized destructive power of idiots also grows, eventually enough to cause x-risk scenarios.
Kinda funny, the first terrorist who came to my mind was this guy.
From Wikipedia: Kaczynski was born in Chicago, Illinois, where, as an intellectual child prodigy, he excelled academically from an early age. Kaczynski was accepted into Harvard University at the age of 16, where he earned an undergraduate degree, and later earned a PhD in mathematics from the University of Michigan. He became an assistant professor at the University of California, Berkeley at age 25, but resigned two years later.
It took the FBI 17 years to catch the Unabomber, and he only got caught because he had his manifesto published in The Washington Post, where his brother recognized his writing.
Anyway, IMO Savulescu merely says that with further technological progress it could be possible for smart (say, IQ around 130) sociopaths to kill millions of people. Do you really believe that this is impossible?
Wikipedia describes the Unabomber's feats as a "mail bombing spree that spanned nearly 20 years, killing three people and injuring 23 others".
Three people in twenty years just proves my point: he either never cared about maximizing destruction or was really bad at it. You can do better in one evening by getting an SUV, filling it with gas canisters for extra effect, and driving it into a school bus at full speed. See Mythbusters for some ideas.
The facts of the matter are such people don't exist. They're possible in a way that Russell's Teapot is possible.
Yeah, good points, but Kaczynski specifically targeted math and science professors, or generally people who contributed to technological progress. He didn't try to kill as many people as possible, so blowing up a bus full of school kids was not on his agenda.
Anyway, IMO it is odd to believe that there is less than a 5% probability that some psychopath in the next 50 years could kill millions of people, perhaps through advanced bio-technology ( Let alone nanotechnology or uFAI). That such feats were nearly impossible in the past does not imply that they will be impossible in the future.
Unless you believe the distribution of damage done by psychopaths is extremely fat-tailed, the lack of moderately successful ones puts a very tight bound on the probability of an extremely damaging one.
All the "advanced biotech / nanotech / ai" is not going to happen like that. If it happens at all, it will give more power to large groups with enough capital to research and develop them, not to lone psychopaths.
I hope you're right, and I also think that it is more likely than not. But you seem to be overly confident. If we are speculating about the future it is probably wise to widen our confidence intervals...
I think Schneier is one of the most intelligent voices in the debate on terrorism but I'm not convinced you sum up his position entirely accurately. I had a browse around his site to see if I could find some specific data to confirm your claim and had trouble finding anything. The best I could find was Portrait of the Modern Terrorist as an Idiot but it doesn't contain actual data. I'm rather confused why you linked to the specific blog post you chose which seems largely unrelated to your claim. Do you have any better links you could share?
Note that in the article I link he states:
What about the recent reports of Muslim terrorists being (degreed) engineers in disproportionate numbers? While there's some suggestion of an economic/cultural explanation, it does indicate that at least some terrorists are people who were at least able to get engineering degrees.
There was a terrorist attempt only recently:
"Nation on edge after Christmas terrorism attempt"
Read some Schneier. A more accurate headline should be: "Nation on edge after an idiot demonstrates his idiocy". Nearly all terrorism has been performed by people who have serious mental deficiencies - even the 9/11 attacks depended on a lot of luck to succeed. Shit happens, but random opportunities usually aid the competent more than the incompetent. And nearly all criminals and terrorists are of lower intelligence, the few that are reasonably intelligent are seriously lacking in impulse control, which screws up their ability to make and carry through plans. Besides Bruce Schneier's work, see "The Bell Curve", and most newer literature on intelligence.
Biased sample!
Could you elaborate a bit on this analysis? It'd be interesting how you arrived at that number.
Thanks, that was indeed interesting.
Now, the only point I do not understand yet is how the expectations of the original AI researchers are a factor in this. Do you have some reason to believe that their expectations were too optimistic by a factor of about 10 (1970 vs 2100) rather than some other number?
I am very skeptical about any human gene-engineering proposals (for anything other than targeted medical treatment purposes.)
Even if we disregard superhuman artificial intelligences, there are a lot of more direct and therefore much quicker prospective technologies in sight: electronic/chemical brain-enhancing/control, digital supervision technologies, memetic engineering, etc.
IMO, the prohibitively long turnaround time of large scale genetic engineering and its inherently inexact (indirect) nature makes it inferior to almost any thinkable alternatives.
In the last year we have had successful trials of gene therapy that let monkeys see additional colors. We will be able to sequence the genomes of all of humanity sometime in the next decade. Within the next decade we will also have the tech to do massive testing, correlate the test scores with genes, and develop gene therapy to switch those genes off.
If we don't have ethical problems with doing so, we could probably start pilot trials of genetic engineering via gene therapy by the end of this decade.
Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
And why would that be?...
I don't think pre-modern catastrophes are relevant to this discussion.
The point about the anthropic issues are well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts. Especially when they propose solutions that (apparently, to me) reduce 'freedoms.'
There is a grand tradition of them failing.
And, if we do have the anthropic explanation to 'protect us' from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
The observation that you currently exist trivially implies that you haven't been destroyed, but doesn't imply that you won't be destroyed. As simple as that.
I can't observe myself getting destroyed either, however.
When you close your eyes, the World doesn't go dark.
The world probably doesn't go dark. We can't know for sure without using sense data.
http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
This is a legitimate heuristic, but how familiar are you with the object-level reasoning in this case, which IMO is much stronger?
not very. Thanks for the link.
I think I was equating quantum immortality with anthropic explanations, in general. My mistake.
Source? I'm curious how that's calculated.
Well, if you have anyone that cares deeply about your continued living, then doing so would hurt them deeply in 99.999999% of universes. But if you're completely alone in the world or a sociopath, then go for it! (Actually, I calculated the percentage for the Mega Millions jackpot: the odds of winning are 1 in C(56,5)×46 = 1 in about 1.76e8, so you lose in roughly 99.9999994% of universes. Doesn't affect your argument, of course.)
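As an aside, jackpot odds of this kind are conventionally computed with a binomial coefficient (the order of the five white balls doesn't matter), not a plain power. A minimal check, assuming the 5-from-56 plus 1-from-46 Mega Millions format of the time:

```python
import math

# 5 white balls chosen from 56 (order irrelevant), plus 1 Mega Ball from 46.
combinations = math.comb(56, 5) * 46
print(combinations)  # 175711536

# Probability of NOT winning the jackpot with a single ticket.
p_lose = 1 - 1 / combinations
print(f"{p_lose:.9%}")
```

Either way the figure is astronomically close to 100%, so the point about the people who care about you stands.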
You're talking about the number of branches, but perhaps the important thing is not that but measure, i.e., squared amplitude. Branching preserves measure, while quantum suicide doesn't, so you can't make up for it by branching more times if what you care about is measure.
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
If you go further and ask why do we/should we care about measure instead of the number of branches, I have to answer I don't know, but I think one clue is that those who do care about the number of branches but not measure will end up in a large number of branches but have small measure, and they will have high algorithmic complexity/low algorithmic probability as a result.
(I may have written more about this in a OB comment, and I'll try to look it up. ETA: Nope, can't find it now.)
No, I'm not claiming that. I think people avoid quantum suicide because they fear death. Perhaps we can interpret that as caring about measure, or maybe not. In either case there is still a question of why do we fear death, and whether it makes sense to care about measure. As I said, I don't know the answers, but I think I do have a clue that others don't seem to have noticed yet.
ETA: Or perhaps we should take the fear of death as a hint that we should care about measure, much like how Eliezer considers his altruistic feelings to be a good reason for adopting utilitarianism.
If quantum suicide works, then there's little hurry to use it, since it's not possible to die before getting the chance. Anyone who does have quantum immortality should expect to have it proven to them, by going far enough over the record age if nothing else. So attempting quantum suicide without such proof would be wrong.
Um, what? Why did we evolve to fear death? I suspect I'm missing something here.
You're converting an "is" to an "ought" there with no explanation, or else I don't know in what sense you're using "should".
Have you looked at Jacques Mallah's papers?
Yes, something like that.
So I assume you're not afraid of AI?
Much the same tech as is used to make intelligent machines augments human intelligence - by preprocessing its sensory inputs and post-processing its motor outputs.
In general, it's much quicker and easier to change human culture and the human environment than it is to genetically modify human nature.
"Richard Dawkins - The Shifting Moral Zeitgeist"
Human culture is more end-user-modifiable than the human genome is - since we created it in the first place.
The problem is that culture is embedded in the genetic/evolutionary matrix; there are severe limits on what is possible to change culturally.
Culture is what separates us from cavemen. They often killed their enemies and ate their brains. Clearly culture can be responsible for a great deal of change in the domain of moral behaviour.
If Robin Hanson is right, moral progress is simply a luxury we indulge in in this time of plenty.
Probably testable - if we can find some poor civilised folk to study.
Did crime increase significantly during the Great Depression? Wouldn't this potentially be falsifying evidence for Hanson's hypothesis?
Perhaps the Great Depression just wasn't bad enough, but it seems to cast doubt on the hypothesis, at the very least.
Crime is down during the current recession. It's possible that the shock simply hasn't been strong enough, but it may be evidence nonetheless.
I think Hanson's hypothesis was more about true catastrophes, though--if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn't worry about morality.
Indeed, rarely do we eat brains.
Culture has also produced radical Islam. Just look at http://www.youtube.com/watch?v=xuAAK032kCA to get a bit more pessimistic about the natural moral zeitgeist evolution in culture.
What fraction of the population, though? Some people are still cannibals. It doesn't mean there hasn't been moral progress. Update 2011-08-04 - the video link is now busted.
The persistence of the taboo against cannibalism is an example where we haven't made moral progress. There's no good moral reason to treat eating human meat any differently from the meat of other animals, once the animals in question are dead, though there may be health reasons. It's just an example of prejudice and unreasonable moral disgust.
Personally, I think the changes are rather directional - and represent moral progress. However, that is a whole different issue.
Think how much the human genome has changed in the last 40-100 years to see how much more rapid cultural evolution can be. Culture is likely to continue to evolve much faster than DNA does - due to ethical concerns, and the whole "unmaintainable spaghetti code" business.
I like today's morals better than those of any other time and I'd prefer if the idea of moral progress was defensible, but I have no good answer to the criticism "well, you would, you are of this time".
I don't think most people living in other times & places privately agreed with their society's public morality, to the same extent that we do today.
For most of history (not prehistory), there was no option for public debate or even for openly stating opinions. Morality was normally handed down from above, from the rulers, as part of a religion. If those people had an opportunity to live in our society and be acclimatized to it, many of them may have preferred our morality. I don't believe the reverse is true, however.
This doesn't prove that our morality is objectively better - it's impossible to prove this, by definition - but it does dismiss the implication of the argument that "you like today's morality because you live today". Only the people who live today are likely to like their time's morality.
In the Middle Ages in Europe, the middle class lived by a much stricter morality than the ruling class when it came to questions such as sex.
Morality was often a way for the powerless to feel superior to the ruling class.
Thanks, this is a good point. And of course there's plenty to dislike about lots of the morality to be found today; still, there's reason to hope the people of tomorrow will overall like tomorrow's morality even better. As you say, this doesn't lead to objective morality, but it's a happy thought.
If drift were a good hypothesis, steps "forwards" (from our POV) would be about as common as steps "backwards". Are those "backwards" steps really that common?
If we model morality as a one-dimensional scale and change as a random walk, then what you say is true. However, if we model it as a million-dimensional scale on which each step affects only one dimension, after a thousand steps we would expect to find that nearly every step brought us closer to our current position.
EDIT: simulation seems to indicate I'm wrong about this. Will investigate further. EDIT: it was a bug in the simulation. Numpy code available on request.
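Since the original Numpy code isn't shown, here is a minimal pure-Python reconstruction of the kind of simulation described (the dimension count, step count, and seed are my own assumptions): each step flips one randomly chosen coordinate by ±1, and we count what fraction of steps moved the walker closer to its eventual final position.

```python
import random

def fraction_of_steps_toward_final(n_dims=10_000, n_steps=1_000, seed=0):
    """Random walk where each step perturbs one random dimension by +/-1.
    Returns the fraction of steps that reduced the Euclidean distance
    to the walk's final position."""
    rng = random.Random(seed)
    steps = [(rng.randrange(n_dims), rng.choice((-1, 1)))
             for _ in range(n_steps)]

    # Final position is the per-dimension sum of all steps.
    final = [0] * n_dims
    for dim, delta in steps:
        final[dim] += delta

    # Replay the walk; a step changes only one coordinate, so comparing
    # the squared distance in that coordinate alone is sufficient.
    pos = [0] * n_dims
    closer = 0
    for dim, delta in steps:
        before = (final[dim] - pos[dim]) ** 2
        pos[dim] += delta
        after = (final[dim] - pos[dim]) ** 2
        if after < before:
            closer += 1
    return closer / n_steps

if __name__ == "__main__":
    print(fraction_of_steps_toward_final())
```

With far more dimensions than steps, almost every step lands in a fresh dimension and therefore moves the walker toward the endpoint; the only "backwards" steps come from the rare dimension collisions with opposite signs, so the fraction comes out well above 0.9.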
I would regard any claim that abolition of hanging, burning witches, caning children in schools, torture, stoning, flogging, keel-hauling and stocks are "morally orthogonal" with considerable suspicion.
There has been no abolition of torture in the US. Some clever people ran a campaign in the last decade that eroded the consensus that torture is always wrong. At the same time, granted, the US hasn't brought back witch-burning.
I'm happy to see those things abolished too, but since I'm not a moral realist I can't see how to build a useful model of "moral progress".
The video of the talk has two parts, only first of which was included in the post. Links to both parts:
Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection - perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.
I'm starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don't quote me on that. Or whatever, go ahead.
Not just "rotten eggs" either. If there is one thing that I could nearly guarantee to bring on serious opposition from independent and extremely intelligent people (that is, to convince people with brains to become "criminals"), it is mandating gov't meddling with their brains. I, for example, don't use alcohol or any other recreational drug, and I don't use any painkiller stronger than ibuprofen without excruciating (shingles or major abscess level) pain; most of the more intelligent people I know feel to some extent the same, and I am a libertarian. Do you really think I would let people I despise mess around with my mind?
You don't have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.
But yes, I sympathize with you, I'm just like that myself actually. Some people wouldn't be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it's safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or just weren't capable of grasping it, would oppose it strongly - possibly enough to start a war with the rest of the world over it.
It would also take time to reach the whole population with a governmentally mandated treatment. There isn't even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.
Assuming an efficient treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), the right thing to do would be to deliver it in the manner causing the least social upheaval and opposition. That would most definitely be covert dispersal - a globally coordinated release of a weaponized retrovirus, for example.
We still have some time before even that can be accomplished, though. And once that tech gets here, we face the hugely increasing risk of bioterrorism, or just an accidental catastrophe at the hands of some clumsy research assistant, before we have a chance to even properly prototype & test our perfect smart drug.
If I were convinced of the safety and efficacy of an intelligence-enhancing treatment, I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.
So individual autonomy is more important? I just don't get that. It's what's behind the wheels of the autonomous individuals that matters. It's a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to "way too fracking high".
It's everyone's happiness and progress that matters. If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?
The same that's not to like about forcing anything on someone against their will because despite their protestations you believe it's in their own best interests. You can justify an awful lot of evil with that line of argument.
Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.
On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people's stated goals are not in line with their own 'best interests'. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.
There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to 'help' them against their will.
Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.
In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone's best interests, wouldn't it be rational to forgo autonomous choice? Can we agree on that it would be?
It's not a separate issue, it's the issue.
You want me to take as given the assumption that undergoing the treatment is in everyone's best interests, but we're debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don't believe it is in their best interests. That fact should either make you question your original assumption that the treatment is in everyone's best interests, or force you to bite the bullet and say that you are right, they are wrong, and as a result their opinions on the matter can simply be ignored.
I find that claim highly dubious.
30 additional points of intelligence for everyone could mean that AI gets developed sooner, leaving less time for FAI research.
The same goes for biological research that might lead to biological weapons.
The notion that higher IQ means more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons money gets allocated are often political in nature and depend on whether institutions function properly. Even if individuals have a high IQ, that doesn't mean they won't fall into the groupthink of their institution.
Real-world feedback, however, helps people to see problems regardless of their intelligence. Real-world feedback provides truth, whereas high IQ can just mean that you are better at stacking ideas on top of each other.
On the topic of shingles: shingles is associated with depression. Given that I live in Australia, have had chickenpox, but haven't had shingles, should I ask my GP for the vaccine as a preventative measure?
I'm not sure quite what you're advocating here but 'dealing with the 10% of sticklers in a firm but fair way' has very ominous overtones to me.
I think I'd feel bad about the resulting fallout in the politicians' home lives.
My feeling is that if you rendered politicians incapable of lying it would be hard to distinguish from rendering them incapable of speaking.
If to become a politician you had to undergo some kind of process to enhance intelligence or honesty I wouldn't necessarily object. Becoming a politician is a voluntary choice however and so that's a very different proposition from forcing some kind of treatment on every member of society.
Simply using a lie detector on politicians might be a much better idea; it's also much easier. Of course, a lie detector doesn't really detect whether someone is lying, but the same goes for any cognitive enhancement.
Out of curiosity, what do you have in mind here as "participate in society"?
That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?
The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to me don't actually seem that much better. Hence your point about "the people who do get made smarter can figure it out", I guess.
Those people wouldn't get the jobs or university education they would need in order to use the dangerous knowledge of how to manufacture artificial viruses, because they aren't smart enough to compete with the rest.
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers - possibly very efficiently due to our superior intelligence - rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
Gene therapy of the type we do at the moment always works through an engineered virus. As the technology progresses, you no longer have to be a nation state to do genetic engineering; a small group of super-empowered individuals might be able to do it.
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...
The key question isn't "Should we do genetic engineering when we know its complete effects?" but rather "Should we try genetic engineering even when we don't know what result we will get?"
Should we gather centralized databases of the DNA sequences of every human being and mine them for gene data? Are the potential side effects worth the risk of starting genetic engineering now? Do we accept the increased inequality that could result from genetic engineering? And how do we measure what constitutes a good gene: low incarceration rates, IQ, EQ?
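For what it's worth, "mining for gene data" in the simplest case just means testing for a statistical association between a gene variant and a measured trait. Here is a minimal, hypothetical sketch in Python — the allele counts and trait scores are entirely invented for illustration, and a real genome-wide study would of course involve millions of variants and careful correction for confounders:

```python
# Toy sketch of mining a (hypothetical) genetic database: test whether
# the number of copies of some variant correlates with a trait score.
# All data below is invented for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical cohort: copies of the variant carried (0, 1 or 2)
# and a trait score (say, an IQ-like composite) per person.
alleles = [0, 1, 2, 0, 1, 2, 2, 0]
trait = [95, 100, 110, 97, 103, 112, 108, 94]

r = pearson(alleles, trait)
print(f"allele-trait correlation: r = {r:.2f}")
```

A strong correlation in the toy data would flag the variant as a candidate "good gene" — which also shows how much the answer depends on the choice of trait being scored in the first place.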