As technology improves, it increases any individual human's ability to change the world, and by extension, any individual's ability to inflict significant damage on it, if they so desire. This is especially worrying in the case of individuals who are angry at the world and want to take others down with them (e.g. the Columbine shooters, or to an extent the Unabomber).
Now, the question is this: what if someone angry at the world eventually acquired the means to annihilate it at will, or at least to cause massive destruction? Certainly, this has not happened yet, but it becomes a real possibility with improved technology (especially an improved ability to bioengineer viruses and various nanoparticles). One of the biggest constraints at present is the lack of resources available to an individual. But with the development of nanotechnology (which may allow certain things to be constructed from fewer resources, along with other developments such as the substitution of carbon nanotubes for other materials), this may not remain as much of a constraint as it is now.

We could improve monitoring, but that would obviously threaten civil liberties. (This is not an argument against technology - I'm a transhumanist, after all, and I completely embrace technological developments. But it is a problem I have never seen a good solution to.)

Of course, reducing the number of angry individuals would also reduce the probability of this happening. That demands an understanding of psychology (especially the psychology of people who are self-centered, dislike having to compromise, and collect grudges easily), and then a creative way to make them less angry - creative because this is quite difficult, especially since many people get angry at the very thought of compromise.

So has anyone else thought of this? And of possible solutions?
Emile:

This is not an argument against technology - I'm a transhumanist after all, and I completely embrace technological developments.

If technology brings more harm than good, we should want to believe that technology does more harm than good - group affiliation is a very bad guide for epistemic rationality.

FAWS:

The parent in no way deserved to be voted down, and the fact that it was looks like a bad sign about the health of this community to me. Note that believing that technology does more harm than good does not equal advocating unfeasible or counterproductive countermeasures.

I didn't downvote the parent (and it seems to be back to 0 now). Short-term karma can fluctuate quite a bit.

Note that believing that technology does more harm than good does not equal advocating unfeasible or counterproductive countermeasures.

Agreed. I was just reacting to something that could be read as implying that group affiliation weighs as much as or more than arguments.

To my mind, Inquiline's phrase sounds a bit like something you sometimes hear among Christians - "if evolution is true, then Christianity is wrong" - which is used as an argument from one Christian to another to reject evolution.

FAWS:

I didn't downvote the parent (and it seems to be back to 0 now). Short-term karma can fluctuate quite a bit.

I was referring to your comment being voted down. The funny thing is I originally wrote "this comment" and edited it to "the parent" to avoid ambiguity.

Hah, ok.

Downvoted the post specifically for making this glaring error. I hope the author will engage with this question.

Edit in response to downvoting of this comment: What?

I am the second downvote.

You hope the author will engage the question how? By abjectly apologizing? By disagreeing? If a simple response of "Good point, thanks" would be sufficient, then what was the point of your comment?

If a simple response of "Good point, thanks" would be sufficient, then what was the point of your comment?

It's a big first step to actually make that "simple response". It's even more important to recognize the problem if you are not inclined to agree.

Vladimir: I upvoted your comment, because I didn't think it was that bad. Applying the principle of charity to the OP: maybe they meant "I don't think this is enough of a threat to make technology a net negative, so it isn't meant as a knockdown argument against transhumanism"?

"Principle of charity" conflicts with principle of Tarski.

I'm not sure what you mean here. I was proposing an alternate interpretation of the OP's phrasing. I'm not sure what they actually meant. I agree that if they were making a mistake I want to believe they were making a mistake. If technology is bad, I want to believe that too. Can you clarify what you think is the specific problem?

I'm not sure what they actually meant. I agree that if they were making a mistake I want to believe they were making a mistake.

This was my point. There is no power in the "principle of charity", since it ought not to shift your level of belief about whether the author intended the correct meaning or the incorrect one.

You seem to be taking a statement of the form (to my reading):

"X appears to imply Y, but it doesn't (assertion). In fact, Y is false (separate assertion)."

and reading it as:

"X appears to imply Y, but I'm a Y-disbeliever (premise). Therefore, Y is false (inference from premise)."

Basically, it seems like you're reading "I'm a transhumanist" as a statement about InquilineKea from which they fallaciously draw a conclusion about reality, while I'm reading it as a disguised direct statement about reality, semantically equivalent to "pursuing the right technologies has positive expected value" (or whatever).

A more charitable interpretation of your post is that you're arguing against belief-as-identity in general, and using "I'm a transhumanist" as an example of it, but if so that's not clear to me.

I am mostly arguing against belief-as-identity and bottom-line reasoning in general. I agree that the original statement could be interpreted in different ways.

The "I am still a transhumanist" here might be another example of that; possibly more high-profile due to being part of the Fun Theory sequence.

Shouldn't it be: if some or all branches of technology in the current sociopolitical environment bring more harm than good according to the shared values of group X, then we should want to believe that?

Is a sociopathic intelligent individual deliberately doing humanity harm a greater risk than a reasonable and sincere intelligent individual making a terrible mistake, or an organisation of reasonable and sincere intelligent individuals making a terrible mistake? The population of the last two groups is much larger than that of the first.

Mistakes are small but numerous - e.g. car accidents.

Evil individuals are rare, but are sometimes highly destructive - e.g. Hitler, Stalin, Mao.

Humanity as a whole probably has more to fear from the latter category.

Hitler, Stalin, and Mao weren't just evil individuals. Somehow they were connected to a structure, a society, that enabled the evil.

Don't forget the power of sincerity combined with stupidity. Hitler was ridiculously incompetent - e.g., setting his organisations at each other's throats in wartime? - and World War II only went as well as it did for him because he had excellent generals. Mao was a successful revolutionary, an inspiring leader and relentlessly terrible at actually running a country - his successors carefully backed out of most of his ideas even while maintaining his personality cult. Stalin was, I suggest, less existentially dangerous because he cared about maintaining power more than about perpetuating an ideology per se.

The danger Tim describes is one of stupid politicians with reasonable power bases doing dangerous things with great sincerity - not a wish to burn everything down.

Evil individuals are rare, but are sometimes highly destructive - e.g. Hitler, Stalin, Mao.

This suggests a kind of Black Swan effect: truly evil people are rare, but their impact is disproportionately large.

This can cause a subtle form of bias. Most people never meet an evil person (or don't realize it if they do) so it is hard for them to truly understand or visualize what evil is. They might believe in evil in some abstract sense, but it remains a theoretical concept detached from any personal experience, like black holes or the ozone layer.

James Halperin's The Truth Machine long ago converted me to the idea that the best way to deal with this is to abandon privacy and the right to privacy as a societal ideal, and hope that our ability to thwart terrorists keeps pace with their increase in power. Even an opt-in total surveillance system would help a lot by reducing the number of suspects.

I should probably make the case against privacy in a top-level post at some point, but pretty much everything I'll say will be taken from that book. For example, I bet Amanda Knox and Raffaele Sollecito are currently cursing the fact that they don't have a government-timestamped video of themselves at the time of Meredith Kercher's murder.

On the other hand, the recent policies of the American Transportation Security Administration demonstrate how easy it is to implement policies that infringe on privacy without getting any corresponding reduction in risk.

I think the standard community answer to this question is "Have FAI before then."

This does seem standard, but it isn't very confidence inspiring.

I've thought of this from the angle of the Fermi paradox. Afaik, Fermi thought war was a major filter. Spam is a minor indicator that individual sociopathy could be another filter as individual power increases. How far are we from home build-a-virus kits?

The major hope [1] I can see is that any of the nano or bio tech which could be used to destroy the human race will have a run-up period, and there will be nano and bio immune systems which might be good enough that the human race won't be at risk, even though there may be large disasters.

[1] Computer programs seem much more able to self-optimize than nano and bio systems. Except that, of course, a self-optimizing AI would use nano and bio methods if they seemed appropriate.

This is not a cheering thought. I think the only reasonably popular ideology which poses a major risk is the "humanity is a cancer on the planet" sort of environmentalism - it seems plausible that a merely pretty good self-optimizing AI tasked with eliminating the human race for the sake of other living creatures would be a lot easier to build than an FAI, and it might be possible to pull a group of people together to work on it.

"Planet-cancer" environmentalists don't own server farms or make major breakthroughs in computer science, unless they're several standard deviations above the norm in both logistical competence and hypocrisy. Accordingly, they'd be working with techniques someone else developed. It's true that a general FAI would be harder to design than even a specific UFAI, but an AI with a goal along the lines of 'restore earth to it's pre-Humanity state and then prevent humans from arising, without otherwise disrupting the glorious purity of Nature' probably isn't easier to design than an anti-UFAI with the goal 'identify other AIs that are trying to kill us all and destroy everything we stand for, then prevent them from doing so, minimizing collateral damage while you do so,' while the latter would have more widespread support and therefore more resources available for it's development.

You're adding constraints to the "humanity is a cancer" project which make it a lot harder. Why not settle for "wipe out humanity in a way that doesn't cause much damage and let the planet heal itself"?

The idea of an anti-UFAI is intriguing. I'm not sure it's much easier to design than an FAI.

I think the major barrier to the development of a "wipe out humans" UFAI is that the work would have to be done in secret.

It seems to me that an anti-UFAI that does not also prevent the creation of FAIs would, necessarily, be just as hard to make as an FAI. Identifying an FAI without having a sufficiently good model of what one is that you could make one seems implausible.

Am I wrong?

You're at least plausible.

An anti-UFAI could have terms like 'minimal collateral damage' in its motivation that would cause it to prioritize stopping faster or more destructive AIs over slower or friendlier ones, voluntarily limit its own growth, accept ongoing human supervision, and cleanly self-destruct under appropriate circumstances.

An FAI is expected to make the world better, not just keep it from getting worse, and as such would need to be trusted with far more autonomy and long-term stability.

I'd also be worried about:

  • depressed microbiologists

  • religious fanatics who have too much trust that 'God will protect them' from their virus

  • Buddhists who lose their memetic immune system and start taking the 'material existence is inherently undesirable' aspect of their religion seriously, or for that matter a practitioner of an Abrahamic religion who takes the idea of heaven seriously.

Buddhists don't seem to go bad that way. I'm not sure that "material existence is undesirable" is a fair description of the religion - what people seem to conclude from meditation is that most of what they thought they were experiencing is an illusion.

"At the moment, you still need to be a fairly well informed terrorist in order to do any serious damage. But what happens when any disgruntled Induhvidual can build a weapon of mass destruction by ordering the parts through magazines?" - Scott Adams, The Dilbert Future, 1997

Thirteen years on, I don't think there's a good answer to that question yet.

In seeking to prevent such outcomes, you should focus much more on the technology than on the psychology, because the technology is the essential ingredient in these end-of-the-world scenarios and the specific psychology you describe is not an essential ingredient. Suppose there is a type of nanoreplicator which could destroy all life on Earth. Yes, it might be created and released by a suitably empowered angry person; but it might also be released for some other reason, or even just as an accident.

Sometimes this scenario comes up because someone has been imagining a world where everyone has their own desktop nanofactory, and then they suddenly think, what about the sociopaths? If anyone can make anything, that means anyone can make a WMD, which means a small minority will make and use WMDs - etc. But this just means that the scenario of "first everyone gets a nanofactory, then we worry about someone choosing to end the world" is never going to happen. The possibility of human extinction has been part of the nanotech concept from almost the beginning. This was diluted once you had people hoping to get rich by marketing nanotech, and further still once "nanotech" just became a sexy new name for "chemistry", but the feeling of peril has always hovered over the specific concept of replicating nanomachines, especially free-living ones, and any person or organization who begins to seriously make progress in that direction will surely know they are playing with fire.

There simply never will be a society of free wild-type humans with lasting open access to advanced nanotechnology. It's like giving a box of matches to every child in a kindergarten: the place would burn down very quickly. And maybe that is where we're headed anyway, not because some insane idiot really will give everyone on earth a desktop WMD-factory, but because the knowledge is springing up in too many places at once.

Ordinary monitoring and intervention (as carried out by the state) can't be more than a temporary tactic - it might work for a period of years, but it's not a solution that can define a civilization's long-term response to the challenge of nanotechnology, because in the long run there are just too many ways in which the deadly threat might materialize - designed in secret by a distributed process, manufactured in a similar way.

As with Friendly AI, the core of the long-term solution is to have people (and other intelligent agents) who want to not end the world in this way - so "psychology" matters after all - but we are talking about a seriously posthuman world order then, with a neurotechnocracy which studies your brain deeply before you are given access to civilization's higher powers, or a ubiquitous AI environment which invasively studies and monitors the value systems and real-time plans of every intelligent being. You're a transhumanist, so perhaps you can deal with such scenarios, but all of them are on the other side of a singularity and cannot possibly define a practical political or technical pre-singularity strategy for overcoming this challenge. They are not designed for a world in which people are still people and in which they possess the cognitive privacy, autonomy, and idiosyncrasy that they naturally have, and in which there are no other types of intelligent actor on the scene. Any halfway-successful approach for forestalling nanotechnological (and related) doomsdays in that world will have to be a tactical approach (again, tactical means that we don't care about it being a very long-term solution, it's just crisis management, a holding pattern) which focuses first on the specificities of the technology (what exactly would make it so dangerous, how can that be neutralized), and only secondarily on social and psychological factors behind its potential misuse.

I agree wholeheartedly with your concern. I think a more practical way of reducing risk than "develop FAI" (which is almost certainly 50+ years out, and probably 100+) is to actually take the War on Terror seriously. Sure, angry individuals are bad, but angry organizations are much, much worse, especially competent ones like Al Qaeda.

I suspect biologists should also care much more about bioterrorism than they currently do, as part of their social responsibility.