This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.

Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, a threat that has hung over us for millions of years.

Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.

It's also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It's not clear to me that, had we taken another 100 years to get nuclear weapons, we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.

In any case, I don't really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you're quite sure that the benefit of slowing down dangerous technologies like unfriendly AI outweighs the cost of slowing down beneficial technologies, I don't think slowing down technology is the right approach.

Instead, find ways to establish safeguards and create incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies we are concerned, because we don't think Iran's government is trustworthy enough to handle these risks. There aren't enough safeguards against unfriendly AI or incentives to develop friendly AI, but that's something the Singularity Institute or similar institutions could very well work on: lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.

Well, ultimately, that was sort of the collective strategy the world used, wasn't it? (Not quite; a lot of low-level Nazis were pardoned after the war.)

And you can't ignore the collective action, now can you?

It's more a relative thing: "not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be".

If so, then we're actually more rational, right? Because we're not biased against academia as most people are, and we aren't biased toward academia as most academics are.

It's not quite so dire. You usually can't do experiments from home, but you can interpret experiments from home thanks to Internet publication of results. So a lot of theoretical work in almost every field can be done from outside academia.

"otherwise we would see an occasional example of someone making a significant discovery outside academia."

Should we all place bets now that it will be Eliezer?

Negative selection may be good, actually, for the vast majority of people who are ultimately going to be mediocre.

It seems like it may hurt the occasional genius... but then again, there are a lot more people who think they are geniuses than really are geniuses.

In treating broken arms? Minimal difference.

In discovering new nanotechnology that will revolutionize the future of medicine? Literally all the difference in the world.

I think a lot of people don't like using percentiles because they are zero-sum: Exactly 25% of the class is in the top 25%, regardless of whether everyone in the class is brilliant or everyone in the class is an idiot.
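To illustrate (a toy sketch in Python, with made-up scores): percentile rank depends only on relative ordering within the group, so a uniformly brilliant class and a uniformly weak class produce identical percentile distributions.

```python
# Toy demonstration that percentiles are zero-sum: rank depends only on
# relative ordering, not on absolute quality. Scores are hypothetical.

def percentile_rank(scores, x):
    """Percentage of scores strictly below x."""
    below = sum(1 for s in scores if s < x)
    return 100.0 * below / len(scores)

brilliant_class = [95, 96, 97, 98]  # everyone does superbly
weak_class      = [15, 16, 17, 18]  # everyone does poorly

for cls in (brilliant_class, weak_class):
    print([percentile_rank(cls, s) for s in cls])
# Both print [0.0, 25.0, 50.0, 75.0]: exactly one student per quartile,
# whether the class is full of geniuses or full of idiots.
```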

Well, you want some negative selection: Choose dating partners from among those who are unlikely to steal your money, assault you, or otherwise ruin your life.

This is especially true for women, for whom the risk of being raped is considerably higher and obviously worth screening out through negative selection.

I don't think it's quite true that "fail once, fail forever", but the general point is valid that our selection process is too much about weeding out rather than choosing the best. Also, academia doesn't seem to be very good at the negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who have fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)
