Comment author: [deleted] 19 December 2012 08:55:29AM *  4 points [-]

So while it might be premature to discuss actual political issues on Less Wrong, searching for techniques to make such discussions possible would be a very valuable endeavor. Political trends affect the well-being of hundreds of millions of people in substantial ways, so even a modest improvement in the quality of discourse could have a substantial payoff.

I didn't understand this at first, but now it's clear: improving the discourse on LessWrong would have an impact on actual policy. Needless to say, I fully support anti-democratic coups by rationalists, so let's start hoarding weapons and decide which country to start with! Due to geographic convenience and control over Silicon Valley, which is vital to existential risk reduction, a Protectorate of California sounds nice to me. Maybe we can outsource the boring parts of running the state to Apple.

On the slim chance, however, that you think a higher level of discourse on LessWrong would merely lead to us pointing out the irrational side to the general public, or to something as silly as our voting the right way actually mattering, then the value of such information is remarkably low.

In response to comment by [deleted] on That Thing That Happened
Comment author: ewbrownv 20 December 2012 08:55:04PM 0 points [-]

Wow, look at all the straw men. Is there an actual reasoned position in there among the fashionable cynicism? If so, I can't find it.

One of the major purposes of Less Wrong is allegedly the promotion of more rational ways of thinking among as large a fraction of the general population as we can manage to reach. Finding better ways to think clearly about politics might be an especially difficult challenge, but popularizing the results of such an attempt isn't necessarily any harder than teaching people about the sunk cost fallacy.

But even if you think raising the level of public discourse is hopeless, being able to make accurate predictions of your own can also be quite valuable. Knowing things like "the Greens' formula for winning elections forces them to drive any country they control into debt and financial collapse", or "the Blues hate the ethnic group I belong to, and will oppress us as much as they can get away with" can be rather important when deciding where to live and how to manage one's investments, for example.

Comment author: FiftyTwo 18 December 2012 03:48:12PM 5 points [-]

[Serious comment]

This is funny and all, but I worry that by mocking political signalling we miss the fact that there are real, substantive discussions to be had. Blue/Green value face-offs are obviously wrong, but there are empirically resolvable issues that come under the realm of "politics", and by rejecting all forms of "political" discussion we remove our ability to talk seriously about them.

E.g. Gun control (which I assume this is in reference to) is a controversial issue in the US, but the question of whether policy X is likely to be effective at producing outcome Y is an empirical one which can be answered by referencing comparable past examples. This empirical question is separate from any messy political stuff about values and rights.

Comment author: ewbrownv 18 December 2012 05:32:22PM 1 point [-]

I tend to agree with your concern.

Discussing politics is hard because all political groups make extensive use of lies, propaganda and emotional appeals, which turns any debate into a quagmire of disputed facts and mind-killing argument. It can be tempting to dismiss the whole endeavor as hopeless and ignore it while cynically deriding those who stay involved.

Trouble is, political movements are not all equal. If they gain power, some groups will use it to make the country wealthy so they can pocket a cut of the money. Others will try to force everyone to join their religion, or destroy the economy in some wacky scheme that could never have worked, or establish an oppressive totalitarian regime and murder millions of people to secure their position. These results are not equal.

So while it might be premature to discuss actual political issues on Less Wrong, searching for techniques to make such discussions possible would be a very valuable endeavor. Political trends affect the well-being of hundreds of millions of people in substantial ways, so even a modest improvement in the quality of discourse could have a substantial payoff. At the very least, it would be nice if we could reliably identify the genocidal maniacs before they come into power...

Comment author: DanArmak 17 December 2012 12:02:27PM 7 points [-]

Moody is the avatar of being pessimistic enough that your expectations overshoot and undershoot reality appropriately often

It's funny that Quirrell ought to be that too, because he's hyperrational and reliably cynical about people, and yet his backstory is that he failed to conquer England because he wasn't cynical enough and thought people would follow a Light Lord instead of backstabbing him.

Comment author: ewbrownv 17 December 2012 07:23:36PM 22 points [-]

Actually, I see a significant (at least 10%) chance that the person currently known as Quirrell was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see; he was just trying to create a situation where people would welcome a savior...

This would neatly explain the confusion Harry noted over how a rational, inventive wizard could have failed to take over England. It leaves open some questions about why he continued his reign of terror after that ploy failed, but there are several obvious possibilities there. The big question would be what actually happened to either A) stop him, or B) make him decide to fake his death and vanish for a decade.

Comment author: CarlShulman 07 December 2012 01:00:18PM *  1 point [-]

Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species. Most worrying aspect: likely to cause total (not partial) human extinction

I agree that AI risk is more likely to be existential, conditional on its being at least catastrophic, than the other things you have mentioned. This is especially true in the sense that most of the accessible universe gets used in ways that fall far short of its potential (the 'astronomical waste' point of view).

However, see this discussion of "AI will keep some humans around" arguments (or that it will record data about humans, and recreate some in experiments and the like).

All solutions proposed so far have turned out to be very inadequate.

Well, none have been tested. Potential problems have been found or suggested, but depending on technological and social factors many might work.

Comment author: ewbrownv 10 December 2012 11:28:21PM 0 points [-]

If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.

Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...

Comment author: Bgoertzel 10 December 2012 04:03:48PM 1 point [-]

Thanks for sharing your personal feeling on this matter. However, I'd be more interested if you had some sort of rational argument in favor of your position!

The key issue is the tininess of the hyperbubble you describe, right? Do you have some sort of argument regarding some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mindspace in mind?)

To put it differently: What are the properties you think a mind needs to have, in order for the "raise a nice baby AGI" approach to have a reasonable chance of effectiveness? Which are the properties of the human mind that you think are necessary for this to be the case?

Comment author: ewbrownv 10 December 2012 09:06:16PM 11 points [-]

Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.

Unfortunately, while the cog sci community has produced reams of evidence on this point, they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is turning out to be a long research project. Partial results exist for a lot of intriguing examples, along with data on what goes wrong when different pieces are broken, but it's going to be a while before we have a complete picture.

An AI researcher who claims his program will respond like a human child is implicitly claiming either that this whole body of research is wrong (in which case I want to see evidence), or that he's somehow implemented all the necessary adaptations in code despite the fact that no one knows how they all work (yeah, right). Either way, this isn't especially credible.

Comment author: NancyLebovitz 10 December 2012 06:31:09PM 5 points [-]

I believe that part of what caused the rise of sensitivity-based discourse is that some people got tired of discourse that seemed to have a premise of "let's calmly consider the plausible claim that the interests of people like you are dispensable", and lost points for showing anger. (Other motives include quite ordinary power-seeking.)

Comment author: ewbrownv 10 December 2012 06:53:09PM 4 points [-]

As an explanation for a society-wide shift in discourse, that seems quite implausible. If such a change has actually happened, the cause would most likely be some broad cultural or sociological change that took place within the same time frame.

Comment author: Eugine_Nier 06 December 2012 05:10:03AM 4 points [-]

My relatively uninformed impression was that the particularly unique nanotech risk was poor programming leading to grey goo.

The problem is that the grey goo has to out-compete the biosphere, which is hard if you're designing nanites from scratch. If you're basing them off existing lifeforms, that's synthetic biology.

Comment author: ewbrownv 07 December 2012 05:13:18PM 0 points [-]

Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we'll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.

But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using technologies that such organisms can't begin to compete with (mass production, nuclear power, steel armor, guns). Similarly, there's a point of maturity at which nanotech systems built with technologies microorganisms can't emulate (centralized computation, digital communication, high-density macroscopic energy sources) become capable of displacing any population of natural life.

So I'd agree that it isn't going to happen by accident in the early stages of nanotech development. But at some point it becomes feasible for governments to design such a weapon, and after that the effort required goes down steadily over time.

Comment author: CarlShulman 05 December 2012 04:39:32PM *  8 points [-]

Most of the capabilities offered for hypothetical Drexlerian technology seem to be just quantitative increases in already existing trends:

  • Production of more nuclear weapons; nuclear arsenals are down from the Cold War, and vastly, vastly more nuclear weapons could be constructed with existing military budgets
  • More computation enabling AI run amok; cf. Moore's Law
  • Artificial diseases and disruptive organisms/'grey goo'; cf. synthetic biology
  • More conventional weapons; there are already plenty of weapons to kill most people, but the fatality rate would decline as populations fell
  • Some kind of non-AGI robotic weapons that keep killing survivors even as population crashes, and aren't recalled by either side, as in the SF story Second Variety; this is a question of improved robotics and manufacturing productivity, but 'nanotech' isn't that different from very efficient automated factories

I don't see much distinctive 'nanotechnology x-risk' that couldn't be realized by continued ordinary technological progress and much improved automation. So any significance has to come from nanotechnology prospects boosting our expectation of those capabilities on some timescales, which demands some argument that nanotech is going to progress faster than expected and drive those fields ahead of trend.

Comment author: ewbrownv 07 December 2012 05:00:52PM 0 points [-]

The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.

So yes, it's mostly just quantitative increases over existing trends. But it's a bunch of very large increases that would be impossible without something like nanotech, all happening at the same time.

Comment author: JoshuaZ 05 December 2012 07:15:23PM 5 points [-]

"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.

Yes it is. Right now, we can't deal with a variety of basic x-risks that require advanced technology to counter. Big asteroids hit every hundred million years or so, and many other disasters can easily wipe out a technologically non-advanced species. If our tech level is reduced to even the late 19th century and stays static, then civilization is simply dead and doesn't know it until something comes along to finish it off.

The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive.

The problem is exactly that: they aren't as cost-competitive, and have much lower EROEI (energy returned on energy invested). That makes them much less useful, and it's not even clear whether they could be used to climb back to our current tech level. For example, even getting an EROEI above 1 on oil shale requires a fair bit of advanced technology. Similarly, most of the remaining coal is in much deeper locations than classical coal (we've consumed most of the coal that was easy to get to).

Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.

All of these require high tech levels to start with, or have other problems. Geothermal only works in limited locations. Solar requires extremely high tech levels to even have a positive energy return. Nuclear power has similar issues, along with requiring massive processing infrastructure before economies of scale kick in. Both solar and wind have terrible trouble providing consistent power, which matters for many uses such as manufacturing; efficient batteries are one answer to that, but they also require advanced tech. It may help to keep in mind that even with the advantages we had the first time around, the vast majority of early electric companies simply failed. There's an excellent book that discusses many of these issues, Maggie Koerth-Baker's "Before the Lights Go Out"; it focuses on the current American electric grid, but covers much of this in that context.
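To make the EROEI point concrete, here is a minimal sketch; every figure in it is an invented illustrative assumption, not real data for any fuel:

```python
# Minimal EROEI sketch: EROEI = energy returned / energy invested.
# All figures below are invented for illustration, not real data for any fuel.

def eroei(energy_out_gj, energy_in_gj):
    """Energy returned on energy invested; > 1 means a net energy source."""
    return energy_out_gj / energy_in_gj

# (energy delivered, energy spent extracting and refining), in GJ -- hypothetical
sources = {
    "easy conventional oil":  (100.0, 5.0),    # shallow well, little processing
    "deep coal seam":         (100.0, 25.0),   # more digging and hauling
    "oil shale, low tech":    (100.0, 110.0),  # retorting costs exceed the yield
    "oil shale, better tech": (100.0, 60.0),   # a marginal net source at best
}

for name, (out_gj, in_gj) in sources.items():
    ratio = eroei(out_gj, in_gj)
    verdict = "net source" if ratio > 1 else "net sink"
    print(f"{name:24s} EROEI = {ratio:5.2f}  ({verdict})")
```

The point of the toy numbers is only that a fuel with EROEI below 1 costs more energy to extract than it yields, so it can't bootstrap an industrial recovery no matter how much of it is in the ground.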

Comment author: ewbrownv 06 December 2012 05:39:47PM 0 points [-]

Now you're just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.

I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've bought into a line of political propaganda that has little relation to reality - there's a large body of evidence that we're nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don't expect you to agree, but anyone who's curious should be able to find both sides of this argument with a little googling.

Comment author: Stuart_Armstrong 06 December 2012 01:50:30PM 0 points [-]

Different components in the model can be tested separately. How stratospheric gases disperse can be tested. How black soot rises in the atmosphere, in a variety of heat conditions, can be tested. How black soot affects absorption of solar radiation can be simulated in the laboratory, and tested in indirect ways (as Nornagest mentioned, by comparing with volcanic eruptions).

Comment author: ewbrownv 06 December 2012 05:18:59PM 0 points [-]

Yes, and that's why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn't the case.

When you set out to build a model of a large, non-linear system you're confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can't just take the most important-looking processes and ignore the rest, because the behavior of any non-linear system tends to be dominated by unexpected interactions between obscure parts of the system that seem unrelated at first glance.

So what actually happens is you implement rough approximations of the effects the specialists in the field think are important, and get a model that outputs crazy nonsense. If you're honest, the next step is a long process of trying to figure out what you missed, adding things to the model, comparing the output to reality, and then going back to the drawing board again. There's no hard, known-to-be-accurate physics modeling involved here, because that would take far more CPU power than any possible system could provide. Instead it's all rules of thumb and simplified approximations, stuck together with arbitrary kludges that seem to give reasonable results.

Or you can take that first, horribly broken model, slap on some arbitrary fudge factors to make it spit out results the specialists agree look reasonable, and declare your work done. Then you get paid, the scientists can proudly show off their new computer model, and the media will credulously believe whatever predictions you make because they came out of a computer. But in reality all you've done is build an echo chamber - you can easily adjust such a model to give any result you want, so it provides no additional evidence.
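As a toy illustration of that last point (this is not any real climate model; the stand-in "physics", the soot figure, and the target number are all invented), a single free fudge factor is enough to make a model reproduce whatever answer the modeler already expected:

```python
import math

# Toy stand-in "physics" (invented): surface cooling grows with injected soot
# and saturates at high loadings. The functional form is arbitrary -- which is the point.
def temperature_drop(soot_tg, fudge):
    return fudge * math.log1p(soot_tg / 10.0)

expected_drop_c = 15.0    # the answer the modelers already "know" is right (illustrative)
soot_scenario_tg = 150.0  # assumed soot injection for the scenario (illustrative)

# "Tune" the free parameter so the model reproduces the expected answer exactly.
fudge = expected_drop_c / math.log1p(soot_scenario_tg / 10.0)

print(f"tuned fudge factor: {fudge:.2f}")
print(f"model 'prediction': {temperature_drop(soot_scenario_tg, fudge):.1f} C of surface cooling")
# Whatever number we started from, the model now echoes it back, so running it
# provides no independent evidence.
```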

In the case of nuclear winter there was no preexisting body of climate science that predicted a global catastrophe. There was just a couple of scientists who thought it would happen, and built a model to echo their prediction.
