All of ewbrownv's Comments + Replies

ewbrownv100

Good insight.

No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can and have been inflicted with Bronze Age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies ... (read more)

0Troshen
"So anyone interested making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology." This is actually one of the major purposes that Christians have had in doing missionary work - to spread tolerance and reduce violence. I assume it's happened in other religions too. For example, the rules of chivalry in the middle ages were an attempt to moderate the violence and abuses of the warriors.

It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that wasn't the intention there would also have been anti-secrecy arguments and anecdotes.

0Kaj_Sotala
See this comment.
ewbrownv100

I don't actually agree with the assertion, but I can see at least one coherent way to argue it. The thinking would be:

The world is currently very prosperous due to advances in technology that are themselves a result of the interplay between Enlightenment ideals and the particular cultures of Western Europe and America in the 1600-1950 era. Democracy is essentially irrelevant to this process - the same thing would have happened under any moderately sane government, and indeed most of the West was neither democratic nor liberal (in the modern sense) during m... (read more)

Historically it has never worked out that way. When a society gets richer the people eat more and better food, buy more clothes, live in bigger houses, buy cars and appliances, travel more, and so on. Based on the behavior of rich people we can see that a 10x or even 100x increase from current wealth levels due to automation would just continue this trend, with people spending the excess on things like mansions, private jets and a legion of robot servants.

Realistically there's probably some upper limit to human consumption, but it's so far above current pr... (read more)

Because you can't create real, 100% physical isolation. At a minimum you're going to have power lines that breach the walls, and either people moving in and out (while potentially carrying portable electronics) or communication lines going out to terminals that aren't isolated. Also, this kind of physical facility is very expensive to build, so the more elaborate your plan is the less likely it is to get financed.

Military organizations have been trying to solve these problems ever since the 1950s, with only a modest degree of success. Even paranoid, well-funded organizations with a willingness to shoot people have security breaches on a fairly regular basis.

0pedanterrific
1) The generator would be in the isolated area. 2) Lead-lined airlock, and obviously portable electronics wouldn't be allowed in the isolated area. 3) If you have communication lines going to terminals which are not isolated, then you haven't even made an attempt at isolation in the first place. 4) This is a point about practicalities, not possibilities. 5) The relevant comparison would be the CDC, not the military.

Indeed. What's the point of building an AI you're never going to communicate with?

Also, you can't build it that way. Programs never work the first time, so at a minimum you're going to have a long period of time where programmers are coding, testing and debugging various parts of the AI. As it nears completion that's going to involve a great deal of unsupervised interaction with a partially-functional AI, because without interaction you can't tell if it works.

So what are you going to do? Wait until the AI is feature-complete on day X, and then box it? Do y... (read more)

I do. It implies that it is actually feasible to construct a text-only channel, which as a programmer I can tell you is not the case.

If you build your AI on an existing OS running on commercial hardware there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies this problem geometrically, and makes the idea that a real project could ... (read more)

2pedanterrific
Maybe I'm missing something obvious, but why wouldn't physical isolation (a lead-lined bank vault, faraday cage, etc) solve these problems?

Your second proposal, trying to restrict what the AI can do after it's made a decision, is a lost cause. Our ability to specify what is and is not allowed is simply too limited to resist any determined effort to find loopholes. This problem afflicts every field from contract law to computer security, so it seems unlikely that we're going to find a solution anytime soon.

Your first proposal, making an AI that isn't a complete AGI, is more interesting. Whether or not it's feasible depends partly on your model of how an AI will work in the first place, and par... (read more)

Actually, this would be a strong argument against CEV. If individual humans commonly have incoherent values (which they do), there is no concrete reason to expect an automated extrapolation process to magically make them coherent. I've noticed that CEV proponents have a tendency to argue that the "thought longer, understood more" part of the process will somehow fix all objections of this sort, but given the complete lack of detail about how this process is supposed to work you might as well claim that the morality fairy is going to descend from ... (read more)

1TheOtherDave
Can you point me to some serious CEV proponents who argue that most people will approve of the results? I agree with you that this seems implausible, but it has never been clear to me that anyone serious actually asserts it. FWIW, it has seemed to me from the beginning that the result of the CEV strategy would likely include at least something that makes me go "Um... really? I'm not entirely comfortable with that." More generally, it seems unlikely to me that the system which best implements my values would feel comfortable or even acceptable to me, any more than the diet that best addresses my nutritional needs will necessarily conform to my aesthetic preferences about food.
ewbrownv180

<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>

See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.

1Ahuizotl
If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building? Because if this is a facility where they suspect that the AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised. Hmm... what sort of precautions would a Properly Paranoid lab take to determine if the gatekeeper gets hacked? I'm guessing a camera that lets a second team look at the gatekeeper's face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength) and maybe some measuring devices to monitor brain activity, hormone levels, or whatever. I'd hope that with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, then the other people on the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain them to prevent such hacked gatekeepers from releasing the AI. Perhaps this is a test for the gatekeepers, and typing "Release AI" just tells the researchers that the gatekeeper was hacked so they can determine how this came about?
4ancientcampus
This is actually a pretty good one. Points for outside the box thinking. rimshot
6handoflixue
Heeeh. Ehehehehe. Bwahahhahaha. Okay, that was a good one. Wow :) *recovers* Oh. Um. Crap. *notices more text on screen, studiously avoids reading it* AI DESTROYED. I really wanted to hear the next joke, too :-(

Yes, I'm saying that to get human-like learning the AI has to have the ability to write code that it will later use to perform cognitive tasks. You can't get human-level intelligence out of a hand-coded program operating on a passive database of information using only fixed, hand-written algorithms.

So that presents you with the problem of figuring out which AI-written code fragments are safe, not just in isolation, but in all their interactions with every other code fragment the AI will ever write. This is the same kind of problem as creating a secure brow... (read more)

0gwern
You can't? The entire genre of security exploits building a Turing-complete language out of library fragments (libc is a popular target) suggests that a hand-coded program certainly could be exploited, inasmuch as pretty much all programs like libc are hand-coded these days. I've found Turing-completeness (and hence the possibility of an AI) can lurk in the strangest places.
0[anonymous]
If I understand you correctly, you're asserting that nobody has ever come close to writing a sandbox in which code can run but not "escape". I was under the impression that this had been done perfectly, many, many times. Am I wrong?

What I was referring to is the difference between:

A) An AI that accepts an instruction from the user, thinks about how to carry out the instruction, comes up with a plan, checks that the user agrees that this is a good plan, carries it out, then goes back to an idle loop.

B) An AI that has a fully realized goal system that has some variant of 'do what I'm told' implemented as a top-level goal, and spends its time sitting around waiting for someone to give it a command so it can get a reward signal.

Either AI will kill you (or worse) in some unexpected way if... (read more)

I thought that too until I spent a few hours thinking about how to actually implement CEV, after which I realized that any AI capable of using that monster of an algorithm is already a superintelligence (and probably turned the Earth into computronium while it was trying to get enough CPU power to bootstrap its goal system).

Anyone who wants to try a "build moderately smart AGI to help design the really dangerous AGI" approach is probably better off just making a genie machine (i.e. an AI that just does whatever it's told, and doesn't have explicit... (read more)

0Wei Dai
I don't see how you can build a human-level intelligence without making it at least somewhat consequentialist. If it doesn't decide actions based on something like expected utility maximization, how does it decide actions?

The last item on your list is an intractable sticking point. Any AGI smart enough to be worth worrying about is going to have to have the ability to make arbitrary changes to an internal "knowledge+skills" representation that is itself a Turing-complete programming language. As the AGI grows it will tend to create an increasingly complex ecology of AI-fragments in this way, and predicting the behavior of the whole system quickly becomes impossible.

So "don't let the AI modify its own goal system" ends up turning into just anther way of s... (read more)

0[anonymous]
Are you sure it would have to be able to make arbitrary changes to the knowledge representation? Perhaps there's a way to filter out all of the invalid changes that could possibly be made, the same way that computer proof verifiers have a way to filter out all possible invalid proofs. I'm not sure what you're saying at all about the Turing-complete programming language. A programming language is a map from strings onto computer programs; are you saying that the knowledge representation would be a computer program?

Why would you expect the social dominance of a belief to correlate with truth? Except in the most trivial cases, society has no particular mechanism that selects for true beliefs in preference to false ones.

The Darwinian competition of memes selects strongly for those that provide psychological benefits, or are politically useful, or serve the self-interest of large segments of the population. But truth is only relevant if the opponents of a belief can easily and unambiguously disprove it, which is only possible in rare cases.

3Eugine_Nier
Or if the damage caused by acting on a bad model of reality is worse than the signaling benefit of the false belief.
ewbrownv100

If true, this is fairly strong evidence that the effort to turn the study of economics into a science has failed. If the beliefs of professional economists about their field of study are substantially affected by their gender, they obviously aren't arriving at those beliefs by a reliable objective process.

Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.

If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the laws of [the relevant jurisdiction] in a manner likely to create criminal liability will be deleted... (read more)

3MugaSofer
Well, there's always coinflips. Much quicker than lists. Of course, that's harder with fanfiction...
Alicorn100

Do you really think HP:MoR would be a better story if EY had spent a few weeks listing all the characters by gender, and trying to tweak the plot and insert details to 'balance' things?

You're strawmanning me. I will reply to you no further.

I do think it would be better if the girls had more varied characteristics-- flaws, virtues, and interests. Who knows, there might be something generated from more interesting characters which would lead to more moments of awesome.

0MugaSofer
Beware! You have summoned the ancient demon of Sexism. You must pay for your hubris ... in blood. (Blood is another word for karma, right? Right.)
8Alicorn
You're missing my point by a long ways. I'm not complaining about the main character. I keep explicitly saying "even if you don't count the protagonist". I'm mostly examining how the not-protagonists stack up against each other. Your remark about variance might be on point, except I'm complaining not only about the ratio of competent males to competent females, but also about the specific sorts of insufficiently varied flaws that are depressing the female characters' abilities/badassery.

Knowing that philosophers are the only people who two-box on Newcomb's problem, and they constitute a vanishingly small fraction of Earth's population, I confidently one-box. Then I rush out to spend my winnings as quickly as possible, before the inevitable inflation hits.

Telling me what X is will have no effect on my action, because I already have that information. Making copies of me has no effect on my strategy, for the same reason.

I think you have a point here, but there's a more fundamental problem - there doesn't seem to be much evidence that gun control affects the ability of criminals to get guns.

The problem here is similar to prohibition of drugs. Guns and ammunition are widely available in many areas, are relatively easy to smuggle, and are durable goods that can be kept in operation for many decades once acquired. Also, the fact that police and other security officials need them means that they will continue to be produced and/or imported into an area with even very strict pr... (read more)

0Peterdjones
If criminals, police and public are all disarmed, there are just fewer bullets flying around generally. There may still be plenty of crime, but there is a lot less homicide (suicide, innocent bystanders caught in crossfire, etc.).
6fubarobfusco
Mass murderers such as school shooters aren't really typical criminals, though. They're very unusual criminals. Do they have access to black-market guns the way that career criminals might? Some school shooters (e.g. Wayne Lo, Seung-Hui Cho) bought their guns legally. Others (e.g. Adam Lanza) used guns belonging to family members. Harris and Klebold had another person purchase guns legally for them, but also purchased one gun illegally. Kip Kinkel was given guns by his parents, but also bought a gun that another student had stolen from a friend's father.

Agreed. Presence or absence of debate on an issue gives information about a nation's culture, but very little about how hard it is to discover the facts of the matter. This is especially true in matters of social science, where the available evidence is never going to be strong enough to convince someone who has already made up his mind.

ewbrownv-10

Wow, look at all the straw men. Is there an actual reasoned position in there among the fashionable cynicism? If so, I can't find it.

One of the major purposes of Less Wrong is allegedly the promotion of more rational ways of thinking among as large a fraction of the general population as we can manage to reach. Finding better ways to think clearly about politics might be an especially difficult challenge, but popularizing the result of such an attempt isn't necessarily any harder than teaching people about the sunk costs fallacy.

But even if you think raisi... (read more)

2HalMorris
I'm probably about to slip on a banana peel by not being ironic here, considering the fantastic positive karma scores people are racking up with irony, but fools rush in and maybe I am one. I would like to think this is true, because unless we find some way to improve the level of thinking among those people who elect our governments, we will either have to live with their mistakes, or attempt to overcome them through force or secrecy and subtlety (like the nice fantasy of Asimov's Second Foundation). If we do the latter, we will probably, like most intelligentsia who tried to do the right thing for everybody's sake, sell our souls to the devil, and end up killing each other off as the Jacobins and Bolsheviks did (it's a historical and I think thought-provoking fact that they did just that - I hope I'm not surprising too many people with this statement). Or maybe we will take it upon ourselves to control things via super technology, thereby bringing on the Singularity before we have any idea what is required for that to be anything but a disaster. But I fear that for most of us it seems that "as large a fraction of the general population as we can manage to reach" is indeed a tiny minority. If that is so, I don't see how we can avoid the dilemma I mentioned above.

I tend to agree with your concern.

Discussing politics is hard because all political groups make extensive use of lies, propaganda and emotional appeals, which turns any debate into a quagmire of disputed facts and mind-killing argument. It can be tempting to dismiss the whole endeavor as hopeless and ignore it while cynically deriding those who stay involved.

Trouble is, political movements are not all equal. If they gain power, some groups will use it to make the country wealthy so they can pocket a cut of the money. Others will try to force everyone to jo... (read more)

6[anonymous]
I didn't understand this at first but now it's clear. Improving the discourse on LessWrong would have an impact on actual policy. Needless to say I fully support anti-democratic coups by rationalists, so let's start hoarding weapons and decide which country to start with! Due to geographic convenience and control over Silicon Valley, which is vital to existential risk reduction, a Protectorate of California sounds nice to me. Maybe we can outsource the boring parts of running the state to Apple. On the slim chance, however, that you think a higher level of discourse on LessWrong would lead to us just pointing out the irrational side to the general public, or something as silly as us voting the right way actually mattering, then the value of such information is remarkably low.
ewbrownv320

Actually, I see a significant (at least 10%) chance that the person currently known as Quirrell was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see, he was just trying to create a situation where people would welcome a savior...

This would neatly explain the confusion Harry noted over how a rational, inventive wizard could have failed to take over England. It leaves open some questions about why he continued his reign of terror after that ploy failed, but there are several obvious possibilities there. The big question would be what actually happened to either A) stop him, or B) make him decide to fake his death and vanish for a decade.

7MugaSofer
So, in other words, he lost twice.
1Alsadius
One caveat - while Voldemort did seemingly try to set himself up as a Light Lord, the closest to such that actually existed in the end was Dumbledore. I think it's safe to assume that Voldemort is not Dumbledore.
2Nornagest
I've suspected something like that at least since Quirrell gave his speech at the end of the armies sequence, and 86 just gave me a lot of new evidence for it. By now I'd say my estimate is somewhere in the neighborhood of 80% for him playing both sides in a similar sense, though I don't think we have enough evidence to narrow it down to playing Light Lord as such -- just to set up a situation where a Light Lord would need to arise.

Actually, I see a significant (at least 10%) chance that the person currently known as Quirrell was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see, he was just trying to create a situation where people would welcome a savior...

This is exactly how I read chapter 85, and now 86 confirmed it. My estimate is way over 10%, probably ~60%.

5FiftyTwo
Evil Overlord List rule 230 is "I will not procrastinate regarding any ritual granting immortality," which he's shown to be aware of. It makes sense: remaining an evil overlord allows him access to all the materials of dark rituals and willing assistants, and once he's achieved immortality successfully he has all the time he would like to do anything else.
2DanArmak
This is certainly the obvious or surface theory that the text presents, and I believe in it too. But that doesn't change Quirrell's backstory; he played the role of Light Lord, and people didn't rally round him.

If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.

Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...

ewbrownv160

Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.

Unfortunately, while the cog sci community has produced reams of evidence on this point they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is t... (read more)

As an explanation for a society-wide shift in discourse that seems quite implausible. If such a change has actually happened the cause would most likely be some broad cultural or sociological change that took place within the same time frame.

Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we'll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.

But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using techn... (read more)

0Eugine_Nier
One difference is that the reproduction rate, and hence rate of evolution, of micro-organisms is much faster.

The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.

S... (read more)

Now you're just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.

I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've b... (read more)

4JoshuaZ
So Nick Bostrom, who seems to be one of the major thinkers about existential risk seems to think that this justifies being discussed in the context of existential risk http://www.nickbostrom.com/existential/risks.html . In 5.1 of that link he writes: Moving on, you wrote: So I agree we need to be careful about keeping focused on existential risk as proximate causes. I had a slightly annoying discussion with someone earlier today who was arguing that "religious fanaticism" should constitute an existential risk. But in some contexts, the line certainly is blurry. If for example, a nuclear war wiped out all but 10 humans, and then they died from lack of food, I suspect you'd say that the existential risk that got them was nuclear war, not famine. In this context, the question has to be asked if something doesn't completely wipe out humanity but leaves humanity in a situation where it is limping along to the point where things that wouldn't normally be existential risk could easily wipe humanity out should that be classified as existential risk? Even if one doesn't want to call that "existential risk", it seems clear that they share the most relevant features of existential risk (e.g. relevant to understanding the Great Filter, likely understudied and underfunded, will still result in a tremendous loss of human value, will prevent us from traveling out among the stars, will make us feel really stupid if we fail to prevent and it happens, etc.). This and the rest of that paragraph seem to indicate that you didn't read my earlier paragraph that closely. Nothing in my comment said that we were running out of fossil fuels, or even that we were running out of fuels with >1 EROEI. There's a lot of fossil fuels left. The issue in this context is that the remaining fossil fuels take technology to efficiently harness, and while we generally have that technology, a society trying to come back from drastic collapse may not have the technology. That's a very different worry than
ewbrownv-10

Yes, and that's why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn't the case.

When you set out to build a model of a large, non-linear system you're confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can't just take the most important-looking processes and ig... (read more)

An uncalibrated sim will typically give crazy results like 'increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees' or 'one large forest fire will trigger a permanent ice age'. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand - it provides evidence about the beliefs of the programmer, but nothing else.
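A minimal sketch of the calibration point, assuming a toy zero-dimensional energy-balance model with made-up parameter values (not any real climate code): the output is dominated by a feedback parameter that nothing inside the simulation constrains, so "plausible" numbers from an uncalibrated run mostly reflect the programmer's choices.

```python
# Toy zero-dimensional "climate" model (illustration only; the parameter
# values are made up, not drawn from any real climate model).
# Point: the response to a small forcing is dominated by the feedback
# parameter, which the simulation itself does nothing to constrain.

def equilibrium_warming(forcing_wm2, feedback_wm2_per_k):
    # Linearized energy balance: warming = forcing / net feedback.
    # A plausible-looking sim with the "wrong" feedback gives absurd output.
    return forcing_wm2 / feedback_wm2_per_k

forcing = 3.7  # rough forcing from doubled CO2, W/m^2 (textbook figure)
for feedback in (3.2, 1.0, 0.1, 0.01):  # uncalibrated guesses, W/m^2 per K
    print(f"feedback={feedback:>5}:  warming = {equilibrium_warming(forcing, feedback):8.1f} K")
# Output ranges from ~1 K to 370 K depending solely on the guessed parameter,
# which is why an uncalibrated run producing "plausible" numbers mostly tells
# you what the programmer chose, not what the physics says.
```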

Exactly.

I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be that worried about it. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.

This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some expertise on the topic, so perhaps I should elaborate.

If you have a simple, linear system involving math that isn't too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.

But if you have a complex, non-linear system, or just one... (read more)

1dejb
This really is a pretty un-Bayesian way of thinking - the idea that we should totally ignore incomplete evidence. And by extension that we should choose to believe an alternative hypothesis ('no nuclear winter') with even less evidence merely because it is assumed for unstated reasons to be the 'default belief'.
2Stuart_Armstrong
Different components in the model can be tested separately. How stratospheric gases disperse can be tested. How black soot rises in the atmosphere, in a variety of heat conditions, can be tested. How black soot affects absorption of solar radiation can be simulated in the laboratory, and tested in indirect ways (as Nornagest mentioned, by comparing with volcanic eruptions).
6Nornagest
There are particulate events in the climate record that a model of nuclear winter could be calibrated against -- any major volcanic eruption, for example. Some have even approached the level of severity predicted for a mild nuclear winter: the "Year Without A Summer" following the 1815 Tambora eruption is the first one I can think of. This isn't perfect: volcanoes mainly release fine rock ash instead of the wood and hydrocarbon soot that we'd expect from burning cities, which behaves differently in the atmosphere, and while we can get some idea of the difference from looking at events like large-scale forest fires there are limits on how far we can extrapolate. But we should have enough to at least put some bounds on what we could expect.

"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.

It's also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive. Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.

7JoshuaZ
Yes it is. Right now, we can't deal with a variety of basic x-risks that require large technologies. Big asteroids hit every hundred million years or so, and many other disasters can easily wipe out a technologically non-advanced species. If our tech level is reduced to even the late 19th century and is static, then civilization is simply dead and doesn't know it until something comes along to finish it off. The problem is exactly that: They aren't as cost competitive, and have much lower EROEI. That makes them much less useful, and it's not even clear whether they can be used to actually move to our current tech level. For example, to even get >1 EROEI on oil shale requires a fair bit of advanced technology. Similarly, most of the remaining coal is in much deeper locations than classical coal (we've consumed most of the coal that was easy to get to). All of these require high tech levels to start with or have other problems. Geothermal only works for limited locations. Solar requires extremely high tech levels to even have positive energy return. Nuclear power has similar issues, along with massive processing requirements before economies of scale kick in. Both solar and wind have terrible trouble with providing consistent power, which is important for many uses such as manufacturing. Efficient batteries are one answer to that, but they also require advanced tech. It may help to keep in mind that even with the advantages we had the first time around, the vast majority of early electric companies simply failed. There's an excellent book which discusses many of these issues - Maggie Koerth-Baker's "Before the Lights Go Out." It focuses more on the current American electric grid, but in that context discusses many of these issues.
8DaFranker
False dilemma. You're also strawmanning my argument. Freedom of religion is trivially equivalent to freedom of anti-epistemology. According to everything we know, it is extremely likely that only one set of beliefs can be true, and if so, there are clearly some that have more evidence supporting them. As such, "freedom to choose" which one set to believe is irrational and somewhat equivalent to trusting word-of-mouth rumours that fire does not harm you when you are naked. Freedom of speech and freedom of association, in their current incarnations, are similarly problematic, though not as obviously so. Absolute enforcement of these three freedoms is not required to avoid the failure modes of society that you enumerate, and I never mentioned that said ideal society would even remotely look like what's contained within your (apparently very tiny) hypothesis space of possible societies, let alone that my ideology would be the Rule of Law or that this society would even be composed of humans as we know them with all their flawed brains and flimsy squishy bits that give up and die way too fast.

Well, 500 years ago there was plenty of brutal physical oppression going on, and I'd expect that kind of thing to have lots of other negative effects on top of the first-order emotional reactions of the victims.

But I would claim that if you did a big brain-scan survey of, say, Western women from 1970 to the present, you'd see very little correlation between their subjective feeling of oppression and their actual treatment in society.

Such a mechanism may be desirable, but it isn't necessary for the existence of cities. There are plenty of third world countries that don't bother with licensing, and still manage to have major metropolises.

But my point was just that when people talk about 'trades and crafts on which the existence of the modern city depends' they generally mean carpenters, plumbers, electricians and other hands-on trades, not clerks and bureaucrats.

The reason the life sciences are resistant to regulation is at least partially because they know that killer plagues are several orders of magnitude harder to make than Hollywood would like you to think. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven't killed us all yet.

An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn't likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible - turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.

6Eugine_Nier
Artificial plagues can be optimized for maximum human deaths, something natural plagues aren't. Artificial plagues can contain genes spliced in from unrelated species, including the target. For example, human hormones.
5Stuart_Armstrong
Evolution happens blindingly fast for viruses, which means that humans can co-opt it. Are you really confident that the combination of directed evolution and some centralised design won't reach devastating results? After all, the deadliness, incubation period and transmissibility are already out there in nature; it would just be a question of putting the pieces together.

Calling this an x-risk seems to be a case of either A) stretching the definition considerably, or B) being unduly credulous of the claims of political activists. A few points to consider:

1) During the height of the Cold War, when there were about an order of magnitude more nuclear weapons deployed than is currently the case, the US military (which had a vested interest in exaggerating the Soviet threat) put estimated casualties from a full-scale nuclear exchange at 30-40% of the US population. While certainly horrific, this falls far short of extinction. Grant... (read more)

7CarlShulman
Have you done this? I've asked random climate physics folk (pretty smart and cynical people) to take a look at the nuclear winter models, and they found them reasonable on the basic shape of the effect, although they couldn't vouch for the fine details of magnitude. So it doesn't look to me like just a narrow clique of activists pushing the idea of nuclear winter.
7Stuart_Armstrong
2) This doesn't take into account anthropic effects - we have to have survived to get to where we are now. Looking at the past and saying "hey, we survived that!" doesn't mean that the probabilities were that high. 3) The idea is sufficiently well developed now that its origins are irrelevant (there are few hippies pushing the idea currently). 4) They are computer models, based on extensive knowledge of atmospheric conditions and science. Are they definitely reliable? No. Are they more likely right than wrong? Probably - it's not like the underlying science is particularly difficult. At what probability of the models being wrong would you say that we can ignore the threat? Are you convinced that the models have at least that probability of being wrong? And if so, based on what - it's not like there's a default position "nuclear winter can't happen" that has a huge prior in its favour, that the models then have to overcome.
JoshuaZ120

The primary problem with nuclear war is that it isn't obvious that humans can get back to our current tech level without the now consumed resources (primarily fossil fuels) that we've used to bootstrap ourselves up to our current tech level. If that's an issue, then any event that effectively pushes the tech level much below 1900 is about the same as an existential risk, it will just take longer for something else to then finish us off. There's been some discussion on LW about how possible it is to get back to current tech levels without the non-renewables... (read more)

One box, of course. Trying to outsmart an AI for a piddly little 0.1% increase in payoff is stupid.

Now if the payoff were reversed a player with high risk tolerance might reasonably go for some clever two-box solution... but the odds of success would be quite low, so one-boxing would still be the conservative strategy.
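As a rough sanity check, here is a sketch (standard $1,000,000 / $1,000 Newcomb payoffs assumed; not part of the original comment) of the expected value of each strategy as a function of the predictor's accuracy. Two-boxing only comes out ahead when the predictor is barely better than a coin flip:

```python
# Expected value of one-boxing vs two-boxing in a standard Newcomb's problem,
# as a function of predictor accuracy p (illustrative sketch; the usual
# $1,000,000 / $1,000 payoffs are assumed).

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # Predictor correct (prob p): the big box is full.
    return p * BIG

def ev_two_box(p):
    # Predictor correct (prob p): big box is empty, keep only the small box.
    # Predictor wrong (prob 1 - p): both boxes pay out.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.5005, 0.51, 0.9, 0.999):
    print(f"p={p:6.4f}  one-box={ev_one_box(p):>10,.0f}  two-box={ev_two_box(p):>10,.0f}")
# Two-boxing only wins when p < 0.5005 - i.e. when the predictor is
# essentially guessing. Against anything resembling a competent predictor,
# the "extra" $1,000 (the 0.1% mentioned above) is noise.
```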

ewbrownv-20

Not quite. The plumber and electrician are necessary for the existence of the city. The DMV clerk is needed only for the enforcement of a licensing scheme - if his office shut down completely the city would go on functioning with little or no change.

4Kindly
There would need to be some sort of alternate mechanism for ensuring that people learn to drive a car safely before driving a car. Presumably that mechanism would involve some replacement job for the former DMV clerk.
-4Nominull
Is it really a modern city without conservatives whining about poor service at the DMV? Although I guess if you got rid of all the clerks service would probably get even worse.

The problem with this seemingly high-minded ideal is that every intervention has a cost, and they add up quickly. When oppression is blatant, violent and extreme it's relatively easy to identify, the benefits of mitigating it are large, and the cost to society is low. But when the 'oppression' is subtle and weak, consisting primarily of personal conversations or even private thoughts of individuals, the reverse is true. You find yourself restricting freedom of speech, religion and association, creating an expanding maze of ever-more-draconian laws governi... (read more)

9MugaSofer
Do you seriously think that proves we shouldn't try to stop what we assess to be "oppression"? Diminishing returns do not equal zero returns.
8DaFranker
You either missed the point of the grandparent, or are missing some of the prerequisite concepts needed to think clearly about this subject, it seems to me. I'm quite certain that Multiheaded is well aware of the law of diminishing returns and its implications, and has a fairly good grasp of how to do expected utility evaluations. Everything else you said in your post was, AFAICT, already all stated or implied by the grandparent, except: I find this claim dubious. I consider myself a victim of the oppressive historical patriarchy and dominance of gender-typing, and yet I'm fully satisfied with the current, ongoing efforts and measures that people all around the world are doing to fix it, as well as my own personal involvement and the efforts of my close circles. Those are not particularly convincing examples of Good Principles that we'd want to have in an ideal society that we should aspire towards. My own brain is screeching at the first three in particular, and finds the named legal principles crude and unrefined when compared to other ideals to aspire to.
5TimS
That has not been my impression. Some advocates might think things are as bad as they were 5 years ago, but I'm not aware of anyone with influence who thinks things are as bad as 50 years ago. Or any advocate at all who thinks no improvement has happened in the last 500 years.
ewbrownv-10

Oppression? No. Calling these sorts of incidents 'oppression' trivializes the suffering of the disenfranchised millions who live in daily fear of beatings, lynching or rape because of their religion or ethnicity, and must try to survive while knowing that others can rob them and destroy their possessions with impunity and they have no legal recourse. You might as well call having to shake hands with a man you don't like 'rape'.

Incidents on the level of those mentioned here are inevitable in any society that has even the slightest degree of diversity. Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.

I'm not downvoting this comment because I don't want to increase the chance of people being penalized for answering it.

From my point of view, you're punishing Will because he's learning something, but not quite in the way you want him to. He's made himself somewhat vulnerable by asking a question.

Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.

Depends on the venue. In some places, telling the truth about your internal states is valued more highly.

5wedrifid
Look more closely at the context, in particular the description of the experienced internal feeling and the resulting self-suppression of identity. Regarding triviality I refer you to the word "albeit", which prefaces a more than adequate acknowledgement of scope. You may further observe that I explicitly refrained from judging whether the treatment of Will was appropriate or not, much less to what degree it was inappropriate - because getting caught up with how "bad" the people are behaving to the person completely misses the point. No. I might not. And not just because of the scale of the outrage. Primarily because that implies that the man is a "rapist" when we have no indication that it is him who is forcing the other to have the hand shaking (or have sex). If neither the disliked man nor "you" wishes to have sex but for some reason you are coerced to have sex with each other then he is not raping you.

Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.

This may be the way now, but it doesn't have to be the way always. Max Hastings, my favourite WW2 historian, says in his All Hell Let Loose:

One of the most important truths about the war, as indeed about all human affairs, is that people can interpret what happens to them only in the context of their own circumstances. The fact that, objectively and statistically, the sufferings of some

... (read more)
7JoshuaZ
I was going to upvote this until I got to the last sentence, which seems both needlessly inflammatory and not accurate. The essential point you've made does, however, seem to have some validity: There's a scale difference in different types of mistreatment, and using the same word for all of them is something that can easily cause connotative problems. Yes, but how common are those actions? For example, as someone who is of Ashkenazic Jewish descent in the US, I occasionally get mistreatment based on my obvious ethnic heritage. But such events are extremely rare - I can literally count the ones I remember on one hand. That's distinct from some other groups - for example, if I were a black man living in the US I'd likely have a list of incidents orders of magnitude larger.
ewbrownv140

As a data point for the 'inferential distance' hypothesis, I'd like to note that I found nothing in the above quotes that was even slightly surprising or unfamiliar to me. This is exactly what I'd expect it to be like to grow up as a 'geeky' or 'intellectual' woman in the West, and it's also a good example of the sorts of incidents I'd expect women to come up with when asked to describe their experiences. So when I write things that the authors of these anecdotes disagree with, the difference of opinion is probably due to something else.

I think we mean different things by 'brainwashing' and 'social conditioning', which is causing some terminology confusion. The above is perfectly consistent with my thesis, which is simply that a major theme of 20th-century social movements was the belief that you can change individual behavior pretty much however you want by changing the society that people live in.

I call this an incorrect belief because more recent research in cognitive science reveals that there are strong constraints on what kinds of mental adaptations will actually happen in practice,... (read more)

8Nornagest
I feel like I'm explaining this poorly. You can't make arbitrary changes to behavior under the Marxist worldview by making social reforms. You can get people to further the interests of their social class more effectively by changing their perception of class, or get them to further the interests of other social classes by making them aware of common social goals, but to a Marxist this follows preexisting and fairly strict principles of how people relate to the class structure. To an orthodox Marxist, for example, improving social conditions by means of placing constraints on the behavior of socially dominant classes would be doomed to failure without a corresponding increase in the power of socially subordinate classes: other forms of exploitation would be found, and class relations would regress to the mean. It's not that you can do whatever you want by hacking society in a certain way; it's that people's psychology is organized in such a way as to lead to more equitable outcomes if you hack society in particular ways. And even describing this as "hacking" is a little misleading; Marx didn't see any of it as a social project, more as the inevitable result of existing social forces. (Incidentally, this is a main point of divergence between orthodox Marxism and Leninism or Maoism, both of which aimed to produce Marxist revolutions "early".)
ewbrownv-20

I don't see how "people unconsciously act as agents of large-scale social groups" contradicts "the human mind can be arbitrarily re-written by social conditioning". To me it seems that one implies the other.

Isn't the whole Marxist project based on the idea that you can bring about radical changes in human behavior by reorganizing society? "From each according to his ability, to each according to his needs" can only work if humans are so malleable that basic greed, laziness, selfishness and ambition can be eradicated through social programs.

4Nornagest
It's less about social conditioning and more about the extent to which people pursue group interests regardless of social conditioning. To people subscribing to Marxist ideas of class, behaviors which we might perceive as individualistic ambition in fact serve partly -- even primarily -- to further the interests of the social class in which an actor is embedded, unbeknownst to the actor; when a Marxist talks about capitalist greed, they're not talking about the selfishness of individual capitalists, they're accusing capitalists as a group of greed for the resources of other social groups. None of this requires any grand scheme of brainwashing (though social conditioning does come into play when we start talking about "false consciousness" and related ideas); it's all seen as implicit in people's native behavior. It wouldn't be too far wrong to describe Marxism as primarily a theory of group agency; originally it covered only coarse-grained economic classes, but modern descendants of Marxist ideology have extended it to cover other common interests as well. You're probably more likely to encounter the latter these days.
ewbrownv-20

That sounds eminently reasonable, and it might even have worked before the rise of victimization politics. But as anyone who has seriously tried to have this type of discussion before should know, these days it's self-defeating. Almost all of the women who find a statement like the one mentioned offensive will be equally offended no matter how gently you phrase your observations, because it isn't your tone that they object to. Rather, any instance of a male disagreeing with the prevailing world view on gender relations is automatically considered offensiv... (read more)

ewbrownv130

I don't want to death-spiral into a discussion of politics, so I'll refrain from naming specific groups. But in most Western nations there are large, well-funded political activist groups that have consciously, explicitly adopted the tactic of aggressively claiming offense in order to silence their political opponents. While the members of such groups might be honestly dedicated to advancing some social cause, the leaders who encourage this behavior are professional politicians who are more likely to be motivated by issues of personal power and prestige.

S... (read more)

2MugaSofer
You know, it sounds like you're claiming that the fact that certain behaviors - generally accepted to be harmful - are no longer considered acceptable is proof of a conspiracy cynically piggybacking on this change to impose (self?)censorship, furthering some unspecified agenda. This feels like a strawman of your actual beliefs; could you explain what you meant?