It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that wasn't the intention, there would also have been anti-secrecy arguments and anecdotes.
I don't actually agree with the assertion, but I can see at least one coherent way to argue it. The thinking would be:
The world is currently very prosperous due to advances in technology that are themselves a result of the interplay between Enlightenment ideals and the particular cultures of Western Europe and America in the 1600-1950 era. Democracy is essentially irrelevant to this process - the same thing would have happened under any moderately sane government, and indeed most of the West was neither democratic nor liberal (in the modern sense) during m...
Historically it has never worked out that way. When a society gets richer the people eat more and better food, buy more clothes, live in bigger houses, buy cars and appliances, travel more, and so on. Based on the behavior of rich people we can see that a 10x or even 100x increase from current wealth levels due to automation would just continue this trend, with people spending the excess on things like mansions, private jets and a legion of robot servants.
Realistically there's probably some upper limit to human consumption, but it's so far above current pr...
Because you can't create real, 100% physical isolation. At a minimum you're going to have power lines that breach the walls, and either people moving in and out (while potentially carrying portable electronics) or communication lines going out to terminals that aren't isolated. Also, this kind of physical facility is very expensive to build, so the more elaborate your plan is the less likely it is to get financed.
Military organizations have been trying to solve these problems ever since the 1950s, with only a modest degree of success. Even paranoid, well-funded organizations with a willingness to shoot people have security breaches on a fairly regular basis.
Indeed. What's the point of building an AI you're never going to communicate with?
Also, you can't build it that way. Programs never work the first time, so at a minimum you're going to have a long period of time where programmers are coding, testing and debugging various parts of the AI. As it nears completion that's going to involve a great deal of unsupervised interaction with a partially-functional AI, because without interaction you can't tell if it works.
So what are you going to do? Wait until the AI is feature-complete on day X, and then box it? Do y...
I do. It implies that it is actually feasible to construct a text-only channel, which as a programmer I can tell you is not the case.
If you build your AI on an existing OS running on commercial hardware there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies this problem geometrically, and makes the idea that a real project could ...
Your second proposal, trying to restrict what the AI can do after it's made a decision, is a lost cause. Our ability to specify what is and is not allowed is simply too limited to resist any determined effort to find loopholes. This problem afflicts every field from contract law to computer security, so it seems unlikely that we're going to find a solution anytime soon.
Your first proposal, making an AI that isn't a complete AGI, is more interesting. Whether or not it's feasible depends partly on your model of how an AI will work in the first place, and par...
Actually, this would be a strong argument against CEV. If individual humans commonly have incoherent values (which they do), there is no concrete reason to expect an automated extrapolation process to magically make them coherent. I've noticed that CEV proponents have a tendency to argue that the "thought longer, understood more" part of the process will somehow fix all objections of this sort, but given the complete lack of detail about how this process is supposed to work you might as well claim that the morality fairy is going to descend from ...
<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>
See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.
Yes, I'm saying that to get human-like learning the AI has to have the ability to write code that it will later use to perform cognitive tasks. You can't get human-level intelligence out of a hand-coded program operating on a passive database of information using only fixed, hand-written algorithms.
So that presents you with the problem of figuring out which AI-written code fragments are safe, not just in isolation, but in all their interactions with every other code fragment the AI will ever write. This is the same kind of problem as creating a secure brow...
What I was referring to is the difference between:
A) An AI that accepts an instruction from the user, thinks about how to carry out the instruction, comes up with a plan, checks that the user agrees that this is a good plan, carries it out, then goes back to an idle loop.
B) An AI that has a fully realized goal system that has some variant of 'do what I'm told' implemented as a top-level goal, and spends its time sitting around waiting for someone to give it a command so it can get a reward signal.
Either AI will kill you (or worse) in some unexpected way if...
I thought that too until I spent a few hours thinking about how to actually implement CEV, after which I realized that any AI capable of using that monster of an algorithm is already a superintelligence (and probably turned the Earth into computronium while it was trying to get enough CPU power to bootstrap its goal system).
Anyone who wants to try a "build moderately smart AGI to help design the really dangerous AGI" approach is probably better off just making a genie machine (i.e. an AI that just does whatever it's told, and doesn't have explicit...
The last item on your list is an intractable sticking point. Any AGI smart enough to be worth worrying about is going to have to have the ability to make arbitrary changes to an internal "knowledge+skills" representation that is itself a Turing-complete programming language. As the AGI grows it will tend to create an increasingly complex ecology of AI-fragments in this way, and predicting the behavior of the whole system quickly becomes impossible.
So "don't let the AI modify its own goal system" ends up turning into just anther way of s...
Why would you expect the social dominance of a belief to correlate with truth? Except in the most trivial cases, society has no particular mechanism that selects for true beliefs in preference to false ones.
The Darwinian competition of memes selects strongly for those that provide psychological benefits, or are politically useful, or serve the self-interest of large segments of the population. But truth is only relevant if the opponents of a belief can easily and unambiguously disprove it, which is only possible in rare cases.
If true, this is fairly strong evidence that the effort to turn the study of economics into a science has failed. If the beliefs of professional economists about their field of study are substantially affected by their gender, they obviously aren't arriving at those beliefs by a reliable objective process.
Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.
If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the law in a manner likely to create criminal liability will be deleted...
Do you really think HP:MoR would be a better story if EY had spent a few weeks listing all the characters by gender, and trying to tweak the plot and insert details to 'balance' things?
You're strawmanning me. I will reply to you no further.
I do think it would be better if the girls had more varied characteristics-- flaws, virtues, and interests. Who knows, there might be something generated from more interesting characters which would lead to more moments of awesome.
Knowing that philosophers are the only people who two-box on Newcomb's problem, and that they constitute a vanishingly small fraction of Earth's population, I confidently one-box. Then I rush out to spend my winnings as quickly as possible, before the inevitable inflation hits.
Telling me what X is will have no effect on my action, because I already have that information. Making copies of me has no effect on my strategy, for the same reason.
I think you have a point here, but there's a more fundamental problem - there doesn't seem to be much evidence that gun control affects the ability of criminals to get guns.
The problem here is similar to prohibition of drugs. Guns and ammunition are widely available in many areas, are relatively easy to smuggle, and are durable goods that can be kept in operation for many decades once acquired. Also, the fact that police and other security officials need them means that they will continue to be produced and/or imported into an area with even very strict pr...
Agreed. Presence or absence of debate on an issue gives information about a nation's culture, but very little about how hard it is to discover the facts of the matter. This is especially true in matters of social science, where the available evidence is never going to be strong enough to convince someone who has already made up his mind.
Wow, look at all the straw men. Is there an actual reasoned position in there among the fashionable cynicism? If so, I can't find it.
One of the major purposes of Less Wrong is allegedly the promotion of more rational ways of thinking among as large a fraction of the general population as we can manage to reach. Finding better ways to think clearly about politics might be an especially difficult challenge, but popularizing the result of such an attempt isn't necessarily any harder than teaching people about the sunk costs fallacy.
But even if you think raisi...
I tend to agree with your concern.
Discussing politics is hard because all political groups make extensive use of lies, propaganda and emotional appeals, which turns any debate into a quagmire of disputed facts and mind-killing argument. It can be tempting to dismiss the whole endeavor as hopeless and ignore it while cynically deriding those who stay involved.
Trouble is, political movements are not all equal. If they gain power, some groups will use it to make the country wealthy so they can pocket a cut of the money. Others will try to force everyone to jo...
Actually, I see a significant (at least 10%) chance that the person currently known as Quirrel was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see; he was just trying to create a situation where people would welcome a savior...
This would neatly explain the confusion Harry noted over how a rational, inventive wizard could have failed to take over England. It leaves open some questions about why he continued his reign of terror after that ploy failed, but there are several obvious possibilities there. The big question would be what actually happened to either A) stop him, or B) make him decide to fake his death and vanish for a decade.
Actually, I see a significant (at least 10%) chance that the person currently known as Quirrel was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see; he was just trying to create a situation where people would welcome a savior...
This is exactly how I read chapter 85, and now 86 confirmed it. My estimate is way over 10%, probably ~60%.
If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.
Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...
Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.
Unfortunately, while the cog sci community has produced reams of evidence on this point they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is t...
As an explanation for a society-wide shift in discourse that seems quite implausible. If such a change has actually happened the cause would most likely be some broad cultural or sociological change that took place within the same time frame.
Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we'll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.
But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using techn...
The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.
S...
Now you're just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've b...
Yes, and that's why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn't the case.
When you set out to build a model of a large, non-linear system you're confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can't just take the most important-looking processes and ig...
An uncalibrated sim will typically give crazy results like 'increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees' or 'one large forest fire will trigger a permanent ice age'. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand - it provides evidence about the beliefs of the programmer, but nothing else.
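To make this concrete, here's a minimal sketch in Python of the kind of sensitivity involved. The model and every number in it are invented purely for illustration (this is not real climate physics): a modest change in one poorly-constrained feedback coefficient flips the long-run behavior from a mild equilibrium to a runaway.

```python
# Toy nonlinear feedback model. Everything here is invented for illustration;
# the equation and parameter values are not drawn from any real climate physics.

def run_model(feedback, steps=200, forcing=0.1, damping=0.2):
    """Iterate a 'temperature anomaly' with a quadratic feedback term."""
    t = 0.5  # initial anomaly, arbitrary units
    for _ in range(steps):
        t = t + forcing - damping * t + feedback * t * t
        if abs(t) > 1e9:  # runaway; stop before floating-point overflow
            return float("inf")
    return t

# Two runs whose only difference is a ~10% change in the feedback coefficient.
print(f"feedback = 0.095 -> final anomaly: {run_model(0.095):.3f}")  # settles near 0.82
print(f"feedback = 0.105 -> final anomaly: {run_model(0.105)}")      # runs away to infinity
```

Without calibration against real data there is no way to tell which regime (if either) corresponds to reality; whichever answer the model gives mostly reflects the coefficient the programmer happened to pick.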
Exactly.
I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be that worried about it. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.
This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some relevant expertise, so perhaps I should elaborate.
If you have a simple, linear system involving math that isn't too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.
But if you have a complex, non-linear system, or just one...
"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.
It's also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive. Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.
Well, 500 years ago there was plenty of brutal physical oppression going on, and I'd expect that kind of thing to have lots of other negative effects on top of the first-order emotional reactions of the victims.
But I would claim that if you did a big brain-scan survey of, say, Western women from 1970 to the present, you'd see very little correlation between their subjective feeling of oppression and their actual treatment in society.
Such a mechanism may be desirable, but it isn't necessary for the existence of cities. There are plenty of third world countries that don't bother with licensing, and still manage to have major metropolises.
But my point was just that when people talk about 'trades and crafts on which the existence of the modern city depends' they generally mean carpenters, plumbers, electricians and other hands-on trades, not clerks and bureaucrats.
The reason the life sciences are resistant to regulation is at least partially because they know that killer plagues are several orders of magnitude harder to make than Hollywood would like you to think. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven't killed us all yet.
An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn't likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible - turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.
Calling this an x-risk seems to be a case of either A) stretching the definition considerably, or B) being unduly credulous of the claims of political activists. A few points to consider:
1) During the height of the cold war, when there were about an order of magnitude more nuclear weapons deployed than is currently the case, the US military (which had a vested interest in exaggerating the Soviet threat) put estimated casualties from a full-scale nuclear exchange at 30-40% of the US population. While certainly horrific, this falls far short of extinction. Grant...
The primary problem with nuclear war is that it isn't obvious that humans can get back to our current tech level without the now-consumed resources (primarily fossil fuels) that we used to bootstrap ourselves up in the first place. If that's an issue, then any event that effectively pushes the tech level much below 1900 is about the same as an existential risk; it will just take longer for something else to then finish us off. There's been some discussion on LW about how possible it is to get back to current tech levels without the non-renewables...
One box, of course. Trying to outsmart an AI for a piddly little 0.1% increase in payoff is stupid.
Now if the payoff were reversed a player with high risk tolerance might reasonably go for some clever two-box solution... but the odds of success would be quite low, so one-boxing would still be the conservative strategy.
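For concreteness, a quick expected-value sketch, using the standard $1,000,000 / $1,000 payoffs and an assumed, purely illustrative predictor accuracy of 99%:

```python
# Newcomb's problem, back-of-the-envelope. The opaque box holds $1,000,000 iff the
# predictor expected one-boxing; the transparent box always holds $1,000.
# The 99% accuracy figure is just an illustrative assumption.
accuracy = 0.99

ev_one_box = accuracy * 1_000_000                      # you usually get the million
ev_two_box = (1 - accuracy) * 1_000_000 + 1_000        # a sure $1,000, but almost never the million

print(f"one-box expected value: ${ev_one_box:,.0f}")   # $990,000
print(f"two-box expected value: ${ev_two_box:,.0f}")   # $11,000
```

Even in the best case, two-boxing only adds $1,000 on top of $1,000,000 - the "piddly little 0.1%" above - so the predictor would have to be nearly useless before two-boxing came out ahead.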
Not quite. The plumber and electrician are necessary for the existence of the city. The DMV clerk is needed only for the enforcement of a licensing scheme - if his office shut down completely the city would go on functioning with little or no change.
The problem with this seemingly high-minded ideal is that every intervention has a cost, and they add up quickly. When oppression is blatant, violent and extreme it's relatively easy to identify, the benefits of mitigating it are large, and the cost to society is low. But when the 'oppression' is subtle and weak, consisting primarily of personal conversations or even private thoughts of individuals, the reverse is true. You find yourself restricting freedom of speech, religion and association, creating an expanding maze of ever-more-draconian laws governi...
Oppression? No. Calling these sorts of incidents 'oppression' trivializes the suffering of the disenfranchised millions who live in daily fear of beatings, lynching or rape because of their religion or ethnicity, and must try to survive while knowing that others can rob them and destroy their possessions with impunity and they have no legal recourse. You might as well call having to shake hands with a man you don't like 'rape'.
Incidents on the level of those mentioned here are inevitable in any society that has even the slightest degree of diversity. Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.
I'm not downvoting this comment because I don't want to increase the chance of people being penalized for answering it.
From my point of view, you're punishing Will because he's learning something, but not quite in the way you want him to. He's made himself somewhat vulnerable by asking a question.
Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.
Depends on the venue. In some places, telling the truth about your internal states is valued more highly.
Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.
This may be the way now, but it doesn't have to be the way always. Max Hastings, my favourite WW2 historian, says in his All Hell Let Loose:
...One of the most important truths about the war, as indeed about all human affairs, is that people can interpret what happens to them only in the context of their own circumstances. The fact that, objectively and statistically, the sufferings of some
As a data point for the 'inferential distance' hypothesis, I'd like to note that I found nothing in the above quotes that was even slightly surprising or unfamiliar to me. This is exactly what I'd expect it to be like to grow up as a 'geeky' or 'intellectual' woman in the West, and it's also a good example of the sorts of incidents I'd expect women to come up with when asked to describe their experiences. So when I write things that the authors of these anecdotes disagree with, the difference of opinion is probably due to something else.
I think we mean different things by 'brainwashing' and 'social conditioning', which is causing some terminology confusion. The above is perfectly consistent with my thesis, which is simply that a major theme of 20th-century social movements was the belief that you can change individual behavior pretty much however you want by changing the society that people live in.
I call this an incorrect belief because more recent research in cognitive science reveals that there are strong constraints on what kinds of mental adaptations will actually happen in practice,...
I don't see how "people unconsciously act as agents of large-scale social groups" contradicts "the human mind can be arbitrarily re-written by social conditioning". To me it seems that one implies the other.
Isn't the whole Marxist project based on the idea that you can bring about radical changes in human behavior by reorganizing society? "From each according to his ability, to each according to his needs" can only work if humans are so malleable that basic greed, laziness, selfishness and ambition can be eradicated through social programs.
That sounds eminently reasonable, and it might even have worked before the rise of victimization politics. But as anyone who has seriously tried to have this type of discussion before should know, these days it's self-defeating. Almost all of the women who find a statement like the one mentioned offensive will be equally offended no matter how gently you phrase your observations, because it isn't your tone that they object to. Rather, any instance of a male disagreeing with the prevailing world view on gender relations is automatically considered offensiv...
I don't want to death-spiral into a discussion of politics, so I'll refrain from naming specific groups. But in most Western nations there are large, well-funded political activist groups that have consciously and explicitly adopted the tactic of aggressively claiming offense in order to silence their political opponents. While the members of such groups might be honestly dedicated to advancing some social cause, the leaders who encourage this behavior are professional politicians who are more likely to be motivated by issues of personal power and prestige.
S...
Good insight.
No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can and have been inflicted with bronze age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies ...