ewbrownv
ewbrownv has not written any posts yet.

It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that wasn't the intention there would also have been anti-secrecy arguments and anecdotes.
I don't actually agree with the assertion, but I can see at least one coherent way to argue it. The thinking would be:
The world is currently very prosperous due to advances in technology that are themselves a result of the interplay between Enlightenment ideals and the particular cultures of Western Europe and America in the 1600-1950 era. Democracy is essentially irrelevant to this process - the same thing would have happened under any moderately sane government, and indeed most of the West was neither democratic nor liberal (in the modern sense) during most of this time period.
The recent outbreak of peace, meanwhile, is due to two factors. Major powers rarely fight because... (read more)
Historically it has never worked out that way. When a society gets richer the people eat more and better food, buy more clothes, live in bigger houses, buy cars and appliances, travel more, and so on. Based on the behavior of rich people we can see that a x10 or even x100 increase from current wealth levels due to automation would just continue this trend, with people spending the excess on things like mansions, private jets and a legion of robot servants.
Realistically there's probably some upper limit to human consumption, but it's so far above current production levels that we don't see much hint of where it would be yet. So for most practical purposes we can assume demand is infinite until we actually see the rich start systematically running out of things to spend money on.
Because you can't create real, 100% physical isolation. At a minimum you're going to have power lines that breach the walls, and either people moving in and out (while potentially carrying portable electronics) or communication lines going out to terminals that aren't isolated. Also, this kind of physical facility is very expensive to build, so the more elaborate your plan is the less likely it is to get financed.
Military organizations have been trying to solve these problems ever since the 1950s, with only a modest degree of success. Even paranoid, well-funded organizations with a willingness to shoot people have security breaches on a fairly regular basis.
Indeed. What's the point of building an AI you're never going to communicate with?
Also, you can't build it that way. Programs never work the first time, so at a minimum you're going to have a long period of time where programmers are coding, testing and debugging various parts of the AI. As it nears completion that's going to involve a great deal of unsupervised interaction with a partially-functional AI, because without interaction you can't tell if it works.
So what are you going to do? Wait until the AI is feature-complete on day X, and then box it? Do you really think the AI was safe on day X-1, when it just had... (read more)
I do. It implies that it is actually feasible to construct a text-only channel, which as a programmer I can tell you is not the case.
If you build your AI on an existing OS running on commercial hardware there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies this problem geometrically, and makes the idea that a real project could achieve complete isolation hopelessly naive. In reality you'll discover that there was an... (read more)
Your second proposal, trying to restrict what the AI can do after it's made a decision, is a lost cause. Our ability to specify what is and is not allowed is simply too limited to resist any determined effort to find loopholes. This problem afflicts every field from contract law to computer security, so it seems unlikely that we're going to find a solution anytime soon.
Your first proposal, making an AI that isn't a complete AGI, is more interesting. Whether or not it's feasible depends partly on your model of how an AI will work in the first place, and partly on how extreme the AI's performance is expected to be.
For instance,... (read more)
Actually, this would be a strong argument against CEV. If individual humans commonly have incoherent values (which they do), there is no concrete reason to expect an automated extrapolation process to magically make them coherent. I've noticed that CEV proponents have a tendency to argue that the "thought longer, understood more" part of the process will somehow fix all objections of this sort, but given the complete lack of detail about how this process is supposed to work you might as well claim that the morality fairy is going to descend from the heavens and fix everything with a wave of her magic wand.
If you honestly think you can make an... (read more)
<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>
See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.
Good insight.
No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can and have been inflicted with bronze age technology. So anyone interested making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology.
As for the casualty rate of soldiers, that tends to jump up whenever a new type... (read more)