I think the most likely doomsday scenario would be somebody (or some group, or something) looking to take advantage of the notability of the day itself to launch some sort of attack. Many people would be more likely to panic, while others would initially be suspicious of reports of disasters. The system would be less able to deal effectively with threats. It might represent the best chance for an attacker to start WW3.
While we're on the topic of an Australian meetup, are there any other LW ppl in Brisbane? If there are, we could organise a meetup.
That's mentioned in the article, and they claim (believably, as far as I'm concerned) that the proceeds go to the SIAI.
Thanks for pointing this out. I can't believe I didn't actually read the adjacent words. It does, however, serve to underscore the commercial value represented by this post and the associated project. Online gaming is an area that has some unique constraints on marketing, especially in the US, and because of this it's valid to have an increased suspicion of spam. It may be a good idea to have a think about the appropriate level of commerciality in articles before someone finds a clever and entirely reasonable way to link transhumanism with 'Buy Viagra Online'.
The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn't spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.
This is critical to your point. But you haven't established this at all. You made one post with a just-so story about males in tribes perceiving those above them as corrupt, and then assumed, with no logical justification that I can recall, that this meant that those above them actually are corrupt. You haven't defined what corrupt means, either.
I think you need to sit down and spell out what 'corrupt' means, and then Think Really Hard about whether those in power actually are more corrupt than those not in power; and if so, whether the mechanisms that lead to that result are a result of the peculiar evolutionary history of humans, or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.
You might argue that if you have one Sysop AI, it isn't subject to evolutionary forces. This may be true. But if that's what you're counting on, it's very important for you to make that explicit. I think that, as your post stands, you may be attributing qualities to Friendly AIs that apply only to Solitary Friendly AIs that are in complete control of the world.
as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.
Just to extend on this: it seems most likely that multiple AIs would actually be subject to dynamics similar to evolution, and a totally 'Friendly' AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, just like the 'young revolutionary' of the first post, a truly enlightened Friendly AI would be forced to assume power to deny it to any less moral AIs.
Philosophical questions aside, the likely reality of future AI development is surely that power will also go to those AIs that are able to seize the resources to propagate and improve themselves.
You could phrase it as, "This seems like an amazing idea and a great presentation. I wonder how we could secure the budgeting and get the team for it, because it seems like it'd be profitable if we did, and it'd be a shame to miss this opportunity."
"This seems like a fantastic example of how to rephrase a criticism. I wonder how it could be delivered in a way that also retained enough of the meaning, because it seems like it would work well if it did, and it'd be a shame not to be able to use it. "
Does this just come off as sarcasm to people of higher intelligence? I guess you've got to alter your message to suit the audience.
A particularly noteworthy issue is the difficulty of applying such a technique to one's own actions, a problem which I believe has a fairly large number of workable solutions.
I have had success working around 'Ugh' reactions to various activities. I took the direct approach. I (intermittently) use nicotine lozenges as a stimulant while exercising. Apart from boosting physical performance and motivation it also happens to be the most potent substance I am aware of for increasing habit formation in the brain.
Perhaps more important than the, you know, chemical sledgehammer is the fact that the process of training myself in that way brings up "anti-Ugh" associations. I love optimisation in general and self-improvement in particular. I am also fascinated by pharmacology and instinctively 'cheeky'. Having never even considered smoking a cigarette, and yet using the disreputable substance 'nicotine' in a way that can be expected to improve my health and well-being, is exactly the sort of thing I know my brain loves doing.
I (intermittently) use nicotine lozenges as a stimulant while exercising.
I'm curious as to whether you've ever been an addicted cigarette smoker before. For those of us who have, I suspect the risks of a total relapse to smoking (as opposed to other delivery methods) would be too great. I can imagine it could be effective, though.
This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some expertise on it, so perhaps I should elaborate.
If you have a simple, linear system involving math that isn't too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.
But if you have a complex, non-linear system, or just one that's too big to simulate in complete detail, this is no longer the case. Getting a useful simulation then requires that you make a lot of educated guesses about what factors to include in your simulation, and how to approximate effects you can't calculate in any detail. The probability of getting these guesses right the first time is essentially zero - you're lucky if the behavior of your initial model has even a hazy resemblance to anything real, and it certainly isn't going to come within an order of magnitude of being correct.
The way you get to a useful model is through a repeated cycle of running the simulator, comparing the (wrong) results to reality, making an educated guess about what caused the difference, and trying again. With something relatively simple like, say, turbulent fluid dynamics, you might need a few hundred to a few thousand test runs to tweak your model enough that it generates accurate results over the domain of input parameters that you're interested in.
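The compare-and-adjust cycle described above can be sketched in miniature. Everything below is illustrative: the "simulator" is a toy two-parameter model, and the tuning method is a simple random hill-climb standing in for whatever educated guessing a real modeller does.

```python
import random

random.seed(0)  # reproducible runs for this illustration

def run_simulation(params, inputs):
    """Toy stand-in for a physics model whose coefficients need tuning."""
    a, b = params
    return [a * x + b * x ** 2 for x in inputs]

def total_error(params, inputs, observed):
    """Sum of squared differences between simulated and real results."""
    return sum((s - o) ** 2
               for s, o in zip(run_simulation(params, inputs), observed))

def calibrate(inputs, observed, runs=2000):
    """Repeated cycle: run the simulator, compare to reality,
    nudge the guessed parameters, keep whatever improves the fit."""
    best = (random.uniform(-1, 1), random.uniform(-1, 1))
    best_err = total_error(best, inputs, observed)
    for _ in range(runs):
        guess = (best[0] + random.gauss(0, 0.1),
                 best[1] + random.gauss(0, 0.1))
        err = total_error(guess, inputs, observed)
        if err < best_err:  # keep only guesses that improve the fit
            best, best_err = guess, err
    return best, best_err

# "Reality": observations generated by the true (unknown) process a=2, b=0.5.
xs = [x / 10 for x in range(1, 21)]
observed = [2 * x + 0.5 * x ** 2 for x in xs]
params, err = calibrate(xs, observed)
```

Even this trivially small model needs thousands of run-compare-adjust iterations against real observations before its output resembles reality; a climate-scale model multiplies that cost many times over.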
If you can't run real-world experiments to generate the phenomena you're interested in, you might be able to substitute a huge data set of observations of natural events. Astronomy has had some success with this, for example. But you need a data set big enough to encompass a representative sample of all the possible behaviors of the system you're trying to simulate, or else you'll just get a 'simulator' that always predicts the few examples you fed it.
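Here's a toy illustration of that failure mode (the data and the model are hypothetical; exact polynomial interpolation stands in for an over-tuned simulator). A model fitted against only four noisy observations of a roughly linear process reproduces those four points perfectly, then produces garbage the moment it's asked about anything outside them:

```python
def exact_fit(xs, ys):
    """Lagrange interpolation: a 'model' tuned to pass exactly
    through every observation it was given."""
    def predict(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return predict

# Four noisy observations of a process that is really just y ≈ x.
obs_x = [0.0, 1.0, 2.0, 3.0]
obs_y = [0.1, 0.9, 2.2, 2.8]
model = exact_fit(obs_x, obs_y)

inside = model(2.0)    # reproduces the training point exactly (2.2)
outside = model(10.0)  # true value is about 10; the model is wildly wrong
```

The model's "accuracy" on the points it was fed tells you nothing; without a data set that spans the behaviour you actually care about, the outputs are artifacts of the fitting procedure, not predictions.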
So, can you see the problem with the nuclear winter simulations now? You can't have a nuclear war to test the simulation, and our historical data set of real climate changes doesn't include anything similar (and doesn't collect anywhere near as many data points as a simulator needs, anyway). But global climate is a couple of orders of magnitude more complex than your typical physics or chemistry sims, so the need for testing would be correspondingly greater.
The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.
This really is a pretty un-Bayesian way of thinking - the idea that we should totally ignore incomplete evidence, and, by extension, that we should choose to believe an alternative hypothesis ('no nuclear winter') with even less evidence, merely because it is assumed for unstated reasons to be the 'default belief'.