Comments

dejb

The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.

This really is a pretty un-Bayesian way of thinking: the idea that we should totally ignore incomplete evidence, and by extension that we should choose to believe an alternative hypothesis ('no nuclear winter') with even less evidence merely because it is assumed, for unstated reasons, to be the 'default belief'.

dejb

I think that the most likely doomsday scenario would be somebody/group/thing looking to take advantage of the notability of the day itself to launch some sort of attack. Many people would be more likely to panic, and others would initially be suspicious of reports of disasters. The system would be less able to deal effectively with threats. It might represent the best chance for an attacker to start WW3.

dejb

I took it and did most of the bonus questions.

dejb

Thanks for pointing this out. I can't believe I didn't actually read the adjacent words. It does, however, serve to underscore the commercial value represented by this post and the associated project. Online gaming is an area with some unique constraints on marketing, especially in the US, and because of this it's valid to have an increased suspicion of spam. It may be a good idea to have a think about the appropriate level of commerciality in articles before someone finds a clever and entirely reasonable way to link transhumanism with 'Buy Viagra Online'.

dejb

as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.

Just to extend on this: it seems most likely that multiple AIs would actually be subject to dynamics similar to evolution, and a totally 'Friendly' AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, just like the 'young revolutionary' of the first post, a truly enlightened Friendly AI would be forced to assume power to deny it to any less moral AIs.

Philosophical questions aside, the likely reality of future AI development is surely that the future will also go to those AIs that are able to seize the resources to propagate and improve themselves.

dejb

You could phrase it as, "This seems like an amazing idea and a great presentation. I wonder how we could secure the budgeting and get the team for it, because it seems like it'd be profitable if we do, and it'd be a shame to miss this opportunity."

"This seems like a fantastic example of how to rephrase a criticism. I wonder how it could be delivered in a way that also retained enough of the meaning, because it seems like it would work well if it did, and it'd be a shame not to be able to use it. "

Does this just come off as sarcasm to people of higher intelligence? I guess you've got to alter your message to suit the audience.

dejb

I (intermittently) use nicotine lozenges as a stimulant while exercising.

I'm curious as to whether you've ever been an addicted cigarette smoker before. For those of us who have, I suspect the risks of a total relapse to smoking (as opposed to other delivery methods) would be too great. I can imagine it could be effective, though.