milindsmart
There is no guarantee that there exists some way for them to understand.
Consider the possibility that it's only possible for people with a nontrivial level of understanding to work with 5TB+ amounts of data. It could be a practical boost in capability due to understanding storage technology principles and tools... maybe?
What level of sophistication would you think is un-idiot-proof-able? Nuclear missiles? Not-proven-to-be-friendly AI?
So someone has mentioned it on LW after all. Lots of singularitarian ideas depend heavily on exponential growth.
Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that while there is often some transformed domain in which the model turns out to be simple, such a domain is not guaranteed to exist for every model?
Sorry, hadn't seen this (note to self: mail alerts).
Is this really true, even if we pick a similarly restricted set of models? I mean, consider a set of equations which can only contain products of powers of variables, like (x_1)^a (x_2)^b = const1, (x_1)^d (x_2)^e = const2.
Is this nonlinear? Yes. Can it be solved easily? Of course. In fact, taking logarithms of both sides transforms it into a set of linear equations in log(x_1) and log(x_2).
That's what I'm kinda getting at: I think there is usually some transform that can convert your problem into a linear, or, in general, easy problem. Am I more correct now? A minimal sketch of that log trick is below.
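To make that concrete, here's a minimal sketch of the transform (the coefficient values and constants are made up for illustration, and numpy is assumed):

```python
# Sketch: solving x1^a * x2^b = c1, x1^d * x2^e = c2 by taking logs,
# which turns the nonlinear system into a linear one in (log x1, log x2).
# The values of a, b, d, e, c1, c2 below are illustrative assumptions.
import numpy as np

a, b, c1 = 2.0, 1.0, 12.0   # x1^2 * x2   = 12
d, e, c2 = 1.0, 3.0, 54.0   # x1   * x2^3 = 54

A = np.array([[a, b],
              [d, e]])
rhs = np.log([c1, c2])

log_x = np.linalg.solve(A, rhs)   # ordinary linear solve in log-space
x1, x2 = np.exp(log_x)

print(x1, x2)   # -> 2.0, 3.0; check: 2^2 * 3 = 12 and 2 * 3^3 = 54
```

The same change of variables is what makes geometric programming tractable: monomial constraints become linear, and posynomial ones become convex, after taking logs.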
What's with the downvoting?
I argue that AGW is the worst because it is the only one that hits at very deep-seated human assumptions that may well be genetic/inherent.
The first obstacle to addressing AGW, even before coordination, is anchoring: we assume that everything must only get better, and that nothing ever gets worse. Further, a lot of systems are built on the assumption that there will always be a continuously expanding material economy. This is like the case where becoming slightly more rational, starting from a point of relatively complete irrationality, is likely to make one less effective: a cluster of irrational beliefs are supporting each other, and removing one exposes the others. Similarly, AGW directly impinges on several hidden irrationalities of both humans and economies, and that's why everyone is dragging their feet -- not because they're all totally evil.
I assume you're talking of around 4 degrees of warming under business-as-usual conditions?
To pick the most important effect: it's going to impact agriculture severely. Even if irrigation can be managed, untimely heavy rains will still damage crops, and they can't be prevented from affecting crops unless you build giant roofs.
If you are saying that all these effects can be defended against, I agree. But the key point is that our entire economy is built on a lot of things being extremely cheap. Erecting a giant roof over all sensitive cropland is far less technically challenging than launching a geostationary satellite, but we do the latter and not the former because of the sheer... (read more)
Of course, "leading to global warming" is a subset of "harmful for the environment". Agreed on all counts.
Computing can't harm the environment in any way - it's within a totally artificial human space.
The others ("good") can still harm the environment in general, but are much better with respect to AGW.
*Longtime lurker, and I've managed to fight akrasia and a genuine shortage of time to put my thoughts down into a post. I think it does deserve a post, but I don't have the karma or the confidence to create a top-level post.
Comments and feedback really welcome and desired: I've gotten tired of being intellectually essentially alone.*
There are many urgent problems in the world: increasing drug resistance in pathogens, shrinking populations of endangered species, rising fundamentalism, a rapid increase in lifestyle diseases, growing inequality, and so on. Yet Anthropogenic Global Warming (AGW) should be considered the defining crisis for humanity.
The key difference is that solutions to the other problems... (read 577 more words →)
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure-centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
- Resource exhaustion: There is always the risk of using up resources earlier for relatively unimportant things, and then facing constraints for later, more important purposes. This risk materialises more often the faster we develop. Undoing material exhaustion is difficult, while undoing energy exhaustion is impossible.
- Environmental limits: Excessive global warming, pollution, etc. impose costs on humans.
- Economic: Continuous uncoordinated development likely misallocates resources.
... (read more)