Which is not something that the typical person, i.e. someone who barely understands the notion of "folders", can do. I despair at the thought of explaining that some of their data exists on the laptop and some of it on this box over there, let alone how and when to move data between them.
There is no guarantee that there exists some way for them to understand.
Consider the possibility that only people with a nontrivial level of understanding can work with 5TB+ amounts of data. Perhaps it's a practical boost in capability that comes from understanding storage technology principles and tools... maybe?
What level of sophistication would you think is un-idiot-proof-able? Nuclear missiles? Not-proven-to-be-friendly AI?
Like certain singularitarian futurists.
So someone has mentioned it on LW after all. Lots of singularitarian ideas depend heavily on exponential growth.
almost infinitely larger
Actually infinitely larger :-).
Then it will be just as "tractable" as linear models
Not necessarily. Some useful classes of models will not have the specific nice properties that linear models have.
Thanks :) Can you elaborate a bit? Are you saying that I overreached: that there is often some transformed domain in which the model turns out to be simple, but that such a domain is not guaranteed to exist for every model?
When we define another subset of models suitable to the specific thing being modelled, then we will just as easily be able to come up with a set of explicit symbolic formulae.
Not necessarily. Closed-form solutions are not guaranteed to exist for your particular subset of models and, in fact, often do not, forcing you to use numeric methods with all the associated problems.
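As a concrete sketch of this point (my own example, not from the thread): the equation x·eˣ = 3 has no closed form in elementary functions (its exact solution involves the Lambert W function), so a numeric method is the only option. Here plain bisection:

```python
import math

# Illustrative example (not from the thread): solve x * e^x = 3.
# No elementary closed form exists, so we fall back on a numeric
# method - bisection on a bracketing interval.

def f(x):
    return x * math.exp(x) - 3.0

lo, hi = 0.0, 2.0        # f(0) = -3 < 0 and f(2) > 0, so a root lies between
for _ in range(60):      # repeatedly halve the bracketing interval
    mid = (lo + hi) / 2.0
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2.0   # roughly 1.05; an approximation, never exact
```

The "associated problems" mentioned above appear immediately: you need a bracketing interval, a stopping criterion, and you only ever get an approximation to some tolerance.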
Sorry, hadn't seen this (note to self: mail alerts).
Is this really true, even if we pick a similarly restricted set of models? I mean, consider a set of equations which can only contain products of powers of variables, like (x1)^a (x2)^b = const1, (x1)^d (x2)^e = const2.
Is this nonlinear? Yes. Can it be solved easily? Of course. In fact it is easily transformable to a set of linear equations through logarithms.
That's what I'm kinda getting at: I think there is usually some transform that can convert your problem into a linear, or, in general, easy problem. Am I more correct now?
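The transform in question takes only a few lines. Below is a sketch with made-up constants (the variable names follow the equations above): taking logarithms turns the product-of-powers system into a linear system, which a standard linear solver handles.

```python
import numpy as np

# Sketch of the log transform, with made-up constants:
#   x1^a * x2^b = c1
#   x1^d * x2^e = c2
# Taking logs gives a linear system in (log x1, log x2):
#   a*log(x1) + b*log(x2) = log(c1)
#   d*log(x1) + e*log(x2) = log(c2)

a, b, c1 = 2.0, 1.0, 8.0
d, e, c2 = 1.0, 3.0, 16.0

A = np.array([[a, b], [d, e]])
rhs = np.log([c1, c2])
x1, x2 = np.exp(np.linalg.solve(A, rhs))  # solve linearly, transform back

# both original nonlinear equations hold
assert np.isclose(x1 ** a * x2 ** b, c1)
assert np.isclose(x1 ** d * x2 ** e, c2)
```

Note this only works because the model class was restricted to products of powers; a general nonlinear system offers no such transform.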
Why do you think "harmful for the environment" means "leading to global warming"? Lots of things are harmful for the environment. Draining swamps to build railroads harms it. Holidaying leads to decreased "old habitat" biodiversity. Building power plants on small mountain rivers leads to decreased biodiversity, too. Yes, these things are good for us. That just has no bearing on whether they are good for nature.
Of course, "leading to global warming" is a subset of "harmful for the environment". Agreed on all counts.
Computing can't harm the environment in any way - it's within a totally artificial human space.
The others (the "good" ones) can harm the environment in other ways, but are much better with respect to AGW.
Wha...? Is that an argument by surface analogy? Does every increase in every value owing to human intervention lead to a catastrophe? How about internet connectivity? Land committed to agriculture? Air respired by humans? Shoes built? Radio waves transmitted?
How do you even measure the reference classes appropriately?
Ah, that particular idea of all human pleasures being harmful for the environment is pretty much religious. It's not at all what the actual impact looks like.
Computing is basically blameless in the direct sense for global warming. We should probably enjoy it as much as possible. Electricity is good. Trains are good. Holidaying is good.
Airconditioning is bad. Air travel is bad. Short product lifetime is bad.
The situation is far more positive than some make it out to be. Even the direst climate change predictions necessitate drastic changes in only some aspects of life.
AGW can't take away modern medicine or virtual reality from you.
Although I don't have any references handy, I've seen people argue that Kyoto-like changes in our lifestyles are necessary on ethical grounds apart from global warming. More often they'll simply dismiss any sort of technological solution as a "quick fix" or even as the thing that caused the problem in the first place.
There are quite a few people who would like to abdicate control over the physical world.
What do you mean by "abdicate control over the physical world"?
I fit the profile described here quite well. Feel free to ask (I know I'm 6 years late, but that's the point of internet forums).
Apologies for commenting almost a decade after most of the comments here, but this is exactly why people say "using nonlinear models is harder but more realistic".
The way we were taught math led us to believe that linear models form this space of tractable math, and nonlinear models form this somewhat larger space of mostly intractable math. This is mostly right, but the space of nonlinear models is almost infinitely larger than that of linear models. And that is the reason linear models are mathematically tractable: they form such a small space of possible models. Of course nonlinear models don't have general formulae that always work: they're just defined as what is NOT linear. In other words, linear models are severely restricted in the form they can have. When we define another subset of models suitable to the specific thing being modelled, then we will just as easily be able to come up with a set of explicit symbolic formulae. Then it will be just as "tractable" as linear models, even though it's nonlinear: simply because it has different special properties belonging to its own class of models, obeying something just like the law of linearity.
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure-centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
A lot of singularitarian thought tries to hold human desire to be exogenous and untouchable, which seems to be a rather odd blind-spot to have... we rightly discard the notion that death is desirable because it is natural, but not the notion that desire is sacred and hence should always be fulfilled, fighting against any and all limits?