ReadABook, I would suggest archiving your posts on another site. Nothing posted to this site is safe.
Don't you get the same effect from adding an orderly grid of dots?
In that particular example, yes. Because the image is static, as is the static.
If the static could change over time, you could get a better sense of where the image lies. It's cheaper and easier - and thus 'better' - to let natural randomness produce this static, especially since significant resources would have to be expended to eliminate the random noise.
What about from aligning the dots along the lines of the image?
If we knew where the image was, we wouldn't need the dots.
To be precise, in every case where the environment only cares about your actions and not what algorithm you use to produce them, any algorithm that can be improved by randomization can always be improved further by derandomization.
It's clear this is what you're saying.
It is not clear this can be shown to be true. 'Improvement' depends on what is valued, and on what the context permits. In the real world, the value of an algorithm depends not only on its abstract mathematical properties but also on the costs of implementing it in an environment of which we have only imperfect knowledge.
Caledonian: Yes, I did. So: can't you always do better in principle by increasing sensitivity?
That's a little bit like saying that you could in principle go faster than light if you ignore relativistic effects, or that you could in principle produce a demonstration within a logical system that it is consistent if you ignore Gödel's second incompleteness theorem.
There are lots of things we can do in principle if we ignore the fact that reality limits the principles that are valid.
As the saying goes: the difference between 'in principle' and 'in practice' is that in principle there is no difference between them, and in practice, there is.
If you remove the limitations on the amount and kind of knowledge you can acquire, randomness is inferior to the unrandom. But you can't remove those limitations.
Caledonian: couldn't you always do better in such a case, in principle (ignoring resource limits), by increasing resolution?
I double-checked the concept of 'optical resolution' on Wikipedia. Resolution is (roughly speaking) the ability to distinguish two close-together dots as different - the closer the dots can be and still be distinguished, the higher the resolution, and the greater the detail that can be perceived.
I think perhaps you mean 'sensitivity'. It's the ability to detect weak signals close to the perceptual threshold that noise improves, not the level of detail.
But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information - by deliberately throwing away your sensory acuity. This can only degrade the mutual information between yourself and the environment. It can only diminish what in principle can be extracted from the data.
It is certainly counterintuitive to think that, by adding noise, you can get more out of data. But it is nevertheless true.
Every detection system has a perceptual threshold: a level of stimulation needed for it to register a signal. If the system is mostly noise-free, this threshold is a 'sharp' transition. If the system has a lot of noise, the threshold is 'fuzzy'. The noise present at one moment might interact destructively with the signal, reducing its strength, or constructively, making it stronger. The result is that the threshold becomes an average; it is no longer possible to know whether the system will respond merely by considering the strength of the signal.
When dealing with a signal that is just below the threshold, a noiseless system won't be able to perceive it at all. But a noisy system will pick out some of it - some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively.
You can see this effect demonstrated at science museums. If an image is printed very, very faintly on white paper, just at the human threshold for visual detection, you can stare right at the paper and not see what's there. But if the same image is printed onto paper on which a random pattern of grey dots has also been printed, we can suddenly perceive some of it - and extrapolate the whole from the random parts we can see. We are very good at extracting data from noisy systems, but only if we can perceive the data in the first place. The noise makes it possible to detect the data carried by weak signals.
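The threshold effect described above can be sketched in a few lines of simulation. The constants here (threshold, signal strength, noise level) are illustrative assumptions, not values from the discussion:

```python
import random

random.seed(0)

THRESHOLD = 1.0   # detector fires only at or above this level
SIGNAL = 0.8      # weak signal, just below threshold (assumed value)
TRIALS = 10_000

def detections(noise_std):
    """Count trials in which signal plus Gaussian noise crosses the threshold."""
    return sum(
        SIGNAL + random.gauss(0.0, noise_std) >= THRESHOLD
        for _ in range(TRIALS)
    )

print(detections(0.0))  # noiseless system: the sub-threshold signal is never detected
print(detections(0.3))  # noisy system: the signal crosses threshold on some trials
```

The noiseless detector reports nothing at all, while the noisy one registers the signal on a sizeable fraction of trials - which is the information a downstream observer can then extrapolate from, as with the grey-dot image.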
When trying to make out faint signals, static can be beneficial. Which is why biological organisms introduce noise into their detection physiologies - a fact which surprised biologists when they first learned of it.
Foraging animals make the same 'mistake': given two territories in which to forage, one of which has a much more plentiful resource and is far more likely to reward an investment of time and effort with a payoff, the obvious strategy is to forage only in the richer territory. Instead, animals split their time between the two territories in proportion to the relative probability of a successful return.
In other words, if one territory is twice as likely to produce food through foraging as the other, animals spend twice as much time there: 2/3rds of their time in the richer territory, 1/3rd of their time in the poorer. Similar patterns hold when there are more than two foraging territories involved.
Although this results in a short-term reduction in food acquisition, it's been shown that this strategy minimizes the chances of exploiting the resource to local extinction, and ensures that the sudden loss of one territory for some reason (blight of the resource, natural disaster, predation threats, etc.) doesn't result in a total inability to find food.
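The short-term cost of this time-splitting strategy can be checked with a small arithmetic sketch. The payoff probabilities below are assumed for illustration; the text specifies only their 2:1 ratio:

```python
# Assumed payoff probabilities; the richer territory is twice as likely to pay off.
p_rich, p_poor = 0.6, 0.3

# Probability matching: time allocated in proportion to payoff probability.
t_rich = p_rich / (p_rich + p_poor)   # 2/3 of time in the richer territory

# Expected payoff per unit time under each strategy.
matching_yield = t_rich * p_rich + (1 - t_rich) * p_poor
maximizing_yield = p_rich             # forage only in the richer territory

print(round(t_rich, 3))         # 0.667
print(round(matching_yield, 3)) # 0.5
print(maximizing_yield)         # 0.6
```

Matching yields 0.5 expected payoffs per unit time versus 0.6 for always foraging in the richer territory - the short-term sacrifice the strategy trades for robustness.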
The strategy is highly adaptive in its original context. The problem with humans is that we retain our evolved, adaptive behaviors long after the context changes to make them non- or even mal-adaptive.
I would suggest taking a hard look at the elements of your social support network, and trying to determine which would sever their links with you if they knew you were not a Christian.
I do not agree that you are compelled not to lie to people. Truth is a valuable thing, and shouldn't be wasted on those unworthy of it.
Consider that Carl Sagan's protagonist in "Contact", Ellie Arroway, claimed to be a Christian, despite being an atheist. Look carefully at the arguments she offered regarding that claim, and see if they can be adapted to your life.
I would recommend that you refuse to claim beliefs that you do not hold, or to participate in actions that suggest you believe those things. Reciting the Creed if you do not accept it is out. Taking Communion if you reject the beliefs that form the basis of fellowship in your church is out. Going to confession if you don't believe you need to confess is out. And so on.
It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last particularly long when they are used. Does that make them poorly-made?
If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed *because* they stick together is speculating as to their intended function.
I do not see what is gained by labeling blind entropy-increasing processes as 'intelligence', nor do I see any way in which we can magically infer quality design without having criteria by which to judge configurations.
There is no way to tell that something is made by 'intelligence' merely by looking at it - it takes an extensive collection of knowledge about its environment to determine whether something is likely to have arisen through simple processes.
A pile of garbage seems obviously unnatural to us only because we know a lot about Earth nature. Even so, it's not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.
Isn't it a bit silly to complain that 'nonwood' is so vague as to be useless, when 'wood' is such a broad category that it alone conveys little useful information?
I don't think it's a major leap to guess that making the wagons out of balsa will be a bad idea, but it's a wood. So is pine - which contains highly flammable resins. If spontaneous combustion is an issue, knowing what kind of wood is used is important.
Likewise, selling nonapples is equivalent to saying we shouldn't sell apples but should continue to sell something. It conveys more than saying we should stop selling apples, which is compatible with ceasing to sell anything at all.
If vagueness is a valid complaint, how are we to interpret 'Coherent Extrapolated Volition'?