you can go outside the particulars and generalize.

You can't get to the outside. No matter what perspective you are indirectly looking from, you are still ultimately looking from your own perspective. (True objectivity is an illusion - it amounts to you imagining you have stepped outside of yourself.) This means that, for any given phenomenon you observe, you are going to have to encode that phenomenon into your own internal modeling language first to understand it, and you will therefore perceive some lower bound on complexity for the expression of that phenomenon. But that complexity, while it seems intrinsic to the phenomenon, is in fact intrinsic to your relationship to the phenomenon, and your ability to encode it into your own internal modeling language. It's a magic trick played on us by our own cognitive limitations.

Complexity is a property that's observed in the world via senses or data input mechanisms, not just something within the mind.

Senses and data input mechanisms are relationships. The observer and the object are related by the act of observation. You are looking at two systems, the observer and the object, and claiming that the observer's difficulty in building a map of the object is a consequence of something intrinsic to the object. But you forget that you are part of this system too, and that your own relationship to the object requires you, also, to build a map of it. You therefore can't use this as an argument that the difficulty of mapping the object is intrinsic to the object rather than to the relationship of observation.

For any given phenomenon A, I can make up a language L1 in which A corresponds to a primitive element. The minimum description length for A in L1 is therefore 1. Now imagine another language, L2, in which A has a very long description length. The invariance theorem for Kolmogorov complexity, which I believe is what you are basing your intuition on, can be misread as saying that there is some minimal encoding length for a given phenomenon regardless of language. That is not what the theorem actually says. What it does say is that the difficulty of encoding phenomenon A in L2 is at most the difficulty of encoding A in L1 plus the difficulty of encoding L1 in L2. In other words, given that A has a minimum description length of 1 in L1 but a very long description length in L2, we can be certain that L1 also has a long description length in L2. In terms of conceptual distance, all the invariance theorem says is that if L1 is close to A, then it must be far from L2, because L2 is far from A. It's just the triangle inequality in another guise. (Admittedly, conceptual distance lacks one property we typically expect of a distance measure, namely that the distance from A to B equals the distance from B to A, but that is irrelevant here.)
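
To make that concrete, here is the usual textbook statement of the bound in standard notation (a sketch, not anything quoted from the original discussion), where K_L(A) denotes the length of the shortest description of A in language L:

```latex
% Invariance theorem, standard form (assuming L2 is a universal description language;
% K_L(A) = length of the shortest description of A in language L):
%
% there exists a constant c_{L1 -> L2}, depending only on L1 and L2 (not on A), with
%
K_{L_2}(A) \;\le\; K_{L_1}(A) + c_{L_1 \to L_2}
%
% c_{L1 -> L2} is essentially the length of an interpreter for L1 written in L2 --
% exactly the "encode A in L1, then encode L1 in L2" reading given above.
```

Nothing in the bound pins down an absolute, language-independent complexity for A; it only relates the two languages to each other.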

You should read up on regularization and the no free lunch theorem, if you aren't already familiar with them.

A theory is a model for a class of observable phenomena. A model is constructed from smaller primitive (atomic) elements connected together according to certain rules. (Ideally, the model's behavior or structure is isomorphic to that of the class of phenomena it is intended to represent.) We can take this collection of primitive elements, plus the rules for how they can be connected, as a modeling language. Now, depending on which primitives and rules we have selected, it may become more or less difficult to express a model with behavior isomorphic to the original, requiring more or fewer primitive elements. This means that Occam's razor will suggest different models as the simplest alternatives depending on which modeling language we have selected. Minimizing complexity in each modeling language lends a different bias toward certain models and against other models, but those biases can be varied or even reversed by changing the language that was selected. There is consequently nothing mathematically special about simplicity that lends an increased probability of correctness to simpler models.
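
To make the dependence on the modeling language concrete, here is a toy sketch (my own illustration, with description length crudely measured as a token count; none of this comes from the essay):

```python
# Toy illustration: the same pattern has very different description lengths
# depending on which primitives the "modeling language" happens to provide.

pattern = "ABABABABAB"

# Language 1: primitives include a repetition operator.
# The pattern is described as REPEAT("AB", 5) -- three tokens.
description_in_L1 = ["REPEAT", "AB", 5]

# Language 2: primitives are single characters only, with no repetition operator.
# The pattern must be spelled out literally -- ten tokens.
description_in_L2 = list(pattern)

print(len(description_in_L1))  # 3
print(len(description_in_L2))  # 10

# Occam's razor applied inside L1 rewards models that exploit repetition;
# applied inside L2 it cannot, so which model counts as "simplest" shifts
# with the choice of language.
```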

That said, there are valid reasons to use Occam's razor nonetheless, and not just the ones the author of this essay lists, such as resource-constraint optimization. In fact, it is reasonable to expect that using Occam's razor does increase the probability of correctness, though not because simplicity is good in itself. Consider the fact that human beings evolved in this environment, and that our minds are therefore tailored by evolution to be good at identifying patterns that are common within it. In other words, the modeling language used for human cognition has been optimized to some degree to easily express patterns that are observable in our environment. Thus, for the specific pairing of the human environment with the modeling language used by human minds, a bias toward simpler models probably is indicative of an increased likelihood of the model being appropriate to the observed class of phenomena, despite simplicity being irrelevant in the general case of an arbitrary pairing of environment and modeling language.

In brainstorming, a common piece of advice is to let down your guard and just let the ideas flow without any filters or critical thinking, and then follow up with a review to select the best ones rationally. The concept here is that your brain has two distinct modes of operation, one for creativity and one for analysis, and that they don't always play well together, so by separating their activities you improve the quality of your results. My personal approach mirrors this to some degree: I rapidly alternate between these two modes, starting with a new idea, then finding a problem with it, then proposing a fix, then finding a new problem, etc. Mutation, selection, mutation, selection... Evolution, of a sort.
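
For what it's worth, the alternation can be caricatured as a simple generate-and-test loop (a toy sketch of my own, not a claim about how cognition actually works):

```python
import random

# Alternate between a "creative" step that proposes a variation and an
# "analytic" step that scores it and keeps the better option:
# mutation, then selection, repeated.

def mutate(idea):
    """Creative mode: propose a random variation of the current idea."""
    new = idea.copy()
    i = random.randrange(len(new))
    new[i] += random.choice([-1, 1])
    return new

def score(idea):
    """Analytic mode: a stand-in objective; here, closeness to a target."""
    target = [3, 1, 4, 1, 5]
    return -sum((a - b) ** 2 for a, b in zip(idea, target))

idea = [0, 0, 0, 0, 0]
for _ in range(200):
    candidate = mutate(idea)             # generate without judging
    if score(candidate) >= score(idea):  # then judge, keeping the better one
        idea = candidate
print(idea)
```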

Understanding is an interaction between an internal world model and observable evidence. Every world model contains "behind the scenes" components which are not directly verifiable and which serve to explain the more superficial phenomena. This is a known requirement to be able to model a partially observable environment. The irrational beliefs of Descartes and Leibniz which you describe motivated them to search for minimally complex indirect explanations that were consistent with observation. The empiricists were distracted by an excessive focus on the directly verifiable surface phenomena. Both aspects, however, are important parts of understanding. Without intangible behind-the-scenes components, it is impossible to build a complete model. But without the empirical demand for evidence, you may end up modeling something that isn't really there. And the focus on minimal complexity as expressed by their search for "harmony" is another expression of Occam's razor, which serves to improve the ability of the model to generalize to new situations.
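
As a minimal illustration of that requirement (my own toy example, not drawn from the post): a hidden component that is never observed directly can still be what the model is really about, with observations only updating a belief over it.

```python
# Hidden state: is a coin "fair" or "biased"? We only ever see flips,
# never the label itself -- the label is the "behind the scenes" component.

priors = {"fair": 0.5, "biased": 0.5}
likelihood = {                  # P(observation | hidden state)
    "fair":   {"H": 0.5, "T": 0.5},
    "biased": {"H": 0.9, "T": 0.1},
}

def update(belief, observation):
    """Bayesian belief update over the unobservable component."""
    unnorm = {s: belief[s] * likelihood[s][observation] for s in belief}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = priors
for obs in "HHHHTHHH":
    belief = update(belief, obs)
print(belief)  # a belief about something no single observation reveals directly
```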

A lot of focus is given to the scientific method's demand for empirical evidence for a falsifiable hypothesis, but very little emphasis is placed on the act of coming up with those hypotheses in the first place. You won't find any suggestions in most presentations of the scientific method as to how to create new hypotheses, or how to identify which new hypotheses are worth pursuing. And yet this creative part of the cycle is every bit as vital as the evidence gathering. Creativity is poorly understood compared to rationality, despite being one of the two pillars of scientific and technological advancement, the other being verification against evidence. By searching for harmony in nature, Descartes and Leibniz were engaging in a pattern-matching process, searching the hypothesis space for good candidates for scientific evaluation. They were supplying the fuel that runs the scientific method. With a surplus of fuel, you can go a long way even with a less efficient engine. You might even be able to fuel other engines, too.

I would love to see some meta-scientific research into which variants of the scientific method are most effective. Perhaps an artificial, partially observable environment whose full state and function are known to the meta-researchers could be presented to non-meta researchers as an object of study, to determine which habits and methods are most effective for identifying the true nature of the artificial environment. This would be like measuring the effectiveness of machine learning algorithms on a set of benchmark problems, but with humans in place of the algorithms. (It would be great, too, if the social aspect of scientific research were included in the study, effectively treating the scientific community as a distributed learning algorithm.)
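
In the spirit of that benchmark analogy, here is a rough sketch (my own toy framing; the environment and the two "strategies" are invented for illustration) of how such a comparison could be scored when the ground truth is known to the meta-researchers:

```python
import random

def make_environment(true_bias):
    """A hidden-parameter environment: each query returns a noisy observation."""
    return lambda: 1 if random.random() < true_bias else 0

def strategy_small_sample(env, budget=10):
    return sum(env() for _ in range(budget)) / budget

def strategy_large_sample(env, budget=1000):
    return sum(env() for _ in range(budget)) / budget

true_bias = 0.37                      # known only to the meta-researchers
env = make_environment(true_bias)
for name, strategy in [("small sample", strategy_small_sample),
                       ("large sample", strategy_large_sample)]:
    estimate = strategy(env)
    print(name, abs(estimate - true_bias))  # lower error = more effective method
```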

The only way to pretend that human value isn't just another component of how humans historically have done this, is by bestowing some sort of transcendent component to human biology (i.e. a soul or something).

Human values are special because we are human. Each of us is at the center of the universe, from our own perspective, regardless of what the rest of the universe thinks of that. It's the only way for anything to have value at all, because there is no other way to choose one set of values over another except that you happen to embody those values. The paperclip maximizer's goals do not have value with respect to our own, and it is only our own that matter to us.

how can you insist that the universe will have no "point" if these "values" get adjusted to compromise with the existence of an ASI?

A paperclip maximizer could have its values adjusted to want to make staples instead. But what would the paperclip maximizer think of this? Clearly, this would be contrary to its current goal of making paperclips. As a consequence, the paperclip maximizer will not want to permit such a change, since what it would become would be meaningless with respect to its current values. The same principle applies to human beings. I do not want my values to be modified, because who I would become would be devalued with respect to my current values. Even if the new me found the universe every bit as rich and meaningful as the old me did, it would be no comfort to me now, because the new me's values would not coincide with my current values.

Regarding this post and the complexity of value:

Taking a paperclip maximizer as a starting point, the machine can be divided into two primary components: the value function, which dictates that more paperclips is a good thing, and the optimizer, which increases the universe's score with respect to that value function. What we should aim for, in my opinion, is to become the value function to a really badass optimizer. Building a machine that asks us how happy we are, and then does everything in its power to improve that rating (so long as it doesn't involve modifying our values or controlling our ability to report them), is the only way to build a machine that reliably encompasses all of our human values.
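
Here is a bare-bones sketch of that division of labor (my own illustration; the `human_rating` interface is a hypothetical stand-in for actually asking people, not anything proposed in the post):

```python
import random

def human_rating(world_state):
    """Stand-in for asking the humans how happy they are with the current state.
    In the real architecture this would be a query to people, not a formula."""
    return -abs(world_state - 7.0)   # pretend people happen to prefer states near 7

def optimizer(current_state, steps=1000):
    """Optimizer component: proposes small changes and keeps whatever the human
    value function rates more highly. It never edits the value function itself."""
    state = current_state
    for _ in range(steps):
        proposal = state + random.uniform(-0.5, 0.5)
        if human_rating(proposal) > human_rating(state):
            state = proposal
    return state

print(optimizer(0.0))  # drifts toward whatever the humans actually rate highest
```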

Any other route and we are only steering the future by proxy - via an approximation to our values that may be fatally flawed and make it impossible for us to regain control when things go wrong. Even if we could somehow perfectly capture all of our values in a single function, there is still the matter of how that value function is embedded via our perceptions, which may differ from the machine's, the fact that our values may continue to change over time and thereby invalidate that function, and the fact that we each have our own unique variation on those values to start with. So yes, we should definitely keep our hands on the steering wheel.

It is better in the sense that it is ours. For an agent with values, embedded in a much greater universe that might contain other agents with other values, it is an inescapable fact that ultimately the only thing that makes one particular set of values matter more to that agent is that those values are its own.

One of our values happens to be respect for others' values. But this particular value becomes self-contradictory when taken to its natural conclusion. To take it to its conclusion would be to say that nothing matters in the end, not even what we ourselves care about. Consider the case of an alien being whose values include disrespecting others' values. Is the human value placed on respecting others' values in some deep sense better than this being's?

At some point you have to stop and say, "Sorry, my own values take precedence over yours when they are incompatible to this degree. I cannot respect this value of yours." And what gives you the justification to do this? Because it is your choice, your values. Ultimately, we must be chauvinists on some level if we are to have any values at all. Otherwise, what's wrong with a sociopath who murders for joy? How can we say that their values are wrong, except to say that their values contradict our own?

I didn't miss the point; I just had one of my own to add. I gave the post a thumbs-up before I made my comment, because I agree with the overwhelming majority of it and have dealt with people who have some of the confusions described therein. Anyway, thanks for explaining.

I guess relevance is a matter of perspective. I was not aware that my ideas were not novel; they were at least my own and not something I parroted from elsewhere. Thanks for taking the time to explain, and no, I feel much better now.

My first comment ever on this site promptly gets downvoted without explanation. If you disagree with something I said, at least speak up and say why.

If evolutionary biology could explain a toaster oven, not just a tree, it would be worthless.

But it can, if you consider a toaster to be an embodied meme. Of course, the evolution that applies to toasters is more Lamarckian than Darwinian, but it is still evolution. Toaster designs that have higher utility to human beings enjoy higher rates of reproduction, carried out indirectly by human beings. The basic elements of evolution, namely mutation, selection, and reproduction, are all there.

What's interesting is that while natural evolution of biological organisms easily gets stuck in local optima, the backwards retina being an example, artificial evolution of technology often does not, due to the human mind being in the reproductive loop. This is, in part, because we can perform local hill-climbing in the design space after a large potential improvement is introduced, much as described in this article on the use of hill climbing in genetic algorithms. For example, we can imagine making the change to the retina to fix its orientation, and then, holding that change in place, search for improvements in the surrounding design space to make it workable, thereby skipping over poorly designed eyes and going straight to a new and better area in the fitness landscape.
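
A rough sketch of that "big jump, then local hill-climbing with the change held fixed" pattern (my own toy example, not code from the linked article):

```python
import random

def fitness(design):
    # Stand-in fitness landscape; real design spaces are far messier.
    return -sum((x - 2.0) ** 2 for x in design)

def local_hill_climb(design, frozen_index, steps=200):
    """Improve every parameter except the one we deliberately changed."""
    best = design.copy()
    for _ in range(steps):
        candidate = best.copy()
        i = random.choice([j for j in range(len(candidate)) if j != frozen_index])
        candidate[i] += random.uniform(-0.2, 0.2)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

design = [0.0, 0.0, 0.0]
design[0] = 2.0                                    # the big, deliberate change ("flip the retina")
design = local_hill_climb(design, frozen_index=0)  # adapt the rest of the design to fit it
print(design, fitness(design))
```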
