I think it doesn't make sense to suggest that 2 + 3 = 5 is a belief. It is the result of a set of definitions. As long as we agree on what 2, +, 3, =, and 5 mean, we have to agree on what 2 + 3 = 5 means. I think that if your brain were subject to a neutrino storm and you somehow felt that 2 + 3 = 6, you could still test that claim by other means, such as counting on your fingers, and catch the error.
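To make the point concrete, in a proof assistant the statement is literally true by unfolding definitions; a minimal sketch in Lean (any proof assistant would do):

    -- In Lean, `2 + 3 = 5` holds definitionally: both sides reduce to the
    -- same numeral, so the reflexivity proof `rfl` closes the goal.
    example : 2 + 3 = 5 := rfl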
Once you start asking why these things are the way they are, don't you have to start asking why anything exists at all, and what it means for anything to ...
Isn't a silicon chip technically a rock?
Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.
If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.
Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.
Presumably, morals can be derived from game-theoretic arguments about human society, just as aerodynamically efficient shapes can be derived from Newtonian mechanics. Likewise, Eliezer's simulated planet of Einsteins should be able to infer everything about the tentacle-creatures' morality simply from the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.
I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like op...
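To illustrate what I mean (a toy sketch, everything here made up): the search loop below is completely generic; all the interesting behavior lives in the scoring function that guides it, which is the role I'm claiming emotions play.

    import random

    # The search loop is generic; the "guide" (a scoring function here)
    # determines everything about what gets found.
    def guided_search(score, steps=1000):
        best = random.random()
        for _ in range(steps):
            candidate = random.random()
            if score(candidate) > score(best):  # the guide does all the work
                best = candidate
        return best

    print(guided_search(lambda x: -abs(x - 0.3)))  # this guide prefers ~0.3
    print(guided_search(lambda x: -abs(x - 0.8)))  # same loop, different taste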
talk as if the simple instruction to "Test ideas by experiment" or the p
I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Every successive improvement brings us asymptotically closer to the optimal algorithm, but each gives a smaller order of improvement than the one before. Just because we've gone from O(n^2) to O(n log(n)) in sorting algorithms doesn't...
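A quick illustration of the diminishing returns, using comparison-sort numbers (log2(n!) is the information-theoretic lower bound, so O(n log n) already sits close to optimal):

    import math

    # Rough operation counts for sorting n items; log2(n!) is the lower
    # bound for comparison sorts.
    for n in [10, 1_000, 1_000_000]:
        naive = n * n
        better = n * math.log2(n)
        bound = math.lgamma(n + 1) / math.log(2)  # log2(n!) via lgamma
        print(f"n={n:>9,}: n^2={naive:.1e}  n*log2(n)={better:.1e}  log2(n!)={bound:.1e}")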
I think that the "could" idea does not need to be confined to the process of planning future actions.
Suppose we think of the universe as a large state transition matrix, where our imperfect knowledge picks out an interval of states rather than a single state. Any state in the interval is then a "possible state," in the sense that it is consistent with our knowledge of the world, but we have no way to verify that it is in fact the actual state.
Now something that "could" happen corresponds to a state that is reachable from any o...
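A toy sketch of the idea (the transition relation here is made up): a state is "possible" if it's consistent with our knowledge, and something "could" happen if its state is reachable from at least one knowledge-consistent state.

    # States consistent with our knowledge form an "interval"; something
    # "could" happen iff it is reachable from at least one of them.
    transitions = {0: {1}, 1: {2}, 2: {0, 3}, 3: {3}}

    def reachable(start_states):
        seen, frontier = set(start_states), list(start_states)
        while frontier:
            for nxt in transitions[frontier.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    knowledge_interval = {0, 1}  # states consistent with what we know
    print(reachable(knowledge_interval))  # {0, 1, 2, 3}: all "could" happen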
In other words, the algorithm is:
explain_box(box) { if (|box.boxes| > 1) print(box.boxes) else explain_box(box.boxes[0]) }
which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
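For the record, a runnable version of that sketch (Python; `Box` is a hypothetical stand-in for a concept node):

    class Box:
        def __init__(self, boxes=None):
            self.boxes = boxes if boxes is not None else []

    def explain_box(box):
        if len(box.boxes) > 1:
            print(box.boxes)           # reducible: show the parts
        else:
            explain_box(box.boxes[0])  # a single part: keep unpacking

    irreducible = Box()
    irreducible.boxes = [irreducible]  # the concept "contains" only itself
    # explain_box(irreducible)         # recurses forever (RecursionError)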
There was an article in some magazine not too long ago that most people here have probably read, about how if you tell kids that they did good work because they are smart, they will not try as hard next time, whereas if you tell kids that they did good work because they worked hard, they will try harder and do better. This matches my own experience very well, because for a long time, I had this "smart person" approach to things, where I would try just hard enough to make a little headway, then either dismiss the problem as easy or give up. I see ...
Isn't causality just a map of a world strictly governed by physical laws? If a billiard ball strikes another ball, causing it to move, that is just our way of describing the motions of the balls. And besides, the universe doesn't even split the world up into individual "objects" or "events," so how can causality really exist?
By the way, any physical system is defined not just by its positions but by their derivatives as well (positions and velocities are enough to describe the complete state of a classical system; the second derivatives then follow from the laws of motion). So when you ta...
iwdw: there has been some thinking about the universe as an actual Game of Life; Stephen Wolfram's A New Kind of Science is the one that comes to mind, but I'm sure there are more reputable sources that he stole the idea from. I believe that this line of thinking runs into trouble with special relativity.
Speaking of which, has anyone ever attempted to actually model space as a graph of relationships between points, in a computer program? Something like the distance-configuration-space in the last post? It occurs to me that this could actually be a more robust represe...
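Something like this, in miniature (points and numbers made up): the "space" is nothing but a table of pairwise relations, and any geometry has to be recovered from the relations alone.

    # Space as a bare graph of distance relations between points.
    distance = {
        ("a", "b"): 1.0,
        ("b", "c"): 1.0,
        ("a", "c"): 1.9,  # a hair under 2.0, so the "space" isn't quite flat
    }
    # e.g. a triangle-inequality sanity check, computed purely relationally:
    assert distance[("a", "c")] <= distance[("a", "b")] + distance[("b", "c")]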
But the main thing that's different about time is that it has a clear direction, whereas the space dimensions don't. This comes from the fact that the universe started out in a very low-entropy state and has been evolving toward higher entropy ever since. I don't know if it's even possible to answer the question of why the universe started out the way it did -- it's almost like asking why anything exists at all. But whatever the reason, the universe is very uniform in its space dimensions, but very non-uniform in its time dimension.
Doesn't the Lorentz invariant already pretty much take care of the relativity of time? As long as we preserve the invariant, we're free to reparameterize the universe any way we want, and our description stays the same. So I don't see what this Barbour guy is going on about; it seems like standard physics. Whether you write your function f(x,t) or f(y) where y = g(x,t), or even just f(x) where t = h(x), is totally irrelevant to the universe. It's just another coordinate transformation, like translating the whole universe ten meters to the left.
Now, if you have a new invariant to propose, THAT would amount to an actual change in the laws of physics.
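For concreteness, the invariant I have in mind is the spacetime interval:

    % The spacetime interval between two events, which every Lorentz-related
    % choice of coordinates agrees on.
    s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2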
By the way, when the best introduction to a supposedly academic field consists of works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but I'm just throwing that out there. I mean, when your response to an AI researcher's disagreement is "Like, duh! Go read some sci-fi and then we'll talk!", who is really in the wrong here?
Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.
Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic. Furthermore, other people learn to multiply with less effort through tricks. So, I don't think it's really a flaw in our brains, per se.
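For example, one standard trick is to split one factor by place value so that each partial product is easy to hold in your head (numbers arbitrary):

    # 347 * 268 done the "trick" way: three easy partial products.
    a, b = 347, 268
    partials = [a * 200, a * 60, a * 8]  # 69400 + 20820 + 2776
    print(sum(partials), a * b)          # 92996 92996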
I think that I have only now really understood what Eliezer has been getting at with the past ten or so posts: this idea that you could be a scientist even if you generated hypotheses using a robot-controlled Ouija board. I think other readers have already said this numerous times, but it strikes me as terribly wrong.
First of all, good luck getting research funding for such hypotheses (and it wouldn't be fair to leave out funding from the description of Science if you're including institutional inertia and bias).
And I think we all know that in general, someon...
First, I think this can be said for any field: the textbooks don't tell you what you really need to know, because what you really need to know is a state of mind that you can only arrive at on your own.
And there are many scientists who do in fact spend time puzzling over how to distinguish good hypotheses from bad. Some don't, and they spend their days predicting what the future will be like in 2050. But they need not concern us, because they are just examples of people who are bad at what they do.
There is this famous essay: http://www.quackwatch.com/01Qua...
P(A&B) <= P(A), P(A|B) >= P(A&B)
Isn't this just ordinary logic? It doesn't really require all of probability theory. I believe that logic is a fairly uncontroversial element of scientific thought, though of course occasionally misapplied.
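Spelling that out (writing the second inequality in the form that is actually a theorem, P(A|B) >= P(A&B)), both facts need little more than the product rule:

    % A \wedge B entails A, so its probability cannot exceed P(A):
    P(A \wedge B) \le P(A)
    % and the product rule plus P(B) \le 1 gives the second inequality:
    P(A \mid B) = \frac{P(A \wedge B)}{P(B)} \ge P(A \wedge B)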
Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.
So then what good is this Bayes stuff to us, exactly, when we live in a world where the vast majority of things can't be computed?
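To be fair, when the answer is computable it is trivial; a made-up diagnostic-test example:

    # Bayes' rule on an invented test, a case where the Bayesian answer
    # *is* cheap to compute: P(H|E) = P(E|H) P(H) / P(E).
    p_h = 0.01                # prior: 1% have the condition
    p_e_h = 0.95              # P(positive | condition)
    p_e_nh = 0.05             # P(positive | no condition)
    p_e = p_e_h * p_h + p_e_nh * (1 - p_h)
    print(p_e_h * p_h / p_e)  # posterior ~ 0.16, despite the "95%" test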
Nick: Not any more ridiculous than throwing out an old computer or an old car or whatever else. If we dispense with the concept of a soul, then there is really no such thing as death, but just states of activity and inactivity for a particular brain. So if you accept that you are going to be inactive for probably decades, then what makes you think you're going to be worth reactivating?
If you accept that there is no "soul" and your entire consciousness exists only in the physical arrangement of your brain (I more or less believe this), then it would be the height of egotism to require someone to actively preserve your particular brain pattern for an unknown number of years until your body can be reactivated, simply because better ones are sure to come along in the meantime.
I mean, think about your 70-year-old uncle with his outdated ways of thinking and generally eccentric behavior -- now think of a freezer full of 700-year-old...
I also think you are taking the MWI-vs.-Copenhagen debate too literally. The reason why they are called interpretations is that they don't literally say anything about the actual underlying wave function. Perhaps, like Goofus in your earlier posts, some physicists have gotten confused and started to think of the interpretations as reality. But the idea that the wave function "collapses" only makes sense as a metaphor to help us understand its behavior. That is all that a theory that makes no predictions can be: a metaphor.
MWI and Copenhagen are differen...
Seriously, agreeing with Caledonian.
I remember Eliezer wrote an earlier essay to the effect that GR is a really simple theory, in some information-theoretic sense, and that we should therefore judge our theories by their information-theoretic complexity. But what's being missed here is that GR (and SR, and Newtonian physics, and arithmetic...) is simple only when stated on its own terms. That's WHY it's a paradigm shift. If you tried to state GR strictly as a modification of Newtonian mechanics in a global coordinate system, you would either fail, or you would...
1) Can someone tell me to what extent this many-worlds interpretation is really accepted? I mean, nobody told me the news that the collapse interpretation was no longer accepted, and I think I read such things in a recent physics textbook. So, can physicists remark on their experience?
2) I think the notion that the QM equations don't mean anything refers to the fact that nobody knows what the real substrate is in which QM takes place. It's a bit analogous to the pre-QM situation with light. People asked, what does light travel in? But since nobody was able...
As I understand it (someone correct me if I'm wrong), there are two problems with the Born rule: 1) It is non-linear, which suggests that it's not fundamental, since other fundamental laws seem to be linear
2) From my reading of Robin's article, I gather that the problem with the many-worlds interpretation is: let's say a world is created for each possible outcome (countable or uncountable). In that case, the vast majority of worlds should end up away from the peaks of the distribution, just because the peaks only occupy a small part of any distribution.
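For reference, the rule both points are about:

    % The Born rule: outcome i occurs with probability equal to the squared
    % amplitude, which is quadratic (not linear) in the state \psi.
    P(i) = \left| \langle i \mid \psi \rangle \right|^2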
Rob...
mitchell: As the Buddhists pointed out a long time ago, the flow of time is an illusion. All that you actually experience at any given moment is your present sensory input, plus memories of the past. And any number of experiences involving loss of consciousness show that the flow of time as we perceive it is completely subjective (not to say that there is no time "out there," just that we don't directly perceive it).
So while I agree that "something is happening," it does not necessarily consist of one th...
Eliezer, you are right; what I really meant to say was that once a person finds a locally optimal solution using whatever algorithm, they then have a threshold for changing their mind, and it is that threshold that is similar to temperature.
The metaphor can be made mathematically precise if we first make the analogy between human decision-making and optimization methods like simulated annealing and genetic algorithms. These optimization methods look for a locally optimal solution, but add some sort of "noise" term to try to find a globally optimal solution. So if we suppose that someone who wants to stay in his own local minimum has a lower "noise" temperature than someone who is open-minded, then the metaphor starts to make sense on a much more profound level.
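A minimal sketch of just the acceptance rule (all numbers illustrative): the temperature directly sets how often a strictly worse solution, i.e. a change of mind, gets accepted.

    import math
    import random

    # Simulated-annealing acceptance: higher "temperature" means a worse
    # candidate (delta > 0) is accepted more often, i.e. the searcher is
    # more willing to leave its current local minimum.
    def accept(delta, temperature):
        return delta <= 0 or random.random() < math.exp(-delta / temperature)

    open_minded, stubborn = 2.0, 0.1  # two illustrative temperatures
    delta, trials = 1.0, 10_000       # candidate is worse by 1.0
    for t in (open_minded, stubborn):
        rate = sum(accept(delta, t) for _ in range(trials)) / trials
        print(f"T={t}: accepts a worse solution {rate:.0%} of the time")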
I am also struck by the correlation-vs.-causation issue in the Canadian voters study. Moreover, how do we know that the attractiveness rating isn't actually a reflection of the qualities the voters claim to be looking for? I.e., a more confident, intelligent, eloquent candidate would probably also appear more attractive, all other things being equal.