ME3
I don't think it makes sense to call 2 + 3 = 5 a belief. It is the result of a set of definitions: as long as we agree on what 2, +, 3, =, and 5 mean, we have to agree that 2 + 3 = 5. I think that if your brain were subjected to a neutrino storm and you somehow felt that 2 + 3 = 6, you would still be able to discover that 2 + 3 is in fact 5 by other means, such as counting on your fingers.
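The "counting on your fingers" check can even be sketched as a tiny program; this is just a toy of my own (the names are mine, not from the original comment), in which numbers are groups of tally marks and addition is simply putting the groups together:

```python
# Verifying addition "by other means": represent each number as a group of
# tally marks and define addition as combining the groups and counting.
def tally(n):
    return ["|"] * n

def add_by_counting(a, b):
    # Put both groups of "fingers" side by side, then count them all.
    return len(tally(a) + tally(b))

print(add_by_counting(2, 3))  # counting the combined tallies gives 5
```

However the neutrino storm made the sum *feel*, the count comes out the same.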
Once you start asking why these things are the way they are, don't you have to start asking why anything exists at all, and what it means for anything to ...
Isn't a silicon chip technically a rock?
Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.
If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.
Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.
Presumably, morals can be derived from game-theoretic arguments about human society just like aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer's simulated planet of Einsteins would be able to infer everything about the tentacle-creatures' morality simply based on the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.
I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like op...
talk as if the simple instruction to "Test ideas by experiment" or the p
I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Each successive improvement brings us asymptotically closer to the optimal algorithm, but cannot give the same order of improvement as the preceding ones. Just because we've gone from O(n^2) to O(n log(n)) in sorting algorithms doesn't...
I think that the "could" idea does not need to be confined to the process of planning future actions.
Suppose we think of the universe as a large state transition matrix, with some states being defined as intervals because of our imperfect knowledge of them. Then, any state in the interval is a "possible state" in the sense that it is consistent with our knowledge of the world, but we have no way to verify that this is in fact the actual state.
Now something that "could" happen corresponds to a state that is reachable from any o...
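In this picture, the set of things that "could" happen is just the union of everything reachable from each state consistent with our knowledge. A toy sketch, with a made-up transition table standing in for the universe:

```python
# "Could happen" = reachable from at least one state consistent with our
# (interval-valued) knowledge. The tiny universe below is invented for
# illustration: six states, arrows are deterministic transitions.
from collections import deque

transitions = {0: [1], 1: [2], 2: [3], 3: [3], 4: [5], 5: [4]}

def reachable(start, transitions):
    # Breadth-first search over the transition graph.
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in transitions[s]:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Our knowledge only narrows the actual state down to the "interval" {0, 4}.
knowledge = {0, 4}
could_happen = set().union(*(reachable(s, transitions) for s in knowledge))
print(sorted(could_happen))
```

Which of these states *will* occur depends on the actual state, which we have no way to pin down further.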
In other words, the algorithm is,
explain_box(box) { if (|box.boxes| > 1) print(box.boxes) else explain_box(box.boxes[0]) }
which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
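A runnable version of the same sketch, with a depth cap standing in for the infinite loop (the Box representation and all names here are mine, chosen for illustration):

```python
# Unpack a concept until it decomposes into more than one part. An
# irreducible concept never does, so we cap the recursion depth rather
# than loop forever.
def explain_box(box, max_depth=100):
    if max_depth == 0:
        raise RecursionError("irreducible concept: no further decomposition")
    if len(box["boxes"]) > 1:
        return box["boxes"]
    return explain_box(box["boxes"][0], max_depth - 1)

# A reducible concept: one wrapper, then a genuine decomposition.
sound = {"boxes": [{"boxes": ["vibration", "air", "ear"]}]}
print(explain_box(sound))  # ['vibration', 'air', 'ear']
```

An irreducible concept, modeled as a box containing only itself, recurses until the cap trips, which is the infinite loop of the original sketch.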
There was an article in some magazine not too long ago that most people here have probably read, about how if you tell kids that they did good work because they are smart, they will not try as hard next time, whereas if you tell kids that they did good work because they worked hard, they will try harder and do better. This matches my own experience very well, because for a long time, I had this "smart person" approach to things, where I would try just hard enough to make a little headway, then either dismiss the problem as easy or give up. I see ...
Isn't causality just a map of a world strictly governed by physical laws? If a billiard ball strikes another ball, causing it to move, that is just our way of describing the motions of the balls. And besides, the universe doesn't split the world up into individual "objects" or "events," so how can causality really exist?
By the way, any physical system is defined not just by its positions, but by its first derivatives (velocities) as well -- I believe that together these are enough to describe the complete state of a system, with the second derivatives then fixed by the laws of motion. So when you ta...
iwdw: there has been some thinking about the universe as an actual game of Life; Stephen Wolfram's A New Kind of Science is the one that comes to mind, but I'm sure there are more reputable sources that he stole the idea from. I believe this line of thinking runs into trouble with special relativity.
Speaking of which, has anyone ever attempted to actually model space as a graph of relationships between points, in a computer program? Something like the distance-configuration-space in the last post? It occurs to me that this could actually be a more robust represe...
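As a toy of what such a representation might look like: here the points carry no coordinates at all, only links to other points, and distance falls out as shortest path length. The grid and all names are invented for illustration:

```python
# Space as a graph of relationships: no coordinates stored anywhere,
# only adjacency. Distance = fewest hops between two points.
from collections import deque

# A hypothetical 3x3 patch of "space", defined purely by which points touch.
links = {
    "a": ["b", "d"], "b": ["a", "c", "e"], "c": ["b", "f"],
    "d": ["a", "e", "g"], "e": ["b", "d", "f", "h"], "f": ["c", "e", "i"],
    "g": ["d", "h"], "h": ["e", "g", "i"], "i": ["f", "h"],
}

def graph_distance(start, goal):
    # Breadth-first search: hop count plays the role of metric distance.
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

print(graph_distance("a", "i"))  # 4 hops corner to corner
```

Geometry then becomes a derived property of the link structure rather than something stored per point, which is what would make the representation robust.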
But the main thing that's different about time is that it has a clear direction whereas the space dimensions don't. This is caused by the fact that the universe started out in a very low-entropy state, and since then has been evolving into higher entropy. I don't know if it's even possible to answer the question of why the universe started out the way it did -- it's almost like asking why anything exists at all. But whatever the reason, the universe is very uniform in its space dimensions, but very non-uniform in its time dimension.
Doesn't Lorentz invariance already pretty much take care of the relativity of time? As long as we stick to Lorentz-invariant quantities, we're free to reparameterize the universe any way we want, and our description will be the same. So I don't see what this Barbour guy is going on about; it seems like standard physics. Whether you write your function f(x,t) or f(y) where y = g(x,t), or even just f(x) where t = h(x), is totally irrelevant to the universe. It's just another coordinate transformation, like translating the whole universe ten meters to the left.
Now, if you have a new invariant to propose, THAT would amount to an actual change in the laws of physics.
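For concreteness, the invariant in question is the spacetime interval, and a Lorentz boost is one reparameterization that leaves it untouched (restricting to one space dimension):

```latex
ds^2 = c^2\,dt^2 - dx^2,
\qquad
t' = \gamma\!\left(t - \frac{v\,x}{c^2}\right),
\quad
x' = \gamma\,(x - v\,t),
\quad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
```

and a short calculation confirms that c^2 dt'^2 - dx'^2 = c^2 dt^2 - dx^2. A transformation that broke this equality would be exactly the kind of new invariant that amounts to a change in the laws of physics.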
By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there. I mean, when your response to an AI researcher's disagreement is "Like, duh! Go read some sci-fi and then we'll talk!" who is really in the wrong here?
Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.
Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic. Furthermore, other people learn to multiply with less effort through tricks. So, I don't think it's really a flaw in our brains, per se.
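One trick of that kind, for example, turns a product straddling a round number into a difference of squares, which needs no paper at all:

```latex
47 \times 53 = (50 - 3)(50 + 3) = 50^2 - 3^2 = 2500 - 9 = 2491.
```

The brain that "can't multiply" handles the round-number version instantly; the effort was in the representation, not the arithmetic.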
I think I have only now really understood what Eliezer has been getting at in the past ten or so posts: this idea that you could be a scientist even if you generated hypotheses with a robot-controlled Ouija board. Other readers have already said this numerous times, but it strikes me as terribly wrong.
First of all, good luck getting research funding for such hypotheses (and it wouldn't be fair to leave out funding from the description of Science if you're including institutional inertia and bias).
And I think we all know that in general, someon...