ME3
ME3 has not written any posts yet.

I don't think it makes sense to suggest that 2 + 3 = 5 is a belief. It is the result of a set of definitions: as long as we agree on what 2, +, 3, =, and 5 mean, we have to agree on what 2 + 3 = 5 means. I think that if your brain were subject to a neutrino storm and you somehow felt that 2 + 3 = 6, you would still be able to check whether 2 + 3 really is 6 by other means, such as counting on your fingers.
Once you start asking why these things are the way they are, though, don't you have...
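To make the "set of definitions" point concrete, here is a minimal sketch in Lean 4, assuming nothing beyond the standard natural-number meanings of 2, 3, 5, +, and =:

-- Both sides reduce to the same term once the definitions are unfolded,
-- so reflexivity is the entire proof.
example : 2 + 3 = 5 := rfl

-- "Counting on your fingers": just evaluate the left-hand side.
#eval 2 + 3  -- 5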
I read it as saying, "Suppose there is a mind with an anti-Occamian and anti-Laplacian prior. This mind believes that . . ." but of course saying "there is a possible mind in mind design space" is a much stronger statement than that, and I agree that it must be justified. I don't see how such a mind could possibly do anything that we consider mind-like, in practice.
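For concreteness, one way to cash out the anti-Laplacian half (just a sketch; this is only one possible formalization): where Laplace's rule of succession assigns

\[ P(\text{next trial succeeds} \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2}, \]

such a reversed mind would assign something like

\[ P_{\text{anti}}(\text{next trial succeeds} \mid s \text{ successes in } n \text{ trials}) = \frac{n-s+1}{n+2}, \]

treating every observed success as evidence against the next one, which is part of why it is hard to picture it doing anything recognizably mind-like for long.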
Really, I don't know if this has been mentioned before, but formal systems and the experimental process were developed centuries ago to solve the very problems you keep talking about (rationality, avoiding self-deception, etc.). Why do you keep trying to bring us back...
Isn't a silicon chip technically a rock?
Also, I take it that this means you don't believe the whole "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series, and it seemed absurd to me, sort of for the reasons you say.
If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.
Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.
Presumably, morals can be derived from game-theoretic arguments about human society just like aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer's simulated planet of Einsteins would be able to infer everything about the tentacle-creatures' morality simply based on the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.
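As a toy version of that game-theoretic derivation (a sketch only; the payoff numbers and strategies are standard textbook choices, not anything specific to the post): in an iterated prisoner's dilemma, a reciprocating strategy ends up far better off against itself than unconditional defection does, which is the flavor of argument that could ground a norm like "don't defect on your neighbors" purely in the players' situation.

# Toy iterated prisoner's dilemma: reciprocity ("tit for tat") vs. pure
# defection, using the standard payoff ordering T=5 > R=3 > P=1 > S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then copy whatever the opponent did last round.
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strat_a(history)
        b = strat_b([(y, x) for x, y in history])  # opponent sees mirrored history
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)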
I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design space, then the interesting part is what guides the search. And it seems like at least some of our emotions are an intrinsic part of this process, e.g., the perception of beauty, laziness, patience or the lack thereof, etc. In fact, I think many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like optical illusions). In short, I think any guided optimization process will resemble human intelligence...
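To put the "what guides the search" point in code (a toy sketch, nothing more): the loop below is completely generic, and everything interesting lives in the score function, which is the slot where things like a sense of beauty or impatience would sit for a human designer.

import random

def guided_search(initial, neighbors, score, steps=1000):
    # Generic greedy search over a design space. The loop itself is dumb;
    # the `score` function is the "guide" that decides what counts as better.
    current = initial
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        if score(candidate) > score(current):
            current = candidate
    return current

# Toy design space: a single number, with the guide preferring values near 42.
best = guided_search(
    initial=0,
    neighbors=lambda x: [x - 1, x + 1],
    score=lambda x: -abs(x - 42),  # the "preference" doing all the guiding
)
print(best)  # 42, given enough steps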
talk as if the simple instruction to "Test ideas by experiment" or the p
I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Every successive improvement to the process brings us asymptotically closer to the optimal algorithm, but each one yields a smaller order of improvement than the one before. Just because we've gone from O(n^2) to O(n log n) in sorting algorithms doesn't mean we'll eventually get to O(1).
Aha! you say. But human brains are so inefficient that actually we haven't even gone a...
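To make the diminishing-returns point concrete (timings are machine-dependent; the numbers in the comments are only indicative): an O(n^2) sort against the library's O(n log n) sort. The jump between them is enormous, but comparison sorting is provably Ω(n log n), so the jumps cannot continue all the way to O(1).

import random, time

def insertion_sort(a):
    # Classic O(n^2) comparison sort.
    a = list(a)
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

data = [random.random() for _ in range(10_000)]

t0 = time.perf_counter(); insertion_sort(data); t1 = time.perf_counter()
t2 = time.perf_counter(); sorted(data);         t3 = time.perf_counter()

print(f"O(n^2) insertion sort:    {t1 - t0:.2f} s")   # seconds
print(f"O(n log n) built-in sort: {t3 - t2:.4f} s")   # milliseconds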
I think that the "could" idea does not need to be confined to the process of planning future actions.
Suppose we think of the universe as a large state transition matrix, where our imperfect knowledge pins the world down only to an interval (a set) of states rather than a single one. Then any state in that interval is a "possible state" in the sense that it is consistent with our knowledge of the world, but we have no way to verify that it is in fact the actual state.
Now, something that "could" happen corresponds to a state that is reachable from at least one of the "possible states" via the state transition matrix (in the linear-systems sense of reachable). This applies to the world outside ("A meteor could hit me at any moment") as well as to my internal state ("I could jump off a cliff"): given my imperfect knowledge of my own state and other factors, the jump-off-a-cliff state is reachable from this fuzzy cloud of states.
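A small sketch of that reading of "could" (toy states and transitions, standing in for the actual transition matrix): take the set of states consistent with what we know and close it under the transition relation; anything inside the closure "could" happen.

from collections import deque

def could_happen(possible_states, transitions, target):
    # True if `target` is reachable from at least one state that is
    # consistent with our (imperfect) knowledge of the world.
    frontier = deque(possible_states)
    reachable = set(possible_states)
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return target in reachable

# Toy world: all we know is that we are either standing or walking.
transitions = {
    "standing": ["walking"],
    "walking":  ["standing", "at_cliff"],
    "at_cliff": ["jumped"],
}
print(could_happen({"standing", "walking"}, transitions, "jumped"))  # True
print(could_happen({"standing", "walking"}, transitions, "flying"))  # False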
In other words, the algorithm is:
explain_box(box) { if (|box.boxes| > 1) print(box.boxes) else explain_box(box.boxes[0]) }
which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
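In runnable form (a sketch, with a plain list standing in for box.boxes), the loop-on-irreducibility behaviour looks like this:

def explain_box(box):
    # A "box" is just a list of sub-boxes (the parts a concept reduces to).
    if len(box) > 1:
        print(box)            # reducible: list the parts and stop
    else:
        explain_box(box[0])   # a single part: keep unpacking it

explain_box([[["a", "b"]]])   # prints ['a', 'b']

irreducible = []
irreducible.append(irreducible)  # a "concept" whose only part is itself
# explain_box(irreducible)       # never terminates (RecursionError in Python)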
You know, I think Caledonian is the only one who has the right idea about the nature of what's being written on this blog. I will miss him because I don't have the energy to battle this intellectual vomit every single day. And yet, somehow I am forced to continue looking. Eliezer, how does your metamorality explain the desire to keep watching a trainwreck?