Comments

I think it doesn't make sense to suggest that 2 + 3 = 5 is a belief. It is the result of a set of definitions. As long as we agree on what 2, +, 3, =, and 5 mean, we have to agree on what 2 + 3 = 5 means. I think that if your brain were subject to a neutrino storm and you somehow felt that 2 + 3 = 6, you would still be able to check whether 2 + 3 = 6 by other means, such as counting on your fingers.
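
To make the "just definitions" point concrete: unfolding the usual Peano definitions (writing S for successor) gives 2 + 3 = 2 + S(2) = S(2 + 2) = S(S(2 + 1)) = S(S(S(2 + 0))) = S(S(S(2))) = 5. The same check as a one-line Lean 4 example (my own illustration, not part of the original comment):

example : 2 + 3 = 5 := rfl  -- both sides reduce to the same numeral, so the equation holds by definition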

Once you start asking why these things are the way they are, don't you have to start asking why anything exists at all, and what it means for anything to exist? And I'm pretty sure that at that point we are firmly in the province of philosophy and there are no equations to be written, because the existence of the equations themselves is part of the question we're asking.

But I mean, this question has been in my mind since the beginning of the quantum series. I've written a lot of useful software since then, though, without entertaining it much. Do you think maybe it's just better to get on with our lives? It's not a rhetorical question; I really don't know.

Isn't a silicon chip technically a rock?

Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.

Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.

Presumably, morals can be derived from game-theoretic arguments about human society just like aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer's simulated planet of Einsteins would be able to infer everything about the tentacle-creatures' morality simply based on the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.

I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like optical illusions). In short, I think any guided optimization process will resemble human intelligence in some ways (I don't know which ones), for reasons that I explained in my response to the last post.

Which actually makes me think of something interesting: possibly, there is no optimal guided search strategy. The reason humans appear to succeed at it is that there are many of us thinking about the same thing at any given time, and each of us has a slightly differently tuned algorithm. So one of us is likely to end up converging on the solution, even though nobody has an algorithm that can find every solution. And people self-select for the types of problems they're good at.
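
As a toy illustration of that "many differently tuned searchers" picture (my own sketch, with made-up names like portfolio_search, not anything from the post): each searcher is an ordinary stochastic hill-climber, but they start in different places and use different step sizes, and we simply keep the best result any of them finds.

import random
import math

def hill_climb(f, x0, step, iters=1000):
    # One searcher: stochastic hill-climbing with its own fixed step size.
    x, best = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) > best:
            x, best = cand, f(cand)
    return x, best

def portfolio_search(f, n_searchers=20):
    # Many searchers, each "tuned" differently; no single tuning wins on
    # every landscape, but the best of the bunch usually does well.
    results = []
    for _ in range(n_searchers):
        x0 = random.uniform(-10, 10)        # different starting point
        step = 10 ** random.uniform(-2, 1)  # differently tuned step size
        results.append(hill_climb(f, x0, step))
    return max(results, key=lambda r: r[1])

# A bumpy objective with many local optima and a global optimum near x = 0.
f = lambda x: math.cos(5 * x) - 0.1 * x * x
print(portfolio_search(f))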

talk as if the simple instruction to "Test ideas by experiment" or the p

I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Each successive improvement to the process brings us asymptotically closer to the optimal algorithm, but it can't give the same order of improvement as the preceding ones. Just because we've gone from O(n^2) to O(n log(n)) in sorting algorithms doesn't mean we'll eventually get to O(1).

Aha, you say: but human brains are so inefficient that actually we haven't gone even a smidgeon of the way toward the optimal algorithm, and there is a ton more room to go. But computers already overcome many of the inefficiencies of human brains. Our brains do a decent job of pruning the search space down to a near-optimal solution, and computers take care of the work-intensive step of going from near-optimal to optimal. And as our software gets better, we have to prune the search space less and less before we give the problem to the computer.

Of course, maybe we still have many orders of magnitude of improvement to go. But you can't just assume that.
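
To put rough numbers on the sorting example above (my own back-of-the-envelope sketch, not from the comment): the step from O(n^2) to O(n log n) buys a factor that grows without bound, but for comparison-based sorting the information-theoretic lower bound of about log2(n!) comparisons means everything after that step is at best a constant-factor gain.

import math

for n in (10**3, 10**6, 10**9):
    quadratic = n * n                               # rough cost of a naive O(n^2) sort
    nlogn = n * math.log2(n)                        # rough cost of mergesort/heapsort
    lower_bound = math.lgamma(n + 1) / math.log(2)  # log2(n!): comparisons any comparison sort needs
    print(f"n = {n:>10}: n^2 / (n log n) = {quadratic / nlogn:12.0f}, "
          f"(n log n) / log2(n!) = {nlogn / lower_bound:.2f}")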

I think that the "could" idea does not need to be confined to the process of planning future actions.

Suppose we think of the universe as a large state transition matrix, where our imperfect knowledge means the current state is only pinned down to an interval (a set of states). Then any state in that interval is a "possible state" in the sense that it is consistent with our knowledge of the world, but we have no way to verify that it is in fact the actual state.

Now something that "could" happen corresponds to a state that is reachable from any of the "possible states" using the state transition matrix (in the linear systems sense of reachable). This applies to the world outside ("A meteor could hit me at any moment") or to my internal state ("I could jump off a cliff") in the sense that given my imperfect knowledge of my own state and other factors, the jump-off-a-cliff state is reachable from this fuzzy cloud of states.

In other words, the algorithm is:

explain_box(box) {
    if (|box.boxes| > 1)
        print(box.boxes)
    else
        explain_box(box.boxes[0])
}

which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
