Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ME3 25 July 2008 03:15:30PM 2 points [-]

I don't think it makes sense to suggest that 2 + 3 = 5 is a belief. It is the result of a set of definitions: as long as we agree on what 2, +, 3, =, and 5 mean, we have to agree on what 2 + 3 = 5 means. Even if your brain were subject to a neutrino storm and you somehow felt that 2 + 3 = 6, you would still be able to check that claim by other means, such as counting on your fingers.

Once you start asking why these things are the way they are, don't you have to start asking why anything exists at all, and what it means for anything to exist? I'm pretty sure that at that point we are firmly in the province of philosophy, and there are no equations to be written, because the existence of the equations themselves is part of the question we're asking.

But I mean, this question has been in my mind since the beginning of the quantum series. I've written a lot of useful software since then, though, without entertaining it much. Do you think maybe it's just better to get on with our lives? It's not a rhetorical question, I really don't know.

Comment author: ME3 01 July 2008 02:03:35PM 0 points [-]

Isn't a silicon chip technically a rock?

Also, I take it that this means you don't believe in the whole "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series, and it seemed to me absurd, sort of for the reasons you say.

In response to The Moral Void
Comment author: ME3 30 June 2008 02:31:21PM -1 points [-]

If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.

Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.

Comment author: ME3 26 June 2008 02:22:13PM 1 point [-]

Presumably, morals can be derived from game-theoretic arguments about human society just like aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer's simulated planet of Einsteins would be able to infer everything about the tentacle-creatures' morality simply based on the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.

Comment author: ME3 24 June 2008 03:09:56PM 1 point [-]

I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like optical illusions). In short, I think any guided optimization process will resemble human intelligence in some ways (don't know which ones), for reasons that I explained in my response to the last post.

Which actually makes me think of something interesting: possibly there is no optimal guided search strategy. The reason humans appear to succeed at it is that there are many of us thinking about the same thing at any given time, and each of us has a slightly differently tuned algorithm. So one of us is likely to end up converging on the solution even though nobody has an algorithm that can find every solution. And people self-select for the types of problems they're good at.
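That ensemble picture can be sketched in code. Below is a toy illustration (the objective function, step sizes, and iteration counts are all made up): several hill climbers, each tuned with a different step size, search the same bumpy function, and the ensemble keeps whichever answer is best.

```python
import math
import random

def f(x):
    # A made-up bumpy objective with many local maxima; its global
    # maximum is 2*sqrt(2)/3 ≈ 0.943, attained near x = pi/4.
    return math.sin(x) + math.sin(3 * x) / 3

def hill_climb(x, step, rng, iters=200):
    # One "differently tuned" searcher: propose a nearby point, keep it if better.
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

rng = random.Random(0)
# An ensemble of searchers with different tunings (step sizes) and starting points.
results = [hill_climb(rng.uniform(0, 10), step, rng)
           for step in (0.05, 0.2, 1.0, 3.0)]
best = max(results, key=f)
print(round(f(best), 3))
```

No single step size suits every landscape (small steps get trapped in local maxima, large steps converge slowly), but the best-of-ensemble answer tends to be good, which is the point above.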

Comment author: ME3 23 June 2008 02:48:50PM 1 point [-]

"...talk as if the simple instruction to 'Test ideas by experiment' or the p<0.05 significance rule, were the same order of contribution as an entire human brain."

I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Every successive improvement brings us asymptotically closer to the optimal algorithm, but each one gives a smaller gain than the ones before it. Just because we've gone from O(n^2) to O(n log(n)) in sorting algorithms doesn't mean we'll eventually get to O(1).
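A back-of-the-envelope illustration, using step counts as a crude proxy for work: the leap from a naive O(n^2) sort to an O(n log n) one is enormous, but the next leap, to an O(n) non-comparison sort, buys only a log factor, and the comparison-sort lower bound rules out ever reaching O(1).

```python
import math

# Crude step-count proxies for sorting n items with successively better algorithms:
#   naive O(n^2)  ->  mergesort O(n log n)  ->  non-comparison O(n)
n = 1_000_000
quadratic = n ** 2
nlogn = n * math.log2(n)
linear = n

print(f"first leap:  ~{quadratic / nlogn:,.0f}x fewer steps")  # roughly ~50,000x
print(f"second leap: ~{nlogn / linear:,.1f}x fewer steps")     # only ~20x
```

Each successive improvement is real, but smaller than the last, which is the diminishing-returns pattern described above.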

Aha, you say: but human brains are so inefficient that we haven't actually gone even a smidgen of the way to the optimal algorithm, and there is a ton more space to cover. But computers already overcome many of the inefficiencies of human brains. Our brains do a decent job of pruning the search space down to a near-optimal solution, and computers take care of the work-intensive step of going from near-optimal to optimal. And as our software gets better, we have to prune the search space less and less before we hand the problem to the computer.

Of course, maybe we still have many orders of magnitude of improvement to go. But you can't just assume that.

Comment author: ME3 17 June 2008 02:43:56PM 0 points [-]

I think that the "could" idea does not need to be confined to the process of planning future actions.

Suppose we think of the universe as a large state transition matrix, where our imperfect knowledge picks out an interval of states rather than a single one. Any state in that interval is a "possible state," in the sense that it is consistent with our knowledge of the world, but we have no way to verify that it is in fact the actual state.

Now something that "could" happen corresponds to a state that is reachable from any of the "possible states" using the state transition matrix (in the linear systems sense of reachable). This applies to the world outside ("A meteor could hit me at any moment") or to my internal state ("I could jump off a cliff") in the sense that given my imperfect knowledge of my own state and other factors, the jump-off-a-cliff state is reachable from this fuzzy cloud of states.
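That picture is easy to make concrete. The sketch below (the state names and transitions are a made-up toy system standing in for the state transition matrix) treats "could happen" as reachability: a state could happen if it is reachable from any state consistent with our knowledge.

```python
from collections import deque

# Toy transition relation: each state maps to its possible successors.
transitions = {
    "at_home":  ["at_cliff", "at_home"],
    "at_cliff": ["jumped", "at_home"],
    "jumped":   ["jumped"],
}

def reachable(possible_states, transitions):
    # BFS over everything reachable from ANY state in the "possible" set,
    # i.e. any state consistent with our imperfect knowledge.
    seen, frontier = set(possible_states), deque(possible_states)
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Our knowledge only narrows things down to "at home"; jumping is still
# reachable from that fuzzy cloud of states, so it "could" happen.
could = reachable({"at_home"}, transitions)
print("jumped" in could)  # → True
```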

Comment author: ME3 16 June 2008 04:01:50PM 0 points [-]

In other words, the algorithm is,

explain_box(box) { if (|box.boxes| > 1) print(box.boxes) else explain_box(box.boxes[0]) }

which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
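For what it's worth, here is a runnable Python version of that pseudocode (the Box class and the example concepts are made up for illustration), with a depth guard so the irreducible case raises an error instead of looping forever:

```python
class Box:
    def __init__(self, name, boxes=None):
        self.name = name
        self.boxes = boxes if boxes is not None else []

def explain_box(box, depth=0, max_depth=50):
    # Same algorithm as the pseudocode above: if a concept decomposes into
    # several parts, report them; otherwise keep unwrapping. The depth guard
    # turns the infinite loop on irreducible concepts into an error.
    if depth > max_depth:
        raise RecursionError(f"{box.name!r} seems irreducible")
    if len(box.boxes) > 1:
        return [b.name for b in box.boxes]
    return explain_box(box.boxes[0], depth + 1, max_depth)

# A reducible concept bottoms out in parts:
animal = Box("animal", [Box("mammal", [Box("cat"), Box("dog")])])
print(explain_box(animal))  # → ['cat', 'dog']

# An irreducible concept only "contains" itself, so unwrapping never ends:
qualia = Box("qualia")
qualia.boxes = [qualia]
```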

Comment author: ME3 30 May 2008 04:31:34PM 1 point [-]

There was an article in some magazine not too long ago that most people here have probably read, about how if you tell kids that they did good work because they are smart, they will not try as hard next time, whereas if you tell kids that they did good work because they worked hard, they will try harder and do better. This matches my own experience very well, because for a long time I had this "smart person" approach to things, where I would try just hard enough to make a little headway, then either dismiss the problem as easy or give up. I see a lot of people falling into this trap, and they are almost always the ones who think they are smart, and who are referred to by others as smart.

I think that maybe it's not about choosing problems, even. I think that it's about walking any given path for long enough that you get to a place where nobody else has been, and that's when you achieve some kind of status in other people's eyes.

In response to Timeless Causality
Comment author: ME3 29 May 2008 03:00:41PM 2 points [-]

Isn't causality strictly a map of a world governed by physical laws? If a billiard ball strikes another ball, causing it to move, that is just our way of describing the motions of the balls. And besides, the universe doesn't even split the world up into individual "objects" or "events," so how can causality really exist?

By the way, any physical system is defined not just by its positions but by their time derivatives as well (positions and velocities should be enough to describe the complete state of a classical system). So when you talk about frozen states in a timeless universe, they still have to have time derivatives (in our perception of them). In other words, a sequence of still claymation frames and continuous motion may produce the same movie, but they correspond to very different realities.
