Comment author: OrphanWilde 08 August 2012 02:22:51PM 2 points [-]

I might be mistaken, but it seems like you're forwarding a theory of consciousness, as opposed to a theory of intelligence.

Two issues with that - first, that's not necessarily the goal of AI research. Second, you're evaluating consciousness, or possibly intelligence, from the inside, rather than the outside.

Comment author: dbc 08 August 2012 03:51:18PM 2 points [-]

I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.

Comment author: Kingoftheinternet 08 August 2012 02:29:49PM 10 points [-]

the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can.

I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe. Where does the incompatibility come from? I'm aware that it looks like no human-designed mathematical object has experienced qualia yet, which is some evidence that it's impossible, but not so strong that I think you're justified in saying a materialist/mathematical-platonist view of reality can never account for conscious experience.

Comment author: dbc 08 August 2012 03:17:07PM 6 points [-]

I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe.

I think Mitchell's point is that we don't know whether state machines have qualia, and the costs of making assumptions could be large.

Comment author: [deleted] 07 August 2012 10:12:26AM 0 points [-]

It's psychologically understandable, but morally wrong, provided the deserter entered into an uncoerced agreement with the organization he or she is deserting. If you know the terms before you sign up, you shouldn't renege on them.

In cases of coercion or force (e.g. the draft) desertion is quite justified.

In response to comment by [deleted] on Bayesians vs. Barbarians
Comment author: dbc 07 August 2012 10:30:08AM *  0 points [-]

The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don't want to put words in your mouth, but you seem to think that common morality is the best, or perhaps only solution.

I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society composed of people with vastly differing moral codes, but even in this case, they should still be capable of agreeing to coordinate, even if that means giving up things that they individually value.
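The coordination problem described here can be sketched as a toy payoff model. This is a minimal illustration of why desertion is a tragedy of the commons; all of the functions and numbers below are illustrative assumptions, not anything from the original post:

```python
# Hypothetical payoff sketch of the desertion problem as an N-player
# tragedy of the commons. All payoff numbers are made up for illustration.

def soldier_payoff(deserts: bool, num_others_fighting: int, army_size: int) -> float:
    """Payoff for one soldier, given how many of the others choose to fight."""
    fighters = num_others_fighting + (0 if deserts else 1)
    win_prob = fighters / army_size            # crude: more fighters, better odds
    survival_bonus = 2.0 if deserts else 0.0   # deserting avoids personal risk
    return 10.0 * win_prob + survival_bonus

ARMY = 100

# Whatever the others do, deserting yields a higher *individual* payoff...
for others_fighting in (0, 50, 99):
    assert (soldier_payoff(True, others_fighting, ARMY)
            > soldier_payoff(False, others_fighting, ARMY))

# ...and yet everyone fighting beats everyone deserting, which is why
# agreeing to coordinate (morally or contractually) can pay for each agent.
everyone_fights = soldier_payoff(False, ARMY - 1, ARMY)
everyone_deserts = soldier_payoff(True, 0, ARMY)
print(everyone_fights, everyone_deserts)  # 10.0 2.0
```

With these (assumed) payoffs, "desert" strictly dominates for each individual, yet the all-fight outcome is better for every soldier than the all-desert outcome, which is exactly the gap a shared moral code or binding agreement is meant to close.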

Comment author: [deleted] 07 August 2012 07:10:30AM 0 points [-]

I found this post very disturbing, so I thought for a bit about why. It reads very much like some kind of SF dystopia, and indeed if it were necessary to agree to this lottery to be part of the hypothetical rationalist community/country, then I wouldn't wish to be a part of it. One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same). No government should have the right to compel its citizens to become soldiers, and that's what it would become, after the first generation, unless you're going to choose to exile anyone who reaches adulthood there and then opts out.

Offering financial incentives for becoming a soldier, as has already been discussed in the comments, seems a fairer idea. Consider also that the more objectively evil the Evil Barbarians are, the more people will independently decide that fighting is the better decision. If not enough people support your war, maybe that in itself is a sign that it's not a good idea. If most of the rationalists would rather lose than fight, that tells you something.

It's quite difficult to know the right tone of response to take here - the Evil Barbarians are obviously pure thought-experiment, but presumably most of us would view a rationalist country as a good thing. Not if it made decisions like this, though. Sacrificing the individual for the collective isn't always irrational, but it needs to be the individual who makes that choice based on his or her own values, not due to some perceived social contract. Otherwise you might as well be sacrificed to make more paperclips.

If it was intended as pure metaphor, it's a disquieting one.

In response to comment by [deleted] on Bayesians vs. Barbarians
Comment author: dbc 07 August 2012 09:53:15AM 1 point [-]

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same)

How do you feel about desertion?

Comment author: shminux 30 July 2012 11:44:14PM *  3 points [-]

Really, what we're doing is looking at features such as hip ratio, symmetry, eye color, and various attributes like that, and that's what creates the sense of attractiveness. These are all things that the alien could measure and combine as well.

Yes, this would define "looks attractive to a certain subset of humans" (i.e. those who find this set of features attractive). However, there is no such thing as "looks attractive to all humans and aliens", which is what Woman.sexiness is supposed to represent.

Comment author: dbc 31 July 2012 01:07:56AM 0 points [-]

This comment was sitting at -2 when I saw it, which makes me think that maybe I don't understand Eliezer's point. I thought the OP was making the point that when we talk about something being "attractive" or "moral" or maybe even "sapient", we project facts about our minds into the real world. "Attractive" really means "attractive to humans", and if we forget this fact, we can end up inadvertently drawing wrong conclusions about the world. If that's wrong, then what was this post actually about?

Comment author: shminux 09 July 2012 06:21:35AM 7 points [-]

I understand all that, I just want a worked example, not only hand-waving. After all, a formalization of Occam's razor is supposed to be useful in order to be considered rational.

Comment author: dbc 09 July 2012 06:40:06AM *  0 points [-]

Remember, the Kolmogorov complexity depends on your choice of universal Turing machine, so we should expect only estimates. Mitchell estimates ~50000 bits for the new minimal standard model. I'm not an expert on physics, but the mathematics required to explain what a Lagrangian is would seem to require much more than that. I think you would need Peano arithmetic and a lot of set theory just to construct the real numbers so that you could do calculus. (Of course, people were doing calculus for over one hundred years before the real numbers were formalized, but I have a hard time imagining a rigorous calculus without them.) I admit that 50000 bits is a lot of data, but I'm sceptical that it could rigorously encode all that mathematics.

F=ma has the same problem, of course. Does the right hand side really make sense without calculus?

ETA: If you want a fleshed out example, I think a much better problem to start off with would be predicting the digits of pi, or the prime numbers.
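A sketch of why the digits of pi make a friendlier worked example: the Kolmogorov complexity of pi's first n digits is bounded by the length of a short generating program, even though the digit stream itself looks incompressible to a general-purpose compressor. The code below uses Gibbons' unbounded spigot algorithm to generate the digits, and zlib only as a crude stand-in for "description length" - not a real Kolmogorov estimate, since K-complexity is uncomputable and machine-dependent:

```python
# The digits of pi look statistically random, so zlib barely compresses
# them - yet this short program is a fixed-size description of arbitrarily
# many digits, so their Kolmogorov complexity grows only ~O(log n).
import zlib

def pi_digits(n: int) -> str:
    """First n decimal digits of pi via Gibbons' unbounded spigot algorithm."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return ''.join(map(str, digits))

stream = pi_digits(2000)
print(stream[:10])  # 3141592653
# zlib gets only the ~3.3 bits/digit that any digit stream allows; the
# spigot program above is a far shorter description of the same data.
print(len(zlib.compress(stream.encode())))
```

The contrast is the point: a compressor gives only a loose upper bound on description length, while knowing the generating program gives a much tighter one - which is why estimates like the ~50000-bit figure depend so heavily on what machinery the reference machine is assumed to have.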