All of dbc's Comments + Replies

dbc

I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.

OrphanWilde
That presumes consciousness can only be understood or recognized from the inside. An AI doesn't have to know what consciousness feels like (or more particularly, what "feels like" even means) in order to recognize it.
dbc

I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe.

I think Mitchell's point is that we don't know whether state-machines have qualia, and the costs of making assumptions could be large.

dbc

The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don't want to put words in your mouth, but you seem to think that common morality is the best, or perhaps the only, solution.

I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society composed of people with vastly differing moral codes, but even in this case, they should still be capable of agreeing to coordinate, even if that means giving up things that they individually value.
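To make the structure of the dilemma concrete, here is a minimal sketch in Python (my own illustration, not something from the thread; the payoff numbers are arbitrary assumptions chosen only to exhibit the free-rider structure the desertion parable relies on):

```python
# A toy model of the desertion dilemma. All numbers are illustrative
# assumptions, chosen only to exhibit the free-rider structure.

def payoff(deserts: bool, fraction_of_others_deserting: float) -> float:
    """Payoff to one soldier, given his own choice and everyone else's."""
    battle_won = fraction_of_others_deserting < 0.5  # the line holds if most stay
    victory = 10.0 if battle_won else -20.0          # everyone shares the outcome
    fighting_cost = -3.0 if not deserts else 0.0     # staying carries personal risk
    return victory + fighting_cost

print(payoff(False, 0.0))  #   7.0: everyone stays and the battle is won
print(payoff(True,  0.0))  #  10.0: free-riding on the others pays better...
print(payoff(True,  1.0))  # -20.0: ...but if all reason that way, everyone loses
```

With these made-up numbers, deserting dominates staying for each individual, yet the all-desert outcome is the worst available; that gap is exactly what either a shared moral code or an explicit coordination agreement has to close.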

[anonymous]
Yes, I see a common moral framework as a better solution, and I would also assert that a group needs at least a rudimentary version of such a framework in order to maintain cohesion. I assumed that was the case here. The rational solution to the tragedy of the commons is indeed worth discussing. However, in this case the principle behind the parable was obscured by its rather objectionable content. I focused on the specifics because, after reading, they remained more fixed in my mind than the underlying principle. A less controversial example, such as advertising or over-grazing, would have prevented that outcome. I know that's a personal preference, though, and it seems to be a habit of Eliezer's to choose extreme examples on occasion - I ran into the same problem with Three Worlds Collide. It's an aspect of his otherwise very valuable writing that I find detracts from, rather than illuminates, the points he's making. I recognize that others may disagree. With that in mind, I'm happy to close this line of discussion on the grounds that it's veering off-topic for this thread.
dbc

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same)

How do you feel about desertion?

[anonymous]
It's psychologically understandable, but morally wrong, provided the deserter entered into an uncoerced agreement with the organization he or she is deserting. If you know the terms before you sign up, you shouldn't renege on them. In cases of coercion or force (e.g. the draft) desertion is quite justified.
dbc

This comment was sitting at -2 when I saw it, which makes me think that maybe I don't understand Eliezer's point. I thought the OP was making the point that when we talk about something being "attractive" or "moral" or maybe even "sapient", we project facts about our minds into the real world. "Attractive" really means "attractive to humans", and if we forget this fact, we can end up inadvertently drawing wrong conclusions about the world. If that's wrong, then what was this post actually about?

Grognor
The part you highlight about shminux's comment is correct, but this part: is wrong; attractiveness is a psychological reaction to things, not a property of the things themselves. Theoretically you could alter the things and still produce the attractiveness response; not to mention the empirical observation that for any given thing, you can find humans attracted to it. Since that part of the comment is wrong but the rest of it is correct, I can't vote on it; the forces cancel out. But anyway, I find that to be a better explanation for its prior downvotation than a cadre of anti-shminux voters. Mind you, I downvoted JohnEPaton's comment because he got all of this wrong.
[anonymous]
It may have just been serial downvoting from people who dislike shminux.
dbc

Remember, the Kolmogorov complexity depends on your "universal Turing machine", so we should expect to only get estimates. Mitchell makes an estimate of ~50000 bits for the new minimal standard model. I'm not an expert on physics, but the mathematics required to explain what a Lagrangian is would seem to require much more than that. I think you would need Peano arithmetic and a lot of set theory just to construct the real numbers so that you could do calculus (of course people were doing calculus for over one hundred years before real numbers...

[This comment is no longer endorsed by its author]
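As an aside on the machine-dependence dbc appeals to: the invariance theorem is what makes it precise. This is standard textbook material, not a result from the thread:

```latex
% Invariance theorem: for any two universal Turing machines U and V
% there is a constant c_{U,V}, depending on U and V but not on the
% string x, such that
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x.
```

So Kolmogorov complexity is only defined up to an additive constant, which is why any figure for the standard model can only be an estimate relative to a chosen reference machine.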
Mitchell_Porter
My estimate was 27000 bits to "encode the standard model" in Mathematica. To define all the necessary special functions on a UTM might take 50 times that.
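For scale, taking Mitchell's figures at face value (simple arithmetic on the numbers he gives, not a figure from the thread):

```latex
27000~\text{bits} \times 50 = 1.35 \times 10^{6}~\text{bits} \approx 169{,}000~\text{bytes}
```

i.e. roughly 165 KiB for the UTM-relative version of the estimate.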