Everett

Comments
Everett

It's not so much the killing that's an issue as the potential mistreatment. If you want to discover whether people like being burned, "Simulate EY, but on fire, and see how he responds" is just as bad an option as "Duplicate EY, ignite him, and see how he responds". This is a tool that should be used sparingly at best, and one that a successful AI shouldn't need.

Everett

I'm entertained to remember that one of the last things you said to me at Penguicon was that I'm evil. This post reminded me of that.

This is a really interesting post. This was one of my major disagreements with my church back when I was religious.

Everett
As for the idea of competing AIs, if they can modify each other's code, what's to keep one from just deleting the other?

Or, for that matter, from modifying the values and goals that govern how the other AI modifies itself? Indirect self-modification, in effect.

This problem seems rather harder than directly implementing an FAI.

Everett
"Maybe Weeks is referring to 'not wanting' in terms of not finally deciding to do something he felt was wrong, rather than not being tempted?"

Not so. Back when I was religious, there were times when I wanted to do things that went against my religious teachings, but I refrained from them out of the belief that they would somehow be harmful to me in some undefined-but-compelling way, not because they seemed wrong to me.

I've certainly felt tempted about many things, but the restraining factor is possible negative consequences, not ethical or moral feelings.

I don't recall ever wanting to do something I felt was wrong, or feeling wrong about something I wanted to do. At most I've felt confused or uncertain about whether the benefits would be greater than the possible harm.

The feeling of "wrong" to me is "bad, damaging, negative consequences, harmful to myself or those I care about". The idea of wanting to do something with those qualities seems contradictory, but it's well established by the evidence that many people do feel that way about things they want to do. That part wasn't surprising to me.

Everett

Psy-Kosh: I don't think I have, but I'm not very sure on that point. I don't remember ever wanting to do something that I felt would be wrong but that wouldn't otherwise have negative consequences. The part that was particularly unusual to me was the idea of something not only being "wrong", but universally unacceptable, as in:

"If you have the sense at all that you shouldn't do it, you have the sense that you unconditionally shouldn't do it."

Everett

This entire post is kind of surreal to me, as I'm pretty confident I've never felt the emotion described here. I guess this makes some behavior I've seen before seem more understandable, but it's still strange to see this described as a human universal when I don't seem to have that response.

Is there a standard term for this that I could use to research it? I did some searching on Wikipedia with phrases used in the post, but I couldn't find anything.

Everett

Are there any sources of more information on this convulsive effort that adult religionists go through upon noticing the lack of God?

Everett

Ian: the issue isn't whether it could determine what humans want, but whether it would care. That's what Eliezer was talking about with the "difference between chess pieces on white squares and chess pieces on black squares" analogy. There are infinitely many computable quantities that don't affect your utility function at all. The important job in FAI is determining how to create an intelligence that will care about the things we care about.

Certainly it's necessary for such an intelligence to be able to compute what humans want, but it's not sufficient.

Everett

You could always just juxtapose a box and an arrow: □→
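For instance, a minimal Python sketch, assuming nothing beyond the standard Unicode code points for the two characters:

```python
# Compose the counterfactual "box-arrow" symbol from two separate characters.
box = "\u25A1"    # □ WHITE SQUARE (U+25A1)
arrow = "\u2192"  # → RIGHTWARDS ARROW (U+2192)

print(box + arrow)  # prints: □→
```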

Everett

So, can someone please explain exactly what "free will" is, such that the question of whether I have it or not has meaning? Every time I see people asking this question, it's presented as some intuitive, inherently obvious property, but I can't see how the world would be different if I have free will than if I don't. I really don't understand what the discussion is about.
