Salamander
Salamander has not written any posts yet.

It seems to me that while the terminal values of morality per individual might be fixed, and per species might be relatively invariant, the things to do to get the most 'points' (utility?) might well make it seem as though the supposedly Friendly AI were behaving in a pretty evil manner to us. I wonder, whether the Friendly AI project succeeds or not, how soon, if at all, we would really know that it had worked. I suppose, though, that's putting it in terms of human levels of intelligence. To us the only solution to overpopulation, for instance, might seem to be having a bunch of us die off so the rest, and future generations, can...
I am reading through the meta-ethics sequence for the first time. One thing I couldn't help observing in this dialogue, which I thought was interesting: Obert says, "Duties, and should-ness, seem to have a dimension that goes beyond our whims. If we want different pizza toppings today, we can order a different pizza without guilt; but we cannot choose to make murder a good thing." It seemed odd to me that Subhan didn't mention regret at having made a difficult choice between competing wants, such as wondering whether you should have taken up piano playing instead of plumbing, as possibly being something like the kind of negative feelings we get from guilt. We can't always order a different pizza without some sense of loss.
One thing I kind of like about this idea is that the 'confessor' could be faster than the 'horse' simply by being dumber (and taking less code to run).