Anonymous6

Eliezer,

So you have a form of deontological ethics based on Newcomb's problem? Now that is very unusual. I can't see how that could be plausible, but hope that you will surprise me. Obviously it is something important enough for a post (or many), so I won't ask you to elaborate any further in the comments.

Almost every wonderful (or wondrous, if that makes the point better) thing I have ever seen or heard about prompted the response "I could have done that!"

Maybe I could have, maybe I couldn't.

The historically important fact is, I didn't.

Related, I've been wondering something else.

Given our current level of technology (TL7 going on 8), is it even possible to simulate a universe computationally (the configuration space of the universe, whatever)?

If the wavefunction is a distribution over a configuration space, evolving with respect to an arbitrary parameter (i.e. "time"), then what "really exists" (again, not clear on what that means in this context) is an underlying configuration space. Do we know enough about that to represent one to the extent that we could create a minuscule universe that behaves structurally like our own?
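To make the question a bit more concrete, here is a minimal sketch (assuming NumPy, with an arbitrary toy potential, grid size, and units that are my own choices, not anything from the discussion) of what "representing a configuration space" could look like computationally: a single particle on a one-dimensional grid, with the wavefunction stored as complex amplitudes over that tiny configuration space and evolved by the standard split-step Fourier method. It illustrates the data structure involved, not a claim that our universe is simulable this way.

```python
# Toy illustration: a 1-D configuration space as a grid of points, with the
# wavefunction as complex amplitudes over that grid, evolved in time by the
# split-step Fourier method (hbar = m = 1). All parameters are arbitrary.
import numpy as np

N = 512                                   # grid points: the whole "configuration space"
L = 20.0                                  # spatial extent
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # conjugate momenta for each grid mode
dt = 0.005                                # time step

psi = np.exp(-(x + 3.0) ** 2) * np.exp(1j * 2.0 * x)   # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize
V = 0.5 * x ** 2                                        # toy harmonic potential

half_V = np.exp(-0.5j * V * dt)       # half potential step (position space)
full_T = np.exp(-0.5j * k ** 2 * dt)  # full kinetic step (momentum space)

for _ in range(2000):
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))
    psi = half_V * psi

print("norm:", np.sum(np.abs(psi) ** 2) * dx)  # unitary evolution keeps this ~1.0
```

Even this one-particle example takes 512 complex numbers per time slice; the configuration space of N interacting particles would need a grid exponential in N, which is the usual reason to doubt that a universe-sized version is feasible at our technology level.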

Thomas, close. The point is that the Earth people are a fraction as smart/quick as a Bayesian proto-AI.

Eric, I'm a little embarrassed to have to say 'me too', at least until about halfway through. The Way is a bitch.

Eliezer, I've read a lot of your writings on the subject of FAI, not just here. I've never seen anything as convincing as the last two posts. Great, persuasive, spine-tingling stuff.

The parable was original with Antony Flew, whose "Theology and Falsification" can be found here: http://www.stephenjaygould.org/ctrl/flew_falsification.html

Eliezer,

That is a very interesting question. I'm not sure how to answer it. It would be a good test of a scientific claim, as you need to provide falsification conditions. However, philosophy does not work the same way. If utilitarianism is true, it is in some way conceptually true. I wouldn't know how to tell you what a good argument against 3 + 3 = 6 would look like (and indeed there are no decisive arguments against it). This does not count against the statement or my belief in it.

My best attempt is to say that a good argument would be one that showed that happiness being the only thing that is good for someone directly implies something that many people find to be clearly false, and that this would still be the case after considerable reflection. This last part is important, as I find many of its consequences unintuitive before reflection and then see why I was confused (or I think I see why I was confused...). It has to appeal to what is good for people rather than what they aim at, as it is a theory about goodness, not about psychology (though you might be able to use psychological premises in an argument with conclusions about goodness).

Toby.

Now that I think of it, you probably just saw that it was unsigned and assumed it was me, putting my name on it.

Toby.

Eliezer,

Re utilitarianism: it's fine to have an intuition that it is incorrect. It is also fine to be sceptical in the presence of a strong intuition against something and no good arguments presented so far in its favor (not in this forum, and presumably not in your experience). I was just pointing out that you have so far offered no arguments against it (just against a related but independent point), and so it is hardly refuted.

Re posts and names: I posted the 7:26pm, 5:56am and 9:40am posts (and I tried the log-in-and-out trick before the 9:40 post, to no avail). I did not post the 1:09pm post that has my name signed to it and is making similar points to those I made earlier. Either the TypeKey system is really having problems or someone is impersonating me. Probably the former. Until this is sorted out, I'll keep trying things like logging in and out and resetting my cookies, and will also sign thus,

Toby.

It seems to me that you are making an error in conflating, or at least not distinguishing, what people in fact prefer/strive for and what is in fact morally desirable.

So long as you are talking about what people actually strive for, the only answer is the actual list of things people do. There is unlikely to be any fact of the matter AT ALL about what someone's 'real preferences' are that is much less complicated than a description of their total overall behavior.

However, the only reason your argument seems to be making a nontrivial point is that it talks about morality and utility functions. But the reason people take moral talk seriously, and give it more weight than they would arguments about aesthetics or what's tasty, is that people take moral talk to be describing objective things out in the world. That is, when someone (other than a few philosophically inclined exceptions) says, "Hey, don't do that, it's wrong," they are appealing to the idea that there are objective moral facts the same way there are objective physical facts, and that people can be wrong about them just as they can be wrong about physical facts.

Now there are reasonable arguments to the effect that there is no such thing as morality at all. If you find these persuasive, that's fine, but then talking about moral notions without qualification is just as misleading as talking about angels without explaining that you have redefined them to mean certain neurochemical effects. On the other hand, if morality is a real thing that's out there in some sense, it's perfectly fair to induct on it the same way we induct on physical laws. If you look at the actual results of physical experiments you see lots of noise (experimental errors, random effects), but we reasonably pick the simple explanation and assume the other effects are due to measurement error. People who claim that utilitarianism is the one true thing to maximize aren't claiming that this is what other people actually work towards. They are saying other people are objectively wrong in not choosing to maximize this.

Now, despite being (when I believe in morality at all) a utilitarian myself, I think programming this safeguard into robots would be potentially very dangerous. If we could be sure they would do it perfectly accurately, fine, people sacrificed for the greater good notwithstanding. However, I would worry that the system would be chaotic, with even very small errors in judgement about utility causing the true maximizer to make horrific mistakes.

Eliezer,

I'm not saying that I have given you convincing reasons to believe this. I think I could give quite convincing reasons (not that I am totally convinced myself), but it would take at least a few thousand words. I'll probably wait until you next swing past Oxford and talk to you a bit about what the last couple of thousand years of ethical thought can offer the FAI program (short answer: not much for 2,500 years, but more than you may think).

For the moment, I'm just pointing out that it is currently nil-all in the argument regarding happiness as an ultimate value. You have given reasons to believe it is not what we aim at (but this is not very related) and have said that you have strong intuitions that the answer is 'no'. I have used my comments to point out that this does not provide any argument against the position, but have not made any real positive arguments. For what it's worth, my intuition is 'yes', it is all that matters. Barring the tiny force of our one-bit statements of our intuitions, I think the real question hasn't begun to be debated as far as this weblog is concerned. The upshot is that the most prominent reductive answer to the question of what is of ultimate value (which has been developed for centuries by those whose profession is to study this question) is still a very live option, contrary to what your post claims.
