Comment author: RichardKennaway 05 January 2016 10:22:54PM *  1 point [-]

Note that I am not the person making the argument, just clarifying what is meant by "utility", which in its use around here specifically means that which is constructed by the VNM theorem. I am not a particular fan of applying the concept to universal decision-making.

You still end up with zero at the end of things.

Are you arguing that all things end, therefore there is no value in anything?

Well, there is precedent:

All is vanity. What does man gain by all the toil at which he toils under the sun?

I said in my heart, “Come now, I will test you with pleasure; enjoy yourself.” But behold, this also was vanity. I said of laughter, “It is mad,” and of pleasure, “What use is it?”

Then I considered all that my hands had done and the toil I had expended in doing it, and behold, all was vanity and a striving after wind, and there was nothing to be gained under the sun.

The wise person has his eyes in his head, but the fool walks in darkness. And yet I perceived that the same event happens to all of them. Then I said in my heart, “What happens to the fool will happen to me also. Why then have I been so very wise?” And I said in my heart that this also is vanity. For of the wise as of the fool there is no enduring remembrance, seeing that in the days to come all will have been long forgotten. How the wise dies just like the fool! So I hated life, because what is done under the sun was grievous to me, for all is vanity and a striving after wind.

Comment author: kithpendragon 06 January 2016 11:28:31AM 1 point [-]

reviews VNM Theorem

Noted, and thanks for the update. :)

Comment author: casebash 05 January 2016 11:18:33PM 0 points [-]

"If you come back and tell me that 'these scenarios assume an unlimited availability of time' or something like that, I'll ask to see if the dragon in your garage is permeable to flour."

Not being realistic is not a valid criticism of a theoretical situation if the theoretical situation is not meant to represent reality. I've made no claims about how it carries over to the real world.

Comment author: kithpendragon 06 January 2016 11:11:22AM 0 points [-]

"Not realistic" isn't my objection here so much as "moving the goalpost". The original post (as I recall it from before the edit) made no claim that there was zero cost in specifying arbitrarily large/specific numbers, nor in participating in arbitrarily large numbers of swaps.

Comment author: casebash 06 January 2016 01:42:50AM 0 points [-]

"Insufficient context" - the context is perfectly well defined. How tired do I get considering large numbers? You don't get tired at all! What is the opportunity cost of considering large numbers? There is no opportunity cost at all. And so on. It's all very well defined.

"Responded that the solution is to not play the game, but for the actor to grab as much utility as it could get within a certain finite time limit according to its stopping function and go about its business." - except that's not a single solution, but multiple solutions, depending on which number you stop at.

"If it does not, then what is the point?" - This is only part 1. I plan to write more on this subject eventually. As an analogy, a reader of a book series can't go to an author and demand that they release volume 2 right now so that they can understand part 1 in its full context. My objective here is only to convince people of this abstract theoretical point, because I suspect that I'll need it later (but I don't know for certain).

Comment author: kithpendragon 06 January 2016 10:48:42AM 1 point [-]

You don't get tired at all... there is no cost at all...

So you have deliberately constructed a scenario, then defined "winning" as something forbidden by the scenario. Unhelpful.

That's multiple solutions.

You have specified multiple games. I have defined a finite set of solutions for each Actor that can all be stated as "use the stopping function". If your Actor has no such function, it is not rational because it can get stuck by problems with the potential to become unbounded. Remember, the Traveling Salesman must eventually sell something or all that route planning is meaningless. This sort of thing is exactly what a stopping function is for, but you seem to have written them out of the hypothetical universe for some (as yet unspecified) reason.
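The stopping-function idea above can be sketched concretely: keep playing only while the expected marginal gain exceeds the cost of continuing, with a hard finite bound as a backstop so the Actor always halts. A minimal illustration in Python (the function names and the example numbers are hypothetical, invented for this sketch):

```python
def play_with_stopping_function(marginal_gain, step_cost, max_steps):
    """Accumulate utility until the next step is no longer worth taking.

    marginal_gain(n): utility gained by playing step n
    step_cost(n):     cost (time, effort) of playing step n
    max_steps:        hard finite bound, so the actor always halts
    """
    total = 0.0
    for n in range(max_steps):
        if marginal_gain(n) <= step_cost(n):
            break  # stopping condition: further play is not worth the cost
        total += marginal_gain(n) - step_cost(n)
    return total

# Example: gains shrink geometrically while each step costs a constant 0.01,
# so the actor stops after a handful of steps instead of playing forever.
result = play_with_stopping_function(
    marginal_gain=lambda n: 0.5 ** n,
    step_cost=lambda n: 0.01,
    max_steps=10_000,
)
```

The point is only that any such rule terminates in finite time; which rule is best depends on the game.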

A reader can't go to the author and demand volume 2...

Incorrect. People do it all the time, and it is now easier than ever. Moreover, I object to the comparison of your essay with a book. This context is more like a conversation than a publication. Please get to the point.

My objective is to convince people of this abstract theoretical point...

You have done nothing but remove criteria for stopping functions from unbounded scenarios. I don't believe that is convincing anybody of anything. I suspect the statement "not every conceivable game in every conceivable universe allows for a stopping function that does not permit somebody else to do better" would be given a non-negligible probability by most of us already. That statement seems to be what you have been arguing, and seems to coincide with your title.

Friendly Style Note: I (just now) noticed that you have made some major changes to the article. It might be helpful to isolate those changes structurally to make them more visually obvious. Remember, we may not be rereading the full text very often, so a timestamp might be nice too. :)

Comment author: Decius 06 January 2016 12:27:58AM 0 points [-]

Heat death is a problem that the builders of the game have to deal with. Every time I type out BB(BB(BB(...))), the builder of the game has to figure out how I can get a noncomputable increase in the degree of the function by which the multiple of my preference for the world increases. If there is some conceivable world with no heat death which I prefer by any computable amount more than any world with a heat death (and infinity is not a utility!), then by playing this game I enter such a world.

Comment author: kithpendragon 06 January 2016 01:53:36AM 0 points [-]

Not if your current universe ends before you are able to finish specifying the number. Remember: you receive no utility before you complete your input.

Comment author: kithpendragon 06 January 2016 01:49:24AM 0 points [-]

Are you arguing that all things end, therefore there is no value in anything?

My argument was not meant to imply nihilism, though that is an interesting point. (Aside: Where is the quote from?) Rather, I meant to imply that the hidden costs (e.g. time for calculation or input) make the exercise meaningless. As has been argued by several people now, having the Agent be able to state arbitrarily large or accurate numbers, or to wait an arbitrarily long time without losing any utility, is... let's say problematic. As much so as the likelihood of the Game Master being able to actually hand out utility based on an arbitrarily large/accurate number.

Comment author: casebash 05 January 2016 11:45:09PM 0 points [-]

True, everything does exist in context. And the context being considered here is not the real world, but behaviour in a purely theoretical, constructed world. I have made no claims that it corresponds to the real world as of yet, so claiming that it doesn't correspond to the real world is not a valid criticism.

Comment author: kithpendragon 06 January 2016 01:25:18AM *  0 points [-]

My criticism is that you have either set up a set of scenarios with insufficient context to answer the question of how to obtain maximum utility, or deliberately constructed these scenarios such that attempting to obtain maximum utility leads to the Actor spending an infinite amount of time while failing to ever complete the task and actually collect. You stated that until the specification of the number, or the back-and-forth game was complete no utility was gained. I responded that the solution is to not play the game, but for the actor to grab as much utility as it could get within a certain finite time limit according to its stopping function and go about its business.

I have made no claims that it corresponds to the real world as of yet...

If it does not, then what is the point? How does such an exercise help us to be "less wrong"? The point of constructing beliefs about Rational Actors is to be able to predict how they would behave so we can emulate that behavior. By choosing to explore a subject in this context, you are implicitly making the claim that you believe it does correspond to the real world in some way. Furthermore, your choice to qualify your statement with "as of yet" reinforces that implication. So I ask you to state your claim so we may examine it in full context.

Comment author: casebash 05 January 2016 12:20:59PM 0 points [-]

The point of utilons is to scale linearly, unlike, say, dollars. Maybe there's a maximum utility that can be obtained, but utilons never scale non-linearly. The task where you can name any number below 100, but not 100 itself, avoids these issues, though.

I don't understand your objection to the Unlimited Swap scenario, but isn't it plausible that a perfectly rational agent might not exist?

Comment author: kithpendragon 05 January 2016 06:18:52PM 0 points [-]

The task where you can name any number below 100, but not 100 itself, avoids these issues, though.

That task still has the issue that the agent incurs some unstated cost (probably time) to keep mashing the 9 key (or whatever the input method is). At some point the gains become negligible and the agent would be better served collecting utility in the way it usually does. The same goes for the Unlimited Swap scenario: the agent could better spend its time by instantly taking the 1 utilon and going about its business as normal, thus avoiding a stalemate (a condition where nobody gets any utility) with 100% certainty.
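The unstated cost can be made concrete. Suppose naming 99.9...9 pays out the named value in utilons, but every digit typed costs a small fixed amount. Under those (entirely invented) numbers, the optimal play is to stop after only a few nines rather than type forever:

```python
def net_utility(nines, cost_per_digit=0.001):
    """Utility of naming 99.9...9 (with `nines` trailing nines),
    minus an assumed fixed cost per digit typed."""
    named_value = 100 - 10 ** (-nines)        # approaches, never reaches, 100
    digits = 2 if nines == 0 else 3 + nines   # "99" alone, or "99." plus nines
    return named_value - cost_per_digit * digits

# Past a point, each extra nine adds less utility than the typing costs,
# so a finite stopping point beats "keep mashing the 9 key":
best = max(range(50), key=net_utility)
```

With a cost of 0.001 per digit, the sweet spot lands at three nines (naming 99.999); any smaller cost just moves the stopping point, it never removes it.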

Is it plausible that a perfectly rational agent might not exist? Certainly. But I hardly think these thought exercises prove that one is not possible. Rather, they suggest that when working with limited information we need a sane stopping function to avoid stalemate. Some conditions have to be "good enough"... I suppose I object to the concept of "infinite patience".

Everything exists in context.

Comment author: kithpendragon 05 January 2016 12:10:01PM 0 points [-]

My gut response to the unbounded questions is that a perfectly rational agent would already know (or have a good guess as to) the maximum utility that it could conceivably expect to use within the limit of the expected lifespan of the universe.

There is also an economic objection; at some point it seems right to expect the value of every utilon to decrease in response to the addition of more utilons into the system.

In both objections I'm approaching the same thing from different angles: the upper limit on the "unbounded" utility in this case depends on how much the universe can be improved. The question of how to achieve maximum utility in those scenarios is malformed, much like asking for the end state of affairs after completing certain supertasks. More context is needed. I suspect the same is also true for the Unlimited Swap scenario.
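The economic objection can be given a toy form. Suppose each utilon added to the system improves the universe less than the last, with a hard cap on how much the universe can be improved; then realizable value stays bounded no matter how many utilons the Game Master hands out. The decay model and the cap below are pure assumptions, chosen only for illustration:

```python
import math

def realized_improvement(utilons_granted, cap=1000.0):
    """Toy model: the marginal value of each utilon decays exponentially,
    so total improvement approaches `cap` but never exceeds it."""
    return cap * (1 - math.exp(-utilons_granted / cap))

# Diminishing returns: a billion utilons buys barely more than a few thousand,
# and nothing buys more than the cap.
```

Under any model of this shape, "name an arbitrarily large number" stops being worth arbitrarily much, which is the point of the objection.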

Comment author: kithpendragon 08 December 2015 01:23:40AM 8 points [-]

I think placebomancy is the best word I've seen in years!

In response to Engineering Religion
Comment author: kithpendragon 07 December 2015 06:31:17PM 4 points [-]

What is success for a religion?

I'd say it seems pretty fundamental that Religion is a kind of meme. One measure of success for memes is their ability to spread (virulence). If that is your only measure, you're likely to have some ethically terrible things going on. It seems (to me) like an obvious constraint that the meme must spread without causing any obvious harm in its wake (except to related memes that may be in competition).

What useful purposes does religion serve?

I consider [create an in-group] to be a pretty central (usually unstated) optimization point (not exactly a purpose) in most religions. This has some well-studied psychological effects on said group that can benefit all its members socially, psychologically, and (further down the causal line) medically, although it tends to lead to some unfortunate side effects for the out-group.

How would you design a "rational religion", if such an entity is possible?

The problem is that I've seen more than one source define "religion" as something like "systematic belief in the supernatural". I'm not convinced that such belief can be "rational" (optimal for making more effective decisions). Perhaps as a stepping stone -- use a religious-type belief system as an infection vector, then strip the supernatural elements away bit by bit; but that seems awfully deceitful to me.
