Ah, that clears this up a bit. I think I just didn't notice when N' switched from representing an exploitive agent to an exploitable one. Either that, or I have a different association for "exploitive agent" than what EY intended (namely, one which attempts to exploit).

I'm not getting what you're going for here. If these agents actually change their definition of fairness based on other agents' definitions, then they are trivially exploitable. Are there two separate behaviors here: you want unexploitability in a single encounter, but you still want these agents to be able to adapt their definition of "fairness" based on the population as a whole?

I tried to generalize Eliezer's outcomes to functions, and realized that if both agents are unexploitable, the optimal functions to pick would lead to Stuart's solution precisely. Stuart's solution allows agents to arbitrarily penalize each other, though, which is why I prefer extending Eliezer's concept. Details below. P.S. I tried to post this in a comment above, but in editing it I appear to have somehow made it invisible, at least to me. Sorry for the repost if you can indeed see all the comments I've made.


It seems the logical extension of your finitely many step-downs in "fairness" would be to define a function f(your_utility) which returns the greatest utility you will accept the other agent receiving when you receive that utility. The domain of this function should run from wherever your magical fairness point is down to the Nash equilibrium. As long as it is monotonically increasing, that should ensure unexploitability for the same reasons your finite version does. The offer both agents should make is then at the greatest intersection point of the two functions, with one of them inverted to put them on the same axes. (This intersection is guaranteed to exist in the only interesting case, where the agents do not accept each other's magical fairness points as fair enough.)
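Here is a minimal sketch of the intersection-finding idea in Python. Everything concrete in it (the function names, the grid-search approach, and the toy 10-unit split with a disagreement point of 2 apiece) is my own invented illustration, not anything from EY's or Stuart's posts:

```python
# Sketch: each agent publishes a monotonically increasing "concession curve"
# f(own_utility) -> the greatest utility it will accept the other agent
# receiving at that own_utility, defined from its Nash/disagreement payoff
# up to its own fairness point.  The deal is the greatest point where the
# two curves intersect once B's curve is inverted into (u_A, u_B) space.

def greatest_intersection(f_a, a_range, f_b, b_range, steps=100_000):
    """f_a(u_a) -> max u_B agent A accepts when A gets u_a, increasing on a_range.
    f_b(u_b) -> max u_A agent B accepts when B gets u_b, increasing on b_range.
    Sweeps A's utility downward from A's fairness point and returns the first
    (hence greatest) point (u_a, u_b) that B's curve also accepts, or None."""
    nash_a, fair_a = a_range
    nash_b, fair_b = b_range
    for i in range(steps + 1):
        u_a = fair_a - (fair_a - nash_a) * i / steps  # sweep fair_a -> nash_a
        u_b = f_a(u_a)                                # what A concedes to B here
        if not (nash_b <= u_b <= fair_b):
            continue                                  # outside B's declared domain
        if f_b(u_b) >= u_a:                           # B concedes at least u_a back
            return u_a, u_b
    return None                                       # no intersection above the Nash point

# Toy example: 10 units to split, disagreement (Nash) point (2, 2).
# A thinks (6, 4) is fair; B thinks (4, 6) is fair; both curves are anchored
# at the Nash point and rise to the owner's fairness point.
f_a = lambda u_a: 2.0 + 2.0 * ((u_a - 2.0) / 4.0) ** 0.25
f_b = lambda u_b: 2.0 + 2.0 * ((u_b - 2.0) / 4.0) ** 0.25
print(greatest_intersection(f_a, (2.0, 6.0), f_b, (2.0, 6.0)))  # ~ (3.59, 3.59)
```

The particular curve shapes are arbitrary; all the argument needs is that each curve is monotonically increasing and runs from the Nash point up to its owner's fairness point.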

Curiously, if both agents use this strategy, then both seem to be incentivized to give their function as much "skew" (as EY defined it in clarification 2) as possible: since both functions are monotonically increasing, decreasing your opponent's share can only decrease your own. Asymptotically, choosing these functions optimally, this means that both agents will end up getting what the other agent thinks is fair, minus a vanishingly small factor!
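To make the skew point concrete with the same toy numbers and the hypothetical greatest_intersection sketch above (again my own illustration, not anything from the posts): if each agent keeps its concession just below the share it thinks is fair for the other across nearly the whole domain, the intersection lands near the point where each agent gets what the other thinks is fair, and the leftover surplus is simply never handed out.

```python
# Rough illustration of the skew point, reusing greatest_intersection and the
# toy split above (Nash point (2, 2); A's fairness point (6, 4); B's (4, 6)).
# Each agent concedes just under the share it thinks is fair for the other
# over almost the entire domain -- maximal "skew".
eps = 1e-3
f_a_skew = lambda u_a: 2.0 + (2.0 - eps) * ((u_a - 2.0) / 4.0) ** 0.01
f_b_skew = lambda u_b: 2.0 + (2.0 - eps) * ((u_b - 2.0) / 4.0) ** 0.01

print(greatest_intersection(f_a_skew, (2.0, 6.0), f_b_skew, (2.0, 6.0)))
# -> roughly (3.98, 3.98); flattening the curves further pushes this toward
#    (4, 4): each agent gets about what the *other* thinks is fair, and the
#    remaining ~2 units of the 10-unit surplus go undistributed.
```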

Let me know if my reasoning above is transparent. If not, I can clarify, but I'll avoid expending the extra effort revising further if what I already have is clear enough. Also, simple confirmation that I didn't make a silly logical mistake or post something already well known in the community is always appreciated.

I agree with you a lot, but would still like to raise a counterpoint. To illustrate the problem with mathematical calculations involving truly big numbers, though: what would you regard as the probability that some contortion of this universe's laws allows for literally infinite computation? I don't give it a particularly high probability at all, but I couldn't in any honesty assign it one anywhere near 1/3^^^3. The naive expected number of minds FAI affects doesn't even converge in that case, which, at least for me, is a little problematic.
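Spelling out the divergence in my own notation (not from the original exchange): if "literally infinite computation is possible" gets any probability p > 0, then the naive expectation satisfies

$$\mathbb{E}[\#\,\text{minds affected}] \;\ge\; p \cdot N \quad \text{for every finite } N,$$

so it exceeds every finite bound no matter how small p is, 1/3^^^3 included.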

Try to put the meeting location in the title, just to save people not involved a click and to better draw in people actually in the area.

Please taboo "good". When talking about stories especially, "good" has more than one meaning, and I think that's part of your disagreement.

A couple of others have mentioned warnings about doing something only to become attractive (e.g., you will tire of it or become resentful). Something like general fitness, with multiple benefits, likely isn't a problem, but there's also an alternate perspective that has worked really well for me. Instead of optimizing for attractiveness, consider optimizing for awesomeness. Being awesome will tend to make people attracted to you, but it has the added bonus of improving your self-confidence (which again increases attractiveness) and your life satisfaction.

As far as how to do this, I wouldn't mind tips myself, but the general gist of what I do is just keep that drive to be more awesome at the back of my mind when making decisions (in LW parlance, adopt awesomeness as an instrumental value). Anyone else have ideas?

Well then, LW will be just fine; after all, we fit quite snugly into that category.

Moderately on topic:

I'll occasionally take "drugs" like Airborne to boost my immune system if I feel myself coming down with something. I know full well that they have little to no medicinal effect, but I also know the placebo effect is real and well documented. In the end, I take them because I expect them to trigger a placebo effect that makes me feel better, and I expect that to work because the placebo effect is real. This feels silly.

I wonder whether it is possible to replace the physical action of taking a pill with an entirely mental event and get the same effect. I also wonder if this is just called optimism. Lastly, I wonder if I truly believe that "drugs" like Airborne are able to help me, or just believe I believe it, and I'm unsure what impact that has on my expectations given the placebo effect.
