All of Evan Ward's Comments + Replies

It's great to see other people thinking about and working on these ideas of efficiently eliciting preferences and very 'subjective' data, and building your own long-term decision support system! I've been pretty frustrated by the seeming lack of tooling for this. Inspired partly by Gwern's Resorter as well, I've started experimenting with my own version, except that my goal is to end up with random variables for cardinal utilities (at least across various metrics), and the inputs for comparisons are quickly-drawn probability distributions.
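The elicitation idea above can be sketched concretely. This is a minimal, hypothetical illustration (not the author's actual code, which is not shown): a Bradley-Terry-style gradient update that turns pairwise comparisons into cardinal scores. The item names and learning rate are assumptions for the example.

```python
import math

def fit_utilities(items, comparisons, lr=0.1, epochs=200):
    """Fit cardinal utility scores from pairwise comparisons.

    comparisons: list of (winner, loser) pairs.
    Uses gradient ascent on the Bradley-Terry log-likelihood.
    """
    u = {item: 0.0 for item in items}
    for _ in range(epochs):
        for winner, loser in comparisons:
            # P(winner beats loser) under the current utilities
            p = 1.0 / (1.0 + math.exp(u[loser] - u[winner]))
            # Nudge both scores toward explaining the observed outcome
            u[winner] += lr * (1.0 - p)
            u[loser] -= lr * (1.0 - p)
    return u

scores = fit_utilities(
    ["hike", "movie", "reading"],
    [("hike", "movie"), ("hike", "reading"), ("reading", "movie")],
)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # "hike" should come out on top
```

Extending this toward the distributions mentioned above would mean replacing each point score with a random variable and updating its parameters instead, but that is beyond this sketch.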

Very interesting! Could you explain the workflow? Also, do you intend to make the code accessible?

To maximize utility when you can play any N number of games, I believe you just need to calculate the EV (not EU) through playing every possible strategy. Then, you pass all those values through your U function and go with the strategy associated with the highest utility.

[This comment is no longer endorsed by its author]
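For reference, the procedure described in the comment above (which the author later retracted) can be sketched as follows. All numbers and the utility function here are illustrative assumptions; note that maximizing U(EV) generally differs from maximizing expected utility E[U], which is the likely reason for the retraction.

```python
def u(wealth):
    # An assumed concave (risk-averse) utility function
    return wealth ** 0.5

# Each hypothetical strategy: a list of (probability, payoff) outcomes
strategies = {
    "safe":  [(1.0, 100)],
    "risky": [(0.5, 0), (0.5, 250)],
}

def expected_value(outcomes):
    return sum(p * x for p, x in outcomes)

# The comment's procedure: compute EV per strategy, then apply U
best = max(strategies, key=lambda s: u(expected_value(strategies[s])))
print(best)  # picks "risky", since u(125) > u(100)

# By contrast, expected-utility maximization would pick "safe" here:
def expected_utility(outcomes):
    return sum(p * u(x) for p, x in outcomes)

best_eu = max(strategies, key=lambda s: expected_utility(strategies[s]))
print(best_eu)  # picks "safe"
```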
Answer by Evan Ward*10

<Tried to retract this comment since I no longer agree with it, but it doesn't seem to be working>

[This comment is no longer endorsed by its author]

There are trillions of quantum operations occurring in one's brain all the time. Comparatively, we make very few executive-level decisions. Further, these high-level decisions are often based on a relatively small set of information and are predictable given that set. I believe this implies that a person in the majority of recently created worlds makes the same high-level decisions. It's hard to imagine numerous different decisions we could make in any given circumstance, given the relatively ingrained decision procedures we seem to walk t

... (read more)

Do you think making decisions with the aid of quantum generated bits actually does increase the diversification of worlds?

You make a good point. I fixed it :)

I really appreciate this comment, and my idea definitely might come down to trying to avoid risk rather than maximizing expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and, if you could spare the time, I would love your feedback.

I think you are right, but my idea applies more when one is uncertain about one's expected utility estimates. I wrote a better version of my idea here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and would love your feedback.

I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are right about the term "decision theory" and have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/

I'm sorry but I am not familiar with your notation. I am just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems that he evaluates his actions by, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and increase the probability that in a subset of his child worlds, his actions realize value?

It seems that in a growing market (worlds s

... (read more)
1Donald Hobson
If you think that there is a 51% chance that A is the correct morality and a 49% chance that B is, with no more information available, which is best?

1. Optimize A only.
2. Flip a quantum coin; optimize A in one universe and B in the other.
3. Optimize for a mixture of A and B within the same universe, i.e. act as if you had utility U = 0.51A + 0.49B. (I would do this one.)

If A and B are local objects (e.g. paperclips, staples), then flipping a quantum coin makes sense if you have a concave utility per object in both of them. If your utility is log(#Paperclips across multiverse) + log(#Staples across multiverse), and you are the only potential source of staples or paperclips in the entire quantum multiverse, then the quantum-coin and classical-mix approaches are equally good (assuming the resource-to-paperclip conversion rate is uniform). However, the assumption that the multiverse contains no other paperclips is probably false. Such an AI would run simulations to see which is rarer in the multiverse, and then make only that.

The talk about avoiding risk rather than maximizing expected utility, and about your utility function being nonlinear, suggests this is a hackish attempt to avoid bad outcomes more strongly. While this isn't a bad attempt at decision theory, I wouldn't want to turn on an ASI programmed with it. You are getting into mathematically well-specified, novel failure modes. Keep up the good work.
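The equivalence claimed above (quantum coin vs. classical mix under log utility, when you are the multiverse's only source of either object) can be checked with illustrative numbers. Assume 100 units of resource, a 1:1 conversion rate, and measure-weighted multiverse totals; all figures are hypothetical.

```python
import math

def utility(paperclips, staples):
    # Utility is log of each object's total count across the multiverse
    return math.log(paperclips) + math.log(staples)

# Classical mix: every branch converts half the resource to each object
classical = utility(50, 50)

# Quantum coin: half the measure makes 100 paperclips, half makes
# 100 staples, so measure-weighted multiverse totals are again 50/50
quantum = utility(0.5 * 100, 0.5 * 100)

print(classical == quantum)  # the two strategies tie, as claimed
```

Note this tie depends on counting totals across the whole multiverse; with utility that is concave per branch rather than per multiverse, the comparison comes out differently.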

I too have been lurking for a little while. I have listened to the majority of Rationality from A to Z by Eliezer and really appreciate the clarity that Bayescraft and similar ideas offer. Hello :)

5Ben Pace
Welcome :) I wish you well in practising the art of Bayescraft.