All of Karl's Comments + Replies

Karl50

Firstly, the important part of my modification to the indifference formalism is not the conditioning on the actual o; it's the fact that, in evaluating the expectation of UN, it takes the action in A2 (for a given pair (a1,o)) which maximizes UN instead of the action which maximizes U (note that U is equal to US in the case that o is in Press).

Secondly, an agent which chose a1 by simply maximizing E[UN | NotPress; a1] + E[US | Press; a1] does exhibit pathological behaviors. In particular, there will still be incentives to manage the news, but from both si…

4lackofcheese
I think this means "indifference" isn't really the right term any more, because the agent is not actually indifferent between the two sets of observations, and doesn't really need to be. So, how about U(a1, o, a2) = UN(a1, o, a2) + max_b(US(a1, o, b)) if o is not in Press, or US(a1, o, a2) + max_b(UN(a1, o, b)) if o is in Press; or, in your notation, U(a1, o, a2) = UN(a1, o, a2) + g(a1, o) if o is not in Press, or US(a1, o, a2) + f(a1, o) if o is in Press.
2lackofcheese
OK, you're right on that point; I misunderstood the "managing the news" problem because I hadn't quite realised that it was about shifting observations between the Press/NotPress sets. As you've said, the only resolution is to select a1 based on E[max_b UN(a1, O, b) | O; a1] and not E[max_b UN(a1, O, b) | O not in Press; a1].
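
A sketch of the selection rule under discussion, written out in expectation notation (my transcription of the comment above; the point at issue is conditioning on the full observation O rather than only on O not in Press):

$$ a_1^{*} \;=\; \arg\max_{a_1} \; \mathbb{E}\!\left[\, \max_{b \in A_2} U_N(a_1, O, b) \;\middle|\; a_1 \right] \qquad\text{rather than}\qquad \arg\max_{a_1} \; \mathbb{E}\!\left[\, \max_{b \in A_2} U_N(a_1, O, b) \;\middle|\; O \notin \mathrm{Press};\, a_1 \right]. $$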
Karl30

In the indifference formalism the agent, in selecting A1, acts like a UN agent that believes that the shutdown button will not be pressed; therefore it creates perverse incentives to "manage the news". Which means that if the agent can cause its shutdown button to be pressed in the event of bad news, it will.

My formulation avoids this pathological behavior by instead making the agent select A1 as if it were a UN-agent which believed that it would continue to optimize according to UN even in the event of the button being pressed, which avoids the perverse i…

Karl60

Why not make it so that the agent, in selecting A1, acts as a UN-agent that believes it will continue to optimize according to UN even in the event of the button being pressed, rather than as a UN-agent that believes the button will never be pressed: that is, pick U such that

U(a1,o,a2) = UN(a1,o,a2) if o is not in Press, or US(a1,o,a2) + f(a1,o) - g(a1,o) if o is in Press

where f(a1,o) is the maximum value of UN(a1,o,b) for b in A2 and g(a1,o) is the maximum value of US(a1,o,b) for b in A2.

This would avoid the perverse manipulation incentives problem detailed in section 4.2 of the paper.
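
Written out as a single definition (my restatement of the proposal above, reading Press as the set of observations in which the button has been pressed, and using f and g as just defined):

$$ U(a_1,o,a_2) \;=\; \begin{cases} U_N(a_1,o,a_2) & \text{if } o \notin \mathrm{Press}, \\ U_S(a_1,o,a_2) + f(a_1,o) - g(a_1,o) & \text{if } o \in \mathrm{Press}, \end{cases} $$

with f(a1,o) = max over b in A2 of UN(a1,o,b) and g(a1,o) = max over b in A2 of US(a1,o,b). In the Press branch the correction terms are constant in a2, so the A2-optimal action still maximizes US; but the value of that branch as seen when choosing a1 comes out to f(a1,o), i.e. exactly what a UN-maximizer would have expected to obtain.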

2interstice
How does this differ from indifference?
Karl20

Apart from the obvious problems with this approach (the AI can do a lot with the output channel other than what you wanted it to do, choosing an appropriate value for λ, etc.), I don't see why this approach would be any easier to implement than CEV.

Once you know what a bounded approximation of an ideal algorithm is supposed to look like, how the bounded version is supposed to reason about its idealised version, and how to refer to arbitrary physical data, as the algorithm defined in your post assumes, then implementing CEV really doesn't seem to be that hard a problem.

So could you explain why you believe that implementing CEV would be so much harder than what you propose in your post?

6Stuart_Armstrong
This post assumes the AI understands physical concepts to a certain degree, and has a reflection principle (and that we have an adequate U). CEV requires that we solve the issue of extracting preferences from current people, have a method for combining them, have a method for extrapolating them, and have an error-catching mechanism to check that things haven't gone wrong. We have none of these things, even in principle. CEV itself is a severely underspecified concept (as far as I know, my attempt here is the only serious attempt to define it, and it's not very good: http://lesswrong.com/r/discussion/lw/8qb/cevinspired_models/ ). More simply, CEV requires that we solve moral problems and their grounding in reality; reduced impact requires that we solve physics and position.
Karl40

By that term I simply mean Eliezer's idea that the correct decision theory ought to use a maximization vantage point with a no-blackmail equilibrium.

1cousin_it
Maybe a more scary question isn't whether we can stop our AIs from blackmailing us, but whether we want to. If the AI has an opportunity to blackmail Alice for a dollar to save Bob from some suffering, do we want the AI to do that, or let Bob suffer? Eliezer seems to think that we obviously don't want our FAI to use certain tactics, but I'm not sure why he thinks that.
Karl00

If the NBC (No Blackmail Conjecture) is correct, then that shouldn't be a problem.

2cousin_it
Can you state the conjecture or link to a description?
Karl10

So you wouldn't trade whatever amount of time Frank has left, which is at most measured in decades, against a literal eternity of Fun?

If I was Frank in this scenario, I would tell the other guy to accept the deal.

0linkhyrule5
I see my room needs to be even more "white." ... The answer, I suppose, would be "yes." But this wasn't meant to be an immortal v. mortal life thing, just the comparison of two lives - so the obvious steelman is, what if Frank's immortal, and just very, very bored?
Karl40

Hmm. I'll have to take a closer look at that. You mean that the uncertainties are correlated, right?

No. To quote your own post:

A similar process allows us to arbitrarily set exactly one of the km.

I meant that the utility function resulting from averaging over your uncertainty over the km's will depend on which km you choose to arbitrarily set in this way. I gave an example of this phenomenon in my original comment.

5[anonymous]
Oh sorry. I get what you mean now. Thanks. I'll have to think about that and see where the mistake is. That's pretty serious, though.
Karl60

You can't simply average the km's. Suppose you estimate .5 probability that k2 should be twice k1 and .5 probability that k1 should be twice k2. Then if you normalize k1 to 1, k2 will average to 1.25, while similarly if you normalize k2 to 1, k1 will average to 1.25.

In general, to each choice of the km's will correspond a utility function, and the utility function we should use will be a linear combination of those utility functions; we will have renormalization parameters k'm and, if we accept the argument given in your post, those k'm ought to be just as d…
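
A quick check of the asymmetry in the example above, together with the geometric-mean suggestion that comes up in the replies (assuming the two hypotheses are equally likely):

$$ k_1 := 1:\;\; \mathbb{E}[k_2] = \tfrac12 \cdot 2 + \tfrac12 \cdot \tfrac12 = 1.25, \qquad\qquad k_2 := 1:\;\; \mathbb{E}[k_1] = \tfrac12 \cdot \tfrac12 + \tfrac12 \cdot 2 = 1.25, $$

so arithmetic averaging gives a ratio k2/k1 of 1.25 under the first normalization and 1/1.25 = 0.8 under the second, while the geometric mean sqrt(2 * 1/2) = 1 comes out the same either way.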

2AlexMennen
You could use a geometric mean, although this might seem intuitively unsatisfactory in some cases.
1[anonymous]
Hmm. I'll have to take a closer look at that. You mean that the uncertainties are correlated, right? Can you show where you got that? My impression was that once we got to the set of (equivalent, only difference is scale) utility functions, averaging them just works without room for more fine-tuning. But as I said, that part is shaky because I haven't actually supported those intuitions with any particular assumptions. We'll see what happens when we build it up from more solid ideas.
Karl30

What do you even mean by "is a possible outcome" here? Do you mean that there is no proof in PA of the negation of the proposition?

The formula of a modal agent must be fully modalized, which means that all propositions containing references to actions of agents within the formula must be within the scope of a provability operator.

Karl50

Proof without using Kripke semantics: Let X be a modal agent and Phi(...) its associated fully modalized formula. Then, if PA were inconsistent, Phi(...) would reduce to a truth value independent of X's opponent, and so X would play the same move against both FairBot and UnfairBot (and this is provable in PA). But PA cannot prove its own consistency, so PA cannot prove both X(FairBot) = C and X(UnfairBot) = D, and so we can't have both FairBot(X) = C and UnfairBot(X) = C. QED
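
One way to spell out the key step in modal terms (my notation, not the comment's): since Phi is fully modalized, PA proves that if PA is inconsistent then every boxed subformula holds, so X's move does not depend on its opponent, i.e.

$$ \mathrm{PA} \vdash \neg\mathrm{Con}(\mathrm{PA}) \rightarrow \big(X(\mathrm{FairBot}) = X(\mathrm{UnfairBot})\big), \qquad\text{hence}\qquad \mathrm{PA} \vdash \big(X(\mathrm{FairBot}) = C \wedge X(\mathrm{UnfairBot}) = D\big) \rightarrow \mathrm{Con}(\mathrm{PA}). $$

By the second incompleteness theorem PA cannot prove both conjuncts, and since FairBot and UnfairBot cooperate exactly when PA proves those respective statements, they cannot both cooperate with X.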

1AlexMennen
Oh, I see. Thanks.
Karl20

Proof: Let X be a modal agent, Phi(...) its associated fully modalized formula, (K, R) a GL Kripke model, and w minimal in K. Then, for every statement of the form ◻(...) we have w |- ◻(...), so Phi(...) reduces in w to a truth value which is independent of X's opponent. As a result, we can't have both w |- X(FairBot) = C and w |- X(UnfairBot) = D, and so we can't have both ◻(X(FairBot) = C) and ◻(X(UnfairBot) = D), and so we can't both have FairBot(X) = C and UnfairBot(X) = C. QED
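
A small gloss on the Kripke-semantics step (mine, not part of the original comment): in a GL model the accessibility relation R is transitive and conversely well-founded, so there are worlds from which no other world is accessible. At such a world w (the minimal w in the proof above),

$$ w \Vdash \Box\psi \;\iff\; \forall w' \big( w\,R\,w' \Rightarrow w' \Vdash \psi \big) $$

holds vacuously for every psi, which is why every boxed statement is forced at w and Phi(...) collapses to a constant there.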

2AlexMennen
I don't know what that means. Can you prove it without using Kripke semantics? (if that would complicate things enough to make it unpleasant to do so, don't worry about it; I believe you that you probably know what you're doing)
Karl20

no modal agent can get both FairBot and UnfairBot to cooperate with it.

TrollDetector is not a modal agent.

0fractalman
hm. I'm still a bit shaky on the definition of modal agent... does the following qualify? IF (opponent cooperates with me AND I defect is a possible outcome) {defect} else {if (opponent cooperates IFF I cooperate) {cooperate} else {defect}} (edit: my comment about perfect unfair bots may have been based on the wrong generalizations from an infinite case). addendum: if what I've got doesn't qualify as a modal agent I'll shut up until I understand enough to inspect the proof directly. addendum 2: well. alright then, I'll shut up.
Karl40

UnfairBot defects against PrudentBot.

Proof: For UnfairBot to cooperate with PrudentBot, PA would have to prove that PrudentBot defects against UnfairBot, which would require PA to prove that "PA does not prove that UnfairBot cooperates with PrudentBot, or PA+1 does not prove that UnfairBot defects against DefectBot", but that would require PA to prove its own consistency, which it cannot do. QED

0AlexMennen
Oops, you're right. But how do you prove that every modal agent is defected against by at least one of FairBot and UnfairBot?
Karl40

Here is another obstacle to an optimality result: define UnfairBot as the agent which cooperates with X if and only if PA proves that X defects against UnfairBot; then no modal agent can get both FairBot and UnfairBot to cooperate with it.
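
In modal notation, with FairBot as usually defined in the robust cooperation paper and UnfairBot as defined just above, the two agents and the claim read (my transcription):

$$ \mathrm{FB}(X) = C \;\leftrightarrow\; \Box\big(X(\mathrm{FB}) = C\big), \qquad\qquad \mathrm{UFB}(X) = C \;\leftrightarrow\; \Box\big(X(\mathrm{UFB}) = D\big), $$

and the claim is that no modal agent X satisfies both FB(X) = C and UFB(X) = C.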

3orthonormal
Right! More generally, I claim that if a modal agent provably cooperates (in PA) against some other agent, then it cannot provably defect (in PA) against any other agent: roughly speaking, PA can never prove that any proof search in PA or higher fails, so a modal agent can only provably (in PA) do the action that it would do if PA were inconsistent.
0fractalman
TrollDetector (a fundamental part of PsychBot) gets both of these to cooperate. TrollDetector tests opponents against DefectBot. If the opponent defects, it cooperates. If the opponent cooperates, TrollDetector defects. So both UnfairBot and FairBot cooperate with it, though it doesn't do so well against itself or DefectBot.
0AlexMennen
As you have defined UnfairBot, both FairBot and UnfairBot cooperate with PrudentBot.
Karl20

Both of those agents are modal agents of rank 0, and so the fact that they defect against CooperateBot implies that FairBot defects against them, by Theorem 4.1.

4Will_Sawin
Yes, this is problematic. But it's not clear to me that it's a problem for my bots, rather than for FairBot. After all, one can equally say that they defect against FairBot. Edit: I've thought more about why this is. The rational move against FairBot is to cooperate, unless FairBot's reasoning system is inconsistent, in which case FairBot is just CooperateBot, and the rational move is to defect. ADTBot rationally defects against inconsistent ones. Since FairBot does not know its reasoning system is consistent, it defects. Since ADTBot is unexploitable, it, too, defects. So FairBot is not quite so Fair as it seems - it unfairly punishes you for defecting against inconsistent FairBots.
Karl00

You're right, and wubbles's agent can easily be exploited by a modal agent A defined by A(X)=C <-> [] (X(PB)=C) (where PB is PrudentBot).

Karl00

The agent defined by wubbles is actually the agent called JustBot in the robust cooperation paper, which is proven to be non-exploitable by modal agents.

[This comment is no longer endorsed by its author]
3Eliezer Yudkowsky
JusticeBot cooperates with anyone who cooperates with FairBot, and is exploitable by any agent which comprehends source code well enough to cooperate with FairBot and defect against JusticeBot. Though I'm going here off the remembered workshop rather than rechecking the paper.
Karl00

Well, it depends how different the order is...

If there is a theorem in PA of the form "If there are theorems of the form A()≠a and of the form A'()≠a', then the a and the a' whose corresponding theorems come first in the appropriate orderings must be identical", then you should be okay in the prisoner's dilemma setting; but otherwise there will be a model of PA in which the two players end up playing different actions, and we end up in the same situation as in the post.

More generally, no matter how you try to cut it, there will always be a model of P…

Karl10

If I understand what you're proposing correctly, then that doesn't work either.

Suppose it were a theorem of PA that this algorithm returns an error. Then all statements of the form A()=a ⇒ U()≥u would be demonstrable in PA, so no action would have the highest u and the algorithm would return an error; since that implication is itself demonstrable in PA, the algorithm must in fact return an error, by Löb's theorem.
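
A minimal toy sketch of the kind of proof-searching agent being discussed (my illustration, not the algorithm from the post: the provability oracle is replaced by an explicit finite set of "provable" statements, so this only shows the shape of the procedure; the Löbian failure mode above only arises with a genuine proof search):

```python
# Toy model of an agent that looks for theorems of the form "A()=a implies U()>=u"
# and picks the action with the best provable lower bound.
# Provability is faked with explicit inputs, so this illustrates the shape of the
# algorithm only, not its logical behaviour.

def decide(actions, provable_bounds, provable_impossible=frozenset()):
    """provable_bounds: dict mapping each action a to the list of bounds u for which
    "A()=a => U()>=u" is (taken to be) provable.
    provable_impossible: actions a for which "A()!=a" is (taken to be) provable;
    a chicken-rule step returns such an action immediately."""
    # Chicken rule ("playing chicken with the universe"):
    for a in actions:
        if a in provable_impossible:
            return a
    # Otherwise take, for each action, the best provable lower bound found.
    best = {a: max(provable_bounds.get(a, [float("-inf")])) for a in actions}
    top = max(best.values())
    winners = [a for a in actions if best[a] == top]
    if len(winners) != 1:
        raise RuntimeError("tie or no usable bounds: return an error")
    return winners[0]

# Newcomb-flavoured example: one-boxing has a provable bound of 1,000,000,
# two-boxing only 1,000, so the agent one-boxes.
print(decide(["one-box", "two-box"],
             {"one-box": [1_000_000], "two-box": [1_000]}))
```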

0Wei Dai
You're right, and I don't know how to fix the problem without adding the equivalent of "playing chicken". But now I wonder why the "playing chicken" step needs to consider the possible actions one at a time. Instead suppose that using an appropriate oracle, the agent looks through all theorems of PA, and returns action a as soon as it sees a theorem of the form A()≠a, for any a. This was my original interpretation of cousin_it's post. Does this have the same problem, if two otherwise identical agents look through the theorems of PA in different orders?
Karl20

I didn't like the way the first paragraph ended. It seemed excessively confrontational and made me not want to read the rest of the post.

My intention was not to appear confrontational. It actually seemed obvious when I began thinking about this problem that the order in which we check actions in step 1 shouldn't matter but that ended up being completely wrong. That was what I was trying to convey though I admit I might have done so in a clumsy manner.

0Larks
I liked the style.
Karl50

To quote step 2 of the original algorithm:

For every possible action a, find some utility value u such that S proves that A()=a ⇒ U()=u. If such a proof cannot be found for some a, break down and cry because the universe is unfair.

Karl20

If something is not ontologically fundamental and doesn't reduce to anything which is, then that thing isn't real.

Karl00

Given that we are able to come to agreement about certain moral matters, and given the existence of moral progress, I do think that the evidence favors the existence of a well-behaved idealized reasoning system that we are approximating when we do moral reasoning.

Can you give a detailed example of this?

This for a start.

1Wei Dai
What "certain moral matters" do you have in mind? As for existence of moral progress, see Konkvistador's draft post Against moral progress. I've always found that post problematic, and finally wrote down why. Any other examples?
Karl00

Are you saying you think qualia is ontologically fundamental or that it isn't real or what?

0latanius
I'm saying that although it isn't ontologically fundamental, our utility function might still build on it (it "feels real enough"), so we might have problems if we try to extrapolate said function to full generality.
Karl00

there still isn't any Scientifically Accepted Unique Solution for the moral value of animals

There isn't any SAUS for the problem of free will either. Nonetheless, it is a solved problem. Scientists are not in the business of solving that kind of problem; such problems are generally considered philosophical in nature.

the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary?

It certainly appears to uniquely follow.

see the post "The "Scary problem of Qualia".

That seems easy to answ…

0latanius
but it most likely isn't. "X computes Y" is a model in our head that is useful to predict what e.g. computers do, which breaks down if you zoom in (qualia appear in exactly what stage of a CPU pipeline?) or don't assume the computer is perfect (how much rounding error is allowed to make the simulation a person and not random noise?) (nevertheless, sure, the SAUS might not always exist... but above question still doesn't seem to have any LW Approved Unique Solution (tm) either :))
Karl00

Did the person come into existence:

Ve came into existence whenever a computation isomorphic to verself was performed.

1DaFranker
That seems to trivially follow. It also seems to just push the burden of reduction onto "performed".
Karl110

For Harry had only loaned his Cloak, not given it

That seems like it answers your question: his invisible copies aren't borrowing the cloak from him, because they are him.

1Tenek
OK. I'm thinking of this in terms of Harry being able to see Bellatrix because it's his cloak. Harry should then be able to see the other Harrys because they're also wearing his cloak, unless the Cloak distinguishes between "master" and "time-travelled master", or the "loan" part is significant enough that Harry wouldn't be able to see someone under the cloak if they just pick it up without him expressly loaning it to them. If that counts as "stealing" and transfers ownership then you could "loan" the Cloak to everyone and they'd never be able to take it from you. There's something unsettling about a cloak that hides you from everyone, except its Master, unless the Master is also you.
Karl20

You seem to be taking a position that's different from Eliezer's, since AFAIK he has consistently defended this approach that you call "wrong" (for example in the thread following my first linked comment).

Well, Eliezer_2009 does seem to underestimate the difficulty of the extrapolation problem.

If you have some idea of how to "derive the computation that we approximate when we talk about morality by examining the dynamic of how people react to moral arguments" that doesn't involve just "looking at where people's moralities concent…
2Wei Dai
In each of those cases the pebblesorter reasoning / human mathematical reasoning is approximating some idealized reasoning system that is "well-behaved" in certain ways, for example not being sensitive to the order in which it encounters arguments. It's also the case that there is only one such system "nearby" so that we don't have to choose from multiple reasoning systems which one we are really approximating. (Although even in math there are some areas, such as set theory, with substantial unresolved debate as to what it is that we're really talking about.) It's unclear to me that either of these holds for human moral reasoning. Can you give a detailed example of this?
Karl00

Quidditch, the lack of adequate protection on time-turners before Harry gave them the idea to put protective shells on them... Seriously, just reread the fic.

0FiftyTwo
My point is that the examples we've seen are mainly from Harry's perceptions, he hasn't actually tested any of them. The only one was the partial transfiguration which isn't exactly obvious to anyone else.
0Alsadius
Like there's no RL sports with silly rules. And do time-turners actually need protection? They seem to require pretty deliberate action to use, and I assume they're hard to break.
Karl00

Eliezer seems to think that moral arguments are meaningful, but their meanings are derived only from how humans happen to respond to them (or more specifically to whatever coherence humans may show in their responses to moral arguments).

No. What he actually says is that when we do moral reasoning we are approximating some computation, in the same way that the pebblesorters are approximating primality. What makes moral arguments valid or invalid is whether the arguments actually establish what they were trying to establish in the context of the actual co…

1Wei Dai
You seem to be taking a position that's different from Eliezer's, since AFAIK he has consistently defended this approach that you call "wrong" (for example in the thread following my first linked comment). If you have some idea of how to "derive the computation that we approximate when we talk about morality by examining the dynamic of how people react to moral arguments" that doesn't involve just "looking at where people's moralities concentrate as you present moral arguments to them" then I'd be interested to know what it is. ETA: Assuming "when we do moral reasoning we are approximating some computation", what reasons do we have for thinking that the "some computation" will allow us to fully reduce "pain" to a set of truth conditions? What are some properties of this computation that you can cite as being known, that lead you to think this?
Karl-10

huge, unsolved debates over morality

One shouldn't confuse there being a huge debate over something with the problem being unsolved, far less unsolvable (look at the debate over free will, or worse, p-zombies). I have actually solved the problem of the moral value of animals to my satisfaction (my solution could be wrong, of course). As for the problem of dealing with people having multiple copies, this really seems like the problem of reducing "magical reality fluid", which, while hard, seems like it should be possible.

also in math, it might be p…
0A1987dM
Which is your solution, if I may ask?
0latanius
sure, good point. Nevertheless, if I'm correct, there still isn't any Scientifically Accepted Unique Solution for the moral value of animals, even though individuals (like you) might have their own solutions (the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary?) (that was just some random example, it's fractional calculus which I heard a presentation about recently. Not especially relevant here though :)) I just found a nice example for the topic of the post that doesn't seem to be reducible to anything else: see the post "The "Scary problem of Qualia". There is no obvious answer, we didn't really encounter the question so far in practice (but we probably will in the future), and other than its impact on our utility functions, it seems to be the typical "tree falls in forest" one, not really constraining anything in the real world. So the extrapolated utility function seems to be at least category 2.
Karl10

Why exactly do you call 1 unlikely? The whole metaethics sequence argues in favor of 1 (if I understand what you mean by 1 correctly), so what part of that argument do you think is wrong, specifically?

3latanius
I've read the metaethics sequence, but that was a long time ago, so I think I'll put reading it again on my todo list then! My intuition behind thinking 1 unlikely (yes, it's just an intuition) comes from the fact that we are already bad at generalizing "people-ness"... (see animal rights for example: huge, unsolved debates over morality, combined with the fact that we just care more about human-looking, cute things than non-cute ones... which seems to be pretty arbitrary to me). And things will get worse when we end up with entities that consist of non-integer numbers of non-boolean peopleness (different versions or instances of uploads, for example). Another feeling: also in math, it might be possible to generalize things, but not necessarily, and not always uniquely (integers to real numbers seems to work rather seamlessly, but then if you try to generalize n-th differentials over real numbers... from what I've heard, there are a few different formulations that kind of work, but none of them is the "real one".)
Karl00

There is an entire sequence dedicated to how to define concepts, and the specific problem of categories as they matter for your utility function is studied in this post, where it is argued that those problems should be solved by moral arguments; and the whole metaethics sequence argues that moral arguments are meaningful.

Now, if you're asking me whether we have a complete reduction of some concept relevant to our utility function all the way down to fundamental physics, then the answer is no. That doesn't mean that partial reductions of some concepts potentia…

1Wei Dai
Something like that, except I don't know enough to claim that moral arguments are meaningless, just that the problem is unsolved. Eliezer seems to think that moral arguments are meaningful, but their meanings are derived only from how humans happen to respond to them (or more specifically to whatever coherence humans may show in their responses to moral arguments). I've written before about why I find this unsatisfactory. Perhaps more relevant for the current discussion, I don't see much reason to think this kind of "meaning" is sufficient to allow us to fully reduce concepts like "pain" and "self".
Karl20

Actually, dealing with a component of your ontology not being real seems like a far harder problem than the problem of such a component not being fundamental.

According to the Great Reductionist Thesis everything real can be reduced to a mix of physical reference and logical reference. In which case, if every component of your ontology is real, you can obtain a formulation of your utility function in terms of fundamental things.

The case where some components of your ontology can't be reduced because they're not real, and where your utility function refers exp…

5Wei Dai
Have we actually managed to perform any reductions? In order to reduce "apple", for example... So what are these truth conditions precisely? If there is an apple-shaped object carved out of marble sitting on the table, does that satisfy the truth conditions for "some apples on the table"? What about a virtual reality representation of an apple on a table? Usually we can just say "it doesn't matter whether we call that thing an apple" and get on with life, but we can't do that if we're talking about something that shows up (or seems to show up) as a term in our utility function, like "pain".
Karl40

Also I totally think there was a respectable hard problem

So you do have a solution to the problem?

Karl20

Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w' in W such that p is true in the possible world w and false in the possible world w'.

Then the question becomes: to be able to reason in all generality, what collection of possible worlds should one use?

That's a very hard question.

Karl10

It's explained in detail in chapter 25 that the genes that make a person a wizard do not do so by building some complex machinery which allows you to become a wizard; the genes that make you a wizard constitute a marker which indicates to the source of magic that you should be allowed to cast spells.

0TheOtherDave
Whoops! Shows you how long it's been since I've read ch25. Thanks for clarifying that.
Karl00

I don't think taking polyjuice modifies your genetic code. If that were the case, using polyjuice to take the form of a muggle or a squib would leave you without your magical powers.

2Armok_GoB
So? It should still create egg cells. There's some lower fertility from the yy possibility, and 66/33% rather than 50/50% of a boy. And maybe some increased risk of chromosomal diseases, but that should be it.
0TheOtherDave
This comment makes no sense to me at all. Are you presuming that genetic code controls the presence of magical powers independent of phenotypic expression?
0pedanterrific
Do we know that it doesn't?
Karl10

I'm confused. What's the distinction between x and here?

2Nisan
Briefly, one is a variable, while the other is a numeral, i.e. the string 1+...+1 in which 1 occurs x times. I just learned that the standard notation for the numeral is different from the one I used. Less briefly: as I say in this comment, what I really ought to write uses a substitution function which interprets z as the Gödel number of a string of symbols and replaces every occurrence of the first free variable with the string 1+...+1, where 1 occurs x times.
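
For reference, one standard way to set up the variable/numeral distinction (my notation; the symbols in the comment above were lost in extraction):

$$ \underline{x} \;:=\; \underbrace{1 + 1 + \cdots + 1}_{x\ \text{times}}, $$

so x is a (meta-level) variable ranging over natural numbers, while the numeral $\underline{x}$ is the closed term of the formal language that denotes that number.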
Karl10

On the other hand, as I have shown, if you choose t sufficiently large, the algorithm I recommend in my post will necessarily end up one-boxing if the formal system used is sound.

This is incorrect, as Zeno had shown more than 2000 years ago. It could be that your inference system generates an infinite sequence of statements of the form A()=1 ⇒ U()≥Si with sequence {Si} tending to, say, 100, but with all Si<100, so that A()=1 loses to A()=2 no matter how large the timeout is.

That's why you enumerate all proofs of statements of the form A()=a ⇒ U()≥u (…

0Vladimir_Nesov
I think it's a useful trick not so much because it doesn't require oracles, but because it doesn't require a Goedel statement (chicken rule), even though it depends on choosing a time limit, while the algorithm with Goedel statement doesn't.
0Vladimir_Nesov
You're right, there is no problem here, as long as you enumerate everything.
Karl00

That doesn't actually work. Take Newcomb's problem. Suppose that your formal system quickly proves that A()≠1. Then it concludes that A()=1 ⇒ U()∈[0,0]; on the other hand it ends up concluding correctly that A()=2 ⇒ U()∈[1000,1000], so it ends up two-boxing. This is a possible behavior, even if the formal system used is sound, if one uses rational intervals as you recommend. On the other hand, as I have shown, if you choose t sufficiently large, the algorithm I recommend in my post will necessarily end up one-boxing if the formal system used is sound. (Using intervals was actually the first idea I had when coming to terms with the problem detailed in the post, but it simply doesn't work.)

1Vladimir_Nesov
Not if we use a Goedel statement/chicken rule failsafe like the one discussed in Slepnev's article you linked to. Edit: This part of my comment is wrong. This is incorrect, as Zeno had shown more than 2000 years ago. It could be that your inference system generates an infinite sequence of statements of the form A()=1 ⇒ U()≥Si with sequence {Si} tending to, say, 100, but with all Si<100, so that A()=1 loses to A()=2 no matter how large the timeout is.
Karl20

Rationals are dense in the reals, so there is always a rational value between any two distinct real numbers. So, for example, if it can be proven in your formal system that A()=1 ⇒ U()=π and this happens to be the maximal utility attainable, there will be a rational number x greater than the utility that can be achieved by any other action and such that x≤π, so you will be able to prove that A()=1 ⇒ U()≥x, and so the first action will end up being taken, because by definition of x it is greater than the utility that can be obtained by taking any other action.

2Vladimir_Nesov
I see. Still, time limit is not the criterion you want to use here, it would be better to establish rational interval bounds for utility of each possible action, and once the intervals are disjoint, you act. This way, the action is always optimal if all utilities are different, while if you just use the time limit, it could arrive before different actions could be evaluated with enough precision.
Karl30

I don't think the part about summoning Death is a reference to anything. After all, we already know what the incarnations of Death are in MOR. And it looks like the counterspell to dismiss Death is lost no more, thanks to Harry...

1Document
It's a good thing Harry didn't follow that line of conversation; if it were me I could picture wondering out loud why nobody had tried the ritual, suddenly breaking off in realization and thereby giving myself away.
Karl00

What if that square is now occupied?

Karl10

So your atheists are actually nay-theists... If that's the case, I have difficulty imagining how a group containing both atheists and theists could work at all...

Karl20

Color me interested.

I think the character creation rules really should be collected together. As is, I can't figure out how you're supposed to determine many of your initial statistics (wealth level, hit points, speed...). Also, I don't like the fact that the numbers of points you have to distribute among the big five and among the skills are random. And of course a system where you simply divide X points among your stats however you wish is completely broken. You should really think about introducing some limit on the number of points you can put into one you…

Karl10

Here in Spain, France, the UK, the majority of people are Atheists.

I would be interested in knowing where you got your numbers because the statistics I found definitively disagreed with this.

-1Raw_Power
Checks his numbers. Forgive me. I should have said the majority of young people (below 30), who, for our uses and purposes, are those who count, and the target demographic. It has come to the point that self-declared Christian kids get bullied and insulted [which is definitely wrong and stupid and not a very good sign that the Sanity Waterline was raised much]. Then again, I have this rule of thumb that I don't count people who don't attend church as believers, and automatically lump them into the "atheist in the making" category, a process that is neither legitimate nor fair. I sincerely apologize for this, and retract the relevant bits. Now let's see. For one thing, the fact that Jedi outnumber Jews in the UK should be a sign that people don't take that part of the polls very seriously. That said, this last bit I found particularly troubling, because I do not recall meeting a single person, in all my time in Spain, who declared themselves a Christian except in name only (as in, embarrassingly confessing they only got baptized or went to Communion to please the grandparents). Some entertained some vague fuzziness, but simply telling them a little about "belief in belief" and some reductionist notions has been enough to throw them into serious doubt. I may very well be mistaken, but my perception is that they are really ripe for the taking, and only need to hear the right words. My perception as a young Arab-European is that the trend is overwhelmingly in the direction of faithlessness, and that it is an accelerating process with no stopping force in sight.
Karl10

Congratulations on raising the expected utility of the future!
