Comment author: ChristianKl 11 January 2016 02:14:50PM 2 points [-]

Essentially, if I wanted to run an evil fundamentalist oppressive state I would look as cuddly as possible at first.

Nobody set's out to run an evil fundamentalist oppressive state but certain people set out to run a evil fundamentalist oppressive state.

Apart from that, you ignore what it means to claim to be a caliphate. ISIS gained a lot of power through the act of founding a self-proclaimed caliphate.

Comment author: thakil 11 January 2016 04:06:23PM 0 points [-]

I'm a little confused by your first point. I guess you're pointing out a grammar/spelling error, but the only one I can find is that you've used "a" instead of "an" (evil starts with a vowel), so no, I don't understand that point.

Your second point is correct; I meant to mention that as a cost: by appearing more moderate I cost myself support. I've rather hand-waved the idea that I can just convince everyone to fight for me in the first place, which is obviously a difficult problem! That said, I think you could be a little less obviously evil initially and still attract people to your fundamentalist regime.

Comment author: RichardKennaway 09 January 2016 08:31:33PM 13 points [-]

let us assume, that the top leadership of ISIS is composed of completely rational and very intelligent individuals

Of the sort that casebash assures us cannot exist? The imaginary competence of fictional rational heroes? Top human genius level?

No. These all amount to assuming a falsehood.

  1. The premise of this article is wrong. The ISIS are really just a bunch of idiots, and their apparent successes are only caused by the powers in the region being much more incompetent than ISIS

Another straw falsehood to set beside the first one. All of this rules out from the start any consideration of ISIS as they actually are. They are real people with a mission, no more and no less intelligent than anyone else who succeeds in doing what they have done so far.

There is no mystery about what ISIS wants. They tell the world in their glossy magazine, available in many languages, including English (see the link at the foot of that page). They tell the world in every announcement and proclamation.

"Rationalist", however, seem incapable of believing that anyone ever means what they say. Nothing is what it is, but a signal of something else.

I have not seen any reason to suppose that they do not intend exactly what they say, just as Hitler did in "Mein Kampf". They are fighting to establish a new Caliphate which will spread Islam by the sword to the whole world, Allahu akbar. All else is strategy and tactics. If their current funding model is unsustainable, they will change it as circumstances require. If their recruitment methods falter, they will search for other ways.

More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?

I recommend a reading of Max Frisch's play "The Fire Raisers".

Comment author: thakil 11 January 2016 09:14:21AM 0 points [-]

"More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?"

I think this is an interesting question. If you want to create a new Islamic state you could do worse than seizing on the chaos caused by a civil war in Syria and a weak state in Iraq. You will be opposed by:

1) Local interests, i.e. the governments of Iraq and Syria.
2) The allies of those local interests: in the case of Syria, Iran and Russia; in the case of Iraq, the US and Britain.

I think 2 is quite interesting, because how much other nations intervene will depend in part on how much their populations care. I would argue that the attacks on Russia and France represent a strategic mistake, because in both cases they encouraged those nations to be more active in their assault on ISIS.

Arguably the best way to discourage international interests from getting involved is to increase the local costs. Make sure that any attacks on you will kill civilians, and try to appear as legitimate and as boring as possible.

Essentially, if I wanted to run an evil fundamentalist oppressive state I would look as cuddly as possible at first. In fact, I would probably pretend to be on the side of the less religiously motivated rebels, so I could get guns and arms. Then, when Assad is toppled, I would make sure that any oil I have is available. My model here would be to look as much like Saudi Arabia as possible, as they can do horrifying things to their own citizens provided they remain a key strategic ally in the region. Realpolitik will triumph over morality provided you can keep Western eyes off you.

The goal, always, would be to be as non-threatening as possible, to squeeze as many arms as you can out of Western allies (and Russian allies too, if you can work it, but if you topple Assad you probably can't), which puts you in a position to expand your interests. Then you need to provoke other nations into invading you, so you can plausibly claim to be the wronged party in any conflict where the US feels obliged to pick sides.

Comment author: Slider 07 January 2016 09:12:47PM 0 points [-]

What you are describing would be optimising in a universe where the agent receives the utility as it says the number. Then the average utility of an ongoer would be greater than that of an idler.

However, if the utility is dished out only after the number has been specified, then an idler and an ongoer have exactly the same amount of utility and ought to be equally optimal. 0 is not an optimum of this game, so an agent that results in 0 utility is not an optimiser. If you take an agent that is an optimiser in another context, then of course it might not be an optimiser for this game.

There is also the problem that choosing to continue doesn't yield the utility with certainty, only "almost always". The ongoer strategy hits precisely the hole in this certainty where no payout happens. I guess you may be able to define a game where the payout happens concurrently with the agent's actions. But this reeks of "the house" having premonition of what the agent is going to do instead of inferring it from its actions. If the rules are "first actions and THEN payout", you need to be able to finish your action to get a payout.

In the ongoing version I could think of rules such that an agent that has said "9.9999..." to 400 digits would receive 0.000.(401 zeroes)..9 utility on the next digit. However, if the agents get utility assigned only once, there won't be a "standing so far". However, this behaviour would then be the perfectly rational thing to do, as there would be a uniquely determined digit to keep on saying. I suspect the trouble comes from mixing the ongoing version and the dispatch version with each other inconsistently.
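A small sketch of what that ongoing payout rule looks like numerically (the exact formula here is my own reading of it, so treat it as an assumption; the point is only that the increments shrink geometrically):

from fractions import Fraction

def digit_payout(n):
    # assumed rule: the first "9" pays 9, digit n pays 9 / 10**(n - 1)
    return Fraction(9, 10 ** (n - 1))

paid_so_far = sum(digit_payout(n) for n in range(1, 401))   # after 400 digits
print(digit_payout(401))          # the sliver paid out for digit 401
print(paid_so_far < 10)           # True: the running total creeps towards 10 but never reaches it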

Comment author: thakil 08 January 2016 08:49:09AM 0 points [-]

"However if the utility is dished out after the number has been spesified then an idler and a ongoer have exactly the same amount of utility and ought to be as optimal. 0 is not a optimum of this game so an agent that results in 0 utility is not an optimiser. If you take an agent that is an optimiser in other context then it ofcourse might not be an optimiser for this game."

The problem with this logic is the assumption that there is a "result" of 0. While it's certainly true that an "idler" will obtain an actual value at some point, so we can assess how they have done, there will never be a point in time at which we can assess the ongoer. If we change the criteria and say that we are going to assess at a particular point in time, then the ongoer can simply stop then and obtain the highest possible utility. But time never ends, and we never mark the ongoer's homework, so to say he has a utility of 0 "at the end" is nonsense, because there is, by definition, no end to this scenario.

Essentially, if you include infinity in a maximisation scenario, expect odd results.

Comment author: Slider 07 January 2016 09:49:00AM 0 points [-]

python code of

while True:
    pass
ohnoes = 1/0

doesn't generate a runtime exception when run

similarly

utility = 0
a = 0
while True:
    a += 1
utility += a

doesn't assign to utility more than once

in contrast

utility = 0
while True:
    utility += 1

does assign to utility more than once. With finite iterations these two would be quite interchangeable, but with non-terminating iterations they are not. The iteration doesn't need to terminate for this to be true.

Say you are in a market and you know someone who sells wheat for $5, someone who buys it for $10, and someone who sells wine for $7, and suppose that you care about wine. If you have a strategy that consists only of buying and selling wheat, you don't get any wine. There needs to be a "cashout" move of buying wine at least once. Now think of a situation where, when you buy wine, you need to hand over your wheat-dealing licence. Well, a wheat licence means arbitrary amounts of wine, so it is irrational to ever trade the wheat licence away for a finite amount of wine, right? But then you end up with a wine-"maximising" strategy that does so by never buying wine.
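A minimal sketch of that trap, using the prices above (the loop length is just a stand-in for trading indefinitely): the wheat-only strategy piles up money without bound and never executes the cash-out move, so it ends with no wine at all.

# prices from the example: buy wheat at $5, sell it at $10, wine costs $7
money, wine, have_licence = 0, 0, True
for step in range(10_000):        # stands in for trading "forever"
    if have_licence:
        money += 10 - 5           # one wheat round trip nets $5
    # the cash-out move, which the wheat-only strategy never takes:
    # if money >= 7:
    #     money -= 7
    #     wine += 1
    #     have_licence = False
print(money, wine)                # arbitrarily much money, but wine == 0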

Comment author: thakil 07 January 2016 11:54:13AM *  1 point [-]

Indeed. And that's what happens when you give a maximiser perverse incentives and infinity in which to gain them.

This scenario corresponds precisely to pseudocode of the kind

newval = 1
oldval = 0
while newval > oldval:
    oldval = newval
    newval = newval + 1

Which never terminates. This is only irrational if you want to terminate (which you usually do), but again, the claim that the maximiser never obtains value doesn't matter because you are essentially placing an outside judgment on the system.

Basically, what I believe you (and the op) are doing is looking at two agents in the numberverse.

Agent one stops at time 100 and gains X utility.
Agent two continues forever and never gains any utility.

Clearly, you think, agent one has "won". But how? Agent two has never failed. The numberverse is eternal, so there is no point at which you can say it has "lost" to agent one. If the numberverse had a non-zero probability of collapsing at any point in time then agent two's strategy would instead be more complex (and possibly uncomputable if we distribute over infinity), but as we are told that agents one and two exist in a changeless universe and their only goal is to obtain the most utility, we can't judge either to have won. In fact agent two's strategy only prevents it from losing; it can't win.

That is, if we imagine the numberverse full of agents, any agent which chooses to stop will lose in a contest of utility, because the remaining agents can always choose to stop and obtain their far greater utility. So the rational thing to do in this contest is to never stop.
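A rough sketch of the comparison I have in mind (the stopping time of 100 and the checkpoint times are placeholders, not part of the original scenario):

def banked_utility(stop_time, now):
    # in this scenario utility is only received once the agent stops speaking
    if stop_time is not None and now >= stop_time:
        return stop_time          # stand-in for "the number spoken by then"
    return 0

agent_one_stop = 100              # stops at time 100
agent_two_stop = None             # the ongoer: never stops

for now in (50, 100, 10**6, 10**9):
    print(now, banked_utility(agent_one_stop, now), banked_utility(agent_two_stop, now))
# agent one shows the same fixed payout at every later checkpoint, while
# agent two shows 0 at every finite checkpoint; the catch is that the
# scenario never supplies a final checkpoint at which to declare a winner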

Sure, that's a pretty bleak lookout, but as I say, if you make a situation artificial enough you get artificial outcomes.

Comment author: Slider 06 January 2016 08:06:03PM 0 points [-]

Infinite utility is not a possible utility in the scenario, and therefore the behaviour of not stopping does not achieve the highest possible utility. Continuing to speak is an improvement only given that you do stop at some time. If you continue by never stopping, you get 0 utility, which is lower than speaking a 2-digit number.

Comment author: thakil 07 January 2016 08:41:59AM *  0 points [-]

But time doesn't end. The criteria of assessment are:

1)I only care about getting the highest number possible

2)I am utterly indifferent to how long this takes me

3)The only way to generate this value is by speaking this number (or, at the very least, any other methods I might have used instead are compensated explicitly once I finish speaking).

If your argument is that Bob, who stopped at Graham's number, is more rational than Jim, who is still speaking, then you've changed the terms. If my goal is to beat Bob, then I just need to stop at Graham's number plus one.

At any given time, t, I have no reason to stop, because I can expect to earn more by continuing. The only reason this looks irrational is that we are imagining things which the scenario rules out: time costs, or infinite time coming to an end.
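A minimal sketch of that point (the payoff function is just an illustrative stand-in for "the value of the number spoken so far"): for any stopping time you name, a later one is strictly better, so there is simply no optimal place to stop.

def payoff(stop_time):
    # stand-in payoff: it grows with how long you keep speaking
    return stop_time

for t in (10, 10**6, 10**100):
    assert payoff(t + 1) > payoff(t)   # whatever t you propose, t + 1 does strictly better
# so no finite stopping time maximises the payoff, and the "maximising"
# behaviour is to keep going forever, which never pays out at all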

The argument "but then you never get any utility" is true, but that doesn't matter, because I last forever. There is no end of time in this scenario.

If your argument is that in a universe with infinite time, infinite life and a magic incentive button, all anyone will do is press that button forever, then you are correct, but I don't think you're saying much.

Comment author: casebash 06 January 2016 10:48:14AM 0 points [-]

"Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop." - In this scenario, you receive the utility when you stop speaking. You can speak for an arbitrarily long amount of time and it doesn't cost you any utility as you are compensated for any utility that it would cost, but if you never stop speaking you never gain any utility.

Comment author: thakil 06 January 2016 01:51:51PM 1 point [-]

Then the "rational" thing is to never stop speaking. It's true that by never stopping speaking I'll never gain utility but by stopping speaking early I miss out on future utility.

The behaviour of speaking forever seems irrational, but you have deliberately crafted a scenario where my only goal is to get the highest possible utility, and the only way to do that is to just keep speaking. If you suggest that someone who got some utility after 1 million years is "more rational" than someone still speaking at 1 billion years then you are adding a value judgment not apparent in the original scenario.

Comment author: casebash 06 January 2016 12:05:16AM 0 points [-]

"Keep going until all that's left is one extra apple, and the rational thing to do is to wait forever for an apple you'll never end up with" - that doesn't really follow. You have to get the Apple and exit the time loop at some point or you never get anything.

"If you find yourself in a universe without costs, where you can obtain an infinite amount of utility by repeating the number "9" forever, well, keep repeating the number "9" forever, along with everybody else in the universe." - the scenario specifically requires you to terminate in order to gain any utility.

Comment author: thakil 06 January 2016 10:24:36AM 1 point [-]

But apparently you are not losing utility over time? And holding utility over time isn't of value to me, otherwise my failure to terminate early is costing me the utility I didn't take at that point in time? If there's a lever compensating for that loss of utility then I'm actually gaining the utility I'm turning down anyway!

Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop.

Comment author: James_Miller 06 August 2015 03:05:55PM *  0 points [-]

thakil, I have a deal for you:

I offer you an extra .5% probability of your getting to spend a million years in utopia. How much are you willing to pay?

Comment author: thakil 07 August 2015 09:14:07AM 0 points [-]

A fairly small amount. Again, risk aversion says to me that a 1 in 1000 chance isn't worth much if I can only make that bet once.

Comment author: ChristianKl 05 August 2015 08:54:11AM 0 points [-]

These all combine to make the probability of success quite low.

Could you provide a probability value for "quite low"?

Comment author: thakil 05 August 2015 09:18:28AM *  0 points [-]

Less than 1%. I haven't thought hard about these numbers, but I would say 1 has a probability of around 50-60%, 2 has 10% (as 2 allows for societal collapse, not just company collapse), 3 has 10% (being quite generous there), and 4 has 40%, which gives us 0.6 * 0.1 * 0.1 * 0.4 = 0.0024. If I'm more generous to 3, bumping it up to 80%, I get 0.0192. I don't think I could be more generous to 2, though. These numbers are snatched from the air without deep thought, but I don't think they're wildly bad or anything.
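For what it's worth, the same arithmetic as a quick sketch (taking 0.6 from the 50-60% range for point 1):

p_frozen, p_stored, p_revivable, p_willing = 0.6, 0.1, 0.1, 0.4
print(round(p_frozen * p_stored * p_revivable * p_willing, 4))   # 0.0024
print(round(p_frozen * p_stored * 0.8 * p_willing, 4))           # 0.0192, with point 3 bumped to 80%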

Comment author: thakil 05 August 2015 07:50:59AM *  0 points [-]

My argument against cryonics:

The probability of being successfully frozen and then being revived later on is dependent on the following

1)Being successfully frozen upon death (loved ones could interfere, lawyers could interfere, the manner of my death could interfere)

2)The company storing me keeps me in the same (or close to it) condition for however long it takes for revivification technologies to be discovered

3)The revivification technologies are capable of being discovered

4)There is a will to revivify me

These all combine to make the probability of success quite low.

The value of success is obviously high, but it's difficult to assess how high: just because they can revivify me doesn't mean my life will then end up being endless (at the very least, violent death might still lead to death in the future)

This is weighted by the costs. These are

1)The obvious financial ones

2)The social ones. I actually probably value these higher than 1: explaining my decision to my loved ones, having to endure mockery and possibly quite strong reactions.

The final point here is about risk aversion. While one could probably set up the utility calculation above to come out positive, I'm not sure that a utility calculation is the correct way to decide whether to take such a risk. If the probability of a one-shot event is low enough, the expected value isn't a very useful indicator of my actual returns: a lottery might have a positive expected gain, yet still not be worth playing if the odds are very much against my making any money from it!
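A hedged sketch of that last point (the odds, prize and number of plays are made-up figures, picked only to illustrate): a lottery can have a positive expected value per ticket while the chance of ever winning anything across the handful of plays you actually get stays tiny.

p_win, prize, ticket_cost = 0.001, 1500, 1   # made-up numbers, chosen only so the EV is positive
plays = 10                                   # a roughly "one shot" number of attempts
print(p_win * prize - ticket_cost)           # +0.5: positive expected value per ticket
print(1 - (1 - p_win) ** plays)              # ~0.01: chance of ever winning anything in those plays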

So how would you convince me?

1)Drop the costs, both social and financial. The former is obviously done by making cryonics more mainstream, the latter... well by making cryonics more mainstream, probably

2)Convince me that the probability of all 4 components is higher than I think it is. If the conjoined probability started hitting >5% then I might start thinking about it seriously.
