Comment author: DataPacRat 17 September 2015 06:44:53PM 2 points

/A/ reason not to spend much time thinking about the I-am-undetectably-insane scenario is as you describe; however, it's not the /only/ reason not to spend much time thinking about it.

I often have trouble explaining myself, and need multiple descriptions of an idea to get a point across, so allow me to try again:

There is roughly a 30-in-1,000,000 chance that I will die in the next 24 hours. Over a week, simplifying a bit, that works out to roughly 200-in-1,000,000 odds of me dying. If I were to buy a one-in-a-million lottery ticket each week, then, by one rule of thumb, I should spend 200 times as much of my attention on my forthcoming demise as on buying that ticket and imagining what to do with the winnings.
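That ratio can be sanity-checked in a few lines (a minimal sketch using the figures above; the 30-in-1,000,000 daily figure is the rule-of-thumb number from the text, not real actuarial data):

```python
# Sanity-check of the attention-allocation rule of thumb, using the
# comment's own illustrative figures (not real actuarial data).
p_death_per_day = 30 / 1_000_000
p_death_per_week = 1 - (1 - p_death_per_day) ** 7  # ~210-in-1,000,000
p_lottery_win = 1 / 1_000_000  # one ticket per week

# Rule of thumb: attention proportional to probability, so the
# attention ratio is just the ratio of the two probabilities.
attention_ratio = p_death_per_week / p_lottery_win
print(f"weekly death odds: {p_death_per_week:.6f}")
print(f"attention ratio:   {attention_ratio:.0f}x")
```

The compounding makes the weekly figure 210-in-1,000,000 rather than exactly 7 × 30; "roughly 200" in the text is the same simplification.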

In parallel, if I compare two independent scenarios, the at-least-one-in-ten-billion odds that I'm hallucinating all this, and the darned-near-zero odds that a Pascal's Mugging attempt is genuine, then I should spend proportionately that much more time dealing with the Matrix scenario than with the Mugging; which works out to darned-near-zero seconds spent bothering with the Mugging, no matter how much or how little time I spend contemplating the Matrix.

(There are, of course, alternative viewpoints which may make it worth spending more time on the low-probability scenarios in each case; for example, buying a lottery ticket can be viewed as one of the few low-cost ways to funnel money from most of your parallel-universe selves so that a certain few of your parallel-universe selves have enough resources to work on certain projects that are otherwise infeasibly expensive. But these alternatives require careful consideration and construction, at least enough logical weight to counter the standard rule-of-thumb I'm trying to propose here.)

Comment author: Sebastian_Hagen 17 September 2015 08:12:47PM *  2 points

In parallel, if I compare two independent scenarios, the at-least-one-in-ten-billion odds that I'm hallucinating all this, and the darned-near-zero odds that a Pascal's Mugging attempt is genuine, then I should spend proportionately that much more time dealing with the Matrix scenario than with the Mugging

That still sounds wrong. You appear to be deciding what to precompute for purely by probability, without considering that some possible futures will give you the chance to shift more utility around.

If I don't know anything about Newcomb's problem and estimate a 10% chance of Omega showing up and posing it to me tomorrow, I'll definitely spend more than 10% of my planning time for tomorrow reading up on and thinking about it. Why? Because I'll be able to make far more money in that possible future than the others, which means that the expected utility differentials are larger, and so it makes sense to spend more resources on preparing for it.
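The allocation rule being described — weight preparation by probability times utility differential, not probability alone — can be sketched as follows (the scenario probabilities and payoffs here are hypothetical, chosen only to mirror the example in the text):

```python
# Hypothetical scenarios for tomorrow: (probability, utility gain that
# advance preparation would buy in that scenario).
scenarios = {
    "ordinary day": (0.90, 10),
    "Omega poses Newcomb's problem": (0.10, 1_000_000),
}

# Allocate planning time in proportion to probability * utility
# differential, rather than probability alone.
total = sum(p * gain for p, gain in scenarios.values())
shares = {name: p * gain / total for name, (p, gain) in scenarios.items()}

for name, share in shares.items():
    print(f"{name}: {share:.2%} of planning time")
```

Even at only 10% probability, the Newcomb scenario absorbs nearly all the planning time, because its utility differential dwarfs the ordinary day's.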

The I-am-undetectably-insane case is the opposite of this, a scenario that it's pretty much impossible to usefully prepare for.

And a PM scenario is (at least for an expected-utility maximizer) a more extreme variant of my first scenario: low probabilities of ridiculously large outcomes, which for precisely that reason are still worth thinking about.

Comment author: ike 16 September 2015 05:40:15PM 3 points

So, um:

Which axiom does this violate?

Comment author: Sebastian_Hagen 17 September 2015 01:59:27PM *  5 points

Continuity and independence.

Continuity: Consider the scenario where each of the [LMN] bets refers to one (guaranteed) outcome, which we'll also call L, M and N for simplicity.

Let U(L) = 0, U(M) = 1, U(N) = 10**100

For a simple EU maximizer, you can then satisfy continuity by picking p = (1 - 1/10**100). A PESTI agent, OTOH, may simply discard a (1-p) of 1/10**100, which leaves no remaining p that satisfies the axiom.

The 10**100 value is chosen without loss of generality. For PESTI agents that still track probabilities of this magnitude, increase it until they don't.

Independence: Set p to a number small enough that it's Small Enough To Ignore. At that point, the terms for getting L and M by that probability become zero, and you get equality between both sides.
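Both failures can be made concrete in a toy model (a sketch only: the PESTI threshold and the utilities are illustrative, and the 10**100 payoff is scaled down to 10**15 so ordinary floating point behaves):

```python
EPSILON = 1e-12  # this agent's "small enough to ignore" threshold

def vnm_eu(lottery):
    """Plain expected utility over (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def pesti_eu(lottery):
    """Expected utility that discards branches below the threshold."""
    return sum(p * u for p, u in lottery if p >= EPSILON)

U_L, U_M, U_N = 0.0, 1.0, 10.0**15  # scaled stand-in for 10**100

# Continuity: an EU maximizer equates M with the mixture p*L + (1-p)*N
# at (1 - p) = U_M / U_N = 10**-15.  The PESTI agent drops that branch,
# so no choice of p can make the mixture worth U_M.
mixture = [(1 - 1e-15, U_L), (1e-15, U_N)]
assert pesti_eu(mixture) == 0.0           # N-branch ignored entirely
assert abs(vnm_eu(mixture) - U_M) < 1e-9  # plain EU hits U_M as required

# Independence: pick q below the threshold.  The L- and M-terms vanish,
# so both compound lotteries collapse to the same value, even though
# M is strictly preferred to L.
q, U_R = 1e-13, 5.0
left = pesti_eu([(q, U_L), (1 - q, U_R)])
right = pesti_eu([(q, U_M), (1 - q, U_R)])
assert left == right  # equality where independence demands right > left
assert vnm_eu([(q, U_M), (1 - q, U_R)]) > vnm_eu([(q, U_L), (1 - q, U_R)])
```

The same structure holds at 10**100; the magnitudes here are reduced only to keep the arithmetic within double-precision range.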

Comment author: DataPacRat 16 September 2015 03:19:55PM 7 points

A rule-of-thumb I've found use for in similar situations: There are approximately ten billion people alive, of whom it's a safe conclusion that at least one is having a subjective experience that is completely disconnected from objective reality. There is no way to tell that I'm not that one-in-ten-billion. Thus, I can never be more than one minus one-in-ten-billion sure that my sensory experience is even roughly correlated with reality. Thus, it would require extraordinary circumstances for me to have any reason to worry about any probability of less than one-in-ten-billion magnitude.

There are all sorts of questionable issues with the assumptions and reasoning involved; and yet, it seems roughly as helpful as remembering that I've only got around a 99.997% chance of surviving the next 24 hours, another rule-of-thumb which handily eliminates certain probability-based problems.

Comment author: Sebastian_Hagen 17 September 2015 01:12:16PM *  4 points

Thus, I can never be more than one minus one-in-ten-billion sure that my sensory experience is even roughly correlated with reality. Thus, it would require extraordinary circumstances for me to have any reason to worry about any probability of less than one-in-ten-billion magnitude.

No. The reason not to spend much time thinking about the I-am-undetectably-insane scenario is not, in general, that it's extraordinarily unlikely. The reason is that you can't make good predictions about what would be good choices for you in worlds where you're insane and totally unable to tell.

This holds even if the probability for the scenario goes up.

In response to Meetup : Dublin
Comment author: Sebastian_Hagen 01 May 2015 10:30:56PM 1 point

I'll be there.

Comment author: KatjaGrace 31 March 2015 04:28:45AM 5 points

Are you concerned about AI risk? Do you do anything about it?

Comment author: Sebastian_Hagen 31 March 2015 09:25:42PM 5 points

It's the most important problem of this time period, and likely human civilization as a whole. I donate a fraction of my income to MIRI.

Comment author: KatjaGrace 17 March 2015 01:35:07AM 3 points

What do you think of Kenzi's views?

Comment author: Sebastian_Hagen 17 March 2015 07:16:34PM *  1 point

Which means that if we buy this [great filter derivation] argument, we should put a lot more weight on the category of 'everything else', and especially the bits of it that come before AI. To the extent that known risks like biotechnology and ecological destruction don't seem plausible, we should more fear unknown unknowns that we aren't even preparing for.

True in principle. I do think that the known risks don't cut it; some of them might be fairly deadly, but even in aggregate they don't look nearly deadly enough to contribute much to the great filter. Given the uncertainties in the great filter analysis, for me that conclusion mostly feeds back into the analysis itself, increasing the probability that the GF is in fact behind us.

Your SIA doomsday argument - as pointed out by Michael Vassar in the comments - has interesting interactions with the simulation hypothesis; specifically, since we don't know whether we're in a simulation, the Bayesian update in step 3 can't be performed as confidently as you stated. Given this, "we really can't see a plausible great filter coming up early enough to prevent us from hitting superintelligence" is also evidence for this environment being a simulation.

Comment author: KatjaGrace 03 March 2015 02:07:41AM 3 points

First, humanity's cosmic endowment is astronomically large—there is plenty to go around even if our process involves some waste or accepts some unnecessary constraints. (p227)

This is saying that our values are diminishing enough in stuff that much of the universe doesn't matter to us. This seems true under some plausible values and not under others. In particular, if we pursue some kind of proportional aggregative consequentialism, then if each individual has diminishing returns, we should create more individuals so that there is not so much to go around.

Comment author: Sebastian_Hagen 04 March 2015 01:53:50AM 1 point

This issue is complicated by the fact that we don't really know how much computation our physics will give us access to, or how relevant negentropy is going to be in the long run. In particular, our physics may allow access to (countably or more) infinite computational and storage resources given some superintelligent physics research.

For Expected Utility calculations, this possibility raises the usual issues of evaluating potential infinite utilities. Regardless of how exactly one decides to deal with those issues, the existence of this possibility does shift things in favor of prioritizing safety over speed.

Comment author: PhilGoetz 20 February 2015 07:16:21PM *  2 points

I don't know what you mean by invariants, or why you think they're good, but: If the natural development from this earlier time period, unconstrained by CEV, did better than CEV from that time period would have, that means CEV is worse than doing nothing at all.

Comment author: Sebastian_Hagen 21 February 2015 07:01:04PM *  1 point

I used "invariant" here to mean "moral claim that will hold for all successor moralities".

A vastly simplified example: at t=0, morality is completely undefined. At t=1, people decide that death is bad, and lock this in indefinitely. At t=2, people decide that pleasure is good, and lock that in indefinitely. Etc.

An agent operating in a society that develops morality like that would, looking back, want all the accidents that led to current morality to be maintained, but looking forward may not particularly care about how the remaining free choices come out. CEV in that kind of environment can work just fine, and someone implementing it in that situation would want to target it specifically at people from their own time period.

Comment author: PhilGoetz 17 February 2015 05:28:37AM 1 point

It might not be a problem if we decide to work on the meta-level and, rather than trying to optimize the universe according to some extrapolation of human values, try to make sure the universe keeps on having conditions that would produce some things like humans.

Comment author: Sebastian_Hagen 17 February 2015 10:59:17PM 2 points

That does not sound like much of a win. Present-day humans are really not that impressive, compared to the kind of transhumanity we could develop into. I don't think trying to reproduce entities close to our current mentality is worth doing, in the long run.

Comment author: Furcas 17 February 2015 02:38:30AM 0 points

Nah, we can just ignore the evil fraction of humanity's wishes when designing the Friendly AI's utility function.

Comment author: Sebastian_Hagen 17 February 2015 10:49:46PM *  2 points

While that was phrased in a provocative manner, there /is/ an important point here: If one has irreconcilable value differences with other humans, the obvious reaction is to fight about them; in this case, by competing to see who can build an SI implementing theirs first.

I very much hope it won't come to that, in particular because that kind of technology race would significantly decrease the chance that the winning design is any kind of FAI.

In principle, some kinds of agents could still coordinate to avoid the costs of that kind of outcome. In practice, our species does not seem to be capable of coordination at that level, and it seems unlikely that this will change pre-SI.
