This is an interesting theorem which helps illuminate the relationship between unbounded utilities and St Petersburg gambles. I particularly appreciate that you don't make the explicit assumption that the values of gambles must be representable by real numbers, an assumption that is very common but unhelpful in a setting like this. However, I do worry a bit about the argument structure.

The St Petersburg gamble is a famously paradox-riddled case. That is, it is a very difficult case where it isn't clear what to say, and many theories seem to produce outlandish results. When this happens, it isn't so impressive to say that we can rule out an opposing theory because it would lead to strange results in that paradox-riddled situation. It strikes me as similar to saying that a rival theory leads to strange results in variable population-size cases so we can reject it (when actually, all theories do), or that it leads to strange results in infinite population cases (when again, all theories do).

Even if one had a proof that an alternative theory doesn't lead to strange conclusions in the St Petersburg gamble, I don't think this would count all that much in its favour, as it seems plausible to me that various rules of decision theory that were developed in the cleaner cases of finite possibility spaces (or well-behaved infinite spaces) need to be tweaked to account for more pathological possibility spaces. For a simple example, I'm sympathetic to the sure thing principle, but it directly implies that the St Petersburg gamble is better than itself, because an unresolved gamble is better than a resolved one, no matter how the latter was resolved. My guess is that this means the sure thing principle needs to have its scope limited to exclude gambles whose value is higher than that of any of their resolutions.
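To make the structure concrete, here is a minimal sketch (in Python, using the usual doubling payoffs as an illustrative assumption) of why every resolution of the gamble is finite while the gamble's expected value grows without bound:

```python
# The St Petersburg gamble: with probability 2**-n you win 2**n utility
# (n = 1, 2, 3, ...). Every resolved outcome is a finite 2**n, yet each
# term contributes 1 to the expectation, so the partial sums diverge.

def partial_expected_value(n_terms: int) -> float:
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_value(n))  # prints 10.0, 100.0, 1000.0, ...
```

Since the unresolved gamble beats every finite resolution, replacing each resolution with a fresh gamble looks like an improvement in every state, and the sure thing principle then ranks the gamble strictly above itself.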

Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them, etc.). I'm not sure which decision theory to apply to them, though.

The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be Pareto dominated by another method. However, if it is making one decision for all time over all policies for resolving future decisions, then (1) it is even harder to apply in real life than it looked, and (2) it doesn't seem to be able to deal with cases where you learn more about ethics (i.e. update your credence function over moral theories) -- at least not without quite a bit of extra explanation about how that works. I suppose the best answer may well be that the policies over which the representatives are arguing include branches dealing with all ways the credences could change, weighted by their probabilities. This is even more messy.

My guess is that of these two broad options (decide one bounded decision vs decide everything all at once) the latter is better. But either way it is a bit less intuitive than it first appears.
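A toy sketch of the intertemporal swap problem (the numbers and the 50/50 proportional-chance compromise rule are my assumptions, purely to illustrate the Pareto point):

```python
# Two theories with equal vote shares face two successive binary decisions.
# Each theory cares mostly about a different decision.
# U[theory][decision][option] gives that theory's utility for the option.
U = {
    "A": {"D1": {"a": 10, "b": 0}, "D2": {"a": 1, "b": 0}},
    "B": {"D1": {"a": 0, "b": 1}, "D2": {"a": 0, "b": 10}},
}

def expected_utility(theory: str, policy: dict) -> float:
    # policy maps each decision to the probability of choosing option "a"
    return sum(p * U[theory][d]["a"] + (1 - p) * U[theory][d]["b"]
               for d, p in policy.items())

# Per-decision voting: equal votes each round, so each option gets a 50%
# chance at each decision, with no memory of earlier rounds.
per_decision = {"D1": 0.5, "D2": 0.5}
# Intertemporal package: A gets its way on D1, B gets its way on D2.
package = {"D1": 1.0, "D2": 0.0}

for t in ("A", "B"):
    print(t, expected_utility(t, per_decision), expected_utility(t, package))
# A: 5.5 -> 10.0 and B: 5.5 -> 10.0, so both theories strictly prefer the
# package deal: the round-by-round parliament is Pareto dominated.
```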

This is a good idea, though not a new one. Others have abandoned the idea of a formal system for this on the grounds that:

1) It may be illegal.

2) Quite a few people think it is illegal or morally dubious (whether or not it is actually illegal or immoral).

It would be insane to proceed with this without resolving (1). If illegal, it would open you up to criminal prosecution and, more importantly, seriously hurt the movements you are trying to help. I think that whether or not it turns out to be illegal, (2) is sufficient reason not to pursue it. It may cause serious reputational damage to the movement, which I'd expect to easily outweigh the financial benefits.

I also think that the 10% to 20% boost is extremely optimistic. That would only be achieved if almost everyone was using it and they all wanted to spend most of their money funding charities that don't operate in their countries. I'd expect something more like a boost of a few percent.
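For a rough sense of the magnitudes (every figure below is my assumption, just to show the shape of the calculation):

```python
# Back-of-the-envelope estimate of the scheme's boost to donations:
# roughly participation rate x cross-border fraction x marginal tax relief.
participation = 0.25  # assumed fraction of donors actually using the scheme
cross_border = 0.40   # assumed fraction of giving aimed at non-deductible foreign charities
tax_relief = 0.30     # assumed marginal rate recovered via deductibility

boost = participation * cross_border * tax_relief
print(f"{boost:.1%}")  # 3.0% -- "a few percent", well short of 10-20%
```

Under assumptions like these the boost comes out at a few percent; getting 10-20% requires near-universal participation with mostly cross-border giving.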

Note that there are also very good alternatives. One example is a large effort to encourage people to informally do this in a non-matched way by donating to the subset of effective charities that are tax deductible in their country. This could get most of the benefits for none of the costs.

This is a really nice and useful article. I particularly like the list of problems AI experts assumed would be AI-complete, but turned out not to be.

I'd add that if we are trying to reach the conclusion that "we should be more worried about non-general intelligences than we currently are", then we don't need it to be true that general intelligences are really difficult. It would be enough that "there is a reasonable chance we will encounter a dangerous non-general one before a dangerous general one". I'd be inclined to believe that even without any of the theorising about possibility.

I think one reason for the focus on 'general' in the AI Safety community is that it is a stand-in for the observation that we are not worried about path planners or chess programs or self-driving cars etc. One way to say this is that these are specialised systems, not general ones. But you rightly point out that it doesn't follow that we should only be worried about completely general systems.

Thanks for bringing this up Luke. I think the term 'friendly AI' has become something of an albatross around our necks as it can't be taken seriously by people who take themselves seriously. This leaves people studying this area without a usable name for what they are doing. For example, I talk with parts of the UK government about the risks of AGI. I could never use the term 'friendly AI' in such contexts -- at least without seriously undermining my own points. As far as I recall, the term was not originally selected with the purpose of getting traction with policy makers or academics, so we shouldn't be too surprised if we can see something that looks superior for such purposes. I'm glad to hear from your post that 'AGI safety' hasn't rubbed people up the wrong way, as feared.

It seems from the poll that there is a front runner, which is what I tend to use already. It is not too late to change which term is promoted by MIRI / FHI etc. I think we should.

This is quite possibly the best LW comment I've ever read. An excellent point with a really concise explanation. In fact it is one of the most interesting points I've seen within Kolmogorov complexity too. Well done on independently deriving the result!

Without good ways to overcome selection bias, it is unclear that data like this can provide any evidence of outsized impact of unconventional approaches. I would expect a list of achievements as impressive as the above whether or not there was any correlation between unconventional approaches and achievement.
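A small simulation of the selection-bias point (the population size, cutoff, and zero-correlation setup are all invented for illustration):

```python
# Even with zero correlation between approach and achievement, selecting
# the top achievers from a large population yields an impressive-looking
# list either way.
import random

random.seed(0)
population = [
    {"achievement": random.gauss(0, 1),
     "unconventional": random.random() < 0.5}
    for _ in range(100_000)
]

top = sorted(population, key=lambda p: p["achievement"], reverse=True)[:20]
print(min(p["achievement"] for p in top))             # all far above average
print(sum(p["unconventional"] for p in top), "/ 20")  # about half, as chance predicts
```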

Carl,

You are completely right that there is a somewhat illicit factor-of-1000 intuition pump in a certain direction in the normal problem specification, which makes it a bit one-sided. Will MacAskill and I had half-written a paper on this and related points regarding decision-theoretic uncertainty and Newcomb's problem before discovering that Nozick had already considered it (even if very few people have read or remembered his commentary on this).

We did still work out, though, that you can use this idea to create compound problems where, for any reasonable distribution of credences over the types of decision theory, you should one-box on one of them and two-box on the other: something that all the (first-order) decision theories agree is wrong. So much the worse for them, we think. I've stopped looking into this, but I think Will has a draft paper where he talks about this alongside some other issues.
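A rough sketch of how such a compound problem works (the 99% predictor accuracy, the 50/50 credences, the naive expected-choiceworthiness rule, and the dollar figures are all my illustrative assumptions):

```python
# Expected choiceworthiness across two Newcomb variants, weighing EDT and
# CDT verdicts by one's credence in each theory (values compared in dollars).

ACC = 0.99               # assumed predictor accuracy
P_EDT, P_CDT = 0.5, 0.5  # assumed credences in the two theories

def edt_ev(action, opaque, transparent):
    # EDT: condition the opaque box's contents on your own action.
    if action == "one-box":
        return ACC * opaque
    return transparent + (1 - ACC) * opaque

def cdt_ev(action, opaque, transparent, p_full=0.5):
    # CDT: the box is already filled; use a fixed prior that it is full.
    base = p_full * opaque
    return base if action == "one-box" else base + transparent

def meta_choice(opaque, transparent):
    def ec(action):
        return (P_EDT * edt_ev(action, opaque, transparent)
                + P_CDT * cdt_ev(action, opaque, transparent))
    return max(["one-box", "two-box"], key=ec)

# Variant 1: the usual stakes -- EDT has ~1000x more at stake, so the
# compound reasoner one-boxes (EDT alone one-boxes; CDT alone two-boxes).
print(meta_choice(opaque=1_000_000, transparent=1_000))    # one-box
# Variant 2: stakes tilted the other way -- now CDT has far more at stake,
# so the compound reasoner two-boxes, even though each theory's own
# verdict is unchanged across the two variants.
print(meta_choice(opaque=1_000_000, transparent=900_000))  # two-box
```

So a reasoner uncertain between the theories one-boxes on the first problem and two-boxes on the second, a pattern that EDT and CDT each regard as wrong.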
