This is an interesting theorem which helps illuminate the relationship between unbounded utilities and St Petersburg gambles. I particularly appreciate that you don't make an explicit assumption that the values of gambles must be representable by real numbers which is very common, but unhelpful in a setting like this. However, I do worry a bit about the argument structure.
The St Petersburg gamble is a famously paradox-riddled case. That is, it is a very difficult case where it isn't clear what to say, and many theories seem to produce outlandish results. W...
Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them etc). I'm not sure which decision theory to apply to them though.
The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be getting Pareto-dominated by another method. However, if it is making one decision for all time over all policies for resolving future decisions, then (1) it is even...
This is a good idea, though not a new one. Others have abandoned the idea of a formal system for this on the grounds that:
1) It may be illegal
2) Quite a few people think it is illegal or morally dubious (whether or not it is actually illegal or immoral)
It would be insane to proceed with this without confirming (1). If illegal, it would open you up to criminal prosecution, and more importantly, seriously hurt the movements you are trying to help. I think that whether or not it turns out to be illegal, (2) is sufficient reason to not pursue it. It may cause...
This is a really nice and useful article. I particularly like the list of problems AI experts assumed would be AI-complete, but turned out not to be.
I'd add that if we are trying to reach the conclusion that "we should be more worried about non-general intelligences than we currently are", then you don't need it to be true that general intelligences are really difficult. It would be enough that "there is a reasonable chance we will encounter a dangerous non-general one before a dangerous general one". I'd be inclined to believe that ev...
Thanks for bringing this up Luke. I think the term 'friendly AI' has become something of an albatross around our necks as it can't be taken seriously by people who take themselves seriously. This leaves people studying this area without a usable name for what they are doing. For example, I talk with parts of the UK government about the risks of AGI. I could never use the term 'friendly AI' in such contexts -- at least without seriously undermining my own points. As far as I recall, the term was not originally selected with the purpose of getting traction w...
This is quite possibly the best LW comment I've ever read. An excellent point with a really concise explanation. In fact it is one of the most interesting points I've seen within Kolmogorov complexity too. Well done on independently deriving the result!
Without good ways to overcome selection bias, it is unclear that data like this can provide any evidence of outsized impact of unconventional approaches. I would expect a list of achievements as impressive as the above whether or not there was any correlation between the two.
Carl,
You are completely right that there is a somewhat illicit factor-of-1000 intuition pump in a certain direction in the normal problem specification, which makes it a bit one-sided. Will MacAskill and I had half-written a paper on this and related points regarding decision-theoretic uncertainty and Newcomb's problem before discovering that Nozick had already considered it (even if very few people have read or remembered his commentary on this).
We did still work out though that you can use this idea to create compound problems where for any reasonable di...
Regarding (2), this is a particularly clean way to do it (with some results of my old simulations too).
http://www.amirrorclear.net/academic/papers/sipd.pdf
http://www.amirrorclear.net/academic/ideas/dilemma/index.html
We can't use the universal prior in practice unless physics contains harnessable non-recursive processes. However, this is exactly the situation in which the universal prior doesn't always work. Thus, one source of the 'magic' is through allowing us to have access to higher levels of computation than the phenomena we are predicting (and to be certain of this).
Also, the constants involved could be terrible and there are no guarantees about this (not even probabilistic ones). It is nice to reach some ratio in the limit, but if your first Graham's number of guesses are bad, then that is very bad for (almost) all purposes.
Phil,
It's not actually that hard to make a commitment to give away a large fraction of your income. I've done it, my wife has done it, several of my friends have done it etc. Even for yourself, the benefits of peace of mind and lack of cognitive dissonance will be worth the price, and by my calculations you can make the benefits for others at least 10,000 times as big as the costs for yourself. The trick is to do some big thinking and decision making about how to live very rarely (say once a year) then limit your salary through regular giving. That way you...
I didn't watch the video, but I don't see how that could be true. Occam's razor is about complexity, while the conjunction fallacy is about logical strength.
Sure, 'P & Q' is more complex than 'P', but 'P' is simpler than '(P or ~Q)' despite P being stronger than it in the same way (P is equivalent to (P or ~Q) & (P or Q)).
(Another way to see this is that violating Occam's razor does not make things fallacies).
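The logical claims above can be checked mechanically. Here is a quick truth-table sketch (the helper names are mine, not part of the original comment):

```python
from itertools import product

def equivalent(f, g, n_vars=2):
    """True if two Boolean formulas agree on every assignment."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n_vars))

def stronger(f, g, n_vars=2):
    """True if f is at least as logically strong as g (f implies g everywhere)."""
    return all((not f(*vals)) or g(*vals) for vals in product([False, True], repeat=n_vars))

# P is equivalent to (P or ~Q) & (P or Q)
assert equivalent(lambda p, q: p, lambda p, q: (p or not q) and (p or q))

# 'P & Q' is stronger than 'P', and 'P' is stronger than '(P or ~Q)',
# even though P is the simplest of the three formulas.
assert stronger(lambda p, q: p and q, lambda p, q: p)
assert stronger(lambda p, q: p, lambda p, q: p or not q)
```

So strength and syntactic complexity come apart, which is the point: Occam's razor tracks one and the conjunction fallacy the other.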
This certainly doesn't work in all cases:
There is a hidden object which is either green, red or blue. Three people have conflicting opinions about its colour, based on different pieces of reasoning. If you are the one who believes it is green, you have to add up the opponents who say not-green, despite the fact that there is no single not-green position (think of the symmetry -- otherwise everyone could have too great confidence). The same holds true if these are expert opinions.
The above example is basically as general as possible, so in order for your ar...
I don't think I can be persuaded.
I have many good responses to the comments here, and I suppose I could sketch out some of the main arguments against anti-realism, but there are also many serious demands on my time and sadly this doesn't look like a productive discussion. There seems to be very little real interest in finding out more (with a couple of notable exceptions). Instead the focus is on how to justify what is already believed without finding out anything else about what the opponents are saying (which is particularly alarming given that many commenters are pointing out that they don't understand what the opponents are saying!).
Given all of this, I fear that writing a post would not be a good use of my time.
You are correct that it is reasonable to assign high confidence to atheism even if it doesn't have 80% support, but we must be very careful here. Atheism is presumably the strongest example of such a claim here on Less Wrong (i.e. one for which you can tell a great story about why so many intelligent people would disagree, and so hold a high confidence in the face of disagreement). However, this does not mean that we can say that any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.
Roko, you make a good point that it can be quite murky just what realism and anti-realism mean (in ethics or in anything else). However, I don't agree with what you write after that. Your Strong Moral Realism is a claim that is outside the domain of philosophy, as it is an empirical claim in the domain of exo-biology or exo-sociology or something. No matter what the truth of a meta-ethical claim, smart entities might refuse to believe it (the same goes for other philosophical claims or mathematical claims).
Pick your favourite philosophical claim. I'm sure ...
Thanks for looking that up Carl -- I didn't know they had the break-downs. This is the more relevant result for this discussion, but it doesn't change my point much. Unless it was 80% or so in favour of anti-realism, I think holding something like 95% credence in anti-realism is far too high for non-experts.
Atheism doesn't get 80% support among philosophers, and most philosophers of religion reject it because of a selection effect where few wish to study what they believe to be non-subjects (just as normative and applied ethicists are more likely to reject anti-realism).
You are entirely right that the 56% would split up into many subgroups, but I don't really see how this weakens my point: more philosophers support realist positions than anti-realist ones. For what it's worth, the anti-realists are also fragmented in a similar way.
Disagreeing positions don't add up just because they share a feature. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs), it is a sign that they are rationalizing their position.
If 2/3 of experts support proposition G, 1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A, and the remaining 1/3 reject A and B; then the majority reject A, and the majority reject B. G should not be treated as a reasonable majority view.
This should be ...
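The arithmetic in that example can be checked with a quick sketch (the group labels and the use of exact fractions are mine):

```python
from fractions import Fraction

third = Fraction(1, 3)

# Three equal groups of experts, matching the toy setup above:
#   group 1: supports G because of reason A, rejects B
#   group 2: supports G because of reason B, rejects A
#   group 3: rejects G, A and B
support_G = third + third   # groups 1 and 2
support_A = third           # group 1 only
support_B = third           # group 2 only

assert support_G == Fraction(2, 3)   # G enjoys majority support...
assert support_A < Fraction(1, 2)    # ...yet a majority rejects A
assert support_B < Fraction(1, 2)    # ...and a majority rejects B
```

Each of the stated reasons for G is rejected by a majority, even though G itself commands one.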
In metaethics, there are typically very good arguments against all known views, and only relatively weak arguments for each of them. For anything in philosophy, a good first stop is the Stanford Encyclopedia of Philosophy. Here are some articles on the topic at SEP:
I think the best book to read on metaethics is:
There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with ...
Many posts here strongly dismiss [moral realism and simplicity], effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. [...] For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
One person's modus ponens is another's modus tollens. You say that professional philosophers' disagreement implies that antirealists shoul...
Among target faculty listing meta-ethics as their area of study moral realism's lead is much smaller: 42.5% for moral realism and 38.2% against.
Looking further through the philpapers data, a big chunk of the belief in moral realism seems to be coupled with theism, where anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic) the more moral realism, while philosophers of science go the other way. Philosophers...
Toby, I spent a while looking into the meta-ethical debates about realism. When I thought moral realism was a likely option on the table, I meant:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory distinction, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But modern philosophers who call themselves "realists" don...
I am a moral cognitivist. Statements like "ceteris paribus, happiness is a good thing" have truth-values. Such moral statements simply are not compelling or even interesting enough to compute the truth-value of to the vast majority of agents, even those which maximize coherent utility functions using Bayesian belief updating (that is, rational agents) or approximately rational agents.
AFAICT the closest official term for what I am is "analytic descriptivist", though I believe I can offer a better defense of analytic descriptivism than ...
From your SEP link on Moral Realism: "It is worth noting that, while moral realists are united in their cognitivism and in their rejection of error theories, they disagree among themselves not only about which moral claims are actually true but about what it is about the world that makes those claims true. "
I think this is good cause for breaking up that 56%. We should not take them as a block merely because (one component of) their conclusions match, if their justifications are conflicting or contradictory. It could still be the case that 90% of...
Why?
Whenever you deviate from maximizing expected value (in contexts where this is possible) you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
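A minimal sketch of the kind of pump meant here, assuming an agent with cyclic preferences who pays a small fee for each "upgrade" (the items, fee, and cycle are my own illustrative choices, not from the original comment):

```python
# An agent who prefers B to A, C to B, and A to C will pay a small fee
# for each trade around the cycle, ending where it started but poorer.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (held, preferred) pairs
fee = 1

holding, money = "A", 100
for _ in range(10):  # ten full trips around the cycle
    for worse, better in [("A", "B"), ("B", "C"), ("C", "A")]:
        if holding == worse and (worse, better) in prefers:
            holding, money = better, money - fee

print(holding, money)  # → A 70: same holding, 30 units poorer
```

Each individual trade looks like an improvement to the agent, yet the sequence as a whole is a pure loss, which is the standard argument that such preference patterns are incorrect.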
(And why do you find it odd, BTW?)
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it). Thus, the question looks like one of descriptive uncertainty, just as if you had asked about money: uncertainty about whethe...
Thanks for the post Wei, I have a couple of comments.
Firstly, the dichotomy between Robin's approach and Nick's and mine is not right. Nick and I have always been tempted to treat moral and descriptive uncertainty in exactly the same way insofar as this is possible. However, there are cases where this appears to be ill-defined (eg how much happiness for utilitarians is worth breaking a promise for Kantians?), and to deal with these cases Nick and I consider methods that are more generally applicable. We don't consider the bargaining/voting/market approach to...
Not quite. It depends on your beliefs about how the calculation could go wrong and how much this would change the result. If you are very confident in all parts except a minor correcting term, and are simply told that there is an error in the calculation, then you can still have some kind of rough confidence in the result (you can see how to spell this out in maths). If you know the exact part of the calculation that was mistaken, then the situation is slightly different, but still not identical to reverting to your prior.
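One way to spell this out in maths, with numbers I have made up purely for illustration: suppose the calculation has a main part and a minor correcting term, and the headline result is roughly right whenever the main part is correct.

```python
# Made-up confidences for the two parts of the calculation, assumed independent:
p_main_ok = 0.99    # the main part is correct
p_minor_ok = 0.90   # the minor correcting term is correct

# We learn only that there is an error *somewhere* in the calculation.
p_some_error = 1 - p_main_ok * p_minor_ok

# Probability the headline result still stands, given that news:
# the error must then sit in the minor term while the main part holds.
p_result_ok_given_error = p_main_ok * (1 - p_minor_ok) / p_some_error

print(round(p_result_ok_given_error, 3))  # → 0.908
```

Despite learning of a definite error, one can remain about 91% confident in the rough result, rather than reverting to one's prior; most of the error probability was concentrated in the part that barely matters.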
You may wish to check out the paper we wrote at the FHI on the problem of taking into account mistakes in one's own argument. The mathematical result is the same as the one here, but the proof is more compelling. Also, we demonstrate that when applied to the LHC, the result is very different to the above analysis.
http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0006/4020/probing-the-improbable.pdf
We talk about this a bit at FHI. Nick has written a paper which is relevant:
Everybody knows blue/green are correct categories, while grue/bleen are not.
Philosophers invented grue/bleen in order to be obviously incorrect categories, yet difficult to formally separate from the intuitively correct ones. There are of course less obvious cases, but the elucidation of the problem required them to come up with a particularly clear example.
You can also listen to an interview with one of Sarah Lichtenstein's subjects who refused to make his preferences consistent even after the money-pump aspect was explained:
http://www.decisionresearch.org/publications/books/construction-preference/listen.html
Calling fuzzy logic "truth functional" sounds like you're changing the semantics;
'Truth functional' means that the truth value of a sentence is a function of the truth values of the propositional variables within that sentence. Fuzzy logic works this way. Probability theory does not. It is not just that one is talking about degrees of truth and the other is talking about probabilities. The analogue to truth values in probability theory are probabilities, and the probability of a sentence is not a function of the probabilities of the variables ...
You can select your "fuzzy logic" functions (the set of functions used to specify a fuzzy logic, which say what value to assign A and B, A or B, and not A, as a function of the values of A and B) to be consistent with probability theory, and then you'll always get the same answer as probability theory.
How do you do this? As far as I understand, it is impossible since probability is not truth functional. For example, suppose A and B both have probability 0.5 and are independent. In this case, the probability of 'A^B' is 0.25, while the probabil...
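The counterexample can be spelled out with exact arithmetic (a small sketch of the argument above, nothing more):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Case 1: A and B independent, each with probability 1/2.
p_A, p_B = half, half
p_A_and_B = p_A * p_B   # independence gives 1/4

# Case 2: conjoin A with itself; A still has probability 1/2.
p_A_and_A = p_A         # 'A ^ A' is just A, so 1/2

# Both conjunctions have the same input probabilities (1/2, 1/2),
# yet different output probabilities, so no function f can give
# P(X ^ Y) from P(X) and P(Y) alone.
assert (p_A, p_B) == (p_A, p_A)
assert p_A_and_B != p_A_and_A
print(p_A_and_B, p_A_and_A)  # → 1/4 1/2
```

This is exactly why probability is not truth-functional: the probability of a compound sentence depends on the joint distribution, not just the probabilities of its parts.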
I agree that there are very interesting questions here. We have quite natural ways of describing uncomputable functions very far up the arithmetical hierarchy, and it seems that they can be described in some kind of recursive language even if the things they describe are not recursive (using recursive in the recursion theory sense both times). Turing tried something like this in Systems of Logic Based on Ordinals (Turing, 1939), but that was with formal logic and systems where you repeatedly add the Gödel sentence of a system into the system as an axiom, r...
I outlined a few more possibilities on Overcoming Bias last year:
There are many ways Omega could be doing the prediction/placement and it may well matter exactly how the problem is set up. For example, you might be deterministic and he is precalculating your choice (much like we might be able to do with an insect or computer program), or he might be using a quantum suicide method, (quantum) randomizing whether the million goes in and then destroying the world iff you pick the wrong option (This will lead to us observing him being correct 100/100 times assu...
Thanks — this looks promising.
One thing I noticed is that there is an interesting analogy between your model and a fairly standard model in economics where society consists of a representative agent in each time period (representing something like a generation, but without overlap) each trying to maximise its own utility. They can plan based on the utilities of subsequent generations (e.g. predicting that the next generation will undo this generation's policies on some topic) but they don't inherently value those utilities. This is then understood via the ...