All of Toby_Ord2's Comments + Replies

the value of this memory card, was worth more than the rest of the entire observable universe minus the card

I doubt this would be true. I think the value of the card would actually be close to zero (though I'm not completely sure). It does let one solve the halting problem up to 10,000 states, but it does so in time and space complexity O(busy_beaver(n)). In other words, using the entire observable universe as computing power and the card as an oracle, you might be able to solve the halting problem for 7-state machines or so. Not that good... The same goes... (read more)
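
For concreteness, here is a minimal sketch of how the card would be used, assuming it stores the busy beaver values BB(1), ..., BB(10,000) and that the machine is given as a transition table (both assumptions of this sketch, not details from the original post). The point is that the decision procedure is just "simulate for BB(n) steps", so the work grows as O(busy_beaver(n)):

```python
def halts(delta, bb_n, start_state=0, halt_state="H"):
    """Decide halting for an n-state Turing machine, given BB(n) from the card.

    delta maps (state, symbol) -> (write_symbol, move, next_state), with
    move in {-1, +1}. Any n-state machine that halts does so within BB(n)
    steps, so not halting by then means it never halts.
    """
    tape, head, state = {}, 0, start_state
    for _ in range(bb_n + 1):          # this loop is the whole cost: O(BB(n)) steps
        if state == halt_state:
            return True
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return False                       # survived BB(n) steps without halting

# Example: the 1-state busy beaver writes a 1 and halts; BB(1) = 1.
print(halts({(0, 0): (1, 1, "H")}, bb_n=1))   # True
```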

OK. That makes more sense then. I'm not sure why you call it 'Fun Theory' though. It sounds like you intend it to be a theory of 'the good life', but a non-hedonistic one. Strangely it is one where people having 'fun' in the ordinary sense is not what matters, despite the name of the theory.

This is a moral theory about what should be fun

I don't think that can be right. You are not saying that there is a moral imperative for certain things to be fun, or to not be fun, as that doesn't really make sense (at least I can't make sense of it). You are instead say... (read more)

Eliezer,

Are you saying that one's brain state can be identical in two different scenarios but that you are having a different amount of fun in each? If so, I'm not sure you are talking about what most people call fun (ie a property of your experiences). If not, then what quantity are you talking about in this post where you have less of it if certain counterfactuals are true?

I would drop dead of shock

Eliezer, just as it was interesting to ask what probability estimate 'Nuts!' amounted to, I think it would be very useful for the forum of Overcoming Bias to ask what your implicit probability estimate is for a 500-state TM being able to solve the halting problem for all TMs of up to 50 states.

I imagine that 'I would drop dead of shock' was intended to convey a probability of less than 1 in 10,000, or maybe 1 in 1,000,000?

Sorry, I didn't see that you had answered most of this question in the other thread where I first asked it.

Toby, if you were too dumb to see the closed-form solution to problem 1, it might take an intense effort to tweak the bit on each occasion, or perhaps you might have trouble turning the global criterion of total success or failure into a local bit-fixer; now imagine that you are also a mind that finds it very easy to sing MP3s...

The reason you think one problem is simple is that you perceive a solution in closed form; you can imagine a short program, ... (read more)

[anonymous]:
Then you're not solving the same optimization problem anymore. If the black box just had two outputs, "good" and "bad", then, yes, a black box that accepts fewer input sequences is going to be one that is harder to make accept. On the other hand, if the black box had some sort of metric on a scale from "bad" going up to "good", and the optimizer could update on the output each time, the sequence problem is still going to be much easier than the MP3 problem.
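
A rough sketch of the kind of feedback-driven search this is getting at (the score function for problem (1) and the flip-and-test loop are illustrative choices of mine, not anything from the original thread): each query to a graded box tells you whether a change helped, whereas a bare good/bad box gives you nothing to steer by until you hit a solution outright.

```python
import random

def equal_pair_score(bits):
    """Graded feedback for problem (1): count of adjacent pairs that already match."""
    return sum(bits[i] == bits[i + 1] for i in range(len(bits) - 1))

def local_search(score, n_bits, passes=5):
    """Flip-and-test local search driven by a graded score.

    Keeps any flip that does not lower the score, so it uses roughly
    n_bits * passes queries. With only a good/bad output there is no
    gradient to follow, and blind guessing faces ~2**n_bits candidates.
    """
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    current = score(bits)
    for _ in range(passes):
        for i in range(n_bits):
            bits[i] ^= 1                  # tentatively flip one bit
            trial = score(bits)
            if trial >= current:
                current = trial           # keep improvements and sideways moves
            else:
                bits[i] ^= 1              # revert a worsening flip
    return bits
```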

Eliezer,

I'm afraid that I'm not sure precisely what your measure is, and I think this is because you have given zero precise examples: even of its subcomponents. For example, here are two optimization problems:

1) You have to output 10 million bits. The goal is to output them so that no two consecutive bits are different.

2) You have to output 10 million bits. The goal is to output them so that when interpreted as an MP3 file, they would make a nice sounding song.

Now, the solution space for (1) consists of two possibilities (all 1s, all 0s) out of 2^10000000... (read more)

I agree with David's points about the roughness of the search space being a crucial factor in a meaningful definition of optimization power.

I'm not sure that I get this. Perhaps I understand the maths, but not the point of it. Here are two optimization problems:

1) You have to output 10 million bits. The goal is to output them so that no two consecutive bits are different.

2) You have to output 10 million bits. The goal is to output them so that when interpreted as an MP3 file, they would make a nice sounding song.

Now, the solution space for (1) consists of two possibilities (all 1s, all 0s) out of 2^10000000, for a total of 9,999,999 bits. The solution space for (2) is millions of times wider, ... (read more)
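
For what it's worth, here is a minimal sketch of the bit-counting behind the 9,999,999-bit figure, on the assumption (mine, for this sketch) that the measure in question is the negative log of the fraction of the outcome space occupied by the target:

```python
import math

def optimization_bits(log2_target_size, log2_total_size):
    """Bits of optimization = -log2(|target region| / |outcome space|).

    Work in log space: the raw counts (2 acceptable strings out of
    2**10_000_000) are far too extreme to hold in a float directly.
    """
    return log2_total_size - log2_target_size

# Problem (1): two acceptable strings (all 0s, all 1s) in a space of 2^10,000,000.
print(optimization_bits(math.log2(2), 10_000_000))   # 9999999.0
```

On this measure, a target that is, say, a million times wider needs only about log2(1,000,000) ≈ 20 fewer bits.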

But if you say "Shut up and do what seems impossible!", then that, to me, sounds like dispelling part of the essential message - that what seems impossible doesn't look like it "seems impossible", it just looks impossible.

"Shut up and do what seems impossible!" is the literally correct message. The other one is the exaggerated form. Sometimes exaggeration is a good rhetorical device, but it does turn off some serious readers.

"Don't do it, even if it seems right" sounds merely clever by comparison

This was my point. Th... (read more)

Eliezer,

Crossman and Crowley make very good points above, delineating three possible types of justification for some of the things you say:

1) Don't turn him in because the negative effects of the undermining of the institution will outweigh the benefits

2) Don't turn him in because [some non-consequentialist reason on non-consequentialist grounds]

3) Don't turn him in because you will have rationally/consequentialistically tied yourself to the mast, making it impossible to turn him in, in order to achieve greater benefits.

(1) and (3) are classic pieces of consequentialism,... (read more)

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do

Shut up and do the impossible!

As written, both these statements are conceptually confused. I understand that you didn't actually mean either of them literally, but I would advise against trading on such deep-sounding conceptual confusions.

You should never, ever do X, even if you are exceedingly confident that it is the right thing to do

This sounds less profound, but will actually be true for some value of X, unlike the first sentence or its derivatives. It sounds as profound as it is, and no more. I believe this is the right standard.

Eli:

It is certainly similar to those problems, but slightly different. For example, justifying Occam's Razor requires a bit more than we need here. In our case, we are just looking for a canonical complexity measure for finite strings. For Occam's Razor we also need to show that we have reason to prefer theories expressible by simpler strings to those specified by more complex strings. As an example, we already have such a canonical complexity measure for infinite strings. It is not perfect, as you might want some complexity measure defined with o-machines... (read more)

Shane:

Why not the standard approach of using Shannon's state x symbol complexity for Turing machines?

Why choose a Turing machine? They are clearly not a canonical mathematical entity, just a historical artifact. Their level of power is a canonical mathematical entity, but there are many Turing-equivalent models of computation. This just gets us simplicity relative to Turing machines where what we wanted was simplicity simpliciter (i.e. absolute simplicity). If someone came to you with a seemingly bizarre Turing-complete model, where the shortest program fo... (read more)
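
For reference (my framing of the standard result, not Shane's or Tim's), the invariance theorem is the only guarantee on offer here, and it pins complexity down only to within a machine-dependent constant:

```latex
% Invariance theorem: for any two universal machines U and V,
\[
K_U(x) \;\le\; K_V(x) + c_{U,V} \qquad \text{for every finite string } x .
\]
```

The constant c_{U,V} is independent of x but not of the machines, so a sufficiently bizarre reference machine can shift every complexity value by an arbitrarily large amount -- which is just the 'simplicity relative to what?' worry restated.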

Shane:

That's why a tiny reference machine is used.

I think that Tim is pointing out that there is no available mathematical measure for the 'tinyness' of this machine which is not circular. You seem to be saying that the machine looks simple to most people and that all other machines which people class as simple could be simulated on this machine within a few hundred bits. This has two problems. Firstly, it is not provable that all other machines which we class as similarly simple will be simulated within a few hundred bits as it is an empirical question wh... (read more)

Great! Now I can see several points where I disagree or would like more information.

1) Is X really asserting that Y shares his ultimate moral framework (i.e. that they would converge given time and arguments etc)?

If Y is a psychopath murderer who will simply never accept that he shouldn't kill, can I still judge that Y should refrain from killing? On the current form, to do so would involve asserting that we share a framework, but even people who know this to be false can judge that he shouldn't kill, can't they?

2) I don't know what it means to be the solu... (read more)

Eliezer,

I didn't mean that most philosophy papers I read have lots of mathematical symbols (they typically don't), and I agree with you that over-formalization can occur sometimes (though it is probably less common in philosophy than under-formalization). What I meant is the practice of clear and concise statements of the main points and attendant qualifications in the kind of structured English that good philosophers use. For example, I gave the following as a guess at what you might be meaning:

When X judges that Y should Z, X is judging that were she ful... (read more)

Eliezer,

I agree with most of the distinctions and analogies that you have been pointing out, but I still doubt that I agree with your overall position. No-one here can know whether they agree with your position because it is very much underdetermined by your posts. I can have a go at formulating what I see as the strongest objections to your position if you clearly enunciate it in one place. Oddly enough, the philosophy articles that I read tend to be much more technically precise than your posts. I don't mean that you couldn't write more technically pre... (read more)

Eliezer,

Sorry for not being more precise. I was actually asking what a given person's Q_P is, put in terms that we have already defined. You give a partial example of such a question, but it is not enough for me to tell what metaethical theory you are expressing. For example, suppose Mary currently values her own pleasure and nothing else, but that were she exposed to certain arguments she would come to value everyone's pleasure (in particular, the sum of everyone's pleasure) and that no other arguments would ever lead her to value anything else. This is o... (read more)

Thanks for responding to my summary attempt. I agree with Robin that it is important to be able to clearly and succinctly express your main position, as only then can it be subject to proper criticism to see how well it holds up. In one way, I'm glad that you didn't like my attempted summary as I think the position therein is false, but it does mean that we should keep looking for a neat summary. You currently have:

'I should X' means that X answers the question, "What will save my people? How can we all have more fun? How can we get more control over... (read more)

To cover cases where people are making judgments about what others should do, I could also extend this summary in a slightly more cumbersome way:

When X judges that Y should Z, X is judging that were she fully informed, she would want Y to Z

This allows X to be incorrect in her judgments (if she wouldn't want Y to Z when given full information). It allows for others to try to persuade X that her judgment is incorrect (it preserves a role for moral argument). It reduces 'should' to mere want (which is arguably simpler). It is, however, a conception of should that is judger-dependent: it could be the case that X correctly judges that Y should Z, while W correctly judges that Y should not Z.

Eliezer,

I've just reread your article and was wondering if this is a good quick summary of your position (leaving apart how you got to it):

'I should X' means that I would attempt to X were I fully informed.

Here 'fully informed' is supposed to include complete relevant empirical information and also access to all the best relevant philosophical arguments.

If there's a standard alternative term in moral philosophy then do please let me know.

As far as I know, there is not. In moral philosophy, when deontologists talk about morality, they are typically talking about things that are for the benefit of others. Indeed, they even have conversations about how to balance between self-interest and the demands of morality. In contrast, consequentialists have a theory that already accounts for the benefit of the agent who is doing the decision making: it counts just as much as anyone else. Thus for consequentialists, t... (read more)

Дмитрий Зеленский:
I would expect that libertarians' utility comes unexpectedly close to what Mr. Yudkowsky calls morality.
Vaughn Papenhausen:
I imagine you're probably aware of this in the meantime, but for Eliezer's benefit in case he isn't (and hopefully for the benefit of others who read this post and aren't as familiar with moral philosophy): I believe the term "normativity" is the standard term used to refer to the "sum of all valu(ation rul)es," and would probably be a good term for LessWrong to adopt for this purpose.
Peterdjones:
Typically, maybe, but not necessarily. There is no obvious contradiction in the idea of a rule of self-preservation or self-enhancement. Many consider suicide immoral, for instance. (I.e. one six-billionth, in the case of humans.)

wrongness flows backward from the shooting, as rightness flows backward from the button, and the wrongness outweighs the rightness.

I suppose you could say this, but if I understand you correctly, then it goes against common usage. Usually those who study ethics would say that rightness is not the type of thing that can add with wrongness to get net wrongness (or net rightness for that matter). That is, if they were talking about that kind of thing, they wouldn't use the word 'rightness'. The same goes for 'should' or 'ought'. Terms used for this kind of st... (read more)

There are some good thoughts here, but I don't think the story is a correct and complete account of metamorality (or as the rest of the world calls it: metaethics). I imagine that there will be more posts on Eliezer's theory later and more opportunities to voice concerns, but for now I just want to take issue with the account of 'shouldness' flowing back through the causal links.

'Shouldness' doesn't always flow backwards in the way Eliezer mentioned. e.g. Suppose that in order to push the button, you need to shoot someone who will fall down on it. This wou... (read more)

One thing to be aware of when considering logical fallacies is that there are two senses in which people count something as a fallacy. On the strict account, a fallacy is a form of argumentation whose premisses don't rule out all cases in which the conclusion is false (i.e. it is not deductively valid). Appeals to authority and considerations of the history of a claim are obviously fallacious in this sense. On the loose account, a fallacy is a form of argumentation that is deeply flawed. It is in this sense that appeals to authority and considerations of the history of a claim may not be fallacious, for they sometimes give us useful reasons to believe or disbelieve the claim. Such considerations don't confer deductive (logical) validity, but they do give Bayesian support.
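
To spell out that last sentence (a standard textbook identity, added here only for concreteness): writing E for "a recognised authority asserts H", the odds form of Bayes' theorem gives

```latex
\[
\frac{P(H \mid E)}{P(\neg H \mid E)}
  \;=\;
\frac{P(E \mid H)}{P(E \mid \neg H)}\cdot\frac{P(H)}{P(\neg H)} ,
\]
```

so as long as authorities are more likely to assert H when it is true than when it is false, the likelihood ratio exceeds 1 and the appeal to authority genuinely raises the probability of H, without being anything like a deductively valid argument.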

It all adds up to normality, in all the worlds.

Eliezer, you say this, and similar things a number of times here. They are, of course, untrue. There are uncountably many instances where, for example, all coins in history flip tails every time. You mean that it almost always adds up to normality and this is true. For very high abnormality, the measure of worlds where it happens is equal to the associated small probability.
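
To put a number on 'the associated small probability' (modelling each flip, for the sake of the example, as an independent equal-weight binary outcome):

```latex
\[
\mu(\text{all } N \text{ flips come up tails}) = 2^{-N} ,
\]
```

which is nonzero for every finite N, so such worlds exist, but utterly negligible by the time N covers all the coin flips in history.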

Regarding average utilitarianism, I also think this is a highly suspect conclusion from this evidence (and this is coming from a utilitarian philosopher). We can talk about this when you are in Oxford if you want: perhaps you have additional reasons that you haven't given here.

Suppose I take two atoms of helium-4 in a balloon, and swap their locations via teleportation.

For a book version, you will definitely want to be more precise here. I assumed they were in different quantum states (this seems a very reasonable assumption failing a specification to the contrary). Perhaps they had different spins, energies, momenta, etc. This means that the swapping did make sense.

Eliezer,

Very minor quibble/question. I assume you mean 2^Aleph_0 rather than Aleph_1. Unless one is doing something with the cardinals/ordinals themselves, it is almost always the numbers Aleph_0, 2^Aleph_0, 2^2^Aleph_0... that come up rather than Aleph_n. You may therefore like the convenient Beth numbers instead, where:

Beth_0 = Aleph_0
Beth_(n+1) = 2^Beth_n

I think Anonymous, Unknown and Eliezer have been very helpful so far. Following on from them, here is my take:

There are many ways Omega could be doing the prediction/placement and it may well matter exactly how the problem is set up. For example, you might be deterministic and he is precalculating your choice (much like we might be able to do with an insect or computer program), or he might be using a quantum suicide method, (quantum) randomizing whether the million goes in and then destroying the world iff you pick the wrong option (This will lead to us ... (read more)

Unknown, I agree entirely with your comments about the distinction between the idealised calculable probabilities and the actual error prone human calculations of them.

Nominull, I think you are right that the problem feels somewhat paradoxical. Many things do when considering actual human rationality (a species of 'bounded rationality' rather than ideal rationality). However, there is no logical problem with what you are saying. For most real world claims, we cannot have justifiable degrees of belief greater than one minus a billionth. Moreover, I don't h... (read more)

Carl, that is a good point. I'm not quite sure what to say about such cases. One thing that springs to mind, though, is that in realistic examples you couldn't have investigated each of those options to see if it was a real option, and even if you could, you couldn't be sure of all of that at once. You must know it through some more general principle whereby there is, say, an option per natural number up to a trillion. However, how certain can you be of that principle? That it isn't really up to only a million?

Hmmmm... Maybe I have an example that I can asse... (read more)

"The odds of that are something like two to the power of seven hundred and fifty million to one."

As Eliezer admitted, it is a very bad idea to ascribe probabilities like this to real world propositions. I think that the strongest reason is that it is just too easy for the presuppositions to be false or for your thinking to have been mistaken. For example, if I gave a five-line logical proof of something, that would supposedly mean that there is no chance that its conclusion is false given the premisses, but actually the chance that I would make a ... (read more)
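
As a worked illustration (the 1-in-1,000 per-line error rate is an assumption plugged in for the example, not a measured figure):

```latex
\[
P(\text{at least one slip in a five-line proof})
  = 1 - \left(1 - 10^{-3}\right)^{5} \approx 5\times 10^{-3}
  \;\gg\; 2^{-750{,}000{,}000} \approx 10^{-2.26\times 10^{8}} .
\]
```

The chance that the reasoning itself contains a mistake, not the astronomically small nominal figure, is what dominates any honest credence.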

There are certainly a lot of people who have been working on this problem for a long time. Indeed, since before computers were invented. Obviously I'm talking about moral philosophers. There is a lot of bad moral philosophy, but there is also a fair amount of very good moral philosophy tucked away in there -- more than one lifetime's worth of brilliant insights. It is tucked away well enough that I doubt Eliezer has encountered more than a little of it. I could certainly understand people thinking it is all rubbish by taking a reasonably large sample and com... (read more)

g, you have suggested a few of my reasons. I have thought quite a lot about this and could write many pages, but I will just give an outline here.

(1) Almost everything we want (for ourselves) increases our happiness. Many of these things evidently have no intrinsic value themselves (such as Eliezer's Ice-cream case). We often think we want them intrinsically, but on closer inspection, if we really ask whether we would want them if they didn't make us happy we find the answer is 'no'. Some people think that certain things resist this argument by having some... (read more)

Wei, yes my comment was less clear than I was hoping. I was talking about the distinction between 'psychological hedonism' and 'hedonism' and I also mentioned the many-person versions of these theories ('psychological utilitarianism' and 'utilitarianism'). Let's forget about the many-person versions for the moment and just look at the simple theories.

Hedonism is the theory that the only thing good for each individual is his or her happiness. If you have two worlds, A and B, and the happiness for Mary is higher in world A, then world A is better for Mary. Thi... (read more)

Eliezer,

There is potentially some confusion on the term 'value' here. Happiness is not my ultimate (personal) end. I aim at other things, which in turn bring me happiness, and, as many have said, this brings me more happiness than if I aimed at happiness directly. In this sense, it is not the sole object of (personal) value to me. However, I believe that the only thing that is good for a person (including me) is their happiness (broadly construed). In that sense, it is the only thing of (personal) value to me. These are two different senses of value.

Psychological hedonists a... (read more)

[anonymous]:
What use is a system of "morality" which doesn't move you? Often, for me at least, when something I want to do conflicts with what I know is the right thing to do, I feel sad when I don't do the right thing. I would feel almost no remorse, if any, about not taking the pill.

Robin:

Which correlation studies are you talking about? We would actually need considerable evidence to suggest that aid is net harmful, or very inefficient. I haven't seen anything to suggest this. Even if it has net zero financial effect, that doesn't mean it isn't amazingly efficient at producing health benefits, etc. I was very unimpressed with the standard of those Spiegel pieces, especially the interview.

I certainly think we need much more focus on efficiency of aid (as you know I'm spending much of my time starting an organization to see to this) and also more ran... (read more)

Jeff Gray:

It is easy to get blinded by large numbers, but trillions of dollars over 50 years over billions of people is not very much -- just $20 per person per year or so. It is not surprising that this hasn't industrialised the rest of the world over that period of time. It is an enormous problem and even if tackled very efficiently, it will take trillions more before the gap closes. I strongly suggest using 'dollars per person per year' as the unit to see the relative scales of things.
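
With illustrative round numbers (chosen purely for scale -- not the actual aid totals):

```latex
\[
\frac{\$2\times 10^{12}}{(2\times 10^{9}\ \text{people})\times(50\ \text{years})} = \$20\ \text{per person per year}.
\]
```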

Eliezer: I'm afraid you've got this one quite wrong. I can elaborate further in the future, but for now I'll just expand upon what Carl wrote:

Total aid to Sub-Saharan Africa (SSA) from 1950 onwards = $568 billion (according to Easterly)

(I'm just going to look at things up to 1990, as life expectancy data gets skewed by AIDS at that point. The $568 billion figure is thus an overestimate of the money spent up to 1990, which is the conservative assumption for my purposes.)

Average population in SSA (1950-1990) = 317 million

Life Expectancy in SSA according to World Population Prospects (i.e. the UN estimates) = 37.6 in ... (read more)
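
From the figures just quoted (treating the full $568 billion as if it were all spent over the 40 years to 1990, which is the conservative direction), the spending works out to roughly:

```latex
\[
\frac{\$568\times 10^{9}}{(317\times 10^{6}\ \text{people})\times(40\ \text{years})} \approx \$45\ \text{per person per year}.
\]
```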