OK. That makes more sense then. I'm not sure why you call it 'Fun Theory' though. It sounds like you intend it to be a theory of 'the good life', but a non-hedonistic one. Strangely it is one where people having 'fun' in the ordinary sense is not what matters, despite the name of the theory.
This is a moral theory about what should be fun
I don't think that can be right. You are not saying that there is a moral imperative for certain things to be fun, or to not be fun, as that doesn't really make sense (at least I can't make sense of it). You are instead say...
Eliezer,
Are you saying that one's brain state can be identical in two different scenarios but that you are having a different amount of fun in each? If so, I'm not sure you are talking about what most people call fun (i.e. a property of your experiences). If not, then what quantity are you talking about in this post, where you have less of it if certain counterfactuals are true?
I would drop dead of shock
Eliezer, just as it was interesting to ask what probability estimate 'Nuts!' amounted to, I think it would be very useful for the forum of Overcoming Bias to ask what your implicit probability estimate is for a 500-state TM being able to solve the halting problem for all TMs of up to 50 states.
I imagine that 'I would drop dead of shock' was intended to convey a probability of less than 1 in 10,000, or maybe 1 in 1,000,000?
Sorry, I didn't see that you had answered most of this question in the other thread where I first asked it.
Toby, if you were too dumb to see the closed-form solution to problem 1, it might take an intense effort to tweak the bit on each occasion, or perhaps you might have trouble turning the global criterion of total success or failure into a local bit-fixer; now imagine that you are also a mind that finds it very easy to sing MP3s...
The reason you think one problem is simple is that you perceive a solution in closed form; you can imagine a short program, ...
Eliezer,
I'm afraid that I'm not sure precisely what your measure is, and I think this is because you have given no precise examples, even of its subcomponents. For example, here are two optimization problems:
1) You have to output 10 million bits. The goal is to output them so that no two consecutive bits are different.
2) You have to output 10 million bits. The goal is to output them so that when interpreted as an MP3 file, they would make a nice sounding song.
Now, the solution space for (1) consists of two possibilities (all 1s, all 0s) out of 2^10000000...
I agree with David's points about the roughness of the search space being a crucial factor in a meaningful definition of optimization power.
I'm not sure that I get this. Perhaps I understand the maths, but not the point of it. Here are two optimization problems:
1) You have to output 10 million bits. The goal is to output them so that no two consecutive bits are different.
2) You have to output 10 million bits. The goal is to output them so that when interpreted as an MP3 file, they would make a nice sounding song.
Now, the solution space for (1) consists of two possibilities (all 1s, all 0s) out of 2^10000000, for a total of 9,999,999 bits. The solution space for (2) is millions of times wider, ...
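For concreteness, the bit-counting above can be checked with a quick sketch (Python; the solution count for problem 2 is a made-up stand-in, chosen only to illustrate that even a vastly wider solution space barely moves the measure):

```python
import math

TOTAL_BITS = 10_000_000

# Problem 1: "no two consecutive bits differ" forces every bit to match
# its neighbour, so only all-0s and all-1s qualify.
solutions_1 = 2
power_1 = TOTAL_BITS - math.log2(solutions_1)   # 9,999,999 bits

# Problem 2: suppose (purely as an illustrative assumption) that ten
# million times as many bit-strings decode to a nice-sounding song.
solutions_2 = solutions_1 * 10_000_000
power_2 = TOTAL_BITS - math.log2(solutions_2)
```

Even with a solution space ten million times wider, power_2 is still about 9,999,976 bits: the measure barely separates the two problems, which is the puzzle being raised.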
But if you say "Shut up and do what seems impossible!", then that, to me, sounds like dispelling part of the essential message - that what seems impossible doesn't look like it "seems impossible", it just looks impossible.
"Shut up and do what seems impossible!" is the literally correct message. The other one is the exaggerated form. Sometimes exaggeration is a good rhetorical device, but it does turn off some serious readers.
"Don't do it, even if it seems right" sounds merely clever by comparison
This was my point. Th...
Eliezer,
Crossman and Crowley make very good points above, delineating three possible types of justification for some of the things you say:
1) Don't turn him in because the negative effects of the undermining of the institution will outweigh the benefits
2) Don't turn him in because [some non-consequentialist reason on non-consequentialist grounds]
3) Don't turn him in because you will have rationally (on consequentialist grounds) tied yourself to the mast, making it impossible to turn him in, in order to achieve greater benefits.
(1) and (3) are classic pieces of consequentialism,...
You should never, ever murder an innocent person who's helped you, even if it's the right thing to do
Shut up and do the impossible!
As written, both these statements are conceptually confused. I understand that you didn't actually mean either of them literally, but I would advise against trading on such deep-sounding conceptual confusions.
You should never, ever do X, even if you are exceedingly confident that it is the right thing to do
This sounds less profound, but will actually be true for some value of X, unlike the first sentence or its derivatives. It sounds as profound as it is, and no more. I believe this is the right standard.
Eli:
It is certainly similar to those problems, but slightly different. For example, justifying Occam's Razor requires a bit more than we need here. In our case, we are just looking for a canonical complexity measure for finite strings. For Occam's Razor we also need to show that we have reason to prefer theories expressible by simpler strings to those specified by more complex strings. As an example, we already have such a canonical complexity measure for infinite strings. It is not perfect, as you might want some complexity measure defined with o-machines...
Shane:
Why not the standard approach of using Shannon's state x symbol complexity for Turing machines?
Why choose a Turing machine? They are clearly not a canonical mathematical entity, just a historical artifact. Their level of power is a canonical mathematical entity, but there are many Turing-equivalent models of computation. This just gets us simplicity relative to Turing machines where what we wanted was simplicity simpliciter (i.e. absolute simplicity). If someone came to you with a seemingly bizarre Turing-complete model, where the shortest program fo...
Shane:
That's why a tiny reference machine is used.
I think that Tim is pointing out that there is no available mathematical measure for the 'tinyness' of this machine which is not circular. You seem to be saying that the machine looks simple to most people and that all other machines which people class as simple could be simulated on this machine within a few hundred bits. This has two problems. Firstly, it is not provable that all other machines which we class as similarly simple will be simulated within a few hundred bits, as it is an empirical question wh...
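The circularity point can be stated via the standard invariance theorem of Kolmogorov complexity: for any two universal machines U and V there is a constant c_{UV}, depending on the pair of machines but not on the string, with

```latex
K_U(x) \;\le\; K_V(x) + c_{UV} \quad \text{for all strings } x.
```

So complexity is pinned down only up to an additive machine-dependent constant; the claim that natural-looking machines simulate one another 'within a few hundred bits' is a claim about the size of c_{UV}, and that is exactly the empirical, unprovable part.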
Great! Now I can see several points where I disagree or would like more information.
1) Is X really asserting that Y shares his ultimate moral framework (i.e. that they would converge given time and arguments etc)?
If Y is a psychopath murderer who will simply never accept that he shouldn't kill, can I still judge that Y should refrain from killing? On the current formulation, to do so would involve asserting that we share a framework, but even people who know this to be false can judge that he shouldn't kill, can't they?
2) I don't know what it means to be the solu...
Eliezer,
I didn't mean that most philosophy papers I read have lots of mathematical symbols (they typically don't), and I agree with you that over-formalization can occur sometimes (though it is probably less common in philosophy than under-formalization). What I meant is the practice of clear and concise statements of the main points and attendant qualifications in the kind of structured English that good philosophers use. For example, I gave the following as a guess at what you might be meaning:
When X judges that Y should Z, X is judging that were she ful...
Eliezer,
I agree with most of the distinctions and analogies that you have been pointing out, but I still doubt that I agree with your overall position. No-one here can know whether they agree with your position because it is very much underdetermined by your posts. I can have a go at formulating what I see as the strongest objections to your position if you clearly enunciate it in one place. Oddly enough, the philosophy articles that I read tend to be much more technically precise than your posts. I don't mean that you couldn't write more technically pre...
Eliezer,
Sorry for not being more precise. I was actually asking what a given person's Q_P is, put in terms that we have already defined. You give a partial example of such a question, but it is not enough for me to tell what metaethical theory you are expressing. For example, suppose Mary currently values her own pleasure and nothing else, but that were she exposed to certain arguments she would come to value everyone's pleasure (in particular, the sum of everyone's pleasure) and that no other arguments would ever lead her to value anything else. This is o...
Thanks for responding to my summary attempt. I agree with Robin that it is important to be able to clearly and succinctly express your main position, as only then can it be subject to proper criticism to see how well it holds up. In one way, I'm glad that you didn't like my attempted summary as I think the position therein is false, but it does mean that we should keep looking for a neat summary. You currently have:
'I should X' means that X answers the question, "What will save my people? How can we all have more fun? How can we get more control over...
To cover cases where people are making judgments about what others should do, I could also extend this summary in a slightly more cumbersome way:
When X judges that Y should Z, X is judging that were she fully informed, she would want Y to Z
This allows X to be incorrect in her judgments (if she wouldn't want Y to Z when given full information). It allows for others to try to persuade X that her judgment is incorrect (it preserves a role for moral argument). It reduces 'should' to mere want (which is arguably simpler). It is, however, a conception of should that is judger-dependent: it could be the case that X correctly judges that Y should Z, while W correctly judges that Y should not Z.
the value of this memory card, was worth more than the rest of the entire observable universe minus the card
I doubt this would be true. I think the value of the card would actually be close to zero (though I'm not completely sure). It does let one solve the halting problem up to 10,000 states, but it does so in time and space complexity O(busy_beaver(n)). In other words, using the entire observable universe as computing power and the card as an oracle, you might be able to solve the halting problem for 7-state machines or so. Not that good... The same goes...
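To make the complexity claim concrete, here is a minimal sketch of how such a card would be used: tabulating the Busy Beaver 'shift' function S(n) turns halting into a bounded simulation, but the bound is S(n) steps. The toy 2-symbol simulator and machine encodings below are illustrative assumptions, not anything from the post.

```python
def simulate(tm, max_steps):
    """Run a 2-symbol Turing machine and report whether it reaches the
    halt state 'H' within max_steps transitions.  A machine is encoded
    as {(state, symbol): (write, move, next_state)} -- a toy encoding
    chosen for illustration."""
    tape, pos, state = {}, 0, 'A'
    for _ in range(max_steps):
        if state == 'H':
            return True
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return state == 'H'

# The card, in effect, tabulates the Busy Beaver "shift" function S(n):
# the most steps any *halting* n-state machine can take.  Known small values:
S = {1: 1, 2: 6}

def halts(tm, n_states):
    # If a machine hasn't halted within S(n) steps, it never will -- but
    # running the simulation costs O(busy_beaver(n)) time and space.
    return simulate(tm, S[n_states])

# The 2-state busy beaver champion (halts after exactly 6 steps):
bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
       ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}

# A 1-state machine that just moves right forever:
loop1 = {('A', 0): (0, 'R', 'A'), ('A', 1): (0, 'R', 'A')}
```

Here halts(bb2, 2) comes out True and halts(loop1, 1) comes out False; scaling the same procedure to even 7 states already dwarfs any physically available computation.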