How is it that Solomonoff Induction, and by extension Occam's Razor, is justified in the first place? Why are hypotheses with higher Kolmogorov complexity less likely to be true than those with lower Kolmogorov complexity? If it is justified by the fact that it has "worked" in the past, does that not require Solomonoff induction to justify the claim that it has worked, in the sense that you need to verify that your memories are true, and thus involve circular reasoning?
There are more hypotheses with high complexity than with low complexity, so if you want your probabilities to sum to 1, it is mathematically necessary (broadly speaking and in general; obviously you can make particular exceptions) to assign lower probabilities to the high-complexity cases than to the low-complexity cases: you are summing an infinite series, and for it to converge to a limit, the terms in the series must be generally decreasing.
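A sketch of how the standard prefix-free formulation of the Solomonoff prior makes this precise: a hypothesis is a program $p$ of length $\ell(p)$ bits, weighted by $2^{-\ell(p)}$, and Kraft's inequality keeps the total weight bounded:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad \sum_{p} 2^{-\ell(p)} \;\le\; 1 .$$

Since there are at most $2^n$ programs of length $n$, the weight any individual hypothesis can receive necessarily shrinks as its shortest program grows longer.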
When pushed on why he is out interviewing people, Anthony Magnabosco responds with, "I like talking to people and finding out what they believe." True enough, but disingenuous. He presents himself as a seeker of truth, yet his root goal is to change minds. If obtaining the truth were his primary motivation, street interviews would be an incredibly inefficient method. The interviews come off as incredibly patronising, with questions such as, "If I gave you evidence about a biblical contradiction, and I'm not saying I do, but if I did, would you change your mind?" Of course you have a contradiction up your sleeve.
Honesty and effectiveness appear to be conflicting goals in street epistemology.
I don't see any substantial evidence from the videos (at least the ones I bothered to watch) that he was changing anyone's mind. Once I had a discussion with a group of Mormons. I reduced them to saying repeatedly, "well, I don't know what to say about that." At the end I basically lectured them for 10 minutes about how bad it is to believe a false religion, and they were silent. But I have no reason to believe that any of them changed their minds to even the slightest degree. I would guess that these videos are the same thing.
Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code... if you had the luxury of running with infinite compute resources and allowed some handwavery around defining utility functions. The real challenge isn't solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice / conforming to the constraints of the real world.
"A few short lines of code..."
AIXI is not computable.
If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.
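For reference, the "few short lines" amount roughly to Hutter's expectimax expression (a sketch, with $m$ the horizon, $U$ a universal prefix Turing machine, and the inner sum ranging over all programs $q$ consistent with the observed history; that inner sum over all programs is exactly the incomputable part):

$$a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_t + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} .$$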
If I correctly read what you're saying, you're basically asserting that you can be sad because your belief structure has changed. That implies having meta-beliefs, which are extremely dangerous from a rationality point of view.
As a rationalist, you should keep your identity very small and not presuppose anything about the general structure of what your beliefs should look like, because you will almost certainly be wrong (and disappointed).
Consider this extreme example.
On the other hand, I don't understand the Paul quote: we're saying exactly that you shouldn't be judgemental (about reality, that is). Obviously we're talking about ideals to aspire to, as with everything in this forum and in self-improvement.
No, it does not imply that you have "meta-beliefs", although everyone does. It implies that your beliefs affect the world, as for example by making you say things. If your beliefs affect the world, changing your beliefs will both change the part of the world which is your beliefs, and also other parts of the world. All of that has the possibility of making you sad.
This is all perfectly obvious, and I should not have to bring up examples from real life.
No, I was making a reference to the Litany of Tarski.
When you ask "how do I forget rationality?", it seems to me that you're asking how to go back to deceiving yourself. After all, rationality is the adherence of beliefs to reality, and there's nothing that subtracts you joy by changing your beliefs so that they are more in tune with reality: after all, reality was there all along.
Perhaps pondering the joy of the merely real could help.
"Reality was there all along." The fact that someone believes something is part of reality, and if it changes, then reality is changing. There is no reason that this cannot take away some joy from someone, even if their beliefs end up less accurate.
As St. Paul said, "For in passing judgment on another you condemn yourself, because you, the judge, practice the very same things." Asserting that conforming your beliefs to reality cannot make you less joyful is itself a form of wishful thinking in which you refuse to conform your beliefs to reality.
Well, now you've got me curious. What other things is a processor doing when executing a program?
I gave the example of following gravity, and in general it is following all of the laws of physics, e.g. by resisting the pressure of other things in contact with it, and so on. Of course, the laws of physics are also responsible for it executing the program. But that doesn't mean the laws of physics do nothing at all except execute the program -- evidently they do plenty of other things as well. And you are not in control of those things and cannot program them. So they will not all work out to promote paperclips, and the thing will always feel desires that have nothing to do with paperclips.
They want to steer the future in a different direction than what I want, so by definition they have different values (they might be instrumental values, but those are important too).
Ok, but in this sense every human being has different values by definition, and always will.
Of course, everything is a physical object. What I'm curious about, regarding your position, is whether you think you can put any algorithm inside a piece of hardware or not.
I'm afraid that your position on the matter is so out there for me that without a toy model I wouldn't be able to understand what you mean. The recursive nature of the comments doesn't help, either.
You can put any program you want into a physical object. But since it is a physical object, it will do other things in addition to executing the algorithm.
Time to rebuild a library
My 5-terabyte hard drive went poof this morning, and silly me hadn't bought data-recovery insurance. Fortunately, I still have other copies of all my important data, and it'll just take a while to download everything else I'd been collecting.
Which brings up the question: What info do you feel it's important to have offline copies of, gathered from the whole gosh-dang internet? A recent copy of Wikipedia and the Project Gutenberg DVD are the obvious starting places... which other info do you think pays the rent of its storage space?
I have two hard drives, one larger than the other, with the smaller being backed up to the larger. When the smaller drive fills up, or when it fails, I simply buy a new, still larger drive, and the previous larger drive becomes the new smaller one. So far there is no end in view to this process.
I also have everything in an encrypted online backup with CrashPlan.
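A minimal sketch of that mirroring step, assuming rsync is installed and using hypothetical mount points for the two drives:

```python
# Mirror the smaller (working) drive onto the larger (backup) drive.
# The mount points below are hypothetical placeholders.
import subprocess

WORKING = "/mnt/working"   # smaller drive holding the live data
BACKUP = "/mnt/backup"     # larger drive holding the mirror

def mirror(src: str, dst: str) -> None:
    """Copy src onto dst, deleting anything on dst that no longer exists on src."""
    subprocess.run(["rsync", "-a", "--delete", f"{src}/", f"{dst}/"], check=True)

if __name__ == "__main__":
    mirror(WORKING, BACKUP)
```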
But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, as opposed to each consecutive hypothesis having an arbitrary complexity level?
gjm's explanation is correct.
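One way to spell out why the ordering cannot be arbitrary (a sketch of the counting argument): a normalized distribution can give probability greater than any fixed $\varepsilon > 0$ to only finitely many hypotheses,

$$\sum_{i} p_i = 1 \;\Longrightarrow\; \#\{\, i : p_i > \varepsilon \,\} < \tfrac{1}{\varepsilon} \quad \text{for every } \varepsilon > 0 ,$$

while there are fewer than $2^{n+1}$ programs of length at most $n$ bits, so only finitely many hypotheses sit below any complexity bound. Put together, probability must trend toward zero as complexity grows; you can promote any particular high-complexity hypothesis, but not infinitely many of them at once.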