If an omnipotent being wants you to believe something that isn't true, and is willing to use its omnipotence to convince you of that untruth, then there is nothing you can do about it. No observation suffices to prove that an omnipotent being is telling the truth, as a malevolent omnipotence could make you believe literally anything: that you made any given observation, or didn't, that impossible things make sense, or that sensible things are impossible.

This is one of a larger class of questions where one possible answer leaves you unable to trust your own knowledge at all. There is nothing you can do with any of them except smile at your own limitations and assume that the self-defeating answer is wrong.

And of course you can throw black holes into black holes as well, and extract even more energy. The end game is when you have just one big black hole and nothing left to throw into it. At that point you have to change strategy and wait for the black hole to give off Hawking radiation until it completely evaporates.
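
For a sense of the timescale, here's a back-of-the-envelope sketch using the standard Hawking evaporation formula t ≈ 5120πG²M³/(ħc⁴) for a Schwarzschild black hole (the billion-solar-mass figure is just an illustrative choice of mine):

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time(mass_kg):
    """Hawking evaporation time (seconds) for a Schwarzschild black hole."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# One big black hole of a billion solar masses:
print(f"{evaporation_time(1e9 * M_SUN) / YEAR:.1e} years")  # ~2e94 years
```

So "wait" is doing a lot of work here: the last black hole takes something like 10^94 years to finish evaporating.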

But all these things can happen later - there's no reason not to go through a paperclip maximization step first, if you're that way inclined...

If your definition of "truth" is such that any method of finding it is as good as any other, then the scientific method really is no better than anything else at finding it. Of course, most of the "truths" you find that way won't bear much resemblance to what you'd get if you only used the scientific method.

My own definition - proto-science is something put forward by someone who knows the scientific orthodoxy in the field, suggesting that some idea might be true. Pseudo-science is something put forward by someone who doesn't know the scientific orthodoxy, asserting that something is true.

Testing which category any particular claim falls into is, in my experience, relatively straightforward if you already know the scientific orthodoxy, as a pseudoscientist's idea will normally be considered flatly false in certain respects by those who know it. A genuine challenger to the orthodoxy will at least tell you that they know they are being unorthodox, and why - a pseudoscientist will simply assert something else without any suggestion that their point is even unusual. This is often the easiest way to tell the two apart.

If you don't know the orthodoxy it's much harder to tell, but generally speaking pseudo-science can also be distinguished in a couple of other ways.

Socially - proto-science advocates generally have a relevant degree and tend to keep the company of other scientists. Pseudo-science advocates often have a degree, but advocate a theory unrelated to it, and are not part of anything much.

Proof - pseudo-science appeals to common sense for proof, whereas proto-science only tries to explain rather than persuade. Pseudo-science can normally be explained perfectly well in English, whereas proto-science typically requires at least some mathematics if you want to understand it properly.

Both look disappointingly similar once they've been mangled by a poor scientific journalist - go back to the original sources if you really need to know!

In cases like this, where we want to drive the probability that something is true as high as possible, we are always left with an incomputable bit.

The bit that can't be computed is - am I sane? The fundamental problem is that there are (we presume) two kinds of people: sane people, and mad people who only think that they are sane. Those mad ones of course come up with mad arguments which show that their sanity is just fine. They may even have supporters who tell them they are perfectly normal - some of them possibly hallucinatory. How can I show which category I am in? Perhaps instead I am mad, and too mad to know it!

Only mad people can prove that they are sane - the rest of us don't know for sure one way or the other, as every argument in the end comes back to the problem that I have to judge whether it's a good argument or not, and whether I am in any position to judge that correctly is exactly the point at issue.

It's quite easy, when trying to prove that 53 must be prime, to get to the position where this problem is the largest remaining issue, but I don't think it's possible to put a number on it. In practice of course I discount the problem entirely as there's nothing I can do about it. I assume I'm fallibly sane rather than barking crazy, and carry on regardless.
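
The primality check itself is at least mechanical - a minimal trial-division sketch:

```python
def is_prime(n):
    """Trial division: n is prime iff no integer in [2, sqrt(n)] divides it."""
    return n >= 2 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

print(is_prime(53))  # True - no divisor among 2..7, since sqrt(53) < 8
```

It's the step from "the computation said so" to "and the computation, and I, can be trusted" that resists being given a number.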

I suppose we all came across Bayesianism from different points of view - my list is quite a bit different.

For me the biggest one is that the degree to which I should believe in something is basically determined entirely by the evidence, and IS NOT A MATTER OF CHOICE or personal belief. If I believe a proposition with degree of probability X, and then observe some evidence Y bearing on it, the degree of probability Z with which I should then believe it is a mathematical matter, and not a "matter of opinion."
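
Concretely, the mathematical matter is just Bayes' theorem (a minimal statement; H and E for hypothesis and evidence are my notation, not anything special):

```latex
% The posterior is fixed by the prior and the likelihoods - no choice involved.
\[
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                     {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
```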

The prior seems to be a get-out clause here, but since all updates are in principle layered on top of the first prior I had before receiving any evidence of any kind, it surely seems a mistake to give it too much weight.

My own personal view is also that often it's not optimal to update optimally. Why? Lack of computing power between the ears. Rather than straining the grey matter to get the most out of the evidence you have, it's often best to just go out and get more evidence to compensate. Quantity of evidence beats out all sorts of problems with priors or analysis errors, and makes it more difficult to reach the wrong conclusions.
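
A toy simulation of that point (entirely my own construction - two observers with wildly different priors watching the same biased coin):

```python
import random

random.seed(0)

# Hypothetical setup: H = "the coin lands heads 70% of the time",
# alternative = fair coin. Two observers start from opposite priors.
P_HEADS_H, P_HEADS_ALT = 0.7, 0.5

def update(prior, heads):
    """One Bayesian update on a single coin flip."""
    like_h = P_HEADS_H if heads else 1 - P_HEADS_H
    like_alt = P_HEADS_ALT if heads else 1 - P_HEADS_ALT
    return like_h * prior / (like_h * prior + like_alt * (1 - prior))

sceptic, believer = 0.01, 0.99
for _ in range(200):                     # plenty of evidence
    heads = random.random() < P_HEADS_H  # the coin really is biased
    sceptic = update(sceptic, heads)
    believer = update(believer, heads)

print(f"sceptic: {sceptic:.4f}, believer: {believer:.4f}")  # both near 1
```

After a couple of hundred flips the starting priors are all but irrelevant - which is the sense in which quantity of evidence compensates for them.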

On a non-Bayesian note, I have a rule to be careful of cases which consist of lots of small bits of evidence combined together. This looks fine mathematically until someone points out all the little bits of evidence pointing somewhere else which I ignored or never even saw. Selection effects apply more strongly to cases which consist of lots of little parts.

If you have the chance to actually do the Bayesian mathematics rather than working informally with the brain, then of course you can update exactly as you should, and use lots of little bits of evidence to form a case. But without a formal framework you can expect your innate wetware to mess up this type of analysis.
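
In a formal framework the accumulation is trivial - you just add log likelihood ratios - and the selection effect from the previous paragraph shows up as what happens when you only add the favourable ones (illustrative numbers of my own):

```python
import math

# Hypothetical case: twenty small, independent pieces of evidence, each
# with a modest likelihood ratio for (>1) or against (<1) hypothesis H.
likelihood_ratios = [1.3, 0.8, 1.2, 0.7, 1.4, 0.9, 1.1, 0.6, 1.3, 0.8,
                     1.2, 0.7, 1.5, 0.9, 1.1, 0.8, 1.3, 0.7, 1.2, 0.9]

def posterior_log_odds(prior_log_odds, ratios):
    """Formal update: add the log likelihood ratios to the prior log-odds."""
    return prior_log_odds + sum(math.log(r) for r in ratios)

full = posterior_log_odds(0.0, likelihood_ratios)
cherry_picked = posterior_log_odds(0.0, [r for r in likelihood_ratios if r > 1])

print(f"all the evidence:   {full:+.2f} log-odds")           # close to zero
print(f"favourable only:    {cherry_picked:+.2f} log-odds")  # looks like a strong case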

Congratulations - this is what it's like to go from the lowest level of knowledge (knows nothing, and knows not that he knows nothing) to the second lowest (knows nothing, but at least knows that he knows nothing).

The practical solution to this problem is that, in any decent organisation, there are people much more competent than these two levels, and it has been obvious to them that you know nothing for much longer than it has been obvious to you. Their expectations will be set accordingly, and they will probably help you out - if you're willing to take some advice.

Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive a reply to your command, thus:

"The command did not. But your attitude - I shall have to make an example of you."

Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.

So you're sure I'm not out of the box already? IRC clients have bugs, you see.

Since you're trying to put numbers on something which many of us regard as being certainly true, I'll take the liberty of slightly rephrasing your question.

How much confidence do I place in the scientific theory that ordinary matter is not infinitely divisible? In other words, that it is not true that no matter how small an amount of water I have, I can make a smaller amount by dividing it?
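
To give the question some scale - a back-of-the-envelope sketch, assuming standard atomic theory and an ordinary ~0.05 g drop (my illustrative numbers):

```python
import math

# How many times can a drop of water be halved before reaching one molecule?
AVOGADRO = 6.022e23      # molecules per mole
MOLAR_MASS = 18.0        # g/mol for water
drop_grams = 0.05        # a typical small drop

molecules = drop_grams / MOLAR_MASS * AVOGADRO
print(f"{molecules:.2e} molecules")            # ~1.7e21
print(f"{math.log2(molecules):.0f} halvings")  # ~70, then the division stops
```

If atomic theory is right, the dividing ends after roughly seventy halvings.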

I am (informally) quite certain that water is not infinitely subdivisible. I don't think it's a very useful activity for me to try to put numbers on that, though. The problem is that in many of the more plausible scenarios I can think of where I'm mistaken about this, I'm also barking mad, and my numerical ability seems as likely to be affected by that as my ability to reason about atomic theory. I would need to be in the too-crazy-to-know-I'm-crazy category - and probably in the physics-crank-with-many-imaginary-friends category as well. Even then, no known kind of madness seems sufficient to make me that wrong.

The problem here is that I can reach no useful conclusions on the assumption that I am that badly mistaken. The main remaining uncertainty is whether my logical mind is fundamentally broken in a way I can neither detect nor fix. It's not easy to estimate the likelihood of that, and it's essentially the same likelihood for a whole suite of apparently obvious things. I don't even bother to estimate the number, as I can't do anything useful with it.
