I'm very confused. Of course if φ is provable then it's true. That's the whole point of using proofs.
Yes, but it may be true without being provable.
I'm very confused about something related to the Halting Problem. I discussed it on IRC with some people, but I couldn't get across what I meant very well, so I wrote up something a bit longer and a bit more formal.
The gist of it is that the Halting Problem lets us prove, for a specific counterexample, that there cannot exist any proof of whether it halts: a proof that it does or does not halt causes a paradox.
But if it's true that there doesn't exist a proof that it halts, then the program will run forever searching for one. Therefore I've proved that the program will not halt. Therefore a proof that it doesn't halt does exist (this one), and the program will eventually find it, creating a paradox.
Just calling the problem undecidable doesn't actually solve anything: if you can prove it's undecidable, that proof creates the same paradox. And if no Turing machine can know whether or not a program halts, and we are also Turing machines, then we can't know either.
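To make the construction concrete, here is a minimal sketch in Python. `is_valid_proof` is a hypothetical stand-in for a mechanical proof checker over some fixed formal system (say, Peano Arithmetic), so take this as an illustration of the self-referential trick rather than a working implementation:

```python
from itertools import count

def is_valid_proof(candidate: int, statement: str) -> bool:
    """Hypothetical stand-in for a mechanical proof checker. A real
    checker would decode `candidate` and verify that it is a valid
    proof of `statement` in the chosen formal system; this stub
    rejects everything, so the search below never ends."""
    return False

def counterexample() -> None:
    # Enumerate every candidate proof in order.
    for candidate in count():
        if is_valid_proof(candidate, "counterexample() halts"):
            while True:  # found a proof that we halt: loop forever
                pass
        if is_valid_proof(candidate, "counterexample() does not halt"):
            return       # found a proof that we don't halt: halt
```

A proof that it halts makes it loop forever, a proof that it doesn't halt makes it halt, and if neither proof exists the search never terminates.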
But if it's true that there doesn't exist a proof that it halts, then it will run forever searching for one.
No; provable and true are not the same thing. It may be the case that the program halts, but it is nevertheless impossible to prove that it halts except by "run it and see", which doesn't count.
I have to be honest: I hadn't considered that angle yet (I tend to create ideas first, then hone them and remove issues).
The first point is that this was just an example, the first one to occur to me, and we can certainly find safer examples or improve this one.
The second is that torture is very unlikely - death, maybe painful death, but not deliberate torture.
The third is that I know some people who might be willing to go through with this, if it cured cancer throughout the world.
But I will have to be more careful in these issues in future, thanks.
I admit I was using the word 'torture' rather loosely. However, unless the AI is explicitly instructed to use anesthesia before any cutting is done, I think we can safely replace it with "extended periods of very intense pain".
As a first pass at a way of safely boxing an AI, though, it's not bad at all. Please continue to develop the idea.
If the excellent simulation of a human with cancer is conscious, you've created a very good torture chamber, complete with mad vivisectionist AI.
Do you still get to do some science?
I sold out to the Dark Side in 2014. This was a move between industry jobs. But, actually, the new one is somewhat more in the direction of data-gathering than the old one was.
I think the point of the quote is that in the first case you have five methods you can use to attack different problems. In the second case you only have one method, and you have to hope every problem is a nail.
Nu, but a method that has already been used on five problems seems to be pretty good at converting problems into nails. :)
It is better to solve one problem five different ways, than to solve five problems one way
George Pólya, or at least attributed to him; I am unable to find the exact source, despite the quote being widely cited in texts on mathematics education and problem solving in general.
Not sure that generalises outside of math. Is it really better to solve one problem really, really thoroughly than to have a good-enough fix for five? Depends on the problems, perhaps - but without knowing anything else, I'd rather solve five than one.
That assertion isn't actually true, in the strong form in which he intends it. Even if you rely on the vagueness of "morals", it's certainly not true for legislation.
Bentham is using Enlightenment shorthand; he means "good, just, natural-law-following legislation". He's not talking about the actual sausages that we get from real legislatures.
I got a new job! It pays better than the old one.
There are two options: Either we have terminal goals that include "having a good time" and "living enjoyable lives", so that a pleasant life is good in itself. Or else we have terminal goals that are finitely achievable, and when we've achieved them we should shut down humanity as useless. In the latter case, we can throw out anything that doesn't advance us towards those finite goals; not in the former.
I think one may hold the first belief without advocating wireheading, in that our terminal goal may be "enjoy a wide variety of pleasant things that exist outside your skull".