Excellent link! Short, clear, interesting, 100% relevant.
Instead, the moral character of an action’s consequences also seems to influence how non-moral aspects of the action – in this case, whether someone did something intentionally or not – are judged.
Stupid Knobe effect. Obviously the subjects' responses were an attempt to pass judgement on the CEO. In one case, he deserves no praise, but in the other he does deserve blame [or so a typical subject would presumably think]. The fact that they were forced to express their judgement of moral character through the word 'intentional', which sometimes is a 'non-moral' quality of an action, doesn't tell us anything interesting.
That is indeed evidence against his credibility, if not particularly strong evidence for me. I don't know enough math to know directly that saying P=NP is a joke; I only believe it is because the math community says so.
Merely saying it wouldn't be so bad, as long as there was some substance behind the assertion.
But basically his argument boils down to this:
"If you dunk two wooden boards with wires poked through them into soapy water and then lift them out, the soaps films between the wires are the solution to an NP-hard problem. But creating the boards and wires and dunking them can be done in polynomial time. So as long as physics is Turing computable, P = NP."
This is a fantastically stupid argument, because you could easily create a simulation of the above process that appeared to be just as good at generating answers as the real soap films. But given a somewhat difficult problem instance, the simulation would quickly settle on something that was nearly, but not quite, a solution, and there's no reason to think that real soap films would do better.
The fact that Bringsjord got as far as formalising his argument in modal logic and writing it up, without even thinking of the above objection, is quite incredible.
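The failure mode is easy to reproduce numerically. The soap-film setup computes Euclidean Steiner trees, and within a fixed connection topology a film (or a simulation of one) relaxes to a local minimum, which need not be the global one. A minimal sketch, assuming a 2x1 rectangle of terminals and a crude hill-climbing relaxation (both are illustrative choices of mine, not anything from Bringsjord's paper):

```python
import math, random

# Corners of a 2x1 rectangle; we connect them via two movable Steiner points.
CORNERS = {"A": (0, 0), "B": (2, 0), "C": (2, 1), "D": (0, 1)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tree_length(s1, s2, topology):
    """Total wire length for one of the two possible Steiner topologies."""
    if topology == "long":    # s1 joins A and D, s2 joins B and C
        pairs = [(s1, "A"), (s1, "D"), (s2, "B"), (s2, "C")]
    else:                     # "short": s1 joins A and B, s2 joins C and D
        pairs = [(s1, "A"), (s1, "B"), (s2, "C"), (s2, "D")]
    return sum(dist(s, CORNERS[c]) for s, c in pairs) + dist(s1, s2)

def settle(topology, seed=0):
    """Hill-climb the Steiner point positions, like a film relaxing."""
    rng = random.Random(seed)
    s1, s2 = (0.7, 0.5), (1.3, 0.5)
    best = tree_length(s1, s2, topology)
    for step in (0.1, 0.03, 0.01, 0.003, 0.001):
        for _ in range(20000):
            c1 = (s1[0] + rng.uniform(-step, step), s1[1] + rng.uniform(-step, step))
            c2 = (s2[0] + rng.uniform(-step, step), s2[1] + rng.uniform(-step, step))
            length = tree_length(c1, c2, topology)
            if length < best:
                best, s1, s2 = length, c1, c2
    return best

good = settle("long")    # relaxes to near 2 + sqrt(3) ~ 3.73 (global optimum)
bad = settle("short")    # stuck near 4.47, the best this topology allows
print(good, bad)
```

Each topology relaxes to its own optimum; a film that happens to form the "short" topology stops around 4.47 even though 3.73 is achievable, which is exactly the near-but-not-quite behaviour of real soap films.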
Even if Singularitarianism has no evidence base (which I'm not saying is the case), that's not enough to show that Singularitarians are fideists. According to Wikipedia, "Fideism is an epistemological theory which maintains that faith is independent of reason, or that reason and faith are hostile to each other and faith is superior at arriving at particular truths..."
If Singularitarians believe that they have enough evidence to justify their position (most of the ones on here do believe that, as far as I can tell), then they aren't fideists even if they're wrong. A fideist would believe that ey didn't have evidence and didn't need it; most Singularitarians believe that they do need it and do have it. So they can't be fideists; the worst they can be is wrong.
A very simple measure on the binary strings is the uniform measure and so Solomonoff Induction will converge on it with high probability.
Can you explain why? What result says that the Solomonoff distribution "as a whole" often converges on the uniform measure?
I think what mathemajician means is that if the stream of data is random (in that the bits are independent random variables each with probability 1/2 of being 1) then Solomonoff induction converges on the uniform measure with high probability (probability 1, in fact).
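A toy version of this is easy to check numerically: a Bayesian mixture over Bernoulli models (a crude stand-in for the full Solomonoff mixture, which is uncomputable) fed fair-coin flips has its posterior predictive converge to 1/2. A sketch, with the model class and prior chosen purely for illustration:

```python
import math, random

rng = random.Random(1)
thetas = [i / 10 for i in range(1, 10)]      # candidate biases 0.1 .. 0.9
log_post = {t: 0.0 for t in thetas}          # uniform prior, kept in log-space

for _ in range(10000):                       # fair-coin data stream
    bit = rng.random() < 0.5
    for t in thetas:
        log_post[t] += math.log(t if bit else 1 - t)

# Normalise and compute the predictive probability that the next bit is 1.
m = max(log_post.values())
weights = {t: math.exp(lp - m) for t, lp in log_post.items()}
predictive = sum(t * w for t, w in weights.items()) / sum(weights.values())
print(predictive)   # close to 0.5: the mixture has converged on the fair coin
```

The posterior piles essentially all its mass on the bias-1/2 model, so the predictive distribution becomes (near) uniform, as mathemajician says.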
I'm sure you knew that already, but you don't seem to realize that it undercuts the logic behind your claim:
The universal prior implies you should say "substantially less than 1 million".
(Here, the log(n) is needed to specify how long the sequence of random bits is).
You don't always need log(n) bits to specify n. The K-complexity of n is enough. For example, if n=3^^^^3, then you can specify n using far fewer bits than log(n). I think this kills your debunking :-)
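The gap between K(n) and log2(n) is easy to illustrate. We can't compute with 3^^^^3, so take a merely astronomical compressible number as a stand-in: n = 2^(2^20) needs about a million bits to write out in binary, yet a ten-character expression pins it down.

```python
# n = 2**(2**20) has 2**20 + 1 binary digits (~10**6 bits),
# yet the expression "2**(2**20)" is 10 characters (~80 bits).
# So K(n) can be vastly smaller than log2(n) when n is compressible.
binary_bits = 2**20 + 1
program_bits = 8 * len("2**(2**20)")
print(binary_bits, program_bits)
```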
O(BB^-1) (or whatever it is) is still greater than O(1) though, and (as best I can reconstruct it) your argument relies on there being a constant penalty.
I think you're implicitly assuming that the K complexity of a hypothesis of the form "these n random bits followed by the observations predicted by H" equals n + (K-complexity of H) + O(1). Whereas actually, it's n + (K-complexity of H) + O(log(n)). (Here, the log(n) is needed to specify how long the sequence of random bits is).
So if you've observed a hugely long sequence of random bits then log(n) is getting quite large and 'switching universe' hypotheses get penalized relative to hypotheses that simply extend the random sequence.
This makes intuitive sense - what makes a 'switching universe' unparsimonious is the arbitrariness of the moment of switching.
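To put rough numbers on it, under the cost model above (n + K(H) + log2(n) bits for the switching hypothesis), the extra log2(n) term translates into a prior odds factor of roughly 1/n against a switch at step n:

```python
import math

# Hedged illustration: under the assumed cost model n + K(H) + log2(n),
# a "switching" hypothesis pays ~log2(n) extra bits just to encode *where*
# the switch happens, i.e. a prior-odds factor of 2**-log2(n) = 1/n.
penalties = {n: math.log2(n) for n in (10**3, 10**6, 10**9)}
for n, bits in penalties.items():
    print(n, round(bits, 1), 2.0 ** -bits)
```

So after a billion observed random bits, a switch-now hypothesis is down by about thirty bits, i.e. odds of about one in a billion, relative to simply extending the random sequence.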
(Btw, I thought it was a fun question to think about, and I'm always glad when this kind of thing gets discussed here.)
ETA: But it gets more complicated if the agent is allowed to use its 'subjective present moment' as a primitive term in its theories, because then we really can describe a switching universe with only a constant penalty, as long as the switch happens 'now'.
Some people are criticizing this for being obviously true; others are criticizing it for being false.
A particular agent can have wrong information, and make a poor decision as a result of combining the wrong information with the new information. Since we're assuming that the additional information is correct, I think it's reasonable to also stipulate that all previous information is correct.
Also, you need to state the English interpretation in terms of expected value, not as "More information is never a bad thing".
The mathematical result is trivial, but its interpretation as the practical advice "obtaining further information is always good" is problematic, for the reason taw points out.
A particular agent can have wrong information, and make a poor decision as a result of combining the wrong information with the new information. Since we're assuming that the additional information is correct, I think it's reasonable to also stipulate that all previous information is correct.
Actually, I thought of that objection myself, but decided against writing it down. First of all, it's not quite right to refer to past information as 'right' or 'wrong' because information doesn't arrive in the form of propositions-whose-truth-is-assumed, but in the form of sense data.* It's better to talk about 'misleading information' rather than 'wrong information'. When adversary A tells you P, which is a lie, your information is not P but "A told me P". (Actually, it's not even that, but you get the idea.) If you don't know A is an adversary then "A told me P" is misleading, but not wrong.
Now, suppose the agent's prior has got to where it is due to the arrival of misleading information. Then relative to that prior, the agent still increases its expected utility whenever it acquires new data (ignoring taw's objection).
(On the other hand, if we're measuring expectations wrt the knowledge of some better informed agent then yes, acquiring information can decrease expected utility. This is for the same reason that, in a Gettier case, learning a new true and relevant fact (e.g. most nearby barn facades are fake) can cause you to abandon a true belief in favour of a false one.)
* Yes yes, I know statements like this are philosophically contentious, but within LW they're assumptions to work from rather than be debated.
"one" is not general enough. Do you really think what you just said is true for all people?
It's true for anyone who understands random variables and expectations. There's a one line proof, after all.
Or in other words, the expectation of a max of some random variables is always greater than or equal to the max of the expectations.
You could call this 'standard knowledge' but it's not the kind of thing one bothers to commit to memory. Rather, one immediately perceives it as true.
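The one-line proof: max(X,Y) >= X and max(X,Y) >= Y pointwise, so E[max(X,Y)] >= max(E[X], E[Y]). Read as a decision problem, this is the non-negative expected value of information. A Monte Carlo sketch, with payoffs invented purely for illustration:

```python
import random

rng = random.Random(0)
N = 100_000
# Two actions under an unknown coin flip: A pays 1 on heads, 0 on tails;
# B pays 0.6 either way. X and Y are the utilities of A and B.
samples = [((1.0 if rng.random() < 0.5 else 0.0), 0.6) for _ in range(N)]

# Informed agent learns the flip first, then picks the better action:
ev_informed = sum(max(x, y) for x, y in samples) / N            # E[max(X, Y)]
# Uninformed agent must commit to one action up front:
ev_uninformed = max(sum(x for x, _ in samples) / N,
                    sum(y for _, y in samples) / N)              # max(E[X], E[Y])

print(ev_informed, ev_uninformed)   # ~0.8 vs ~0.6
```

The informed agent nets about 0.8 to the uninformed agent's 0.6, and the inequality holds whatever payoffs you substitute.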
Hmm, are you interpreting the results as "boo CEOs" then?
How would you modify the experiment to return information closer to what was sought?
I'm only interpreting the result as "boo this fictional CEO".
Well, what Knobe is looking for is a situation where subjects make their 'is' judgements partly on the basis of their 'ought' judgements. Abstractly, we want a 'moral proposition' X and a 'factual proposition' Y such that when a subject learns X, they tend to give higher credence to Y than when they learn ¬X. Knobe takes X = "The side-effects are harmful to the environment" and Y = "The effect on the environment was intended by the CEO".
(My objection to Knobe's interpretation of his experiment can thus be summarised: "The subjects are using Y to express a moral fact, not a 'factual fact'." After all, if you asked them to explain themselves, in one case they'd say "It wasn't intentional because (i) he didn't care about the effect on the environment, only his bottom line." In the other they'd say "it was intentional because (ii) he knew about the effect and did it anyway." But surely the subjects agree on (i) and (ii) in both cases - the only thing that's changing is the meaning of the word 'intentional', so that the subjects can pass moral judgement on the CEO.)
To answer your question: I'm not sure that genuine examples of this phenomenon exist, except when the 'factual' propositions concern the future. If Y is about a past event, then I think any subject who seems to be exhibiting the Knobe effect will quickly clarify and/or correct themselves if you point it out. (Rather like if you somehow tricked someone into saying an ungrammatical sentence and then told them the error.)