potato comments on Fake Causality - Less Wrong
The ancient Greeks developed methods of improved memorization. It has been shown that human-trained dogs and chimps are better at human-face recognition than others of their kind. None of them were artificial (discounting selective breeding in dogs and Greeks).
Would you consider such a machine an artificially intelligent agent? Isn't it just a glorified printing press?
I'm not saying that some configurations of memory are physically impossible. I'm saying that intelligent agency entails typicality, and therefore, for any intelligent agent, there are some things it is extremely unlikely to do, to the point of practical impossibility.
I would actually argue the opposite.
Are you familiar with the claim that people are getting less intelligent since modern technology allows less intelligent people and their children to survive? (I never saw this claim discussed seriously, so I don't know how factual it is; but the logic of it is what I'm getting at.) The idea is that people today are less constrained in their required intelligence, and therefore the typical human is becoming less intelligent.
Other claims are that activities such as browsing the internet and video gaming are changing the set of mental skills which humans are good at. We improve in tasks which we need to be good at, and give up skills which are less useful. You gave yet another example in your comment regarding face recognition.
The elasticity of biological agents is (quantitatively) limited, and improvement by evolution takes time. This is where artificial agents step in. They can be better than humans, but the typical agent will only actually be better if it has to be. Generally, more intelligent agents are those forced to comply with tighter constraints, not looser ones.
That's an empirical inquiry, which I'm sure has been answered within some acceptable error range (it's interesting and easy-ish to test). If you're going to use it as evidence for your conclusion, or part of your worldview, you should really be sure that it's true, because using "logic" that leads to empirically falsifiable claims is essentially never fruitful.
Check out Steven Pinker for a start.
Was my disclaimer insufficient? I was using the unchecked claim to convey a piece of reasoning. The claim itself is unimportant in this context; what matters is only the reasoning by which its conclusion is supposed to follow from its premise. Checking the truth of the conclusion may not be difficult, but the premise itself could be false; I suspect that it is, and that it is much harder to verify.
And even the reasoning, which is essentially mathematically provable, I have repeatedly urged the skeptic reader to doubt until they see a proof.
Did you mean false claims? I sure do hope that my logic (without quotes) implies empirically falsifiable (but unfalsified) claims.
Any set of rules for determining validity is useless if even sound arguments have empirically false conclusions every now and again. So my point was that if the argument is sound but has a false conclusion, you should forget about the reasoning altogether.
And yes, I did mean "empirically falsified." My mistake.
(edit):
Actually, it's not a sound or unsound, valid or invalid argument. The argument points out some pressures that should make us expect that people are getting dumber, and ignores the presence of pressures that should make us expect that we're getting smarter. Either way, if from your "premises" you can derive too much belief in certain false claims, then either you are too confident in your premises, or your rules for deriving belief are crappy, i.e., far from approximating Bayesian updating.
That's both obvious and irrelevant.
Are you even trying to have a discussion here? Or are you just stating obvious and irrelevant facts about rationality?
Above you said that you weren't sure if the conclusion of some argument you were using was true; don't do that. That is all the advice I wanted to give.
I'll try to remember that, if only for the reason that some people don't seem to understand contexts in which the truth value of a statement is unimportant.
and
You see no problem here?
Not at all. If you insist, let's take it from the top:
I wanted to convey my reasoning, let's call it R.
I quoted a claim of the form "because P is true, Q is true", where R is essentially "if P then Q". This was a rhetorical device, to help me convey what R is.
I indicated clearly that I don't know whether P or Q are true. Later I said that I suspect P is false.
Note that my reasoning is, in principle, falsifiable: if P is true and Q is false, then R must be false.
While Q may be relatively easy to check, I think P is not.
I expect to have other means of proving R.
I feel that I'm allowed to focus on conveying R first, and to attempt to prove or falsify it at a later date. The need to clarify my ideas helped me understand them better, in preparation for a future proof.
I stated clearly and repeatedly that I'm just conveying an idea here, not providing evidence for it, and that I agree with readers who choose to doubt it until shown evidence.
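The implication structure above can be sketched as a truth-table check. This is a minimal illustration only; the function name and the enumeration are mine, not anything from the thread:

```python
# Minimal sketch: R is the material implication "if P then Q".
# R is falsified exactly when P is true and Q is false.

def implication(p: bool, q: bool) -> bool:
    """Truth value of 'if P then Q' (material implication)."""
    return (not p) or q

# Enumerate all truth assignments to see when R fails.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} -> R={implication(p, q)}")
```

Only the case P=True, Q=False falsifies R; in particular, observing that Q is false tells us nothing about R unless we also know that P is true, which is why checking Q alone cannot settle the matter.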
Do you still think I'm at fault here?
EDIT: Your main objection to my presentation was that Q could be false. Would you like to revise that objection?
I don't want to revise my objection, because it's not really a material implication that you're using. You're using probabilistic reasoning in your argument, i.e., pointing out certain pressures that exist, which rule out certain ways that people could be getting smarter, and thereby increase our probability that people are not getting smarter. But if people are in fact getting smarter, this reasoning is either too confident in the pressures, or is using updating that is far from Bayesian.
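The Bayesian-updating point can be made concrete with a toy numeric example. All the numbers below are made up purely for illustration; H stands for the hypothesis "people are getting less intelligent" and E for the observed pressure:

```python
# Toy Bayes update: how observing a "pressure" E shifts belief in
# hypothesis H. The likelihoods here are illustrative, not empirical.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

prior = 0.5  # undecided to start
posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(posterior, 3))  # -> 0.667: evidence favoring H raises belief
```

If H later turns out false despite a high posterior, the fault lies in the likelihoods (overconfident premises) or in never updating on the counter-pressures at all; that is the objection being made here.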
Either way, I feel like we took up too much space already. If you would like to continue, I would love to do so in a private message.