Lumifer comments on Starting University Advice Repository - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (93)
That really depends on what you want to do later on. If you are going into math/CS/etc. grad school, focusing on proofs is a great idea. If you'll get a bachelor's and go get a job, I'd much rather be skilled in Python and linear algebra than in doing proofs.
I think you are wrong about this.
I have no idea whether to downvote. On the one hand, you don't explain why. On the other hand, neither does Lumifer.
But:
I have never in my career needed proofs, and never expect to; I've needed linear algebra on multiple occasions, and expect to many times in the future. I work as a programmer, and have been programming for around twenty years now. I have developed something that twenty years ago might have been called AI. I have managed dozens of projects, and worked on dozens more.
So, where am I going? Proofs are an absolutely worthless skill to have in terms of "getting a bachelor's and going and getting a job". There -might- be a job where you need to write proofs, but if there is, I haven't seen it.
What did you develop 20 years ago?
You should know I ignore karma, btw.
Think about the form of the statement you are making: "I don't know X, and it doesn't seem like I need X." Well, how do you know you don't? You have to compare the current world to a counterfactual world where you did know X. How do you know you wouldn't be vastly better off? See also: "I don't need this fancy book learnin', I'm doing fine in life."
That's not the statement I am making. I do know X; proofs were required coursework for every CS major at the educational facility I attended. I've never needed it.
Which specific work-related (meta) skill do you think doing proofs develops? It's not going to raise anyone's IQ, I don't see why it would be particularly effective at improving, say, the ability to focus or critical thinking or something like that.
I don't care about IQ, I think it's a fairly uninformative number. Doing proofs eventually gives you a nebulous thing called "mathematical sophistication" (what I sometimes call "metal struts in your brain") that I think helps enormously for adapting to and solving novel technical problems.
When Heinlein said "specialization is for insects" I think he was making a similar point about metaskills.
I don't mean IQ as a number, I mean the underlying g.
And people who graduate college and start working neither do, nor are expected to "solve novel technical problems". The closest to that are programmers who do have to solve problems daily, but for them courses in e.g. data structures or just experience with radically different languages will develop much more useful intuitions than "mathematical sophistication".
If you are going to become a mathematician or a logician, by all means go study proofs. Otherwise I don't think they justify the opportunity costs.
Aren't you suggesting specializing in a particular metaskill?
I think g is sort of a mathematical artifact, not a real thing (but don't really feel like getting into a big thing about this). Factor analysis doesn't tell people what they think it does.
The first principal component of scores on various tests is, of course, a mathematical artifact. But it's not the real thing; it's just an estimate, a finger pointing at the real thing.
I agree that people can be both stupid and smart in very different ways, but at a certain -- and useful! -- level of aggregation, there are generally smart people and generally stupid people. There is a lot of variation around that axis, but I think the axis exists. I'm not arguing that everything should be projected into that one-dimensional space and reduced to a scalar.
Here is how this game works. We have a bunch of observed variables X, and a smaller set of hidden variables Z.
We assume a particular model for the joint distribution p(X,Z). We then think about various facts about this distribution (for example eigenvalues of the covariance matrix). We then try to conclude a causal factor from these facts. This is where the error is. You can't conclude causality that way.
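The setup described above can be sketched in a few lines of numpy. This is a hypothetical illustration, not anyone's actual analysis: the observed scores X are generated here from a single hidden factor Z plus noise (the loadings and noise level are made up), and we then look at the eigenvalues of the covariance matrix of X, exactly the kind of fact about the joint distribution mentioned above.

```python
import numpy as np

# Generate observed variables X from one hidden factor Z plus noise.
# Loadings and noise scale are invented for illustration.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # hidden factor Z
loadings = np.array([0.9, 0.8, 0.7, 0.6])   # assumed factor loadings
noise = rng.normal(size=(n, 4)) * 0.5
X = np.outer(z, loadings) + noise           # observed scores X

# Eigenvalues of the covariance matrix of X, largest first.
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
share = eigvals[0] / eigvals.sum()
print(f"first eigenvalue's share of total variance: {share:.0%}")
```

A single dominant eigenvalue is consistent with a one-factor model, but the same eigenvalue pattern can also be produced by several smaller correlated causes, which is precisely why the eigenvalues alone don't license a causal conclusion.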
Yes, you can. You can conclude that some causal factor exists. You then define g to be that causal factor.
No you can't conclude that. I am glad we had this chat.
What if there are several such causal factors?
I know how the game works, I've paged through the Pearl book. But here, in this case, I don't care much about causality. I can observe the existence of stupid people and smart people (and somewhat-stupid, and middle-of-the-road, and a bit smart, etc.). I can roughly rank them on the smart - stupid axis. That axis won't capture all the diversity and the variation, but it will capture some. Whether what it captures is sufficient depends, of course, on the purpose of the exercise: in some cases that's all you need, and in some cases it's entirely inadequate. However, in my experience that axis is pretty relevant to a lot of things. It's useful.
Note that here no prediction is involved. I'm not talking about whether estimates of g (IQ, basically) can/will predict your success in life or any similar stuff. That's a different discussion.
???
To the extent that you view g as what it is, I have no problem. But people think g is (a) a real thing and (b) causal. It's not at all clear it is either. "Real things" involved in human intelligence are super complicated and have to do with brain architecture (stuff we really don't understand well). We are miles and miles and miles away from "real things" in this setting.
The game I was describing was how PCA works, not stuff in Pearl's book. The point was PCA is just relying on a model of a joint distribution, and you have to be super careful with assumptions to extract causality from that.
Do you think IQ has to be a causal factor to be a good predictor/be meaningful?
No I do not. I think IQ can be a useful predictor for some things (as good as one number can be, really). But that isn't the story with g, is it? It is claimed to be a causal factor.
If we want to do prediction, let's just get a ton of features and use that, like they do in machine learning. Why fixate on one number?
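The contrast above can be shown on synthetic data. This is a minimal sketch under made-up assumptions (random features, invented weights, Gaussian noise): a linear predictor fit on many features versus one fit on a single summary number, compared by R².

```python
import numpy as np

# Synthetic prediction task: outcome y depends on k features.
# Weights and noise level are invented for illustration.
rng = np.random.default_rng(1)
n, k = 2000, 8
X = rng.normal(size=(n, k))              # many observed features
w = rng.normal(size=k)                   # assumed "true" weights
y = X @ w + rng.normal(size=n) * 0.5     # outcome to predict

def r_squared(features, target):
    """R^2 of an ordinary least-squares fit of target on features."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ coef
    return 1.0 - resid.var() / target.var()

r2_single = r_squared(X[:, :1], y)   # one summary number
r2_all = r_squared(X, y)             # the full feature set
print(f"single feature R^2: {r2_single:.2f}, all features R^2: {r2_all:.2f}")
```

On this toy data the full feature set predicts far better than any one column, which is the machine-learning practice being pointed at: if prediction is the goal, there is no reason to collapse everything to one number first.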
Also -- we know IQ is not a causal factor, IQ is a result of a test (so it's a consequence, not a cause).