I consider my own experiences a useful corrective for being really smart at the age of 10. I was always praised for my brilliance at young ages, from the neighbor telling me "talking to you is not like talking to any other 12 year old" to the special class I was put in, one that required an IQ test to get in and skimmed the top 2% of the school district into a class at a remote school, which met at offset hours so that buses would be available.
One great thing was making a friend who was at least as smart as me, but far more unconventional. He applied to Caltech and to MIT (his safety school, he said), and that was it.
But the real corrective is to go to a great academic school. My freshman year at Swarthmore was the hardest academic year I had after 4th grade. Being around other smart 1%'ers is the way to go, to get a little more normal or integrated. Working in the Bell Labs research area as a technician for Nobel Prize winners, and then attending Caltech for graduate school, planted me firmly in the middle of my peer group.
Also, read Feynman. Not his physics books necessarily but everything else. I was lucky enough to sit in the Physics Lecture every week with Feynman for a few years. He asked questions when he didn't understand things. I even got to tell him something that he admitted wasn't trivial when I was a 2nd year graduate student. (It was that photons in a waveguide have a finite rest mass, whereas it is only photons in free space that have zero rest mass and travel at the speed of light.)
If you are always the smartest guy in the room, chances are you have not worked hard enough to find the right room.
Upvoted for this:
If you are always the smartest guy in the room, chances are you have not worked hard enough to find the right room.
If Reagan or FDR or Washington ever caught themselves thinking "I'm the smartest guy in this room" their immediate reaction would have been: "Uh-oh, I'd better get some smarter guys in here, pronto!"
Reagan had Alzheimer's throughout his second term, and if he didn't have clinical Alzheimer's during his first term, it's not difficult to demonstrate that a pre-Alzheimer's condition isn't much better. http://www.youtube.com/watch?v=ONNMiuWI4Fo&feature=related http://www.washingtonmonthly.com/archives/individual/2011_01/027551.php
I read
If you are always the smartest guy in the room, chances are you have not worked hard enough to find the right room.
A month later it is still a maxim I hold. Thank you.
Thank you very much for letting me know this means something to you! I hope you get a chance to go to some of the lesswrong meetups; the ones I have gone to so far have definitely been the right rooms!
Mike
I was lucky enough to sit in the Physics Lecture every week with Feynman for a few years.
Ohh, please tell us more stories about Feynman!
He asked questions when he didn't understand things. I even got to tell him something that he admitted wasn't trivial when I was a 2nd year graduate student. (It was that photons in a waveguide have a finite rest mass, whereas it is only photons in free space that have zero rest mass and travel at the speed of light.)
Huh, didn't think about it before, but now it seems obvious. I've been explaining to people that only single idealized photons are massless, and any system containing more than one photon (except maybe as a coherent state) has rest mass, but never thought to apply it to light in a medium with n>1.
Ohh, please tell us more stories about Feynman!
This is not one I observed myself, but it circulated when I was around.
Feynman was sitting on the qualifying exam of a theoretical physics PhD candidate. If you have read Feynman you know he always related theory to reality. So he asked this student, "What is the wavelength of visible light?" The student said he had no idea. As is common in an exam like this, Feynman invited the student to consider what he knew about the real world and see if he could come up with something. The student suggested "the wavelength is 1 in normalized units," which is just pure BS anyway. Feynman asked the student to show him what he thought the wavelength might be by holding up his fingers. The student indicated with his thumb and forefinger about an inch apart.
Here's the punchline: Feynman looks at the student and says, "Do I look fuzzy to you?"
I found this hilariously funny when I first heard it, which was after I had been working on millimeter-wave quasioptical systems. I'll come back tomorrow and 'splain the punchline for those who don't care to learn quasioptics between now and then.
Here's the punchline: Feynman looks at the student and says, "Do I look fuzzy to you?"
Well, at this point in the exam everything must have looked fuzzy to the hapless candidate. Seriously, though, it is hard to believe that someone at that level would not know basic stuff like that.
The wavelength of light must be smaller than the smallest thing you can see using light. So the student should have known the wavelength of light must be smaller than the width of the hairs on his knuckles, < ~50 um. While this might seem like very specialized knowledge, the wave mechanics behind this limit feed right into things like the Heisenberg uncertainty principle and many of the properties of Fourier transforms, really very basic stuff in lots of physics. If light had a wavelength of 1 inch, the best images we could make of the people we saw would have about an inch of "blurring" on them, looking like an out-of-focus picture.
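You can put rough numbers on this blur in a few lines. This is only an order-of-magnitude sketch: the pupil diameter and viewing distance are assumptions for illustration, and the Rayleigh criterion stops being a good guide once the wavelength approaches the aperture size (as it does in the 1-inch case), but the conclusion survives.

```python
import math

def rayleigh_blur(wavelength_m, aperture_m, distance_m):
    # Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D.
    # Multiply by the viewing distance to get the blur spot size.
    theta = 1.22 * wavelength_m / aperture_m
    return theta * distance_m

pupil = 4e-3      # assumed human pupil diameter, ~4 mm
distance = 2.0    # assumed distance to the person you're looking at, m

blur_green = rayleigh_blur(500e-9, pupil, distance)  # visible light
blur_inch = rayleigh_blur(0.0254, pupil, distance)   # 1-inch "light"

print(f"visible-light blur: {blur_green * 1e3:.2f} mm")  # a fraction of a mm
print(f"1-inch-wavelength blur: {blur_inch:.1f} m")      # several meters
```

With visible light the blur is a fraction of a millimeter, so faces look sharp; with inch-wavelength "light" the blur would be meters across, hence "Do I look fuzzy to you?"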
The wavelengths and frequencies of all the sections of the em waves were required knowledge in my high school physics course.
Besides, these days everyone has a laser, and you can remember its wavelength. (My green one is 531 nm.)
Here's the punchline: Feynman looks at the student and says, "Do I look fuzzy to you?"
"Actually, now that you mention it, I think I might need some new eyeglasses..."
Huh, didn't think about it before, but now it seems obvious. I've been explaining to people that only single idealized photons are massless, and any system containing more than one photon (except maybe as a coherent state) has rest mass, but never thought to apply it to light in a medium with n>1.
It's not because the index is > 1. Interestingly, we do say photon velocity = c/n, so presumably in some real sense a photon in an index n > 1 medium must have a finite rest mass, but that is not what I figured out with waveguides.
With waveguides, there is a cutoff frequency, and the group velocity of radiation in the waveguide gets lower as the frequency drops, until at the cutoff frequency the group velocity is zero (and the phase velocity is infinite). So we have a photon at the cutoff frequency with zero group velocity but still finite energy hf_c (Planck's constant h times the cutoff frequency f_c), so its rest mass, expressed in units of energy, is hf_c. Impressively, if you "accelerate" the photon and calculate its total energy, you find that it is hf_c * (1 + (v_g^2)/(2c^2) + higher order terms), which is exactly the expansion for an ordinary particle with rest mass in the theory of relativity.
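This correspondence can be checked numerically, assuming the standard waveguide dispersion relation omega^2 = omega_c^2 + (ck)^2, which has exactly the form of the relativistic E^2 = (mc^2)^2 + (pc)^2 with rest energy mc^2 = hf_c. A minimal sketch, in units where c = 1 and the cutoff frequency is 1:

```python
import math

# Waveguide dispersion: omega^2 = omega_c^2 + k^2, in units where c = 1.
omega_c = 1.0  # cutoff frequency, i.e. the photon's "rest energy" hf_c

def omega(k):
    return math.sqrt(omega_c**2 + k**2)

def group_velocity(k):
    return k / omega(k)  # v_g = d(omega)/dk

for k in (0.0, 0.3, 1.0, 3.0):
    w = omega(k)
    v_g = group_velocity(k)
    gamma = 1.0 / math.sqrt(1.0 - v_g**2)
    # Total energy hf equals gamma * (rest energy hf_c), just as for a
    # massive relativistic particle; the two columns agree to rounding.
    print(f"k={k}: hf = {w:.6f}, gamma*hf_c = {gamma * omega_c:.6f}")
```

The agreement is exact, not just to second order: the low-velocity expansion gamma ~ 1 + v_g^2/2 + ... recovers the series in the comment above.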
In principle, a low enough loss waveguide held vertically in earth's gravitational field would have stationary photons of energy hf_c at its top end accelerated by gravity, coming out the bottom end with finite group velocity and therefore higher frequency f > f_c (because hf is the energy of any photon, so if its energy increased from falling through a gravity potential, its frequency must have increased too). This is a weird example of the well-known general relativistic blueshift of light falling down a gravitational well.
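The size of that blueshift is tiny: to first order, a photon falling a height h gains a fractional energy (and hence frequency) shift of gh/c^2. A rough estimate for a 1-meter waveguide (the length is an assumption for illustration):

```python
g = 9.81       # m/s^2, surface gravity
c = 2.998e8    # m/s, speed of light
h_drop = 1.0   # m, assumed length of the vertical waveguide

# First-order gravitational blueshift: delta_f / f ~ g*h/c^2
fractional_shift = g * h_drop / c**2
print(f"delta f / f ~ {fractional_shift:.2e}")  # on the order of 1e-16
```

A shift of one part in 10^16 per meter is why you need something like the Pound-Rebka setup (gamma rays, Mossbauer spectroscopy) to actually measure this kind of effect.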
It's been years since I thought about this stuff.
I'm another classic brilliant-at-age-ten kid. The biggest problem I experienced related to being considered smart rather young was that a lot of my sense of self-worth got tied up in being the smartest kid in the room. This is suboptimal-- not only does it lead to the not asking stupid questions issue, but it also means that as soon as I was in a situation in which I wasn't smart about something, I felt like I had no worth as a human being whatsoever. (Possible confounding variable: I had depression.)
The closest thing to a solution I've found is to try to derive my self-worth from multiple sources. I am worth something as a human being not simply because of intellectual achievements, but also because I have friends who like me, I give to charity, I refused to give up. I don't know how well this will work for other people, though.
The other big problem I encountered is that I tended to automatically give up if I wasn't immediately good at something; this is why, among other reasons, I have a roughly ninth-grade understanding of math, even though I've taken calculus. (I've read studies that suggest that that's common among children praised for traits instead of actions; I'm away from JSTOR and my psych textbooks at the moment, but if someone would like a citation then I can dig it up in a week or so.) My solution was to grade myself on process instead of achievement: I defined success not as "learning two new songs" but as "practicing guitar for half an hour every week." My other solution was to work to overcome the ugh fields around activities I'm generally not good at, and to redefine those fields within my brain as "cool stuff I haven't learned yet", not "stuff I can't do."
My solution was to grade myself on process instead of achievement: I defined success not as "learning two new songs" but as "practicing guitar for half an hour every week."
I would have thought that would be quite a bad idea, as it rewards you for attempting to do something, as opposed to succeeding. Kaj talked about this here.
I would have thought that would be quite a bad idea, as it rewards you for attempting to do something, as opposed to succeeding.
When done well (in particular, by focusing the practice on specific techniques), this is actually the right approach. You then transition to a success focus once you reach a fairly high standard. Science says so, with randomised, controlled studies. (Source: Cambridge Handbook of Expertise, etc., via memory.)
Okay. I suspect that the focus on particular techniques is the main reason that you're right. Thanks for pointing this out.
You can optimize for being smart, you can optimize for seeming smart, but sometimes you need to pick which one to optimize for.
It's important to note that, sometimes, seeming smart can be the right choice. If you need people to trust you in the future on a very important issue, you might want to always seem smart. It's kind of Dark Arts, though.
It may make you uncomfortable to admit to not knowing something. It may make you feel like the people around you will stop thinking you're all-knowing. But if you don't know how to ask stupid questions, and you just keep pretending to understand, you'll fall behind and eventually be outed as being really, really stupid, instead of just pretty normal. Which sounds worse?
This is mainly a social issue. You can "solve" it in your social environment by pointing out that asking questions is indeed the smart move. If you indeed are smart, the quality of your questions in the long term will be a sufficient proof of it.
P.S. Note that I'm answering ignoring the "10 years old" part: I don't think I'm sufficiently qualified to answer it, but the issue is still relevant for grown ups.
So, what tags go on this post, and how would I know?
I usually look at the tag cloud in the sidebar and put in anything that looks right and appears there. I started a new tag when I did the luminosity sequence.
And it's pretty common for that to really mess you up, and then you don't end up reaching your full potential.
Is it? How do you measure the shortfall in someone's potential?
By... measuring the distance between the Optimal Way Things Could Have Been Done in Retrospect and what really happened in the past n years? (I guess it's a common phenomenon, I regularly observe it on myself.)
Since you didn't live the counterfactual, how do you know it was optimal? The planning fallacy doesn't just apply to future actions, you know. Plans for what Ought To Have Been Done are just as prone to friction and "meeting the enemy" as plans for What Will Be Done, and have the additional problems of hindsight bias and never being tested.
Warning, Anecdotal Evidence: In my case there was a noticeable fall in my school results. When I was around 13-15 years old I was so convinced of being extra smart that I thought I didn't need to work hard to achieve results. When my marks started to go down, I didn't immediately think that I needed to study more; I was instead puzzled and kind of pissed off. It took me a couple of years to realize the obvious truth and correct my behavior. This is the closest thing to evidence I can think of.
Since you eventually did recover, how non-optimal was that? Maybe a few years of slacking off generated more utilons than the transient satisfactions of jumping through the hoops we hold up for teenagers.
I don't know for sure, of course, but I see many negative effects that appear to be related to the overconfidence I was made to have in that period. It took me a lot of time and effort to get back on track, especially with math, and I probably didn't fully recover until I began my bachelor's. Plus, I was being a real dick for some time, and this possibly made me lose some potential good friends. I don't have the counterfactual, of course, but since we don't have 10,000 smart kids to use in an actual experiment, this kind of evidence has to be taken into account. Unless you happen to have a clone army in your secret evil lab, of course... ;)
Of course imagining the Optimal Way is a fallacy, but people (including me) still imagine it... (by the way, this being an instance of the planning fallacy is a good point)
You can optimize for being smart, you can optimize for seeming smart, but sometimes you need to pick which one to optimize for.
I think there is a third option. Make yourself seem smart but especially concentrated in particular subjects and thus naive about most other topics.
There are plenty of precedents for this sort of caricature; for example, Sherlock Holmes.
There are of course costs and benefits to this. For example, if you incorporate such an idea into your identity you may avoid learning too much about topics you consider yourself to be naive about.
Another method with similar effects: being a newcomer, and instead of trying to hide it, using this position to ask lots of stupid questions. In a surprisingly large percentage of cases, it turns out that the question wasn't that stupid at all, and not even the "established" practitioners of the topic can answer it.
A small-scale example: I joined a group of students studying for our Complex Analysis exam the next day (they had already been studying for a while). As the whole subject was about complex functions, I started with a really stupid question: what are complex functions, anyway? Well, nobody knew for sure, and eventually it was me who explained it to everyone else... (but only after asking further stupid questions, of course.)
So, it seems like most people here are really smart. And a lot of us, I'm betting, will have been identified as smart when we were children, and gotten complimented on it a lot.
I guess I am the big exception here. I completely failed at school. I once took part in an IQ test administered by the local employment agency and received a score low enough that the examiner didn't want to tell me exactly how I had scored, so as not to discourage me (he said much below average, but that I shouldn't worry and should just try my best).
Just a few days ago I finished reading my first non-fiction book, at age 27 (which will hopefully be the first of hundreds to come).
Okay, here is the first stupid question: I once read that some people don't vote because they believe that they can't influence the outcome enough to outweigh the time it takes to vote (decide who to vote for etc.). Other reasons include the perceived inability to judge which candidate will be better. That line of reasoning seems to be even more relevant when it comes to existential risk charities. Not only might your impact turn out to be negligible but it seems even more difficult to judge the best charity. Are people who contribute money to existential risk charities also voting on presidential elections?
Second stupid question: There is a lot of talk about ethics on lesswrong. I still don't understand why people talk about ethics and not just about what they want. Whatever morality is or is not, shouldn't it be implied by what we want and the laws of thought?
Third stupid question: I still don't get how expected utility maximization doesn't lead to the destruction of complex values. Even if your utility-function is complex, some goals will yield more utility than others and don't hit diminishing marginal returns. Bodily sensations like happiness for example don't seem to run into diminishing returns. I don't see how you can avoid world-states where wireheading is the favored outcome. Anecdotal evidence for this outcome is the behavior of people on lesswrong with respect to saving the world, a goal that does currently outweigh most other human values due to its enormous expected utility. I don't see why this wouldn't be the case for wireheading as well, or other narrow goals like leaving the universe or hacking the matrix instead of satisfying all human values like signaling games or procrastination. If you are willing to contribute your money to an existential risk charity now, even given the low probability of its success, then why wouldn't you do the same after the singularity by contributing the computational resources you would be using until the end of the universe to the FAI so that it can figure out how to create a pocket universe or travel back in time to gain more resources to support many more human beings?
Whatever morality is or is not, shouldn't it be implied by what we want and the laws of thought?
This is basically the EY/lukeprog school of thought on metaethics, isn't it? Your preferences, delicately extrapolated to better match the laws of logic, probability theory and (advanced) decision theory, are the ideal form of what reductionists mean when they talk about morality.
Now, not everyone on LW agrees with this contention, which is why ethics is a perennial topic of discussion here.
This is basically the EY/lukeprog school of thought on metaethics, isn't it?
If so, I've overestimated EY's agreement with my take on it. I see both the preferences of extrapolated-me and actual-me as effects of partly common causes, some (my case) or all (his case) reflecting my good. What extrapolated-me seeks is not good because he seeks it, but because (for examples) it promotes deep personal relationships, or fun, or autonomy. These are the not-so-strange attractors (dumb question: does chaos theory literally apply here?) that explain the evolution of my values with increasing knowledge and experience.
I think I remember EY saying something along the same lines, so maybe we don't differ.
This sounds exactly like what EY believes. Even the language is similar, which is nontrivial due to the difficulty of expressing this idea clearly in standard English. Did you start believing this after reading the metaethics sequence?
No, but maybe we were inspired by some of the same sources. I think it was David Zimmerman's dissertation which got me started thinking along these lines.
Well, the concepts get messy, but I think we're speaking of the same thing. It's the bit of data in volition-space to which my current brain is a sort of pointer, but as it happens there are a lot of criteria that correspond to it; it's not a random point in volition-space, most other human brains point to fairly similar bits, etc.
I once read that some people don't vote because they believe that they can't influence the outcome enough to outweigh the time it takes to vote (decide who to vote for etc.). Other reasons include the perceived inability to judge which candidate will be better. That line of reasoning seems to be even more relevant when it comes to existential risk charities. Not only might your impact turn out to be negligible but it seems even more difficult to judge the best charity. Are people who contribute money to existential risk charities also voting on presidential elections?
The obvious difference between voting in an election and giving money to the best charity is that voting is zero-sum. If you vote for Candidate A and it turns out that Candidate B was a better candidate (by your standards, whatever they are), then your vote actually had a negative impact. But if you give money to Charity A and it turns out Charity B was slightly more efficient, you've still had a dramatically bigger impact than if you spent it on yourself.
Even if you have no idea which charity is better, the only case in which you would be justified in not donating to either is if a) there's a relatively simple way to figure out which is better (see the Value of Information stuff). or
b) you think that giving money to charity is likely enough to be counterproductive that the expected value is negative. Which seems plausible for some forms of African aid, possible for FAI, and demonstrably false for "charity in general."
It's also worth noting that the expected value of donating to a good charity is a lot higher than the expected value of voting, since the vast majority of people don't direct their giving thoughtfully and there's a lot of low hanging fruit. (GiveWell has plenty of articles on this).
Second stupid question: There is a lot of talk about ethics on lesswrong. I still don't understand why people talk about ethics and not just about what they want. Whatever morality is or is not, shouldn't it be implied by what we want and the laws of thought?
Yes, it should. That's what people are talking about, for the most part, when they talk about ethics. Note that even though ethics is (probably) implied by what we want, it isn't equal to what we want, so it's worth having a separate word to distinguish between what we should want if we were better informed, etc. and what we actually want right now. This strikes me as so obvious I think I might be missing the point of your question. Do you want to clarify?
Third stupid question: I still don't get how expected utility maximization doesn't lead to the destruction of complex values. Even if your utility-function is complex, some goals will yield more utility than others and don't hit diminishing marginal returns. Bodily sensations like happiness for example don't seem to run into diminishing returns.
Well, since I value all that complex stuff, happiness has negative marginal returns as soon as it starts to interfere with my ability to have novelty, challenge, etc. I would rather be generally happier, but I would not rather be a wirehead, so somewhere between my current happiness state and wireheading, the return on happiness turns negative (assuming for a moment that my preferences now are a good guide to my extrapolated preferences). If your utility function is complex, and you value preserving all of its components, then maximizing one aspect can't maximize your utility.
As for the second part of your question: hadn't thought of that. I'll let my smarter post-Singularity self evaluate my options and make the best decision it can, and if the utility-maximizing choice is to devote all resources to trying to beat entropy or something, then that's what I'll do. My current instinct, though, is that preserving existing lives is more important than creating new ones, so I don't particularly care to get as many resources as possible to create as many humans as possible. I also don't really understand what you are trying to get at. Is this an argument-from-consequences opposing x-risk prevention? Or are you arguing that utility-maximization generally is bad?
These aren't stupid questions, by the way; they're relevant and thought-provoking, and the fact that you did extremely poorly on an IQ test is some of the strongest evidence that IQ tests don't matter that I've encountered.
Breaking the false dichotomy here.
Asking questions does not make you seem un-smart. In fact, asking good or genuine questions makes you seem intelligent enough to recognize your known unknowns, confident enough to admit it, and implies that you understood everything else they said. (if someone says Confusing Things A, B, C, and D, and you ask an intelligent question about C, then they assume that you understand A, B, and D much more than they otherwise would)
Other benefits:
Asking questions gives the other person the opportunity to teach you something, which will make them like you more.
The other person may feel grateful that you asked something they also wanted the answer to.
People like to have fun. Being good-natured about being laughed at sometimes (in a non-meanspirited way), means that people associate you with positive feelings.
That's not to say that asking questions always has these benefits. I think what's mainly important is HOW you ask a question.
Example- Someone is explaining their thesis. Normally people just nod along, and say "interesting". Instead, do your best to follow what they are talking about. Ask questions to clarify. When you think you understand, state what you think they are saying as a question: "Oh, so the genes you are putting into the bacteria will cause them to stop producing that protein, and you'll know if it works because then they'll turn blue?"
Now, not only does the person actually think that you are smart, but they also are happy because they got a chance to explain their work to someone who understood it, and they will like you more.
Asking questions does not make you seem un-smart. In fact, asking good or genuine questions makes you seem intelligent enough to recognize your known unknowns, confident enough to admit it, and implies that you understood everything else they said.
While this is plausibly true of a fair chunk of the experiences people on LW have, relative to the human norm the situations where this holds are rather unusual.
Different cultures do things differently.
Asking questions gives the other person the opportunity to teach you something, which will make them like you more.
An extensive answer to a question is a form of investment. It won't make them like you more; it will make them more likely to invest in you further, much as a person is generally more willing to do you a large favour after they have done you a small one.
This is not the same as them liking you more.
The other person may feel grateful that you asked something they also wanted the answer to.
True, but most of the warm fuzzies will go to the person answering the question.
People like to have fun. Being good-natured about being laughed at sometimes (in a non-meanspirited way), means that people associate you with positive feelings.
True. But your relative status matters here a lot.
Adding to the ideas about asking stupid questions and mwengler's anecdote about being the smartest guy in the room (upvoted, btw), I found that the thing I hated most about school was that many of the teachers possessed numerous delusions about their own intelligence, or other personality malfunctions, that made learning (or bothering to go to school at all) quite painful to commit to. These tended to be things that could have been solved if the perpetrator had exhibited a little more humility (or if the school could have afforded better qualified teachers, either way...)
Just as examples I had:
A pretentious art teacher that would say "Art is a talent, and thus can not be taught"
A married couple of music teachers who didn't think any child could appreciate music
An English teacher who would rant about her failed dream to be a journalist (her only qualification to teach English)
Three foreign language teachers who each shouted at their students for being stupid (because they couldn't grasp new concepts)
A vegan Biology teacher whose lessons seemingly consisted of three refrains: "Don't smoke, don't drink, don't cook your food"
A religious ed teacher who once gave me and two of my peers books when she decided we were 'intellectually gifted'. All three books were poorly argued dissertations on the benefits of Christianity which tried to achieve their aims by vilifying Judaism and Secularism in particularly unsavoury ways.
Anyway, to bring it back on topic: I think that when some people encounter characters like these, who put them down, try to show them up, or just distract them from the joy of learning, they begin to fear asking silly questions. And then they equate 'silly questions' with 'questions they don't know the answer to', and then they fear asking any questions at all, and then they never discover the answer to any of them (or never discover the confidence to try to answer them themselves).
(I should add that the best loved and respected teachers at the school were all physics, maths and computer science teachers. Perhaps I am lucky that, even though it was my linguistic skill for which I was "identified as being smart" at school, poor support in the areas I was originally interested in (which still leave a lingering distaste) pushed me towards physics, maths and computer science once I left school, something I'm optimistic will lead to a more satisfying life than any 'romantic notions' I had might have done. Or, to clarify, lest we be accused of fitting the 'zero fun' rationalist stereotype: I have new romantic notions that revolve around the wonders of physics, maths and computer science.)
I think you should be reluctant to generalize from your experiences here. I had the occasional terrible teacher in school (including one who was diagnosed with a brain tumor a couple years later, to the surprise of none of their students), but overall my school experience sounds nothing like yours.
I really, really think you shouldn't generalize from:
I should add that the best loved and respected teachers at the school were all physics, maths and computer science teachers.
The science department in my high school was by far the worst department. I suspect that my experience is more typical because competent people with science degrees have many more higher-pay-and-prestige career options than have competent people with English literature or art degrees.
Adding to the idea of asking stupid questions and mwengler's smartest guy in the room anecdote (upvoted btw), I'd say that what I hated about school was that so many of the teachers seemed to suffer from numerous delusions of their own intelligence or other personal malfunctions that made learning (or even bothering to show up at school) a painful experience. Something that could have been solved if they'd simply practicised a bit more humility (or if the school had managed to afford better qualified teachers, either way...)
Just as examples I had: A pretentious art teacher who claimed, "Art is a talent and thus can't be taught" A married couple of music teachers who thought no child could truly appreciate music A dysfunctional English teacher who would rant about her failed stint in journalism (-her only qualification to teach English) A vegan Biology teacher who's lessons were an endless cycle of "Don't smoke, don't do drugs, don't cook your food" A French teacher a German teacher and a Spanish teacher (three seperate people) who shouted at younger children for failing to understand new concepts An religious ed teacher who once gave me and two of my peers books to read after she determined us as 'intellectually gifted'. All three books were poorly argued dissertations on the benefits of becoming a Christian which tried to achieve this by villifying Judaism and secularism in particularly unsavoury ways.
Anyway to bring this back on-topic, having characters like this that search out for ways to show up students, put them down or at least distract them from the joy of learning can really stop people from asking questions that might make them look silly later in life. And then they equate asking questions that are silly with asking questions that they don't know the answer to, and then they never find out!
(I should add that the best-loved and most respected teachers were the physics, maths and computer science teachers. Perhaps, even though I was "identified as smart" as a child for my linguistic ability, I am lucky to have had such poor support in the areas I was originally interested in (which now leave a lingering distaste), and to have been pushed into a world of physics, maths and computer science, which I feel will lead me to a much more satisfying life than any of my notions of romanticism would have done.)
Man, you must have gone to a really shitty school. My teachers were usually more subtly unhelpful or dumb, and rarely in such condemnable ways.
My worst teacher was a science teacher who insisted that the sun isn't a star because the sun is the sun.
My school wasn't nearly that bad in general.
This being said, teachers who don't know the material aren't all that rare a problem, and the issue doesn't seem to get a lot of attention compared to topics which seem like more fun (unsolvable issues which can be related to morals, perhaps), like arguing about class size, teachers' unions, or whether poor educational outcomes are the fault of the parents, the students, or the schools.
This is where I think the prevailing attitude on Lesswrong that politics is a pointless activity is particularly unfortunate. In the US, education policy is largely set at the state and (except for CA and TX) the local level, and a small group of highly motivated individuals can have a meaningful impact on hiring and curriculum policy.
This seems to pass unnoticed except when people complain about creationists and the like doing it.
In practice, tags are only useful in the narrow case where you have a sequence or other set of closely-related posts, and want to be able to provide a link to the whole sequence. People sometimes like to put in lots of tags, on the theory that this gives their post better visibility or makes things better organized somehow, but this doesn't have much of any useful effect.
Tags do make searching for posts easier. So, what would someone who could benefit from seeing this post type into the search box? Signaling is the only term that comes to my mind. What else?
For tags, I'd suggest "advice" and possibly "social". I got that answer by looking at the tag cloud at the bottom right of this page and picking the ones that seem to fit the topic. You can also make up your own tags if you want to, but not too many, and make sure they're intuitively obvious, hopefully to more than just you. Just in case you don't know this already, you can add them by clicking the edit button and typing them into the box at the bottom of the edit-post page.
As one of the top students in my class, other students are kinda surprised when I ask questions in class. I'm already following this advice, but good job bringing this up.
I have been following it for a while too, and I was surprised that it was me, one of the top students, who asked the most questions... I guess it's something like: "if I'm one of the best and still don't understand it, then what I want to ask must be a valid question."
So, anyone else know of any similar things to do, to get back to optimizing for being smart instead of for seeming smart?
Tentatively offered: Learn to distinguish really understanding a thing rather than just knowing enough to talk about it plausibly. I'm tentative on this one because I'm not sure how to formalize the process, but I suspect the latter is a real temptation for me.
Not tentative at all: Keep track of how you learned what's in your mind. Even if you can't remember an exact source, learn the difference between "I experienced it", "it was in fiction", "I checked it carefully", "everyone says so", "I heard it from a trustworthy source (and why did you think it was trustworthy?)", etc..
So, anyone else know of any similar things to do, to get back to optimizing for being smart instead of for seeming smart?
Arguments are usually understood to be about conflict. Arguments attack beliefs, defend cherished values, or otherwise carry connotations of anger and war. This is grossly in error. Arguments are better seen as a debugging session. You start with the assumption that something must be wrong somewhere in your chain of reasoning, and either you prove it right, you correct some small error, or your belief crashes and needs to be replaced. Not only does this metaphor imply more productive arguments, it also fits the classical understanding, since all arguments suffer from a halting problem.
What you described might as well be construed as http://en.wikipedia.org/wiki/Dialectics , might it not? But yes, I agree: dialectics is my preferred method of reaching new insights. I tend to prepare my argument in advance, so that it's worthy of my time and the other debater's by having a possible direction to move in. If anything should be encountered along the way, then every nook and cranny will be explored during the back-and-forth argument.
I remember back in elementary school, all the teachers would say "there's no such thing as a stupid question." They even had posters of that on the doors.
Ironically, most of my class (IIRC) never bothered to ask questions or clear up confusion during class. They preferred to ask peers. If they went to ask the teacher at some other time, I wouldn't know. I was a frequent go-to person for math and science; this covered my other poor grades (social studies, art) via the Halo Effect and made me appear "smart".
I took to Google for Social Studies.
Somewhere between then and now (junior year) I figured out that nobody actually remembers when someone asks a stupid question in class. Generally, anyway; every now and then there's something ridiculously funny.
My point is that if one is truly smart, one most likely appears to be, too.
There's not much utility in only seeming smart, anyway.
I've always thought that that was a bad way of framing that advice. Of course there's such a thing as a stupid question! "But isn't two plus two five?": stupid question. What kids need to understand instead is that stupid questions are the ones that most greatly need to be asked! That's how you fix the stupid!
Asking the teacher after class is an acceptable face-saving alternative to speaking out in front of everybody, but in the long run it may not be necessary. I suspect I'm not the only one who remembers "that guy who always asked stupid questions in our engineering classes" but mostly what I remember about him a decade later is that he kept acing those classes too: it turned out that nobody understood all the material; he was just the only one who was really dedicated to fixing his misunderstandings before getting tested on them.
The thing that struck me about "stupid" questions was that my high school chemistry class had a student who kept asking questions that I didn't quite have the nerve to raise. The teacher was also very solid (he didn't ask us to memorize atomic weights because he thought it was a waste of time, and, while he wasn't flashy, he went through the material very efficiently), but I think it was the student's questions that contributed to the class's extraordinary scores on the subject-based SAT.
The highest possible score was 800. The class had about 28 students. 6 of them got 800s. I was one of the 2 who got 795.
My problem when I was younger was that teachers would get frustrated with me very quickly: I liked asking questions, and they would call my parents claiming I disrupted the class. I probably did, in some way, but I wanted to actually understand what was being taught, and the teachers either weren't used to that, or found that they could not answer me adequately and became annoyed.
In regard to being smart, I remember many adults telling me I was very "articulate" and "smart" for a 12-year-old. I do not think that was necessarily a good thing. Unfortunately, I was not exposed to any real science, nor did I have anyone explain math to me in a way that was easily accessible at the time. I would quickly give up on subjects if I did not grasp the material right away. I think part of the problem was that I liked asking questions related to the topics I was learning, but only if I understood what was being taught, such as asking about stars in the galaxy other than the sun. However, I would not ask the teacher to re-explain something to me. It felt embarrassing and awkward to not know.
Now, I usually look up the subject before I ask questions. If I do not understand something in math, I go back to my dorm room and do the odd problems (which have answers in the back of the book) and see what I am failing to do right. If I still do not understand, I email the teacher, or go to my college's resource center.
I originally thought this post was about material which was too obvious to be worth saying (I didn't downvote it, though), but it has led to some interesting discussion and was probably the inspiration for Stupid Questions Open Thread, which I hope will be an ongoing feature at LW. Upvoted.
It might help to join a group where you're not the smartest. For example, I was the smartest kid around when I was 10. Then I joined a top math school and was freaked out to find myself at the bottom of the pack. Started working hard, won some competitions, got into university and finished it with honors.
(If I do anything wrong here, please tell me. I don't know what I'm doing and would benefit from being told what I've got wrong, if anything. I've never made a top-level post here before.)
So, it seems like most people here are really smart. And a lot of us, I'm betting, will have been identified as smart when we were children, and gotten complimented on it a lot. And it's pretty common for that to really mess you up, and then you don't end up reaching your full potential. Admittedly, maybe only people who've gotten past all that read Less Wrong. Maybe I'm the exception. But somehow I doubt that very much.
So here's the only thing I can think of to say if this is your situation: ask stupid questions.
Seriously, even if it shows that you have no clue what was just said. (Especially if it shows that. You don't want to continue not understanding.) You can optimize for being smart, you can optimize for seeming smart, but sometimes you need to pick which one to optimize for. It may make you uncomfortable to admit to not knowing something. It may make you feel like the people around you will stop thinking you're all-knowing. But if you don't know how to ask stupid questions, and you just keep pretending to understand, you'll fall behind and eventually be outed as being really, really stupid, instead of just pretty normal. Which sounds worse?
Here, let me demonstrate: so, what tags go on this post and how would I know?
So, anyone else know of any similar things to do, to get back to optimizing for being smart instead of for seeming smart?