(If I do anything wrong here, please tell me. I don't know what I'm doing and would benefit from being told what I've got wrong, if anything. I've never made a top-level post here before.)
So, it seems like most people here are really smart. And a lot of us, I'm betting, will have been identified as smart when we were children, and gotten complimented on it a lot. And it's pretty common for that to really mess you up, and then you don't end up reaching your full potential. Admittedly, maybe only people who've gotten past all that read Less Wrong. Maybe I'm the exception. But somehow I doubt that very much.
So here's the only thing I can think of to say if this is your situation: ask stupid questions.
Seriously, even if it shows that you have no clue what was just said. (Especially if it shows that. You don't want to continue not understanding.) You can optimize for being smart, or you can optimize for seeming smart, but sometimes you need to pick which one to optimize for. It may make you uncomfortable to admit to not knowing something. It may make you feel like the people around you will stop thinking you're all-knowing. But if you don't know how to ask stupid questions, and you just keep pretending to understand, you'll fall behind and eventually be outed as really, really stupid, instead of just pretty normal. Which sounds worse?
Here, let me demonstrate: so, what tags go on this post and how would I know?
So, anyone else know of any similar things to do, to get back to optimizing for being smart instead of for seeming smart?
The obvious difference between voting in an election and giving money to the best charity is that voting is zero-sum. If you vote for Candidate A and it turns out that Candidate B was a better candidate (by your standards, whatever they are), then your vote actually had a negative impact. But if you give money to Charity A and it turns out Charity B was slightly more efficient, you've still had a dramatically bigger impact than if you spent it on yourself.
Even if you have no idea which charity is better, there are only two cases in which you'd be justified in not donating to either: a) there's a relatively simple way to figure out which is better (see the Value of Information stuff), or b) you think that giving money to charity is likely enough to be counterproductive that the expected value is negative. Which seems plausible for some forms of African aid, possible for FAI, and demonstrably false for "charity in general."
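The contrast between the zero-sum vote and the positive-sum donation can be sketched as a toy expected-value calculation. Every number here is made up purely for illustration, not a claim about any real election or charity:

```python
# Toy model: voting is zero-sum, donating is positive-sum.
p_right = 0.5          # chance your pick is actually the better option
vote_value = 1.0       # value of a vote cast for the better candidate

# Voting: a vote for the worse candidate actively harms, so at 50/50
# uncertainty the expected value collapses to zero.
ev_vote = p_right * vote_value + (1 - p_right) * (-vote_value)

# Donating: even the worse charity still does good, so the expected
# value stays well above zero under the same uncertainty.
good_a, good_b = 10.0, 12.0    # made-up impact per dollar for each charity
ev_donate = p_right * good_b + (1 - p_right) * good_a

print(ev_vote)    # 0.0
print(ev_donate)  # 11.0
```

The point of the sketch is just that uncertainty hurts the voter much more than the donor: the wrong vote cancels a right one, while the "wrong" charity merely does somewhat less good.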
It's also worth noting that the expected value of donating to a good charity is a lot higher than the expected value of voting, since the vast majority of people don't direct their giving thoughtfully and there's a lot of low-hanging fruit. (GiveWell has plenty of articles on this.)
Yes, it should. That's what people are talking about, for the most part, when they talk about ethics. Note that even though ethics is (probably) implied by what we want, it isn't equal to what we want, so it's worth having a separate word to distinguish between what we should want if we were better informed, etc., and what we actually want right now. This strikes me as so obvious that I think I might be missing the point of your question. Do you want to clarify?
Well, since I value all that complex stuff, happiness has negative marginal returns as soon as it starts to interfere with my ability to have novelty, challenge, etc. I would rather be generally happier, but I would not rather be a wirehead, so somewhere between my current happiness state and wireheading, the return on happiness turns negative (assuming for a moment that my preferences now are a good guide to my extrapolated preferences). If your utility function is complex, and you value preserving all of its components, then maximizing one aspect can't maximize your utility.
As for the second part of your question: hadn't thought of that. I'll let my smarter post-Singularity self evaluate my options and make the best decision it can, and if the utility-maximizing choice is to devote all resources to trying to beat entropy or something, then that's what I'll do. My current instinct, though, is that preserving existing lives is more important than creating new ones, so I don't particularly care to get as many resources as possible to create as many humans as possible. I also don't really understand what you are trying to get at. Is this an argument-from-consequences opposing x-risk prevention? Or are you arguing that utility-maximization generally is bad?
These aren't stupid questions, by the way; they're relevant and thought-provoking, and the fact that you did extremely poorly on an IQ test is some of the strongest evidence I've encountered that IQ tests don't matter.