At any one time I usually have between 1 and 3 "big ideas" I'm working with. These are generally broad ideas about how something works, with many implications for how the rest of the world works. Some big ideas I've grappled with over the years, in roughly historical order:
- evolution
- everything is computation
- superintelligent AI is default dangerous
- existential risk
- everything is information
- Bayesian reasoning is optimal reasoning
- evolutionary psychology
- Getting Things Done
- game theory
- developmental psychology
- positive psychology
- phenomenology
- AI alignment is not defined precisely enough
- everything is control systems (cybernetics)
- epistemic circularity
- Buddhist enlightenment is real and possible
- perfection
- predictive coding grounds human values
I'm sure there are more. Sometimes these big ideas come and go in the course of a week or month: I work the idea out, maybe write about it, and feel it's wrapped up. Other times I grapple with the same idea for years, feeling it has loose ends in my mind that matter and that I need to work out if I'm to understand things adequately enough to help reduce existential risk.
So with that as an example, tell me about your big ideas, past and present.
I kindly ask that if someone answers and you are thinking about commenting, please be nice to them. I'd like this to be a question where people can share even their weirdest, most wrong-on-reflection big ideas if they want to, without fear of being downvoted to oblivion or subjected to criticism of their reasoning ability. If you have something negative to say about someone's big ideas, please be nice and make it clearly about the idea and not the person (violators will have their comments deleted and may be banned from commenting on this post or all my posts, so I mean it!).
This seems like a misleading summary of what g is.
g is the shared principal component of various subsets of IQ tests. As such, it measures the shared variance between your performance on many different tasks, and so is the thing that we expect to generalize most between different tasks. But in most psychometric contexts I've seen, we split g into 3-5 different components, which tends to add significant additional predictive accuracy (at the cost of simplicity, obviously).
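For concreteness, here's a minimal sketch (mine, not part of the original comment) of what "shared principal component" means in practice: simulate a battery of test scores driven partly by one common factor plus test-specific noise, then check how well the first principal component recovers that factor and how much of the total variance it explains. All numbers, loadings, and names here are made up for illustration.

```python
# Minimal sketch (illustrative only): "g as the shared principal component"
# on simulated test-score data. All parameters below are invented.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Simulate scores driven partly by one shared factor ("g") plus test-specific noise.
g = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))
scores = g @ loadings + rng.normal(scale=0.7, size=(n_people, n_tests))

# First principal component of the standardized scores ~ the shared factor.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))  # ascending eigenvalues
share = eigvals[-1] / eigvals.sum()          # variance explained by top component
pc1 = z @ eigvecs[:, -1]                     # each person's score on that component

print(f"Variance explained by first component: {share:.2f}")
print(f"|Correlation| of PC1 with simulated g: {abs(np.corrcoef(pc1, g.ravel())[0, 1]):.2f}")
```

The point of the sketch is just that the first component captures the variance shared across tests, while the remaining (test-specific) variance is exactly what gets left behind, which is the variance the rest of this comment argues we still care about.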
To describe it as "the real thing" requires defining what our goal with IQ testing is. Results on IQ tests have predictive power over income and life outcomes even beyond the variance explained by g, and predictive power over outcomes on a large variety of tasks beyond what g alone accounts for.
The goal of IQ tests is not to measure g; it isn't even clear whether g is a single thing that can be "measured". The goal of IQ tests historically has been to assess aptitude for various jobs and roles (such as whether you should be admitted to the military, which is where a large fraction of our IQ-score data comes from). For those purposes, we've often found that focusing solely on measuring aptitude that generalizes between tasks is a bad idea, since there is still significant task-specific variance that we care about, and which we would have to give up on measuring if we made g the ultimate goal of measurement.