The big three:
A few sub-big-ideas:
Regarding economic progress:
>Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
Upstream of that: setting the ontology that allows interoperability, a.k.a. computer interface design, is what makes the largest companies in the world. Hell, you can throw a GUI on IRC and get billions of dollars. That's how early in the game things are.
Have you read any of Cosma Shalizi's stuff on computational mechanics? Seems very related to your interests.
In October 1991, an event of such profound importance happened in my life that I wrote the date and time down on a yellow sticky. That yellow sticky has long been lost, but I remember it: it was Thursday, October 17th, at 10:22 am. The event was that I had plugged a Hayes modem into my 286 computer and, with a copy of Procomm, logged on to the Internet for the first time. I knew that my life had changed forever.
At about that same time I wanted to upgrade my command line version of Word Perfect to their new GUI version. But the software was something crazy like $495, which I could not afford.
One day I had an idea: "Wouldn't it be cool if you could log on to the Internet and use a word processing program sitting on a mainframe or something located somewhere else? Maybe for a tiny fee or something."
I mentioned this to the few friends I knew who were computer geeks, and they all scoffed. They said that software prices would eventually be so inexpensive as to make that idea a complete non-starter.
Well, just look around. How many people are still buying software for their desktops and laptops?
I've had about a dozen somewhat similar ideas over the years (although none of that magnitude). What I came to realize was that if I ever wanted to make anything like that happen, I would need to develop my own technical and related skills.
So I got an MS in Information Systems Development, and a graduate certification in Applied Statistics, and I learned to be an OK R programmer. And I worked in jobs -- e.g., knowledge management -- where I thought I might have more "Ah ha!" ideas.
The idea that eventually emerged -- although not in such an "Ah ha!" fashion -- was that the single biggest challenge in my life, and perhaps most people's lives, is the absolute deluge of information out there. And not just out there, but in our heads and in our personal information systems. The word "deluge" doesn't really even begin to describe it.
So the big idea I am working on is what I call the "How To Get There From Here" project. And it's mainly about how to successfully manage the various information and knowledge requirements necessary to accomplish something. This ranges from how to even properly frame the objective to begin with...how to determine the information necessary to accomplish it...how to find that information...how to filter it...how to evaluate it...how to process it...how to properly archive it...etc., etc., etc.
Initially I thought this might end up a long essay. Now it's looking more like a small book. It's very interesting to me because it involves pulling in so many different ideas from so many disparate domains and disciplines -- e.g., library science, decision analysis, behavioral psychology -- and weaving everything together into a cohesive whole.
Anyway, that's the current big idea I'm working on.
Of ideation, prioritization, and implementation, I agree that prioritization is the most impactful, tractable, and neglected.
Please see my post below. My current big idea is very similar to yours. I believe we may be able to exchange notes!
"Let's finish what Engelbart started"
1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.
2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.
3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.
In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.
See: Ought
We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside benefit would be so huge it would be worth spending a few billion to develop the technology. A big limitation on the historical John von Neumann's productivity was not being able to interact with people of his own capacity. There would be regression to the mean with the clones' IQ, but the clones would have better health care and education than the historical von Neumann did plus the Flynn effect might come into play.
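The regression-to-the-mean point above can be made concrete with a back-of-the-envelope sketch. All numbers here are illustrative assumptions, not measurements: the donor's IQ is unknown, and the heritability figure is a stand-in.

```python
# Rough sketch of regression to the mean for a clone cohort.
# Assumptions (illustrative only): the donor's genetic deviation is
# what a clone inherits, and heritability estimates how much of the
# donor's phenotypic deviation was genetic in the first place.

def expected_clone_iq(donor_iq, population_mean=100.0, heritability=0.8):
    """Expected IQ of a clone raised in a different environment: the
    genetic component is shared, the environmental component regresses."""
    return population_mean + heritability * (donor_iq - population_mean)

print(expected_clone_iq(180))  # -> 164.0
```

The point is that even with substantial regression, a cohort cloned from an extreme outlier would still sit far out in the tail, before accounting for better health care, education, or the Flynn effect.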
There was some previous discussion of this idea in Modest Superintelligences and its comments. I'm guessing nobody is doing it due to a combination of weirdness, political correctness, and short-term thinking. This would require a government effort and no government can spend this much resources on a project that won't have any visible benefits for at least a decade or two, and is also weird and politically incorrect.
What exactly is the secret ingredient of "being John von Neumann"? Is it mostly biological, something like unparalleled IQ; or rather a rare combination of very high (but not unparalleled) IQ with very good education?
Because if it's the latter, then you could create a proper learning environment, where only kids with sufficiently high IQ would be allowed. The raw material is out there; you would need volunteers, but a combination of financial incentives and career opportunities could get you some. (The kids would get paid for going there and...
Do you think it would make a big difference though? Isn't it likely that a bunch of John von Neumanns are already running around, given the world's population? Aren't we just running out of low-hanging fruit for von Neumanns to pick?
The negative principle: it seems like in a huge number of domains people are often defaulting to positivist accounts or representations of things, yet when we look at the history of big ideas in STEM I think we see a lot of progress happening from people thinking about whatever the inverse of the positivist account is. The most famous example I know of is information theory, where Shannon solved a long standing confusion by thinking in terms of uncertainty reduction. I think language tends to be positivist in its habitual forms which is why this is a recurring blind spot.
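Shannon's inversion can be made concrete: information is not a positive "stuff" being transmitted, but the reduction of uncertainty (entropy) in the receiver. A minimal sketch:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Information received = uncertainty removed. A fair coin flip
# resolves 1 bit; a heavily biased coin resolves far less, because
# you were barely uncertain to begin with.
print(entropy([0.5, 0.5]))    # -> 1.0
print(entropy([0.99, 0.01]))  # ~0.08
```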
Levels of abstraction: Korzybski, Marr, etc.
Everything is secretly homeostasis
Modal analysis: what has to be true about the world for a claim to have any meaning at all i.e. what are its commitments
Type systems for uncertainty
A lot of these are quite controversial:
Very few of these are controversial here. The only ones that seem controversial to me are
...
That's all, actually. And I'm not even incredulous about that one, just a bit curious.
Although aging and death is terrible, I don't think there's much point in building a movement to stop it. AGI will almost certainly be solved before even half of the processes of aging are.
Everyone has his pet subject which he thinks everybody in society ought to know and thus ought to be added to the school curriculum. Here on LessWrong, it tends to be rationality, Bayesian statistics and economics, elsewhere it might be coding, maths, the scientific method, classic literature, history, foreign languages, philosophy, you name it.
And you can always imagine a scenario where one of these things could come in handy. But in terms of what's universally useful, I can hardly think of anything beyond reading/writing and elementary school maths, that's it. It makes no economic sense to drill so much knowledge into people's heads; division of labor is like the whole point of civilization.
It's also morally wrong to put people through needless suffering. School is a waste, or rather theft, of youthful time. I wish I had played more video games and hung out with friends more. I wish I scored lower on all the exams. If your country's children speak 4 languages and rank top 5 in PISA tests, that's nothing to boast about. I waited for the day when all the misery would make sense; that day never came. The same is happening to your kids.
Education is like code - the less the better; strip down to the bare essentials and discard the rest.
Edit: Sorry for the emotion-laden language, the comment turned into a rant half-way through. Just something that has affected me personally.
My past big ideas mostly resemble yours, so I'll focus on those of my present:
Most economic hardship results from avoidable wars, situations where players must burn resources to signal their strength of desire or power (will). I define Negotiations as processes that reach similar, or better outcomes as their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.
Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So, I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't carry this out alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)
Bidding wars are a fairly large subclass of avoidable wars. The corresponding negotiation, for an auction, would be for the players to try to measure their wills out of band, then for those found to have the least will to commit to abstaining from the auction. (People would stop running auctions if bidders could coordinate well enough to do this, of course, but I'm not sure how bad a world without auctions would be. I think auctions benefit sellers more than they benefit markets as a whole, most of the time. A market that serves both buyer and seller should generally consider switching to Vickrey auctions, at the least.)
[1] Regarding intensification; my impression so far is that there is nothing especially natural about land price increase as a promoter of density. It doesn't do the job as fast as we would like it to. The benefits of density go to the commons. Those common benefits of density correlate with the price of the individual dense building, but don't seem to be measured accurately by it.
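For the Vickrey suggestion above, here is a minimal sketch of how a sealed-bid second-price auction resolves (bidder names and valuations are made up):

```python
def vickrey_winner(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bidder
    wins but pays the second-highest bid. Truthful bidding is then a
    dominant strategy, since your bid determines whether you win,
    not what you pay."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ordered[0]
    price = ordered[1][1]  # second-highest bid sets the price
    return winner, price

# Hypothetical bidders and bids:
print(vickrey_winner({"a": 120, "b": 90, "c": 75}))  # -> ('a', 90)
```

This is why a Vickrey auction serves the buyer side better than a first-price auction: bidders can reveal their true wills without being punished for it.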
Another Big Idea is "Average Utilitarianism is more true than Sum Utilitarianism", but I'm not sure whether the world is ready to talk about that. I don't think I've digested it fully yet. I'm not sure that rock needs to be turned over...
I also have a big idea about the evolutionary telos of paraphilias, but it's very hard to talk about.
Oh, this might be important: I studied logic for four years so that I could tell you that there are no fundamental truths, and that all math and logic just consists of a machine that we evolved and maintained because it happened to work. There's no transcendent beauty at the bottom of it all; it's all generally kind of ugly even after we've cut the ugliest parts away, and there may be better alternatives (consider CDT versus FDT for an example of seemingly fundamental elegance being deposed).
The usual Georgist story is that the problem of allocating land can be solved by taxing away all unimproved value of land (or equivalently by the government owning all land and renting it out to the highest bidder), and that won't distort the economy, but the people who profit from current land allocation are disproportionately powerful and will block this proposal. Is that related to the problem you're trying to solve?
I don't think this is addressable, because of the taboo tradeoffs in current culture around money and class. Some people produce more negative externalities than others in ways our legal system cannot address, so people sequester themselves via money-gating, since that is still acceptable in practice even though it is decried explicitly.
One thing I'm thinking about these days:
Oftentimes, when people make decisions, they don't explicitly model how they themselves will respond to the outcomes; they instead use simplified models of themselves to quickly make guesses about the things that they like. These guesses can often act as placebos, turning the expected benefits of a given decision into actual benefits solely by virtue of the expectation. In short, if you have the psychological architecture that makes it physically feasible to experience a benefit, you can hack your simplified models of yourself to make yourself get that benefit.
This isn't quite a dark art of rationality since it does not need to actually hurt your epistemology but it does leverage the possibility of changing who you are (or more explicitly, changing who you are by changing who you think you are). I'm currently using this as a way to make myself into the kind of person who is a writer.
Humans prefer mutual information. Further, I suspect that this is the same mechanism that drives our desire to reproduce.
The core of my intuition is that we instinctively want to propagate our genetic information, and also seem to want to propagate our cultural information (e.g. the notion of not being able to raise my daughter fills me with horror). If this is true of both kinds of information, it probably shares a cause.
This seems to have explanatory power for a lot of things.
This quickly condensed into considering how important shared experiences are, and therefore also coordinated groups. This is because actions generate shared experiences, which contain a lot of mutual information. Areas of investigation for this include military training, asabiyah, and ritual.
What I haven't done yet is really link this to what is happening in the brain; naively it seems consistent at first blush with the predictive processing model, and also seems like maybe-possibly Fristonian free energy applied to other humans.
We experience and learn so many things over the years. However, our memories may fail us. They fail to recall a relevant fact that could have been very useful for accomplishing the immediate task at hand. E.g., my car tire has punctured on a busy street, but I cannot recall how to change it -- though I remember reading about it in the manual.
It is likely that the memory is still alive somewhere in a deep corner of my brain. In this case, I may be able to think hard and push myself to remember it. But such a process is bound to be slow, and the people on the street would yell at me for blocking it!
Sometimes our memories fail us "silently". We don't know that somewhere in our brain is information we can bring to bear on accomplishing a task on hand. What if I don't even know that I have read a manual on changing car tires?!
Long term memory accessibility is thus an issue.
Now our short-term memory is also very, very limited (4-7 chunks at a time). In fact, the short cache of working memory might be a barrier to intellectual progress. It is then very crucial to inject relevant information into this limited working-memory space if we are to give a task our best, most intelligent shot.
Thus, I think about memory systems that can artificially augment the brain. I think of them from the point of view of storing more information and indexing it better. I think of them for faster and more relevant retrieval.
I think of them as importable and exportable -- I can share them with my friends (and learn how to change tires instantaneously). A pensieve like memory bank.
I thus think of "digital memories" that augment our relatively superior and creative compute brain processes. That is my (current) big idea.
This is basically the long-term goal of Neuralink as stated by Elon Musk. I am, however, very skeptical, for two reasons:
Nearly all education should be funded by income sharing agreements.
E1 = student's expected income without the credential / training (over the next n years).
E2 = student's expected income with the credential / training (over the next n years). Machine learning can estimate this separately for each student.
C = cost of the program
R = fraction of income above E1 that the student must pay back = C / (E2 - E1), so that the expected repayment R * (E2 - E1) covers the program cost C.
Give students a list of majors / courses / coaches / apprenticeships, etc. with an estimate of expected income E2 and rate of repayment R.
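One way to pin down R numerically: if the funder is to break even in expectation, the repayment R * (E2 - E1) must cover C, giving R = C / (E2 - E1). A sketch with purely illustrative numbers (no risk premium or discounting assumed):

```python
def isa_rate(cost, e1, e2):
    """Fraction of income above the no-credential baseline E1 that the
    student repays, chosen so that expected repayment covers the
    program cost:  R * (E2 - E1) = C  =>  R = C / (E2 - E1)."""
    return cost / (e2 - e1)

# Illustrative numbers only: a $30k program expected to lift
# n-year income from $200k to $350k.
r = isa_rate(30_000, 200_000, 350_000)
print(f"{r:.0%}")  # -> 20%
```

In practice a funder would add a margin for risk and default, so quoted rates would sit somewhat above this break-even floor.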
Benefits:
Obviously, rich students could still pay out of pocket up front (since they are nearly guaranteed a high income, they might not want to give a percent away).
I like this idea, but I'm still pretty negative about the entire idea of college as a job-training experience, and I'm worried that this proposal doesn't really address what I see as a key concern with that framework.
I agree with Bryan Caplan that the reason why people go to college is mainly to signal their abilities. However, it's an expensive signal -- one that could be better served by just getting a job and using the job to signal to future employers instead. Plus, then there would be fewer costs on the individual if they did that,...
I tend to keep three on mind and in rotation, as they move from "under inspection" to "done for now" and all the gradations between. In the past, this has included the likes of:
Currently I'm working on:
The last of those has been in the works for the longest, and current evidence (anecdotal and journal studies) suggests to me that those of us researching "apathy for self-betterment" are looking too high up the abstraction ladder. So it's time to dig a little deeper.
Still, there are people that I think I want in my life who are all falling prey to this beast, and I want to save them.
Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into people you'd like them to be and not what they themselves like to be.
How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques
Ethics aside, this seems to be a tall order. You're basically trying to hack into someone else's mind through very limite...
At any one time I usually have between 1 and 3 "big ideas" I'm working with. These are generally broad ideas about how some thing works, with many implications for how the rest of the whole world works. Some big ideas I've grappled with over the years, in roughly historical order:
I'm sure there are more. Sometimes these big ideas come and go in the course of a week or month: I work the idea out, maybe write about it, and feel it's wrapped up. Other times I grapple with the same idea for years, feeling it has loose ends in my mind that matter and that I need to work out if I'm to understand things adequately enough to help reduce existential risk.
So with that as an example, tell me about your big ideas, past and present.
I kindly ask that if someone answers and you are thinking about commenting, please be nice to them. I'd like this to be a question where people can share even their weirdest, most wrong-on-reflection big ideas if they want to without fear of being downvoted to oblivion or subject to criticism of their reasoning ability. If you have something to say that's negative about someone's big ideas, please be nice and say it as clearly about the idea and not the person (violators will have their comments deleted and possibly banned from commenting on this post or all my posts, so I mean it!).