This thread is intended to provide a space for 'crazy' ideas: ideas that spontaneously come to mind (and feel great), ideas you have long wanted to share but never found the place and time for, and ideas you think should be obvious and simple - but that nobody ever mentions.
This thread itself is such an idea. Or rather, it is the tangent of such an idea, which I post below as a seed for this thread.
Rules for this thread:
- Each crazy idea goes into its own top-level comment and may be commented on there.
- Voting should be based primarily on how original the idea is.
- Meta discussion of the thread should go in the top-level comment intended for that purpose.
If this becomes a regular thread, I suggest the following:
- Use "Crazy Ideas Thread" in the title.
- Copy the rules.
- Add the tag "crazy_idea".
- Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be ideas or similar'
- Add a second top-level comment with an initial crazy idea to start participation.
In brainstorming, a common piece of advice is to let down your guard and just let the ideas flow without any filters or critical thinking, and then follow up with a review to select the best ones rationally. The concept here is that your brain has two distinct modes of operation, one for creativity and one for analysis, and that they don't always play well together, so by separating their activities you improve the quality of your results. My personal approach mirrors this to some degree: I rapidly alternate between these two modes, starting with a new idea, then finding a problem with it, then proposing a fix, then finding a new problem, etc. Mutation, selection, mutation, selection... Evolution, of a sort.
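The alternation described above can be read as a simple evolutionary loop: generate a variation without judgment, then let criticism decide whether it survives. Here is a minimal sketch of that loop in Python; the `mutate` and `critique` functions are placeholders standing in for the creative and analytic modes, and the toy example at the end is purely illustrative.

```python
import random

def evolve_idea(initial, mutate, critique, steps=100):
    """Alternate between generating a variation (mutation) and
    keeping it only if it survives criticism (selection)."""
    best = initial
    best_score = critique(best)
    for _ in range(steps):
        candidate = mutate(best)      # creative step: propose a change freely
        score = critique(candidate)   # analytic step: look for problems, score it
        if score > best_score:        # keep the variant only if it improves
            best, best_score = candidate, score
    return best

# Toy stand-ins: "ideas" are numbers, and criticism rewards closeness to a target.
target = 42
result = evolve_idea(
    initial=0.0,
    mutate=lambda x: x + random.gauss(0, 1),
    critique=lambda x: -abs(x - target),
)
print(round(result, 2))  # should land near 42
```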
Understanding is an interaction between an internal world model and observable evidence. Every world model contains "behind the scenes" components which are not directly verifiable and which serve to explain the more superficial phenomena. This is a known requirement for modeling a partially observable environment. The irrational beliefs of Descartes and Leibniz which you describe motivated them to search for minimally complex indirect explanations that were consistent with observation. The empiricists were distracted by an excessive focus on the directly verifiable surface phenomena. Both aspects, however, are important parts of understanding. Without intangible behind-the-scenes components, it is impossible to build a complete model. But without the empirical demand for evidence, you may end up modeling something that isn't really there. And the focus on minimal complexity, expressed in their search for "harmony," is another form of Occam's razor, which improves the model's ability to generalize to new situations.
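To make the Occam's razor point concrete, here is a small sketch of one common way a preference for minimal complexity is expressed when comparing candidate models: a fit-plus-complexity score in the style of the Bayesian information criterion. The data and models are purely illustrative.

```python
import numpy as np

def bic_score(y, y_pred, n_params):
    """BIC-style score: fit error plus a complexity penalty.
    Lower is better; the penalty term is one expression of Occam's razor."""
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Illustrative data generated by a simple hidden process plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.shape)

# Compare a simple (degree-1) and a needlessly complex (degree-10) model.
for degree in (1, 10):
    coeffs = np.polyfit(x, y, degree)
    y_pred = np.polyval(coeffs, x)
    print(degree, round(bic_score(y, y_pred, n_params=degree + 1), 1))
# The degree-1 model typically scores better even though the degree-10 fit
# has lower raw error: the simpler model is expected to generalize better.
```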
A lot of focus is given to the scientific method's demand for empirical evidence of a falsifiable hypothesis, but very little emphasis is placed on the act of coming up with those hypotheses in the first place. Most presentations of the scientific method offer no suggestions as to how to create new hypotheses, or how to identify which new hypotheses are worth pursuing. And yet this creative part of the cycle is every bit as vital as the evidence gathering. Creativity is poorly understood compared to rationality, despite being one of the two pillars of scientific and technological advancement (verification with evidence being the other). By searching for harmony in nature, they were engaging in a pattern-matching process, searching the hypothesis space for good candidates for scientific evaluation. They were supplying the fuel that runs the scientific method. With a surplus of fuel, you can go a long way even with a less efficient engine. You might even be able to fuel other engines, too.
I would love to see some meta-scientific research into which variants of the scientific method are most effective. Perhaps an artificial, partially observable environment, whose full state and dynamics are known to the meta-researchers, could be presented as an object of study to non-meta researchers, in order to determine which habits and methods are most effective at identifying the true nature of the artificial environment. This would be like measuring the effectiveness of machine learning algorithms on a set of benchmark problems, but with humans in place of the algorithms. (It would be great, too, if the social aspect of scientific research were included in the study, effectively treating the scientific community as a distributed learning algorithm.)
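As a very rough sketch of what such a benchmark could look like: an environment with hidden ground truth that participants can only probe through a limited, noisy observation interface, while the meta-researchers score submitted hypotheses against the truth. Everything here (the linear hidden rule, the query interface, the scoring) is hypothetical and chosen only for illustration.

```python
import random

class HiddenEnvironment:
    """A partially observable toy environment. Meta-researchers know the
    hidden rule; study participants only see noisy observations of it."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # Ground truth, known only to the meta-researchers:
        self._slope = rng.uniform(-5, 5)
        self._intercept = rng.uniform(-5, 5)
        self._noise = 0.5
        self.queries_used = 0

    def observe(self, x):
        """Participants probe the environment here; each probe is counted
        so that methods can be compared on accuracy per query."""
        self.queries_used += 1
        return self._slope * x + self._intercept + random.gauss(0, self._noise)

    def score_hypothesis(self, slope, intercept):
        """Meta-researchers compare a submitted hypothesis to the ground truth."""
        return abs(slope - self._slope) + abs(intercept - self._intercept)

# A participant (a person, a lab, or an algorithm) spends a query budget,
# then submits a hypothesis about the hidden rule for scoring.
env = HiddenEnvironment(seed=42)
samples = [(x, env.observe(x)) for x in range(10)]
```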