I suspect that your model has been built to serve the hypothesis you started with.
First of all, I'm not sure what measure you're using for "rigorous thought". Is it a binary classification? Are there degrees of rigor? I can infer from some of your examples what kind of pattern you might be picking up on, but if we're going to try to say things like "there's a correlation between rigor and volume of publication", I'd like to at least see a rough operational definition of what you mean by rigor. It may seem obvious to you what you mean, and it may seem like a subject many people on this site devoted to refining human rationality will have opinions on. That makes it more important to define your terms rigorously, not less, because your results shouldn't just be explained by variation in everyone's definition of rigor.
For the sake of argument, we could use something like "ratio of bits of information implied by factual claims to bits of information contained in presented evidence supporting factual claims" if we want something vaguely quantifiable. It seems your initial set of examples uses a more heuristic approach, with the rigorous group consisting mostly of well-known scientists, artists, and philosophers who are well-liked and whose findings/writings are considered well-founded/meaningful/influential in our current era, and your non-rigorous group consisting mostly of philosophers and some scientists who are at least partially discredited in our current era. I suspect that this might not be a very predictive heuristic, as I think it implicitly relies on hindsight and would also be vulnerable to exactly the effect you claim, if your claim turns out to be true.
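To make that ratio concrete, here's a minimal sketch (in Python, with made-up numbers purely for illustration) of what scoring a document on it might look like. The `surprisal_bits` helper and the example claim/evidence probabilities are my own assumptions, not anything from your post:

```python
import math

def surprisal_bits(prob: float) -> float:
    """Information content of a claim or piece of evidence,
    measured as -log2 of its assigned prior probability."""
    return -math.log2(prob)

# Hypothetical document: prior probabilities assigned to its
# factual claims and to the evidence actually presented for them.
claim_priors = [0.01, 0.05, 0.2]   # bold claims -> many bits implied
evidence_priors = [0.5, 0.4]       # weak evidence -> few bits supplied

claim_bits = sum(surprisal_bits(p) for p in claim_priors)
evidence_bits = sum(surprisal_bits(p) for p in evidence_priors)

# Higher ratio = more asserted than supported = less "rigorous"
# under this (very rough) operational definition.
print(f"claims: {claim_bits:.1f} bits, evidence: {evidence_bits:.1f} bits, "
      f"ratio: {claim_bits / evidence_bits:.2f}")
```

Of course, assigning those probabilities is where all the hard (and subjective) work lives, which is sort of my point.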
Also, I suspect that academic publication and publication of e.g. novels, self-help books, poetry, philosophical treatises, etc. would follow very different rules with respect to rigor versus volume of publication; there are structures in place to make them do exactly that. While journal publication and peer review rules are obviously far from perfect, I suspect that producing a large volume of non-rigorous work is a much better strategy for a fiction writer, philosopher, or artist than it is for a scientist, who, if unable to sufficiently hide the lack of rigor, won't get the paper published at all and might become discredited and lose grant money for further research. In particular, I think drawing on such a wide temporal range of publications is going to confound you a lot, because standards have changed and publication rates in general have gone way up in the last ~150 years.
Actually, I'm not even sure how a definition of "rigorous thought" that applies to scientific literature could apply cleanly to fiction-writing, unless it's the "General Degree of Socially-Accepted Credibility" heuristic discussed earlier.
Oh, I guess I misunderstood. I read it as "We should survey to determine whether terminal values differ (e.g. 'The tradeoff is not worth it') or whether factual beliefs differ (e.g. 'There is no tradeoff')"
But if we're talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time and, properly run, can be low-impact and extremely informative.
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans whose answers I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it's not nothing, and a survey might be worth doing, especially in the absence of a more strongly correlated test we can run given our technology, resources, and ethics.
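As a toy illustration of why a weak signal is still worth having, here's a quick simulation sketch. All the numbers are hypothetical assumptions (a latent trait held by 30% of people, and a yes/no survey answer that only matches the trait 65% of the time); it just shows that a noisy instrument, aggregated over enough respondents, still recovers something close to the true rate:

```python
import random

random.seed(0)

N = 10_000
TRUE_RATE = 0.30   # assumed: fraction of people who actually hold the trait
HONESTY = 0.65     # assumed: probability the survey answer matches the trait

# Simulate respondents: each has a latent trait and gives a noisy answer.
answers = []
for _ in range(N):
    has_trait = random.random() < TRUE_RATE
    says_yes = has_trait if random.random() < HONESTY else not has_trait
    answers.append(says_yes)

observed_yes = sum(answers) / N

# Correct for the known noise level to recover an estimate of the true rate:
# observed = true*HONESTY + (1 - true)*(1 - HONESTY)  =>  solve for true.
estimated_rate = (observed_yes - (1 - HONESTY)) / (2 * HONESTY - 1)

print(f"observed yes-rate: {observed_yes:.3f}, "
      f"noise-corrected estimate: {estimated_rate:.3f} (true: {TRUE_RATE})")
```

The weaker the correlation, the more respondents you need before the estimate settles down, but "weaker result" is not the same thing as "no result".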
I'd argue that that little one-off comment was less patronizing and more... sarcastic and mean.
Yeah, not all that productive either way. My bad. I apologize.
But I think the larger point stands about how these ideological labels are super leaky and defined way too inconsistently by way too many people to really be able to meaningfully say something like "That's not a representative sample of conservatives!", let alone "You probably haven't met people like that; you're just confabulating your memory of them because you hate conservatism".
Because those vectors of argument are insufficiently patronizing, I'm guessing.
But in all seriousness, the "judging memeplexes from their worst members" issue is pretty interesting, because politicized ideologies and really any ideology that someone has a name for and integrates into their identity ("I am a conservative" or "I am a feminist" or "I am an objectivist" or whatever) are really fuzzily defined.
To use the example we're talking about: Is conservatism about traditional values and bolstering the nuclear family? Is conservatism about defunding the government and encouraging private industry to flourish? Is conservatism about biblical literalism and establishing god's law on earth? Is conservatism about privacy and individual liberties? Is conservatism about nationalism and purity and wariness of immigrants? I've encountered conservatives who care about all of these things. I've encountered conservatives who only care about some of them. I've encountered at least one conservative who has defined conservatism to me in terms of each of those things.
So when I go to my internal dictionary of terms-to-describe-ideologies, which conservatism do I pull? I know plenty of techie-libertarian-cluster people who call themselves conservatives who are atheists. I know plenty of religious people who call themselves conservatives who think that cryptography is a scary terrorist thing and should be outlawed. I know self-identified conservatives who think that the recent revelations about NSA surveillance are proof that the government is overreaching, and self-identified conservatives who think that if you have nothing to hide from the NSA then you have nothing to fear, so what's the big deal?
I do not identify as a conservative. I can steelman lots of kinds of conservatism extremely well. Honestly I have some beliefs that some of my conservative-identifying friends would consider core conservative tenets. I still don't know what the fuck a conservative is, because the term gets used by a ton of people who believe very strongly in its value but mean different things when they say it.
So I have no doubt not only that Acty has encountered conservatives who are stupid, but that their particular flavors of stupid are core tenets of what they consider conservatism. The problem is that this colors her beliefs about other kinds of conservatives, some of whom might only be in the same cluster in person-ideology-identity space because they use the same word. This is not an Acty-specific problem by any means; I know arguably no one who completely succeeds at avoiding it, because the labels are just that bad. Who gets to use the label? If I meet someone and they volunteer the information that they identify as a conservative, what conclusions should I draw about their ideological positions?
I think the problem has to stem from sticking the ideology-label onto one's identity, because then, when an individual has opinions, it's really hard for them to separate those opinions from their ideology-identity-label, especially when they're arguing with a standard enemy of that label and can easily view themselves as standing in for the ideology itself. The conclusion I draw is that as soon as an ideology becomes an identity-label, it quickly becomes pretty close to useless as a bit of information by itself, and the speed at which this happens is somewhat correlated with the popularity of the label.
Um, I fail to see how people are making and doing less stuff than in previous generations. We've become obsessed with information technology, so a lot of that stuff tends to be things like "A new web application so that everyone can do X better", but it fuels both the economy and academia, so who cares? With things like maker culture, the sheer overwhelming number of kids in their teens and 20s and 30s starting SaaS companies or whatever, and media becoming more distributed than it's ever been in history, we have an absurd amount of productivity going on in this era, so I'm confused about where you think we're "braking".
As for video games in particular (Which seem to be your go-to example of useless things characteristic of the modern era), games are just a computer-enabled medium for two kinds of things: Contests of will and media. The gamers of today are analogous in many ways to the novel-consumers or TV-consumers or mythology-consumers of yesterday and also today (Because rumors of the death of old kinds of media are often greatly exaggerated), except for the gamers who are more analogous to the sports-players or gladiators or chess-players of yesterday and also today. Also, the basically-overnight-gigantic indie game development industry is pretty analogous to other giant booms in some form of artistic expression. Video games aren't a new human tendency; they're a superstimulus that hijacks several old ones (Storytelling, Artistic expression, Contests of will) and lowers the entry barriers to them. Also, the advent of powerful parallel processors (GPUs), a huge part of the recent boom in AI research, has been driven primarily by the gaming industry. I think that's a win regardless.
Basically, I just don't buy any of your claims whatsoever. The "common sense" ideas about how a society improving on measures of collaboration, nonviolence, and egalitarianism will make people lazy and complacent and stupid have pretty much never been borne out on a large scale, so I'm more inclined to attribute their frequent repetition by smart people to some common human cognitive bias than to some deep truth. As someone whose ancestors evolved in the same environment yours did, I too like stories of uber-competent tribal hero guys, but I don't think that makes for a better society, given the overwhelming evidence that a more pluralistic, egalitarian, and nonviolent society tends to correlate with more life satisfaction for more people, as well as with the acceleration of technology.
I'm inclined to agree. Actually, I've been convinced for a while that this is a matter of degree rather than being fully one way or the other (Modules versus learning rules), and this article has convinced me that the brain is more of a ULM than I had previously thought.
Still, when I read that part, the alternative hypothesis sprang to mind, so I was curious what the literature had to say about it (Or the post author.)
For e.g. the ferret rewiring experiments, tongue-based vision, etc., is a plausible alternative hypothesis that there are more general subtypes of regions, ones that aren't fully specialized but are more interoperable with certain kinds of input than others?
For example (Playing devil's advocate here), I could phrase all of the mentioned experiments as "sensory input remapping" among "sensory input processing modules." Similarly, much of the work on BCIs for e.g. controlling cursors or prosthetics could be called "motor control remapping". Have we ever observed cortex being rewired for drastically dissimilar purposes? For example, motor cortex receiving sensory input?
If we can't do stuff like that, then my assumption would be that, at the very least, a lot of the initial configuration is prenatal and follows kind of a "script" that might be determined either by some genome-encoded fractal rule of tissue formation or by similarities in the general conditions present during gestation. Either way, I'm not yet convinced there's a strong argument that all brain function can be explained as working like a ULM (Even if a lot of it can).
That's not exactly true. You can volunteer for far less than the minimum wage (Some would say infinitely less) if you want to. What you can't do is employ someone for some non-zero amount of money that's lower than the minimum wage.