siodine comments on Intuitions Aren't Shared That Way - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I prefer to think of it as anything that exists at least partly in the mind, and then we can say we have an abstraction of an abstraction, or that one thing is more abstract than another (something from category theory being a pure abstraction, while a category like "dog" is less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist, like the bee hive in front of me, aren't particulars at the concrete level. The specific bee hive I'm imagining in my mind is a particular, and the "bee hive" I'm seeing and interpreting as a bee hive in front of me is also a particular, but the bee hive itself is just a "pattern" of atoms.
I think that you're stuck in noun-land while I'm in verb-land, but I don't think noun-land is concrete (it's an abstraction).
Framing those concepts in terms of usefulness isn't helpful, I think. I'd simply say the laypeople are doing something different unless they're contributing to our body of knowledge; in which case, science as it is requires that those laypeople interact with science as it is (journals and such).
No, I mean thinking of someone as being scientific doesn't make sense if you think of science as it is, because e.g. the sixth grader at the science fair whom we all call "scientific" isn't interacting with science as it is. We're taking some essential properties we pattern-match in science as it is, abstracting them, and then applying them by pattern-matching.
We can imagine an immortal human being on another planet replicating everything science has done on Earth thus far. So yes, I think it can occur in isolated individuals, but only because that individual has taken on everything that science is, and not just something like "carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance."
If I'm going to abstract what I praise in science and apply it to individuals, it's not "being scientific" or "doing science"; it's "working with feedback." It's what programmers do, what engineers do, what mathematicians do, what scientists do, what people who effectively lose weight do, and so on. It's the kernel of thought most conducive to progress in any area.
I think we are approaching the problem at the same level. I think I have optimally defined the concepts, and I think "behave in a way that predictably makes you better and better at doing good stuff" is what needs to be communicated and not "science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance." If we're going to add more content, then we should talk about how to effectively measure self-improvement, how to get solid feedback and so on. With that knowledge, I think a bunch of kids working together could rebuild science from the ground up.
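The "working with feedback" kernel described above is essentially an iterative improvement loop: propose a change, let reality score it, and keep it only if the score improves. A minimal sketch in Python (all names and the toy scoring function here are invented for illustration, not anything from the discussion):

```python
import random

def improve(candidate, score, mutate, steps=1000):
    """Hill-climbing loop: keep a proposed change only when
    the feedback signal (the score) says it's an improvement."""
    best = candidate
    best_score = score(best)
    for _ in range(steps):
        trial = mutate(best)            # propose a change
        trial_score = score(trial)      # get feedback from "reality"
        if trial_score > best_score:    # keep only what measurably helps
            best, best_score = trial, trial_score
    return best

# Toy use: find the x that maximizes -(x - 3)^2, using feedback alone.
random.seed(0)
result = improve(
    candidate=0.0,
    score=lambda x: -(x - 3) ** 2,
    mutate=lambda x: x + random.uniform(-0.5, 0.5),
)
```

The point of the sketch is only that the loop itself says nothing about *what* counts as better; all the substance lives in the scoring function, which is the gap the surrounding comments are arguing over.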
I'd pass on how important "behave in a way that predictably makes you better and better at doing good stuff" is.
That's problematic, first, because it leaves mind itself in a strange position. And second because, if mathematical platonism (for example) were true, then there would exist abstract objects that are mind-independent.
You seem to be assuming that pattern-matching of this sort is a vice. If it's useful to mark the pattern in question, and we recognize that we're doing so for utilitarian reasons and not because there's a transcendent Essence of Scienceyness, then the pattern-matching is benign. It's how humans think, and we can't become completely inhuman if our goal is to take the rest of mankind with us into the future. Not yet, anyway.
Religions are also feedback loops. The more I believe, the more my belief gets confirmed. Remarkable! The primary problem with this ultra-attenuated notion of what we want is that all the work is being done by the black-box normative terms like 'improvement' and 'better' and 'optimal.' Everything we're actually trying to concretely teach is hidden behind those words.
We also need more content than 'working with a feedback loop from reality'; that kind of metaphorical talk might fly on LessWrong, but it's really a summary of some implicit intuitions we already share, not instruction we could in those words convey to someone who doesn't already see what we're getting at. After all, everything exists in a back-and-forth with reality, and everything is for that matter part of reality. Perhaps my formulations of what we want are too concrete; but yours are certainly too abstract and underdetermined.