If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Do you think that someone with linguistic genius but only mathematical competence would come to different epistemological conclusions than someone with mathematical genius but only linguistic competence?
Here I am making an implicit assumption that there is a qualitative difference between linguistic cognitive processes and mathematical cognitive processes.
As an example of the first type of person, I think Eliezer Yudkowsky is clearly a linguistic genius but not clearly a mathematical genius. Now, contrast his approach to AI risk with that of Paul Christiano, who is more plausibly a mathematical genius but not clearly a linguistic genius. (Not that either of them has low ability on the weaker trait; both are likely highly competent in both.)
Note that "linguistic" ability encompasses much more than having a rich vocabulary or being able to write amazing poetry; it also includes the ability to operate on concepts and understand the complex interplay among the rules that govern them. In that regard there is definitely overlap with mathematical ability, but interestingly, genius in one domain does not guarantee genius in the other.
But I'm mostly interested in whether mastery of one mode of thinking over the other would actually cause one to converge on different "truths." I have only a slight intuitive predilection that it does, based on reading various philosophers and noting that the more mathematically inclined seem to arrive at a different set of conclusions than the non-mathematically inclined.
I agree, but one might need a large sample size to apply this to specific issues, particularly subjects of active research like AI alignment. People vary a lot, and these abilities are correlated but imperfectly; moreover, in research, different people deliberately work on different solutions in order to explore the space of possibilities.