I'm reluctant to frame engineering and philosophy as adversarial disciplines in this conversation, as AI and ML research have long drawn on both. As an example, Minsky's "Society of Mind" and Minsky and Papert's "Perceptrons" are hands that wash each other, then reach forward to underpin much of what is now accepted in neural network research.
Moreover, there aren't just two disciplines feeding this sport; insights have been drawn from computer science, philosophy, psychology and neuroscience over the fifty-odd years of AI work. The more successful ML shops have been using the higher-order language of psychology to describe and intervene on operational aspects (e.g., AlphaGo's in-game play) and neuroscience to create the models (Hassabis, 2009).
I will be surprised if biological models of neurotransmitters don't make an appearance as yet another anchor in the next decade or so. These may well take inspiration from Patricia Churchland's decades-long cross-disciplinary work in philosophy and neuroscience. They may also draw from the intersection of psychology and neuroscience that is informing mental health treatments, both chemical and experiential.
This is all without getting into those fjords of philosophy in which many spend their time prioritising happiness over truth: ethics and morality... which is what I think this blog post is really talking about when it says philosophy. Will connectionist modelling learn from and contribute to deontological, utilitarian, consequentialist and hedonistic ethics? I don't see how it cannot.