(These are the touched-up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)
Feasibility Contextualism
A large amount of philosophy consists of people trying to demonstrate that you can never really know anything [citation needed]. Various forms of skepticism take the stance, "No method of inquiry can get you Real Knowledge™, and so no method is justified."
Despite that, it seems like (at least in terms of watching how people act in the world) everyone gets that you need some level of pragmatism. "Well I've gotta do something, and this seems like the best idea, so I'm going to do it instead of doing nothing." No one is so skeptical of knowledge that they have stayed immobile until they starved to death [citation needed].
What Kelly aims to do is create a rigorous formalism for the sort of pragmatic attitude that people take all the time. It's very much inspired by how computer scientists do things. If someone proves that a problem can't be done any faster than quadratic time, and you figure out a quadratic time algorithm, you're happy. You don't refuse to use any algorithm that doesn't run in constant time.
In a sentence, this course was about a really cool formalism for talking about "how hard is a given scientific problem?" and how that affects "the best possible performance you can get given the hardness of the problem."
Sneak Peek: Induction and Metaphysics
Over the ages people have postulated what qualities scientific hypotheses should have. The logical positivists asserted that only verifiable propositions should be the domain of science (if it's true, you can do some test that demonstrates it's true). Popper wanted hypotheses to be falsifiable (if it's false, you can do some test that demonstrates it's false). Verifiability and falsifiability have an important connection to two other notions that philosophers of science often talk about: the problem of induction and the problem of metaphysics.
You face the problem of induction if it's the case that even if your hypothesis is true, you'll never know for sure (will the sun rise tomorrow? You can never rule out that it just won't at some point).
You face the problem of metaphysics if it's the case that even if your hypothesis is false, you'll never get definitive evidence that it's false ("there's a teacup somewhere in the infinite expanse of space!").
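To make the asymmetry concrete, here's a toy sketch (my own illustration, not from the notes) where evidence arrives as a finite prefix of an infinite stream of 0/1 observations. A universal hypothesis like "the sun rises every day" can be refuted by one bad observation but never verified by any finite prefix, while an existential one like "there's a teacup out there somewhere" can be verified by one sighting but never refuted.

```python
# Toy illustration (mine, not from the course): evidence is a finite
# prefix of an infinite 0/1 observation stream.
#
#   H_all:  "every observation is 1"  (e.g. "the sun rises every day")
#   H_some: "some observation is 1"   (e.g. "there's a teacup out there")

def refuted_all(prefix):
    """H_all is definitively refuted the moment a 0 shows up."""
    return 0 in prefix

def verified_some(prefix):
    """H_some is definitively verified the moment a 1 shows up."""
    return 1 in prefix

# The problem of induction: no finite run of 1s ever settles H_all.
print(refuted_all([1, 1, 1, 1]))    # False -- not refuted, but not verified either

# The problem of metaphysics: no finite run of 0s ever settles H_some.
print(verified_some([0, 0, 0, 0]))  # False -- not verified, but not refuted either
```

The point is just that "verified" and "refuted" are verdicts a method can deliver after finitely much evidence, and each of these hypotheses is only settleable in one direction.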
Turns out almost all questions that have been the subject of actual science are neither verifiable nor falsifiable, and with most questions you're also going up against induction or metaphysics. Uh oh. Looks like those normative ideas about what science should be rule out most of what science is. Oops.
In light of our new tack on things, this is sorta like if the only complexity classes people had were linear and quadratic, and they thought the only problems in the domain of computer science should be ones solvable in linear or quadratic time. The fix: make a richer complexity hierarchy in which to locate problems, then see what the complexity says about possible performance.