Comments

I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can't speak for the authors you cite, but the questions asked by philosophers are different from "What is the best answer?" They are more along the lines of "How do we generate our answers in the first place?" and "What might follow?" This may lead to an admittedly harmful lack of urgency in updating beliefs.

Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy's sake; an error in the theory of efficient map design may take a generation or two to become apparent.

Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. However misguided the philosophers you cite may have been, Pearl's work is equally well-guided in redeeming philosophy. I don't think you have sufficiently addressed the charge of cherrypicking: if your cited articles are strong evidence that philosophers don't consider each other's viewpoints, then every article in which philosophers do sufficiently consider each other's viewpoints is at least weak evidence of the opposite.

It feels to me as though you are cherrypicking both evidence and topic. It may very well be that philosophers have a lot of work to do in the important field of AI. That does not invalidate the process. Get rid of the term and talk about the process of refining human intelligence through means other than direct observation. The PROCESS, not the results (such as the article you cite).

Speaking of that article from Noûs, it was published in 2010. Pearl did a great deal of work on counterfactuals and uncertainty dating back to the 1980s, but I would argue that "The Algorithmization of Counterfactuals" contains the direct solution you reference. That paper was published in 2011. Unless, of course, you are referring to "Causes and Explanations: A Structural-Model Approach," which was published in 2005 in the British Journal for the PHILOSOPHY of Science.

It seems to me that pop philosophy is being compared to rigorous academic science. Philosophers make a great effort to understand each other's frameworks. Controversy and disagreement abound, but exercising the mind by predicting consequences with mental models is fundamental to both scientific progress AND everyday life. You and I may disagree in our metaphysical views, but that doesn't prevent us from exploring the consequences each viewpoint predicts. Eventually, we may be able to test these beliefs. Predicting the consequences in advance helps us use resources effectively (as opposed to testing EVERY possibility scientifically). (Human) philosophy is an important precursor to science.

I'm also glad to see acknowledged in other comments that the AI case carries greater uncertainty than the sleeper cell case.

Having made one counterpoint and mentioned another, let me add that this was a good read and a nice post.

Well said again, and it is a well-considered point that ideas in minds can only move forward through time (though that is not a physical law). My initial reaction to this article was, "What about philosophy of science?" However, it seems my philosophy-of-science objections extend to other realms of philosophy as well. Thank you for leading me here.

Popper (or Popperism) predicted that falsifiable models would yield more information than non-falsifiable ones.

I don't think this is precisely testable, but it references precisely testable models. That is why I would categorize it as philosophy (of science), but not science.
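
For what it's worth, here is one rough way to cash out "more information" (a sketch in the spirit of Popper's own content measures, not a quotation of him):

$$\mathrm{Ct}(h) = 1 - P(h),$$

where $P(h)$ is the prior probability of hypothesis $h$. A more falsifiable hypothesis forbids more possible observations, so $P(h)$ is lower and its content $\mathrm{Ct}(h)$ is higher, which is one reading of the claim that falsifiable models yield more information.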

Yes, I may have made an inferential leap here that was wrong or unnecessary. You and I agree very strongly that there is a distinction between Philosophy of Science and Experimental Philosophy. I wanted to draw a distinction between the kind of "street philosophy" done by Socrates and the more rigorous, mathematical Philosophy of Science. "Experiment" may not have been the most appropriate word.

I would be glad to reconsider my stance that this rationalist community privileges emotivist readings of ethics. I will begin looking into this. My reason for including this argument is the idea (from the article) that when philosophers ask questions about right and wrong or good and bad, they are really asking how people feel about these concepts.

I like your interpretation of philosophy as it pertains to ethics, aesthetics, and perhaps metaphysics. Your Socrates example (and LW in general) privileges emotivist ethics, but this is an interesting point rather than a drawback. Looking at ethics as a cognitive science is not necessarily a flawed approach, but it is important to consider the potential alternative models.

Philosophy has a branch called "philosophy of science" where your dissolution falls apart. Popperian falsifiability, Kuhnian paradigm shifts, and Bayesian reasoning all fall into this domain. There is a great compendium by Curd and Cover; I recommend searching the table of contents for essays that are also available online. Here, philosophers experiment with the precision of testable models rather than with the hypotheses themselves.

I don't mean to advocate an epiphany-driven model of discovery.

To use your Scientology example and terminology, what I am advocating is not that we find the "next big thing," but that we pursue refinement of the original "genuinely useful material." Of course, it is much easier to advocate this than to put the work in, but that's why I'm using the open thread.

There are some legitimate issues with some of the Sequences (both resolved and unresolved). The comments represent a very nice start, but there may be serious philosophical work still to be done. There is a well of knowledge about pursuing wells of knowledge, and I would find it worthwhile to refine the effective pursuit of knowledge.

What are your heuristics for telling whether posts/comments contain "high-quality opinions" or represent the "LW mainstream"? Also, what did you think of Loosemore's recent post on fallacies in AI predictions?

I see that I used the word "growth" loosely. I don't necessarily mean greater numbers; I mean the opposite of stagnation. Of course a call for action is easier and less effective than acting, but that's why we have open threads.
