Comment author: hairyfigment 16 June 2015 01:27:15AM 0 points [-]

That first one I mentioned is the article Noûs told me to read first at some time or other, the best face that the journal could put forward (in someone's judgment).

Also, did my links just not load for you? One of them is an article in Noûs in 2005 saying Pearl had the right idea - from what I can see of the article its idea seems incomplete, but anyone who wasn't committed to modal realism should have seen it as important to discuss. Yet not only the 2010 article, but even the one I linked from Jan 2015 that explicitly claimed to discuss alternate approaches, apparently failed to mention Pearl, or the other 2005 author, or anything that looks like an attempted response to one of them. Why do you think that is?

Because while I could well have been wrong about the reason, it looks to me like the authors are in no way trying to find the best solution. And while scientists no doubt have the same incentives to publish original work, they also have incentives to accept the right answer that appear wholly lacking here - at least (and I no longer know if this is charitable or uncharitable) when the right answer comes from AI theory.

Comment author: ZacHirschman 16 June 2015 08:48:52AM *  0 points [-]

I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can't speak for the authors you cite, but the questions philosophers ask are different from "what is the best answer?" They are more along the lines of "How do we generate our answers anyway?" and "What might follow?" This may lead to an admittedly harmful lack of urgency in updating beliefs.

Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy's sake; an error in efficient map design theory may take a generation or two to become apparent.

Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. As misguided as your citations may have been (as philosophers), Pearl's work is equally well-guided in redeeming philosophers. I don't think you have sufficiently addressed the cherrypicking charge: if your cited articles are strong evidence that philosophers don't consider each other's viewpoints, then every article in which philosophers do sufficiently consider each other's viewpoints is weak evidence of the opposite.

Comment author: hairyfigment 15 June 2015 05:42:40PM 0 points [-]

Philosophers make great effort to understand each other's frameworks.

This amused me, because I somewhat doubt the term "philosophy" would exist without Alexander the Great, and it appears to me that philosophers do not make great effort to understand relevant work they've classified as 'not philosophy'.

I recall the celebrated philosophy journal Noûs recommending an article, possibly this one, which talked a great deal about counterfactuals without once mentioning Judea Pearl, much less recognizing that he seemed to have solved the problems under direct discussion. (Logical uncertainty of course may still be open. I will be shocked if the solution involves talking about logically impossible "possible worlds" rather than algorithms.)

Now on second search, the situation doesn't seem quite as bad. Someone else mentioned Pearl in the pages of Noûs - before the previous article, yet oddly uncited therein. And I found a more recent work that at least admits probabilities (and Gaifman) exist. But I can see the references, and the list still doesn't include Pearl or even that 2005 article. Note that the abstract contains an explicit claim to address "other existing solutions to the problem."

Comment author: ZacHirschman 15 June 2015 08:25:31PM 0 points [-]

It feels to me as though you are cherrypicking both evidence and topic. It may very well be that philosophers have a lot of work to do in the important AI field. This does not invalidate the process. Get rid of the term, talk about the process of refining human intelligence through means other than direct observation. The PROCESS, not the results (like the article you cite).

Speaking of that article from Noûs, it was published in 2010. Pearl did lots of work on counterfactuals and uncertainty dating back to 1980, but I would argue that "The algorithmization of counterfactuals" contains the direct solution you reference. That paper was published in 2011. Unless, of course, you are referring to "Causes and Explanations: A Structural-Model Approach," which was published in 2005 in the British Journal for the PHILOSOPHY of Science.
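Since the thread keeps gesturing at Pearl's "direct solution" without stating it, here is a minimal sketch of his three-step counterfactual procedure (abduction, action, prediction) on a toy linear structural causal model. The model and numbers are my own illustrative assumptions, not an example from Pearl's papers:

```python
# Pearl's three-step counterfactual algorithm, sketched on a toy SCM:
#   X = U_x,  Y = 2*X + U_y
# (the structural equations and numbers are invented for illustration)

def abduce(x_obs, y_obs):
    """Step 1 (abduction): recover the exogenous terms from the observation."""
    u_x = x_obs
    u_y = y_obs - 2 * x_obs
    return u_x, u_y

def counterfactual_y(x_obs, y_obs, x_cf):
    """Having observed (x_obs, y_obs), what would Y have been had X been x_cf?"""
    u_x, u_y = abduce(x_obs, y_obs)
    # Step 2 (action): replace the equation for X with X := x_cf (the do-operator).
    # Step 3 (prediction): propagate through the modified model, keeping u_y fixed.
    return 2 * x_cf + u_y

# We saw X=1, Y=5 (so U_y = 3); had X been 3, Y would have been 9.
print(counterfactual_y(1, 5, 3))  # → 9
```

The point of the exercise is that "what would have happened" gets a mechanical answer once the structural model is fixed, rather than requiring talk of similarity orderings over possible worlds.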

Comment author: ZacHirschman 14 June 2015 09:35:49AM 1 point [-]

It seems to me that pop philosophy is being compared to rigorous academic science. Philosophers make great effort to understand each other's frameworks. Controversy and disagreement abound, but exercising the mind in predicting consequences using mental models is fundamental to both scientific progress AND everyday life. You and I may disagree on our metaphysical views, but that doesn't prevent us from exploring the consequences each viewpoint predicts. Eventually, we may be able to test these beliefs. Predicting these consequences in advance helps us use resources effectively (as opposed to testing EVERY possibility scientifically). (Human) philosophy is an important precursor to science.

I'm also glad to see in other comments that the AI case has greater uncertainty than the sleeper cell case.

Having made one counterpoint and mentioned another, let me add that this was a good read and a nice post.

Comment author: TheAncientGeek 26 May 2015 07:12:25PM *  1 point [-]

"street philosophy" done by Socrates and the more rigorous, mathematical Philosophy of Science.

PoSc done by analyticals is no more rigorous than other analytical philosophy, and PoSc done by continentals is no more rigorous than other continental philosophy.

Socrates and co were the analyticals of their day... let not the ungeometered enter the Academy... with the role of the continentals being taken by the Sophists.

Comment author: ZacHirschman 26 May 2015 07:24:25PM 1 point [-]

Well said again, and a well-considered point that ideas in minds can only move forward through time (though that's not a physical law). My initial reaction to this article was, "What about philosophy of science?" However, it seems my PoSc objections extend to other realms of philosophy as well. Thank you for leading me here.

Comment author: ChristianKl 26 May 2015 05:21:59PM 0 points [-]

Popperian falsifiability, Kuhnian paradigm shifts, and Bayesian reasoning all fall into this domain. There is a great compendium by Curd and Cover; I recommend searching the table of contents for essays also available online. Here, philosophers experiment with the precision of testable models rather than hypotheses.

Could you explain to me to what extent Popper provided a precise model that's testable?

Comment author: ZacHirschman 26 May 2015 07:00:34PM 0 points [-]

Popper (or Popperism) predicted that falsifiable models would yield more information than non-falsifiable ones.

I don't think this is precisely testable, but it references precisely testable models. That is why I would categorize it as philosophy (of science), but not science.
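To make the "yields more information" claim concrete: on a Bayesian reading, a model that makes sharp (falsifiable) predictions moves your beliefs more than a maximally vague one, whichever way the observation falls. The probabilities below are my own illustrative numbers, not anything drawn from Popper:

```python
# Toy illustration: a falsifiable model is informative whether confirmed or
# refuted; a model that assigns 0.5 to everything never moves your beliefs.
import math

def bits_of_evidence(p_obs_given_model, p_obs_given_alternative):
    """Log-likelihood ratio in bits: how much the observation favours the model."""
    return math.log2(p_obs_given_model / p_obs_given_alternative)

# Sharp (falsifiable) model: predicts the outcome with probability 0.9.
# Vague model: assigns 0.5 to every outcome, so it can never be surprised.
sharp_confirmed = bits_of_evidence(0.9, 0.5)  # ≈ +0.85 bits when it's right
sharp_refuted = bits_of_evidence(0.1, 0.5)    # ≈ -2.32 bits when it's wrong
print(round(sharp_confirmed, 2), round(sharp_refuted, 2))  # → 0.85 -2.32
```

Either outcome of the test is informative about the sharp model, while an observation yields exactly zero bits for or against the vague one. That seems to be the precise cash value of Popper's prediction, even though the prediction itself isn't a testable model.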

Comment author: TheAncientGeek 26 May 2015 05:01:29PM *  1 point [-]

LW ... privileges emotivist ethics,

Nope.

"philosophy of science" ... Here, philosophers experiment with the precision of testable models rather than hypotheses.

You know Philosophy of Science is a different thing to Experimental Philosophy , right?

Comment author: ZacHirschman 26 May 2015 06:54:34PM 0 points [-]

Yes, I may have made an inferential leap here that was wrong or unnecessary. You and I agree very strongly on there being a distinction between Philosophy of Science and Experimental Philosophy. I wanted to draw a distinction between the kind of, "street philosophy" done by Socrates and the more rigorous, mathematical Philosophy of Science. "Experiment" may not have been the most appropriate verbiage.

I would be glad to reconsider my stance that this rationalist community privileges emotivist readings of ethics. I will begin looking into this. My reason for including this argument is the idea (from the article) that when philosophers ask questions about right and wrong or good and bad, they are really asking how people feel about these concepts.

Comment author: ZacHirschman 26 May 2015 03:11:37PM 1 point [-]

I like your interpretation of philosophy as it pertains to ethics, aesthetics, and perhaps metaphysics. Your Socrates example, and LW in general, privileges emotivist ethics, but this is an interesting point and not a drawback. Looking at ethics as a cognitive science is not necessarily a flawed approach, but it is important to consider the potential alternative models.

Philosophy has a branch called "philosophy of science" where your dissolution falls apart. Popperian falsifiability, Kuhnian paradigm shifts, and Bayesian reasoning all fall into this domain. There is a great compendium by Curd and Cover; I recommend searching the table of contents for essays also available online. Here, philosophers experiment with the precision of testable models rather than hypotheses.

Comment author: Vaniver 25 May 2015 10:44:40PM *  5 points [-]

I wouldn't mind reading if someone took a crack at a Sequences 2.0, or something completely different.

One way of looking at the failure mode of Scientology is that they lead with genuinely useful material, which hooks people and establishes them as a credible source of wisdom. They then have a progressive structure that convinces you new epiphanies are just around the corner, you just need to put in a little more effort / time / cash--but there is no epiphany waiting that will be as useful as the original epiphanies.

This happens lots of places. I recall reading about some Alexander Technique expert, who continued doing lessons in the hopes of recapturing the first moment when he experienced lightness in his body. He never could, because the thing that was shocking about the first time was the surprise, not the lightness, and no matter how light he got, he could not become as surprised by it.

The healthy approach is to have a purpose, to pursue a well of knowledge for as long as doing so enhances that purpose, and then to abandon that well of knowledge as soon as it no longer enhances that purpose.

But here we run into the issue that, while rationality may be the common interest of many causes, the "something new" is unlikely to be a specifically rationality thing. It's more likely to be something that some people find interesting and some people find boring, and so the people split into different taskforces to solve different problems. (That is, the Craft and the Community sequence really does anticipate lots of these issues.)

Comment author: ZacHirschman 26 May 2015 01:10:00PM 2 points [-]

I don't mean to advocate an epiphany-driven model of discovery.

To use your Scientology example and terminology, what I am advocating is not that we find the "next big thing," but that we pursue refinement of the original, "genuinely useful material." Of course, it is much easier to advocate this than to put the work in, but that's why I'm using the open thread.

There are some legitimate issues with some of the Sequences (both resolved and unresolved). The comments represent a very nice start, but there may be some serious philosophical work to be done. There is a well of knowledge about pursuing wells of knowledge, and I would find it purposeful to refine the effective pursuit of knowledge.

Comment author: estimator 25 May 2015 03:31:12PM 6 points [-]

Agreed that LW is in a kind of stagnation. However, I think that just someone writing a series of high-quality posts would suffice to fix it. Now, the amount of discussion in comments is quite good, the problem is that there aren't many interesting posts.

If a group said that they thought A was an important issue and the solution was X, most members would pay more attention than if a random individual said it. No-one would have to listen to anything they say, but I imagine that many would choose to. Furthermore if the exec were all actively involved in the projects, I imagine they'd be able to complete some themselves, especially if they choose smaller ones.

It isn't quite a good thing; many people have noticed that LW is somewhat of an echo chamber for Eliezer. If anything, we should endorse high-quality opinions that differ from the LW mainstream.

Comment author: ZacHirschman 25 May 2015 08:18:01PM 1 point [-]

What are your heuristics for telling whether posts/comments contain "high-quality opinions," or "LW mainstream"? Also, what did you think of Loosemore's recent post on fallacies in AI predictions?

Comment author: [deleted] 25 May 2015 12:54:30PM *  1 point [-]

is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material.

I don't see any connection between growth and updating, unless significant cognitive science breakthroughs have been made since. But I think growth would depend on presenting the material in a popular, digestible format. Not HPMOR, not one book, and not a book for SF/F nerds anyway... more like going to changemyview.reddit.com, engaging in debates, and linking to the relevant article here when people make cognitive mistakes, for example. Or people who can get pieces into popular newspapers could write articles like "10 things LW taught me".

In response to comment by [deleted] on Open Thread, May 25 - May 31, 2015
Comment author: ZacHirschman 25 May 2015 01:50:01PM 0 points [-]

I see that I used the word "growth" capriciously. I don't necessarily mean greater numbers, I mean the opposite of stagnation. Of course a call for action is easier and less effective than acting, but that's why we have open threads.
