Comment author: The_Jaded_One 29 March 2016 08:28:14PM *  1 point [-]

That's a very good point.

Though one would hope that the level of effort put into AGI safety will be significantly greater than what went into Twitter bot safety...

Comment author: dlarge 30 March 2016 05:42:11PM 1 point [-]

One would hope! Maybe the Tay episode can serve as a cautionary example, in that respect.

Comment author: The_Jaded_One 28 March 2016 05:06:20PM 1 point [-]

Sure, but the point stands: failures of narrow AI systems aren't informative about the likely failures of superintelligent AGIs.

Comment author: dlarge 29 March 2016 05:54:00PM 5 points [-]

They are informative, but not because narrow AI systems are comparable to superintelligent AGIs. It's because the developers, researchers, promoters, and funders of narrow AI systems are comparable to those of putative superintelligent AGIs. The most interesting thing here isn't the details of Tay's technology but the group that manages it, and the group(s) that will likely be involved in AGI development.

Comment author: gjm 23 March 2016 02:54:49PM 0 points [-]

The big diagram is contingent

Yup. Is that supposed to make it not a counterexample, and if so, why? (Note that, e.g., the processes affecting mood, tiredness, etc., are also contingent. You may wish to avoid stipulations that make my counterexample not a counterexample if they also make your leading example not an example :-).)

Again, I'm not disagreeing that many good ideas are simple and that simple ideas are worth pursuing even if you expect that they're never going to be more than useful approximations that may point in helpful directions.

And most neglected of all: Purpose.

My feeling is that if "purpose" is neglected in science it's because it's generally been found to be more misleading than helpful. We can ask, in evolutionary mode, "what if anything gave this a selective advantage?" or, relatedly, "why didn't this costly thing get selected out of existence?". And we can ask "what does this actually do?". What does talk of purpose add beyond these?

It adds something in cases where some actually purposeful agent is responsible for whatever-it-is. So, e.g., I expect it's useful from time to time in finance where the answer to "why do these prices move in this way?" may be "because the owners of these pension funds have these incentives and are acting accordingly", and it's certainly useful in politics or history. But in biology? It seems to me that if you find cases where the full-blown concept of purpose is genuinely better than the alternatives, you've found good evidence[1] for creationism, and so far alleged cases of good evidence for creationism have tended to evaporate on closer inspection.

[1] Of course good evidence is not necessarily anything like proof; sometimes there is good evidence for false things.

Comment author: dlarge 23 March 2016 09:55:16PM 1 point [-]

Maybe this is getting too far afield, but I would say that "Purpose" is not only a useful but an essential heuristic in science when it's practiced by entities (like human beings) that are hard-wired to think in terms of purposeful action. Making the first question "What is this for?" brings to bear the full power of uncounted generations of field-tested behaviors, rules of thumb, and search strategies.

It is awfully important, though, not to make it the last question. I guess that's where I'd say yes: a "full-blown concept of purpose," in the sense of an unexplained explanation, is unscientific.

Comment author: dlarge 26 March 2015 08:28:28PM 3 points [-]

Hello, everyone! I've been lurking for about a year and have finally overcome the anxiety I encounter whenever I contemplate posting. More accurately, enough influences have converged at this very moment that I feel strongly pulled to comment.

I've just tumbled to the fact that I may have an instinctive compulsion against the sort of signalling that's often discussed here and by Robin Hanson. In the last several hours alone I've gone far out of my way to avoid signalling membership in an ingroup or adherence to a specific cohort. Is this sort of compulsion common amongst LWers? (I'm aware that declaring myself an anti-signaller runs the risk of itself being an act of signalling, but whadayagonnado.)

I'm also very interested in how pragmatism, pragmaticism, and Charles Sanders Peirce form (if at all) the philosophical underpinnings of the sort of rationality that LW centers on. It seems like Peirce doesn't get nearly as much attention here as he should, but maybe there are good reasons for that.