Vladimir_Nesov comments on How SIAI could publish in mainstream cognitive science journals - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The AGI researchers you're talking about are the people who read IEEE Intelligent Systems and Minds and Machines. That's where this kind of work is being published, except for that tiny portion of stuff produced by SIAI and by Ben Goertzel, who publishes in his own online "journal", Dynamical Psychology.
So if you want to communicate with AGI researchers and others working on Friendly AI, then you should write in the language of IEEE Intelligent Systems and Minds and Machines, which is the language I described above.
The papers in Journal of Artificial General Intelligence follow the recommendations given above, too - though as a brand new online journal with little current prestige, it's far less picky about those things than more established journals.
Moreover, if you want to communicate with others about new developments in deontic logic or decision theory for use in FAI, then those audiences are scattered across the philosophical terrain, in mainstream philosophy journals not focused on AI. (Deontic logic and decision theory discussions are particularly prevalent in journals focused on formal philosophy.)
Also, it's not just a matter of rewriting things in philosophical jargon for the sake of talking to others. Often, the philosophical community has settled on a certain vocabulary because it has certain advantages.
Above, I gave the example of making a distinction between "extrapolating" from means to ends, and "extrapolating" current ends to new ends given a process of reflective equilibrium and other mental changes. That's a useful distinction that philosophers make because there are many properties of the first thing not shared by the second, and vice versa. Conflating the two doesn't carve reality at its joints terribly well.
And of course I agree that anything not assuming reductionism must be dismissed.
But then, it seems you are interested in publishing for mainstream academia anyway, right? I know SIAI is pushing pretty hard on that Singularity Hypothesis volume from Springer, for example. And of course publishing in mainstream academia will bring in funds and credibility and so on, as I said. It's just that, as you said, you don't have many people who can do that kind of thing, and those people are tied up with other projects. Yes?
Could you write up the relevant distinction, as applied to CEV, perhaps as a discussion post? I don't know the terminology, but expect that given the CEV ambition to get a long way towards the normative stuff, the distinction becomes far less relevant than when you discuss human decision-making.
(Prompted by the reference you made in this comment.)
Did you read the original discussion post to which the linked comment is attached? I go into more detail there.
Yes, I read it, and it's still not clear. Recent discussion made a connection with terminal/instrumental values, but it's not clear in what context those play a role.
I expect I could research this discussion in more detail and figure out what you meant, but that could be avoided, and the issue opened to a bigger audience, if you wrote, say, a two-paragraph self-contained summary. I wouldn't mention this issue if you hadn't attached some significance to it by giving it as an example in a recent comment.
I'm not sure what to say beyond what I said in the post. Which part is unclear?
In any case, it's kind of a moot point, because Eliezer said that it is a useful distinction to make; he just chose not to include it in his CEV paper because that paper doesn't go deep enough into the detailed problems of implementing CEV where the distinction I made becomes particularly useful.